#Electrical engineering and computer science (EECS)
jcmarchi · 2 months
New method enables fast, accurate estimates of cardiovascular state to inform blood pressure management
New Post has been published on https://thedigitalinsider.com/new-method-enables-fast-accurate-estimates-of-cardiovascular-state-to-inform-blood-pressure-management/
If patients receiving intensive care or undergoing major surgery develop excessively high or low blood pressures, they could suffer severe organ dysfunction. It’s not enough for their care team to know that pressure is abnormal. To choose the correct drug to treat the problem, doctors must know why blood pressure has changed. A new MIT study presents the mathematical framework needed to derive that crucial information accurately and in real time.
The mathematical approach, described in a recent open-access study in IEEE Transactions on Biomedical Engineering, produces proportional estimates of the two critical factors underlying blood pressure changes: the heart’s rate of blood output (cardiac output) and the arterial system’s resistance to that blood flow (systemic vascular resistance). By applying the new method to previously collected data from animal models, the researchers show that their estimates, derived from minimally invasive measures of peripheral arterial blood pressure, accurately matched estimates using additional information from an invasive flow probe placed on the aorta. Moreover, the estimates accurately tracked the changes induced in the animals by the various drugs physicians use to correct aberrant blood pressure.
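The reason these two factors suffice is the steady-state pressure-flow relation of the circulation — the hemodynamic analogue of Ohm's law (standard physiology, not notation taken from the paper):

```latex
% Mean arterial pressure (MAP), cardiac output (CO),
% systemic vascular resistance (SVR), central venous pressure (CVP):
\mathrm{MAP} \approx \mathrm{CO} \times \mathrm{SVR} + \mathrm{CVP}
```

Because CVP is normally small, a given pressure change can reflect a change in flow, a change in resistance, or both — exactly the ambiguity the estimator is designed to resolve.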
“Estimates of resistance and cardiac output from our approach provide information that can readily be used to guide hemodynamic management decisions in real time,” the study authors wrote.
With further testing leading to regulatory approval, the authors say, the method would be applicable during heart surgeries, liver transplants, intensive care unit treatment, and many other procedures affecting cardiovascular function or blood volume.
“Any patient who is having cardiac surgery could need this,” says study senior author Emery N. Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience in The Picower Institute for Learning and Memory, the Institute for Medical Engineering and Science, and the Department of Brain and Cognitive Sciences at MIT. Brown is also an anesthesiologist at Massachusetts General Hospital and a professor of anesthesiology at Harvard Medical School. “So might any patient undergoing a more normal surgery but who might have a compromised cardiovascular system, such as ischemic heart disease. You can’t have the blood pressure being all over the place.”
The study’s lead author is electrical engineering and computer science (EECS) graduate student Taylor Baum, who is co-supervised by Brown and Munther Dahleh, the William A. Coolidge Professor in EECS.
Algorithmic advance
The idea that cardiac output and systemic resistance are the two key components of blood pressure comes from the two-element Windkessel model. The new study is not the first to use the model to estimate these components from blood pressure measurements, but previous attempts ran into a trade-off between update speed and accuracy: methods would either provide more erroneous estimates at every beat, or more reliable estimates that update only on the time scale of minutes. Led by Baum, the MIT team overcame the trade-off by applying statistical and signal processing techniques such as “state-space” modeling.
“Our estimates, updated at every beat, are not just informed by the current beat; but they incorporate where things were in previous beats as well,” Baum says. “It’s that combination of past history and current observations that produces a more reliable estimate while still at a beat-by-beat time scale.”
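To make that concrete, here is a minimal sketch of a beat-by-beat state-space estimator in the spirit the study describes — not the authors' published algorithm, and with invented signal values and noise levels. It tracks the Windkessel time constant τ = R·C from each beat's exponential diastolic pressure decay with a one-dimensional Kalman filter, so every estimate blends prior history with the current beat, and then reports proportional SVR (∝ τ) and proportional CO (∝ MAP/τ):

```python
import numpy as np

# --- Simulate noisy beat-by-beat observations of the diastolic decay ---
# Two-element Windkessel: during diastole, P(t) = P0 * exp(-t / tau),
# with tau = R * C. Observing y = ln(P_start / P_end) over a diastolic
# interval dt gives y = dt / tau + noise.
rng = np.random.default_rng(0)
n_beats = 300
tau_true = np.concatenate([np.full(150, 1.2), np.full(150, 0.8)])  # "drug" at beat 150
dt = 0.6                                             # diastolic interval (s), assumed fixed
y = dt / tau_true + rng.normal(0.0, 0.02, n_beats)   # noisy log-decay measurements
map_obs = 90.0 + rng.normal(0.0, 2.0, n_beats)       # mean arterial pressure (mmHg)

# --- Scalar Kalman filter on x = 1/tau, modeled as a slow random walk ---
x_est, P_cov = 1.0, 1.0      # initial state and variance
q, r = 1e-4, 0.02**2         # process and measurement noise variances
tau_est = np.empty(n_beats)
for k in range(n_beats):
    P_cov += q                               # predict: random-walk state
    K = P_cov * dt / (dt * P_cov * dt + r)   # Kalman gain for y = dt * x + v
    x_est += K * (y[k] - dt * x_est)         # update with the current beat
    P_cov *= (1.0 - K * dt)
    tau_est[k] = 1.0 / x_est

svr_prop = tau_est              # SVR is proportional to tau (C unknown)
co_prop = map_obs / tau_est     # since MAP ~ CO * SVR, CO is proportional to MAP / tau
print(f"tau before/after simulated drug: {tau_est[100]:.2f}s / {tau_est[-1]:.2f}s")
print(f"proportional CO rose from {co_prop[100]:.0f} to {co_prop[-1]:.0f} (arbitrary units)")
```

The drop in the estimated τ after beat 150 shows the filter tracking a simulated vasodilating drug while smoothing out beat-to-beat measurement noise.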
Notably, the resulting estimates of cardiac output and systemic resistance are “proportional,” meaning that they are each inextricably linked in the math with another co-factor, rather than estimated on their own. But application of the new method to data collected in an older study from six animals showed that the proportional estimates from recordings using minimally invasive catheters provide comparable information for cardiovascular system management.
One key finding was that the proportional estimates made based on arterial blood pressure readings from catheters inserted in various locations away from the heart (e.g., the leg or the arm) mirrored estimates derived from more invasive catheters placed within the aorta. The significance of the finding is that a system using the new estimation method could in some cases rely on a minimally invasive catheter in various peripheral arteries, thereby avoiding the need for a riskier placement of a central artery catheter or a pulmonary artery catheter directly in the heart, the clinical gold standard for cardiovascular state estimation.
Another key finding was that when the animals received each of five drugs that doctors use to regulate either systemic vascular resistance or cardiac output, the proportional estimates tracked the resulting changes properly. The finding therefore suggests that the proportional estimates of each factor are accurately reflecting their physiological changes.
Toward the clinic
With these encouraging results, Baum and Brown say, the current method can be readily implemented in clinical settings to inform perioperative care teams about underlying causes of critical blood pressure changes. They are actively pursuing regulatory approval of use of this method in a clinical device.
Additionally, the researchers are pursuing more animal studies to validate an advanced blood pressure management approach that uses this method. They have developed a closed-loop system, informed by this estimation framework, to precisely regulate blood pressure in an animal model. Upon completion of the animal studies, they will apply for regulatory clearance to test the system in humans.
In addition to Baum, Dahleh and Brown, the paper’s other authors are Elie Adam, Christian Guay, Gabriel Schamberg, Mohammadreza Kazemi, and Thomas Heldt.
The National Science Foundation, the National Institutes of Health, a Mathworks Fellowship, The Picower Institute for Learning and Memory, and The JPB Foundation supported the study.
xtruss · 7 months
This Tiny, Tamper-Proof ID Tag Can Authenticate Almost Anything
Massachusetts Institute of Technology (MIT) Engineers Developed a Tag That Can Reveal with Near-Perfect Accuracy Whether an Item is Real or Fake. The Key is in the Glue on the Back of the Tag.
— Adam Zewe | MIT News | Publication Date: February 18, 2024
A few years ago, MIT researchers invented a cryptographic ID tag that is several times smaller and significantly cheaper than the traditional radio frequency tags (RFIDs) that are often affixed to products to verify their authenticity.
This tiny tag, which offers improved security over RFIDs, utilizes terahertz waves, which are smaller and travel much faster than radio waves. But this terahertz tag shared a major security vulnerability with traditional RFIDs: A counterfeiter could peel the tag off a genuine item and reattach it to a fake, and the authentication system would be none the wiser.
The researchers have now surmounted this security vulnerability by leveraging terahertz waves to develop an antitampering ID tag that still offers the benefits of being tiny, cheap, and secure.
They mix microscopic metal particles into the glue that sticks the tag to an object, and then use terahertz waves to detect the unique pattern those particles form on the item’s surface. Akin to a fingerprint, this random glue pattern is used to authenticate the item, explains Eunseok Lee, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on the antitampering tag.
“These metal particles are essentially like mirrors for terahertz waves. If I spread a bunch of mirror pieces onto a surface and then shine light on that, depending on the orientation, size, and location of those mirrors, I would get a different reflected pattern. But if you peel the chip off and reattach it, you destroy that pattern,” adds Ruonan Han, an associate professor in EECS, who leads the Terahertz Integrated Electronics Group in the Research Laboratory of Electronics.
The researchers produced a light-powered antitampering tag that is about 4 square millimeters in size. They also demonstrated a machine-learning model that helps detect tampering by identifying similar glue pattern fingerprints with more than 99 percent accuracy.
Because the terahertz tag is so cheap to produce, it could be implemented throughout a massive supply chain. And its tiny size enables the tag to attach to items too small for traditional RFIDs, such as certain medical devices.
The paper, which will be presented at the IEEE Solid State Circuits Conference, is a collaboration between Han’s group and the Energy-Efficient Circuits and Systems Group of Anantha P. Chandrakasan, MIT’s chief innovation and strategy officer, dean of the MIT School of Engineering, and the Vannevar Bush Professor of EECS. Co-authors include EECS graduate students Xibi Chen, Maitryi Ashok, and Jaeyeon Won.
Preventing Tampering
This research project was partly inspired by Han’s favorite car wash. The business stuck an RFID tag onto his windshield to authenticate his car wash membership. For added security, the tag was made from fragile paper so it would be destroyed if a less-than-honest customer tried to peel it off and stick it on a different windshield.
But that is not a terribly reliable way to prevent tampering. For instance, someone could use a solution to dissolve the glue and safely remove the fragile tag.
Rather than authenticating the tag, a better security solution is to authenticate the item itself, Han says. To achieve this, the researchers targeted the glue at the interface between the tag and the item’s surface.
Their antitampering tag contains a series of minuscule slots that enable terahertz waves to pass through the tag and strike microscopic metal particles that have been mixed into the glue.
Terahertz waves are small enough to detect the particles, whereas larger radio waves would not have enough sensitivity to see them. Also, using terahertz waves with a 1-millimeter wavelength allowed the researchers to make a chip that does not need a larger, off-chip antenna.
After passing through the tag and striking the object’s surface, terahertz waves are reflected, or backscattered, to a receiver for authentication. How those waves are backscattered depends on the distribution of metal particles that reflect them.
The researchers put multiple slots onto the chip so waves can strike different points on the object’s surface, capturing more information on the random distribution of particles.
“These responses are impossible to duplicate, as long as the glue interface is destroyed by a counterfeiter,” Han says.
A vendor would take an initial reading of the antitampering tag once it was stuck onto an item, and then store those data in the cloud, using them later for verification.
AI For Authentication
But when it came time to test the antitampering tag, Lee ran into a problem: It was very difficult and time-consuming to take precise enough measurements to determine whether two glue patterns are a match.
He reached out to a friend in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and together they tackled the problem using AI. They trained a machine-learning model that could compare glue patterns and calculate their similarity with more than 99 percent accuracy.
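The heavy lifting in the published system is done by that trained neural network, but the task it performs — deciding whether two glue-pattern readouts match — can be illustrated with a far simpler stand-in. In the sketch below, each readout is treated as a vector of backscatter responses, and a cosine-similarity threshold separates genuine re-reads from post-tamper reads; the vector length, noise level, and threshold are all illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def read_tag(fingerprint: np.ndarray, noise: float = 0.05) -> np.ndarray:
    """One terahertz readout: the glue fingerprint plus measurement noise."""
    return fingerprint + rng.normal(0.0, noise, fingerprint.shape)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Enroll: the vendor stores a reference reading in the cloud.
true_pattern = rng.normal(size=256)      # random metal-particle fingerprint
reference = read_tag(true_pattern)

# Verify: a genuine re-read correlates strongly with the reference...
genuine = read_tag(true_pattern)
# ...while peeling the tag destroys the glue layer, yielding a new pattern.
tampered = read_tag(rng.normal(size=256))

THRESHOLD = 0.9   # illustrative decision boundary
for name, reading in [("genuine", genuine), ("tampered", tampered)]:
    s = cosine_similarity(reference, reading)
    verdict = "accept" if s > THRESHOLD else "reject"
    print(f"{name}: similarity={s:+.3f} -> {verdict}")
```

A learned model replaces the fixed similarity score with one that is robust to misalignment and sensor variation, which is where the reported 99 percent figure comes in.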
“One drawback is that we had a limited data sample for this demonstration, but we could improve the neural network in the future if a large number of these tags were deployed in a supply chain, giving us a lot more data samples,” Lee says.
The authentication system is also limited by the fact that terahertz waves suffer from high levels of loss during transmission, so the sensor can only be about 4 centimeters from the tag to get an accurate reading. This distance wouldn’t be an issue for an application like barcode scanning, but it would be too short for some potential uses, such as in an automated highway toll booth. Also, the angle between the sensor and tag needs to be less than 10 degrees or the terahertz signal will degrade too much.
They plan to address these limitations in future work, and hope to inspire other researchers to be more optimistic about what can be accomplished with terahertz waves, despite the many technical challenges, says Han.
“One thing we really want to show here is that the application of the terahertz spectrum can go well beyond broadband wireless. In this case, you can use terahertz for ID, security, and authentication. There are a lot of possibilities out there,” he adds.
This work is supported, in part, by the U.S. National Science Foundation and the Korea Foundation for Advanced Studies.
berkeley-engineering · 10 months
Meet the Berkeley chapter of Eta Kappa Nu (HKN)! This electrical and computer engineering honor society provides tutoring and peer advising to fellow undergraduates. 
···
Your Gifts, #ShapingVisionaries: The Berkeley Engineering Fund (BEF) is proud to provide funding for Blue and Gold Certified student orgs like HKN.
BEF helps the college thrive. Thank you, BEF donors, for funding innovative student programs, packages to attract new faculty, and the continual modernization of facilities. Together, we shape visionaries. You can double your impact with our Shaping Visionaries Challenge Match.
Pictured: A blue graphic with white text that reads, “One thing you want students to know about your club?” Then an HKN member speaks to the camera at Golden Bear Orientation.
lboogie1906 · 4 months
Gary Stephen May (born May 17, 1964) is the second African American chancellor of a campus of the University of California.
He was born in St. Louis, one of the two children of Warren May Jr., a postal clerk, and Gloria May, an elementary school teacher. Though raised a Methodist, he attended a Catholic elementary school and a Lutheran high school. Selected as a US Presidential Scholar, and having participated in a summer program directed by the McDonnell Douglas Corporation, he was persuaded to enter the Georgia Institute of Technology, where he majored in electrical engineering and graduated magna cum laude in 1988.
He enrolled in the doctoral program in Electrical Engineering and Computer Science at UC Berkeley, completing his Ph.D. with a dissertation titled “Automated Malfunction Diagnosis of Integrated Circuit Manufacturing Equipment.” He was involved with the National Society of Black Engineers, serving as its national chairperson (1987-89).
He returned to Georgia Tech as an EECS assistant professor. Promoted to full professor, he was named the Motorola Foundation Professor of Microelectronics, and he was made chairman of EECS when the department was ranked sixth in the nation in that field. He was then appointed dean of the College of Engineering at Georgia Tech, the first African American in that post. He devised a very effective summer program that brought hundreds of undergraduate students of color from across the nation to Georgia Tech to perform research and to encourage them to commit to graduate study, and he co-created and directed a National Science Foundation-funded program on his campus that significantly increased the school’s number of doctoral graduates of color in STEM fields. His best-known academic works are the books Fundamentals of Semiconductor Fabrication and Fundamentals of Semiconductor Manufacturing and Process Control.
He was chosen to become the seventh chancellor of UC Davis. He and his wife, LeShelle, a former systems/software developer, have two daughters. #africanhistory365 #africanexcellence
mit · 11 months
A new record for Math Prize for Girls wins
Twelfth grader Jessica Wan three-peats, as MIT hosts the 15th competition for female middle and high school math enthusiasts.
Sandi Miller | Department of Mathematics
Florida Virtual School senior Jessica Wan was the winner of the 15th Math Prize for Girls (MP4G) annual contest for female-identifying contestants, held Oct. 6-8 at MIT. 
She answered 17 of the 20 questions correctly, making Wan the most successful contestant in MP4G history; she also won the contest last year and in 2019, as an eighth grader. (MP4G paused for two years at the height of the Covid-19 pandemic.) Because Wan had won $82,000 in previous years, contest rules placing a $100,000 lifetime limit on winnings capped her earnings this year at $18,000.
The 262 U.S. and Canadian middle and high school contestants took a two-and-a-half-hour exam that featured 20 multistage problems in geometry, algebra, and trigonometry. Here's an example of one of the questions:
The frame of a painting has the form of a 105” by 105” square with a 95” by 95” square removed from its center. The frame is built out of congruent isosceles trapezoids with angles measuring 45 degrees and 135 degrees. Each trapezoid has one base on the frame’s outer edge and one base on the frame’s inner edge. Each outer edge of the frame contains an odd number of trapezoid bases that alternate long, short, long, short, etc. What is the maximum possible number of trapezoids in the frame?
Hosted by the MIT Department of Mathematics and sponsored by the Advantage Testing Foundation and global trading firm Jane Street, the MP4G weekend culminated in an awards ceremony held at the Marriott in Kendall Square, Cambridge, Massachusetts. MIT electrical engineering and computer science (EECS) Professor Regina Barzilay gave the Maryam Mirzakhani keynote lecture, “Uncovering Mysteries of Life Sciences with Machine Learning.” The event was emceed by MP4G alumna In Young Cho, a quantitative trader from Jane Street who placed third in 2010, and featured a performance by the MIT Logarhythms.
In second place was eighth grader Selena Ge of Jonas Clarke Middle School in Lexington, Massachusetts, with a score of 14 to earn $20,000. She also was awarded a Youth Prize of $2,000 as the highest-scoring contestant in ninth grade or below.
The next four winners were junior Hannah Fox of Proof School in California, who received $10,000 with a score of 12, followed by three contestants who scored 11 each and received $4,000 apiece: sophomores Shruti Arun of Cherry Creek High School in Colorado and Catherine Xu of Iowa City West High School in Iowa, and senior Miranda Wang of Kent Place School in New Jersey. The next 12 winners received $1,000 each.
The top 41 students are invited to take the 2023 Math Prize for Girls Olympiad at their schools. Canada/USA Mathcamp also provides $250 merit scholarships to the top 35 students who enroll in its summer program.
moremedtech · 1 year
Atlas of human brain blood vessels highlights changes in Alzheimer’s disease
MIT researchers characterize gene expression patterns for 22,500 brain vascular cells across 428 donors, revealing insights for Alzheimer’s onset and potential treatments.

Your brain is powered by 400 miles of blood vessels that provide nutrients, clear out waste products, and form a tight protective barrier — the blood-brain barrier — that controls which molecules can enter or exit. However, it has remained unclear how these brain vascular cells change between brain regions, or in Alzheimer’s disease, at single-cell resolution.

To address this challenge, a team of scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), The Picower Institute for Learning and Memory, and The Broad Institute of MIT and Harvard recently unveiled a systematic molecular atlas of human brain vasculature and its changes in Alzheimer’s disease (AD) across six brain regions, in a paper published June 1 in Nature Neuroscience.

Alzheimer’s disease is a leading cause of death, affects one in nine Americans over 65, and leads to debilitating and devastating cognitive decline. Impaired blood-brain barrier (BBB) function has long been associated with Alzheimer’s and other neurodegenerative diseases, such as Parkinson’s and multiple sclerosis. However, the molecular and cellular underpinnings of BBB dysregulation remain ill-defined, particularly at single-cell resolution across multiple brain regions and many donors.
Navigating vascular complexity
Embarking deep into the complexities of our gray matter, the researchers created a molecular atlas of human brain vasculature across 428 donors, including 220 diagnosed with Alzheimer’s and 208 controls. They characterized over 22,514 vascular cells from six different brain regions, measuring the expression of thousands of genes for each cell. The resulting datasets unveiled intriguing changes in gene expression across different brain regions and stark contrasts between individuals afflicted with AD and those without.

“Alzheimer’s therapy development faces a significant hurdle — brain alterations commence decades before cognitive signs make their debut, at which point it might already be too late to intervene effectively,” comments MIT CSAIL principal investigator and electrical engineering and computer science (EECS) Professor Manolis Kellis. “Our work charts the terrain of vascular changes, one of the earliest markers of Alzheimer’s, across multiple brain regions, providing a map to guide biological and therapeutic investigations earlier in disease progression.”

Kellis is the study’s co-senior author, along with MIT Professor Li-Huei Tsai, director of the Picower Institute and the Picower Professor in the Department of Brain and Cognitive Sciences.
The little cells that could
The threads of our human brain vasculature, and every part of our brain and body, are composed of millions of cells, all sharing the same DNA code, but each expressing a different subset of genes, which define its functional roles and distinct cell type.

Using the distinct gene expression signatures of different cerebrovascular cells, the researchers distinguished 11 types of vascular cells. These included endothelial cells that line the interior surface of blood vessels and control which substances pass through the BBB; pericytes that wrap around small vessels and provide structural support and blood flow control; smooth muscle cells that form the middle layer of large vessels and whose contraction and relaxation regulates blood flow and pressure; and fibroblasts that surround blood vessels and hold them in place. They also distinguished the arteriole, venule, and capillary segments responsible for the different stages of blood-oxygen exchange.

The abundance of these vascular cell types differed between brain regions, with neocortical regions showing more capillary endothelial cells and fewer fibroblasts than subcortical regions, highlighting the regional heterogeneity of the BBB.
Clues and suspects
Armed with these annotations, the next phase was studying how each of these cell types changes in AD, revealing 2,676 genes whose expression levels change significantly.

They found that capillary endothelial cells, responsible for transport, waste removal, and immune surveillance, showed the most changes in AD, including genes involved in clearance of amyloid beta, one of the pathological hallmarks of AD, providing insights on the potential mechanistic implications of vascular dysregulation on AD pathology. Other dysregulated processes included immune function, glucose homeostasis, and extracellular matrix organization, which were all shared among multiple vascular cell types. There were also cell-type-specific changes, including growth factor receptors in pericytes, transporter and energy-related genes in endothelial cells, and cellular response to amyloid beta in smooth muscle cells. Regulation of insulin sensing and glucose homeostasis in particular suggested important connections between lipid transport and Alzheimer’s regulated by the vasculature and blood-brain barrier cells, which could hold promise for new therapeutic clues.

“Single-cell RNA sequencing provides an extraordinary microscope to peer into the intricate machinery of life, and ‘see’ millions of RNA molecules bustling with activity within each cell,” says Kellis, who is also a member of the Broad Institute. “This level of detail was inconceivable just a few years ago, and the resulting insights can be transformative to comprehend and combat complex psychiatric and neurodegenerative disease.”
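At its simplest, the hunt for AD-altered genes described above is a per-gene two-sample test with multiple-testing correction. The toy sketch below runs that on synthetic data — it is not the study's actual pipeline, which models donors, regions, and covariates far more carefully:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
n_genes, n_ad, n_ctrl = 1000, 200, 200

# Synthetic log-expression matrices (genes x cells), with the first
# 50 genes truly upregulated in the AD group.
ad = rng.normal(0.0, 1.0, (n_genes, n_ad))
ctrl = rng.normal(0.0, 1.0, (n_genes, n_ctrl))
ad[:50] += 1.0

# Per-gene rank-sum test, then Bonferroni correction across genes.
pvals = np.array([
    mannwhitneyu(ad[g], ctrl[g], alternative="two-sided").pvalue
    for g in range(n_genes)
])
significant = np.flatnonzero(pvals < 0.05 / n_genes)
print(f"{significant.size} genes called differentially expressed")
```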
Maestros of dysregulation
Genes do not act on a whim, and they do not act alone. Cellular processes are governed by a complex cast of regulators, or transcription factors, that dictate which groups of genes should be turned on or off in different conditions, and in different cell types. These regulators are responsible for interpreting our genome, the ‘book of life,’ and turning it into the myriad of distinct cell types in our bodies and in our brains. These regulators might be responsible when something goes wrong, and they could also be critical in fixing things and restoring healthy cellular states.

With thousands of genes showing altered expression levels in Alzheimer’s disease, the researchers then sought to find the potential masterminds behind these changes. They asked if common regulatory control proteins target numerous altered genes, which may provide candidate therapeutic targets to restore the expression levels of large numbers of target genes. Indeed, they found several such ‘master controllers,’ involved in regulating endothelial differentiation, inflammatory response, and epigenetic state, providing potential intervention points for drug targets against AD.
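Identifying such master controllers is commonly framed as an enrichment question: among the dysregulated genes, are a transcription factor's known targets over-represented? A hedged sketch of that calculation, with hypothetical counts (only the 2,676 figure comes from the study):

```python
from scipy.stats import hypergeom

# Hypothetical counts: of N measured genes, n are dysregulated in AD;
# a transcription factor has K annotated targets, k of them dysregulated.
N, n, K, k = 20000, 2676, 300, 85

# P(observing >= k dysregulated targets by chance), hypergeometric null.
p_enrichment = hypergeom.sf(k - 1, N, K, n)
print(f"TF-target enrichment p-value: {p_enrichment:.2e}")
```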
Cellular murmurings
Cells do not function in isolation; rather, they rely on communication with each other to coordinate biological processes. This intercellular communication is particularly complex within the cellular diversity of the brain, given the many factors involved in sensing, memory formation, knowledge integration, and consciousness. In particular, vascular cells have intricate interactions with neurons, microglia, and other brain cells, which take on heightened significance during pathological events, such as in Alzheimer’s disease, where dysregulation of this cellular communication can contribute to the progression of the disease.

The researchers found that interactions from capillary endothelial cells to neurons, microglia, and astrocytes were highly increased in AD, while interactions in the reverse direction, from neurons and astrocytes to capillary endothelial cells, were decreased in AD. This asymmetry could provide important cues for potential interventions targeting the vasculature and specifically capillary endothelial cells, with ultimate broad positive impacts on the brain.

“The dynamics of vascular cell interactions in AD provide an entry point for brain interventions and potential new therapies,” says Na Sun, an EECS graduate student and MIT CSAIL affiliate and first author on the study. “As the blood-brain barrier prevents many drugs from influencing the brain, perhaps we could instead manipulate the blood-brain barrier itself, and let it spread beneficiary signals to the rest of the brain. Our work provides a blueprint for cerebrovasculature interventions in Alzheimer’s disease, by unraveling how cellular communication can mediate the impact of genetic variants in AD.”
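Analyses of this kind typically score a ligand-receptor pair by combining the ligand's expression in the "sender" cell type with the receptor's expression in the "receiver" type, then compare scores between AD and control. A bare-bones illustration — the directions mirror the finding above, but the numbers and product-scoring rule are illustrative, not the study's method:

```python
# Mean expression of a ligand in a "sender" cell type and of its receptor
# in a "receiver" type, in AD and control groups (hypothetical values).
pairs = {
    # name: ((ligand_AD, receptor_AD), (ligand_ctrl, receptor_ctrl))
    "capillary endothelial -> neuron": ((2.1, 1.8), (1.2, 1.7)),
    "neuron -> capillary endothelial": ((0.6, 0.9), (1.4, 1.3)),
}

for name, ((lig_ad, rec_ad), (lig_ct, rec_ct)) in pairs.items():
    score_ad, score_ct = lig_ad * rec_ad, lig_ct * rec_ct   # product score
    direction = "up" if score_ad > score_ct else "down"
    print(f"{name}: AD={score_ad:.2f} vs control={score_ct:.2f} ({direction} in AD)")
```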
Going off script: genetic plot twists
Disease onset in our bodies (and in our brains) is shaped by a combination of genetic predispositions and environmental exposures. On the genetic level, most complex traits are shaped by hundreds of minuscule sequence alterations, known as single-nucleotide polymorphisms (or SNPs, pronounced snips), most of which act through subtle changes in gene expression levels. No matter how subtle their effects might be, these genetic changes can reveal causal contributors to disease, which can greatly increase the chance of therapeutic success for genetically supported target genes, compared to targets lacking genetic support.

To understand how genetic differences associated with Alzheimer’s might act in the vasculature, the researchers then sought to connect genes that showed altered expression in Alzheimer’s with genetic regions associated with increased Alzheimer’s risk through genetic studies of thousands of individuals. They linked the genetic variants (SNPs) to candidate target genes using three lines of evidence: physical proximity in the three-dimensional folded genome, genetic variants that affect gene expression, and correlated activity between distant regulatory regions and target genes that go on and off together between different conditions.

This resulted in not just one hit, but 125 genetic regions where Alzheimer’s-associated genetic variants were linked to genes with disrupted expression patterns in Alzheimer’s disease, suggesting they might mediate these causal genetic effects, and thus may be good candidates for therapeutic targeting. Some of these predicted hits were direct, where the genetic variant acted directly on a nearby gene. Others were indirect, where the genetic variant instead affected the expression of a regulator, which then affected the expression of its target genes. And yet others were predicted to be indirect through cell-cell communication networks.
ApoE4 and cognitive decline
While most genetic effects are subtle, both in Alzheimer’s and nearly all complex disorders, exceptions do exist. One such exception is FTO in obesity, which increases obesity risk by one standard deviation. Another one is apolipoprotein E (ApoE) in Alzheimer’s disease, where the E4 versus E3 allele increases risk more than 10-fold for carriers of two risk alleles — those who inherited one ‘unlucky’ copy from each parent.

With such a strong effect size, the researchers then asked if ApoE4 carriers showed specific changes in vascular cells that were not found in ApoE3 carriers. Indeed, they found abundance changes associated with the ApoE4 genotype, with capillary endothelial cells and pericytes showing extensive down-regulation of transport genes. This has important implications for potential preventive treatments targeting transport in ApoE4 carriers, especially given the cholesterol transporter roles of ApoE, and the increasingly recognized role of lipid metabolism in Alzheimer’s disease.

“Unearthing these AD-differential genes gives us a glimpse into how they may be implicated in the deterioration or dysfunction of the brain’s protective barrier in Alzheimer’s patients, shedding light on the molecular and cellular roots of the disease’s development,” says Kellis. “They also open several avenues for therapeutic development, hinting at a future where these entry points might be harnessed for new Alzheimer’s treatments targeting the blood-brain barrier directly. The possibility of slowing or even halting the disease’s progression is truly exciting.”

Translating these findings into viable therapeutics will be a journey of exploration, demanding rigorous preclinical and clinical trials. To bring these potential therapies to patients, scientists need to understand how to target the discovered dysregulated genes safely and effectively and determine whether modifying their activity can ameliorate or reverse AD symptoms, which requires extensive collaborations between medical doctors and engineers across both academia and industry.

“This is a tour de force impressive case series,” says Elizabeth Head, vice chair for pathology research and pathology professor at the University of California at Irvine, who was not involved in the research. “A novel aspect of this study was also the methodological approach, which left the vasculature intact, as compared to previous work where blood vessel enrichment protocol was applied. Manolis Kellis and his colleagues show clear evidence of neurovascular unit dysregulation in AD and it is exciting to see known and novel pathways being identified that will accelerate discoveries at the protein level. Many DEGs associated with AD are linked to lipid/cholesterol metabolism, to AD genetic risk factors (including ApoE) and inflammation. The potential for the ApoE genotype in mediating cerebrovascular function will also lead to possible new mouse models that will capture the human phenotype more closely with respect to the vascular contributions to dementia in humans. The regional differences in DEGs are fascinating and will guide future neuropathology studies in the human brain and drive novel hypotheses.”

“The predominant focus in AD research over the past 10 years has been on studying microglia, the resident macrophage-like cells of the brain,” adds Ryan Corces, an assistant professor of neurology at the University of California at San Francisco who was also not involved in the work. “While microglia certainly play a key role in disease pathogenesis, it has become increasingly clear through studies such as this one that vascular cells may also be critically involved in the disease. From blood-brain barrier leakage to an enhanced need for debris clearance, the vascular cells of the brain play an important part in this complex disease. This study, and others like it, have begun picking apart the underlying molecular changes that occur in vascular cells, showing which genes appear dysregulated and how those changes may interact to alter vascular cell functions. Together with the mounting evidence of vascular involvement in AD, this work provides an important foundation for guiding therapeutic interventions against blood-brain barrier dysfunction in AD, especially during the preclinical or prodromal stages of the disease, where the blood-brain barrier may be playing a central role.”

Sun, Kellis, and Tsai wrote the paper alongside Leyla Anne Akay, Mitchell H. Murdock, Yongjin Park, Fabiola Galiana-Melendez, Adele Bubnys, Kyriaki Galani, Hansruedi Mathys, Xueqiao Jiang, and Ayesha P. Ng of MIT and David A. Bennett of the Rush Alzheimer’s Disease Center in Chicago.

This work was supported, in part, by National Institutes of Health grants, the Cure Alzheimer’s Foundation CIRCUITS consortium, the JPB Foundation, Robert A. and Renee Belfer, and a Takeda Fellowship from the Takeda Pharmaceutical Company.

Source: MIT
sunaleisocial · 23 days
A framework for solving parabolic partial differential equations
New Post has been published on https://sunalei.org/news/a-framework-for-solving-parabolic-partial-differential-equations/
Computer graphics and geometry processing research provide the tools needed to simulate physical phenomena like fire and flames, aiding the creation of visual effects in video games and movies as well as the fabrication of complex geometric shapes using tools like 3D printing.
Under the hood, mathematical problems called partial differential equations (PDEs) model these natural processes. Among the many PDEs used in physics and computer graphics, a class called second-order parabolic PDEs explain how phenomena can become smooth over time. The most famous example in this class is the heat equation, which predicts how heat diffuses along a surface or in a volume over time.
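For reference, the heat equation in standard notation, with u the temperature field and α the diffusivity; other second-order parabolic PDEs add further, possibly nonlinear, terms to this smoothing Laplacian:

```latex
\frac{\partial u}{\partial t} = \alpha \, \Delta u
```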
Researchers in geometry processing have designed numerous algorithms to solve these problems on curved surfaces, but their methods often apply only to linear problems or to a single PDE. A more general approach by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) tackles a general class of these potentially nonlinear problems. 
In a paper recently published in the Transactions on Graphics journal and presented at the SIGGRAPH conference, they describe an algorithm that solves different nonlinear parabolic PDEs on triangle meshes by splitting them into three simpler equations that can be solved with techniques graphics researchers already have in their software toolkit. This framework can help better analyze shapes and model complex dynamical processes.
“We provide a recipe: If you want to numerically solve a second-order parabolic PDE, you can follow a set of three steps,” says lead author Leticia Mattos Da Silva SM ’23, an MIT PhD student in electrical engineering and computer science (EECS) and CSAIL affiliate. “For each of the steps in this approach, you’re solving a simpler problem using simpler tools from geometry processing, but at the end, you get a solution to the more challenging second-order parabolic PDE.”
To accomplish this, Da Silva and her coauthors used Strang splitting, a technique that allows geometry processing researchers to break the PDE down into problems they know how to solve efficiently.
First, their algorithm advances a solution forward in time by solving the heat equation (also called the “diffusion equation”), which models how heat from a source spreads over a shape. Picture using a blow torch to warm up a metal plate — this equation describes how heat from that spot would diffuse over it. 
This step can be completed easily with linear algebra.
Now, imagine that the parabolic PDE has additional nonlinear behaviors that are not described by the spread of heat. This is where the second step of the algorithm comes in: it accounts for the nonlinear piece by solving a Hamilton-Jacobi (HJ) equation, a first-order nonlinear PDE. 
While generic HJ equations can be hard to solve, Mattos Da Silva and coauthors prove that their splitting method applied to many important PDEs yields an HJ equation that can be solved via convex optimization algorithms. Convex optimization is a standard tool for which researchers in geometry processing already have efficient and reliable software. In the final step, the algorithm advances a solution forward in time using the heat equation again to advance the more complex second-order parabolic PDE forward in time.
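The three-step structure can be seen in a deliberately simplified 1D sketch. The example below solves u_t = u_xx + |u_x|² — which is exactly the heat equation rewritten in the logarithmic domain via u = log v, an application discussed further down — on a periodic interval, using exact spectral diffusion for the heat substeps and a plain explicit update standing in for the paper's convex-optimization HJ solver. It is a schematic of the splitting pattern, not the authors' implementation:

```python
import numpy as np

# Solve u_t = u_xx + |u_x|^2 on a periodic 1D domain via Strang splitting:
# half a step of diffusion, a full step of the Hamilton-Jacobi part,
# then half a step of diffusion again.
n, L, dt, steps = 256, 2 * np.pi, 1e-3, 500
x = np.linspace(0.0, L, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi      # spectral wavenumbers
u = np.cos(x) + 0.5 * np.sin(3 * x)             # initial condition

def diffuse(u, t):
    """Exact spectral solve of u_t = u_xx for time t (the 'heat' substep)."""
    return np.fft.ifft(np.exp(-k**2 * t) * np.fft.fft(u)).real

def hj_step(u, t):
    """Explicit substep for u_t = |u_x|^2 (stand-in for the convex solver)."""
    ux = np.fft.ifft(1j * k * np.fft.fft(u)).real
    return u + t * ux**2

for _ in range(steps):
    u = diffuse(u, dt / 2)   # step 1: heat equation
    u = hj_step(u, dt)       # step 2: Hamilton-Jacobi part
    u = diffuse(u, dt / 2)   # step 3: heat equation again
print(f"solution range after t={steps * dt:.2f}: [{u.min():.3f}, {u.max():.3f}]")
```

On a triangle mesh, the spectral diffusion steps would be replaced by a linear solve against the mesh Laplacian, and the HJ substep by the convex program the paper describes.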

Among other applications, the framework could help simulate fire and flames more efficiently. “There’s a huge pipeline that creates a video with flames being simulated, but at the heart of it is a PDE solver,” says Mattos Da Silva. For these pipelines, an essential step is solving the G-equation, a nonlinear parabolic PDE that models the front propagation of the flame and can be solved using the researchers’ framework.
The team’s algorithm can also solve the diffusion equation in the logarithmic domain, where it becomes nonlinear. Senior author Justin Solomon, associate professor of EECS and leader of the CSAIL Geometric Data Processing Group, previously developed a state-of-the-art technique for optimal transport that requires taking the logarithm of the result of heat diffusion. Mattos Da Silva’s framework provided more reliable computations by doing diffusion directly in the logarithmic domain. This enabled a more stable way to, for example, find a geometric notion of average among distributions on surface meshes like a model of a koala.
Even though their framework focuses on general, nonlinear problems, it can also be used to solve linear PDE. For instance, the method solves the Fokker-Planck equation, where heat diffuses in a linear way, but there are additional terms that drift in the same direction heat is spreading. In a straightforward application, the approach modeled how swirls would evolve over the surface of a triangulated sphere. The result resembles purple-and-brown latte art.
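In one common convention (the paper's exact notation may differ), the Fokker-Planck equation for a density ρ that diffuses while drifting along a velocity field v reads:

```latex
\frac{\partial \rho}{\partial t} = \Delta \rho - \nabla \cdot (\rho\, \mathbf{v})
```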
The researchers note that this project is a starting point for tackling the nonlinearity in other PDEs that appear in graphics and geometry processing head-on. For example, they focused on static surfaces but would like to apply their work to moving ones, too. Moreover, their framework solves problems involving a single parabolic PDE, but the team would also like to tackle problems involving coupled parabolic PDE. These types of problems arise in biology and chemistry, where the equation describing the evolution of each agent in a mixture, for example, is linked to the others’ equations.
Mattos Da Silva and Solomon wrote the paper with Oded Stein, assistant professor at the University of Southern California’s Viterbi School of Engineering. Their work was supported, in part, by an MIT Schwarzman College of Computing Fellowship funded by Google, a MathWorks Fellowship, the Swiss National Science Foundation, the U.S. Army Research Office, the U.S. Air Force Office of Scientific Research, the U.S. National Science Foundation, MIT-IBM Watson AI Lab, the Toyota-CSAIL Joint Research Center, Adobe Systems, and Google Research.
scarubaru · 1 year
There was a post I saw a while ago about the gap between comp-sci/engineering students’ aspirations towards knowledge in their field vs those of their professors. Basically, students in other fields like physics or EECS are trying to master the foundations of their field, while comp-sci and comp-engineering students just want to churn out code without a deeper understanding of why things work the way they do. 
And yes completely agree. It’s like. If mechanical engineers didn’t know Newton’s laws, or an electrical engineer said shit like ‘Maxwell’s equations? Never heard of them.’ Not everyone will be a researcher of course, but cmon. Know your shit. I think the lack of that foundation in computer science students contributes to the shoddy state of modern software.
rjdavies · 2 years
Celebrating Amazing Women for the Month of March: Limor Fried
Celebrating Inspirational Women throughout the month of March.
Tumblr media
Limor “Ladyada” Fried is an American electrical engineer and computer scientist, as well as an entrepreneur and the founder of Adafruit.
Limor Fried is an engineer, open source hardware and software pioneer, and entrepreneur. She earned a bachelor’s degree in electrical engineering and computer science (EECS) and a master’s degree in EECS at MIT. 
She founded the educational electronics company Adafruit from her dorm room at MIT in 2005! She had the idea to create the best place online for learning electronics and making the best-designed products for makers of all ages and skill levels. 
Adafruit has grown to over 100+ employees in the heart of New York City and has expanded offerings to include tools, equipment, and electronics that Limor personally selects, tests, and approves before going into the Adafruit store. 
Fried was the first female engineer to appear on the cover of WIRED, and she was named a White House Champion of Change in 2016. In 2018, Forbes magazine named her one of “America’s Top 50 Women in Tech.”
R. J. Davies
A Riveting Jacked-In Dreamy Mind-Bender
RJ Davies - Science Fiction Author, Maddox Files, Novels
thebourisbox · 2 years
Breaking the scaling limits of analog computing
See on Scoop.it - Design, Science and Technology
A new technique greatly reduces the error in an optical neural network, which uses light to process data instead of electrical signals. With their technique, the larger an optical neural network becomes, the lower the error in its computations. This could enable them to scale these devices up so they would be large enough for commercial uses.
  As machine-learning models become larger and more complex, they require faster and more energy-efficient hardware to perform computations. Conventional digital computers are struggling to keep up.
  An analog optical neural network could perform the same tasks as a digital one, such as image classification or speech recognition, but because computations are performed using light instead of electrical signals, optical neural networks can run many times faster while consuming less energy.
  However, these analog devices are prone to hardware errors that can make computations less precise. Microscopic imperfections in hardware components are one cause of these errors. In an optical neural network that has many connected components, errors can quickly accumulate.
  Even with error-correction techniques, due to fundamental properties of the devices that make up an optical neural network, some amount of error is unavoidable. A network that is large enough to be implemented in the real world would be far too imprecise to be effective.
  MIT researchers have overcome this hurdle and found a way to effectively scale an optical neural network. By adding a tiny hardware component to the optical switches that form the network’s architecture, they can reduce even the uncorrectable errors that would otherwise accumulate in the device.
  Their work could enable a super-fast, energy-efficient, analog neural network that can function with the same accuracy as a digital one. With this technique, as an optical circuit becomes larger, the amount of error in its computations actually decreases.  
“This is remarkable, as it runs counter to the intuition of analog systems, where larger circuits are supposed to have higher errors, so that errors set a limit on scalability. This present paper allows us to address the scalability question of these systems with an unambiguous ‘yes,’” says lead author Ryan Hamerly, a visiting scientist in the MIT Research Laboratory for Electronics (RLE) and Quantum Photonics Laboratory and senior scientist at NTT Research.
  Hamerly’s co-authors are graduate student Saumil Bandyopadhyay and senior author Dirk Englund, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), leader of the Quantum Photonics Laboratory, and member of the RLE. The research is published today in Nature Communications.
Read the full article at: news.mit.edu
damiencordle · 2 years
I Found This Interesting. Joshua Damien Cordle
New computing architecture: Deep learning with light
A new method uses optics to accelerate machine-learning computations on smart speakers and other low-power connected devices
Ask a smart home device for the weather forecast, and it takes several seconds for the device to respond. One reason this latency occurs is because connected devices don't have enough memory or power to store and run the enormous machine-learning models needed for the device to understand what a user is asking of it. The model is stored in a data center that may be hundreds of miles away, where the answer is computed and sent to the device.
MIT researchers have created a new method for computing directly on these devices, which drastically reduces this latency. Their technique shifts the memory-intensive steps of running a machine-learning model to a central server where components of the model are encoded onto light waves.
The waves are transmitted to a connected device using fiber optics, which enables tons of data to be sent lightning-fast through a network. The receiver then employs a simple optical device that rapidly performs computations using the parts of a model carried by those light waves.
This technique leads to more than a hundredfold improvement in energy efficiency when compared to other methods. It could also improve security, since a user's data do not need to be transferred to a central location for computation.
This method could enable a self-driving car to make decisions in real-time while using just a tiny percentage of the energy currently required by power-hungry computers. It could also allow a user to have a latency-free conversation with their smart home device, be used for live video processing over cellular networks, or even enable high-speed image classification on a spacecraft millions of miles from Earth.
"Every time you want to run a neural network, you have to run the program, and how fast you can run the program depends on how fast you can pipe the program in from memory. Our pipe is massive -- it corresponds to sending a full feature-length movie over the internet every millisecond or so. That is how fast data comes into our system. And it can compute as fast as that," says senior author Dirk Englund, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and member of the MIT Research Laboratory of Electronics.
Joining Englund on the paper is lead author and EECS grad student Alexander Sludds; EECS grad student Saumil Bandyopadhyay, Research Scientist Ryan Hamerly, as well as others from MIT, the MIT Lincoln Laboratory, and Nokia Corporation. The research will be published in Science.
Lightening the load
Neural networks are machine-learning models that use layers of connected nodes, or neurons, to recognize patterns in datasets and perform tasks, like classifying images or recognizing speech. But these models can contain billions of weight parameters, which are numeric values that transform input data as they are processed. These weights must be stored in memory. At the same time, the data transformation process involves billions of algebraic computations, which require a great deal of power to perform.
The process of fetching data (the weights of the neural network, in this case) from memory and moving them to the parts of a computer that do the actual computation is one of the biggest limiting factors to speed and energy efficiency, says Sludds.
"So our thought was, why don't we take all that heavy lifting -- the process of fetching billions of weights from memory -- move it away from the edge device and put it someplace where we have abundant access to power and memory, which gives us the ability to fetch those weights quickly?" he says.
The neural network architecture they developed, Netcast, involves storing weights in a central server that is connected to a novel piece of hardware called a smart transceiver. This smart transceiver, a thumb-sized chip that can receive and transmit data, uses technology known as silicon photonics to fetch trillions of weights from memory each second.
It receives weights as electrical signals and imprints them onto light waves. Since the weight data are encoded as bits (1s and 0s), the transceiver converts them by switching lasers; a laser is turned on for a 1 and off for a 0. It combines these light waves and then periodically transfers them through a fiber optic network so a client device doesn't need to query the server to receive them.
"Optics is great because there are many ways to carry data within optics. For instance, you can put data on different colors of light, and that enables a much higher data throughput and greater bandwidth than with electronics," explains Bandyopadhyay.
Trillions per second
Once the light waves arrive at the client device, a simple optical component known as a broadband "Mach-Zehnder" modulator uses them to perform super-fast, analog computation. This involves encoding input data from the device, such as sensor information, onto the weights. Then it sends each individual wavelength to a receiver that detects the light and measures the result of the computation.
The researchers devised a way to use this modulator to do trillions of multiplications per second, which vastly increases the speed of computation on the device while using only a tiny amount of power.
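Mathematically, what the modulator and integrating receiver compute is an ordinary multiply-accumulate, carried out in time on an optical carrier: weights arrive one per time slot, the modulator imprints the matching input activation, and the detector integrates the products. The sketch below is a plain numerical caricature of that time-multiplexed scheme — no attempt to model real photonics, and all sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

# A layer's weights stream from the server one element per time slot.
n_inputs, n_outputs = 512, 10
W = rng.normal(size=(n_outputs, n_inputs))   # streamed from the central server
x = rng.normal(size=n_inputs)                # local sensor data on the client

def time_multiplexed_mac(weight_row: np.ndarray, activation: np.ndarray) -> float:
    """Analog-style multiply-accumulate for one output neuron."""
    total = 0.0
    for w_t, x_t in zip(weight_row, activation):  # one weight per time slot
        total += w_t * x_t                        # modulator product, integrated
    return total

y = np.array([time_multiplexed_mac(W[j], x) for j in range(n_outputs)])
print("matches a digital matrix-vector product:", np.allclose(y, W @ x))
```

The point of the optical version is that the per-slot multiplication and the running sum cost almost no client-side energy, which is where the orders-of-magnitude gains quoted below come from.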
"In order to make something faster, you need to make it more energy efficient. But there is a trade-off. We've built a system that can operate with about a milliwatt of power but still do trillions of multiplications per second. In terms of both speed and energy efficiency, that is a gain of orders of magnitude," Sludds says.
They tested this architecture by sending weights over an 86-kilometer fiber that connects their lab to MIT Lincoln Laboratory. Netcast enabled machine-learning with high accuracy -- 98.7 percent for image classification and 98.8 percent for digit recognition -- at rapid speeds.
"We had to do some calibration, but I was surprised by how little work we had to do to achieve such high accuracy out of the box. We were able to get commercially relevant accuracy," adds Hamerly.
Moving forward, the researchers want to iterate on the smart transceiver chip to achieve even better performance. They also want to miniaturize the receiver, which is currently the size of a shoe box, down to the size of a single chip so it could fit onto a smart device like a cell phone.
The research is funded, in part, by NTT Research, the National Science Foundation, the Air Force Office of Scientific Research, the Air Force Research Laboratory, and the Army Research Office.
Story Source:
Materials provided by Massachusetts Institute of Technology. Original written by Adam Zewe. Note: Content may be edited for style and length.
Journal Reference:
Alexander Sludds, Saumil Bandyopadhyay, Zaijun Chen, Zhizhen Zhong, Jared Cochrane, Liane Bernstein, Darius Bunandar, P. Ben Dixon, Scott A. Hamilton, Matthew Streshinsky, Ari Novack, Tom Baehr-Jones, Michael Hochberg, Manya Ghobadi, Ryan Hamerly, Dirk Englund. Delocalized photonic deep learning on the internet’s edge. Science, 2022; 378 (6617): 270 DOI: 10.1126/science.abq8271
jcmarchi · 2 months
New transistor’s superlative properties could have broad electronics applications
New Post has been published on https://thedigitalinsider.com/new-transistors-superlative-properties-could-have-broad-electronics-applications/
In 2021, a team led by MIT physicists reported creating a new ultrathin ferroelectric material, or one where positive and negative charges separate into different layers. At the time they noted the material’s potential for applications in computer memory and much more. Now the same core team and colleagues — including two from the lab next door — have built a transistor with that material and shown that its properties are so useful that it could change the world of electronics.
Although the team’s results are based on a single transistor in the lab, “in several aspects its properties already meet or exceed industry standards” for the ferroelectric transistors produced today, says Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics, who led the work with professor of physics Raymond Ashoori. Both are also affiliated with the Materials Research Laboratory.
“In my lab we primarily do fundamental physics. This is one of the first, and perhaps most dramatic, examples of how very basic science has led to something that could have a major impact on applications,” Jarillo-Herrero says.
Says Ashoori, “When I think of my whole career in physics, this is the work that I think 10 to 20 years from now could change the world.”
Among the new transistor’s superlative properties:
It can switch between positive and negative charges — essentially the ones and zeros of digital information — at very high speeds, on nanosecond time scales. (A nanosecond is a billionth of a second.)
It is extremely tough. After 100 billion switches it still worked with no signs of degradation.
The material behind the magic is only billionths of a meter thick, one of the thinnest of its kind in the world. That, in turn, could allow for much denser computer memory storage. It could also lead to much more energy-efficient transistors because the voltage required for switching scales with material thickness. (Ultrathin equals ultralow voltages.)
The work is reported in a recent issue of Science. The co-first authors of the paper are Kenji Yasuda, now an assistant professor at Cornell University, and Evan Zalys-Geller, now at Atom Computing. Additional authors are Xirui Wang, an MIT graduate student in physics; Daniel Bennett and Efthimios Kaxiras of Harvard University; Suraj S. Cheema, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science and an affiliate of the Research Laboratory of Electronics; and Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.
What they did
In a ferroelectric material, positive and negative charges spontaneously head to different sides, or poles. Upon the application of an external electric field, those charges switch sides, reversing the polarization. Switching the polarization can be used to encode digital information, and that information will be nonvolatile, or stable over time. It won’t change unless an electric field is applied. For a ferroelectric to have broad application to electronics, all of this needs to happen at room temperature.
The new ferroelectric material reported in Science in 2021 is based on atomically thin sheets of boron nitride that are stacked parallel to each other, a configuration that doesn’t exist in nature. In bulk boron nitride, adjacent layers are instead rotated by 180 degrees.
It turns out that when an electric field is applied to this parallel stacked configuration, one layer of the new boron nitride material slides over the other, slightly changing the positions of the boron and nitrogen atoms. For example, imagine that each of your hands is composed of only one layer of cells. The new phenomenon is akin to pressing your hands together then slightly shifting one above the other.
“So the miracle is that by sliding the two layers a few angstroms, you end up with radically different electronics,” says Ashoori. The diameter of an atom is about 1 angstrom.
Another miracle: “nothing wears out in the sliding,” Ashoori continues. That’s why the new transistor could be switched 100 billion times without degrading. Compare that to the memory in a flash drive made with conventional materials. “Each time you write and erase a flash memory, you get some degradation,” says Ashoori. “Over time, it wears out, which means that you have to use some very sophisticated methods for distributing where you’re reading and writing on the chip.” The new material could make those steps obsolete.
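The “sophisticated methods” Ashoori mentions are known as wear leveling. As a toy sketch of the idea (assumed details, not any particular flash controller’s algorithm), a controller can simply direct each write to the least-worn block:

    # Toy wear-leveling sketch: spread writes across physical blocks so that
    # no single block exhausts its limited erase budget before the others.
    NUM_BLOCKS = 8
    erase_counts = [0] * NUM_BLOCKS

    def pick_block():
        # Choose the least-worn block, evening out wear over time.
        return min(range(NUM_BLOCKS), key=lambda b: erase_counts[b])

    for _ in range(100):        # simulate 100 write/erase cycles
        erase_counts[pick_block()] += 1

    print(erase_counts)         # wear ends up nearly uniform across blocks

A material that simply doesn’t degrade would make this bookkeeping unnecessary.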
A collaborative effort
Yasuda, the co-first author of the current Science paper, applauds the collaborations involved in the work. Among them, “we [Jarillo-Herrero’s team] made the material and, together with Ray [Ashoori] and [co-first author] Evan [Zalys-Geller], we measured its characteristics in detail. That was very exciting.” Says Ashoori, “many of the techniques in my lab just naturally applied to work that was going on in the lab next door. It’s been a lot of fun.”
Ashoori notes that “there’s a lot of interesting physics behind this” that could be explored. For example, “if you think about the two layers sliding past each other, where does that sliding start?” In addition, says Yasuda, could the ferroelectricity be triggered with something other than electricity, like an optical pulse? And is there a fundamental limit to the number of switches the material can make?
Challenges remain. For example, the current way of producing the new ferroelectrics is difficult and not conducive to mass manufacturing. “We made a single transistor as a demonstration. If people could grow these materials on the wafer scale, we could create many, many more,” says Yasuda. He notes that different groups are already working to that end.
Concludes Ashoori, “There are a few problems. But if you solve them, this material fits in so many ways into potential future electronics. It’s very exciting.”
This work was supported by the U.S. Army Research Office, the MIT/Microsystems Technology Laboratories Samsung Semiconductor Research Fund, the U.S. National Science Foundation, the Gordon and Betty Moore Foundation, the Ramon Areces Foundation, the Basic Energy Sciences program of the U.S. Department of Energy, the Japan Society for the Promotion of Science, and the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan.
Link
"Machine learning system aims to determine if an information outlet is accurate or biased". Reblog with caption 🙃
berkeley-engineering · 10 months
Text
The grading system known as “mastery learning” gives students more room to learn from their mistakes. The idea is that students can achieve thorough proficiency in a subject if they are given enough time.
Armando Fox and Dan Garcia, professors of electrical engineering and computer sciences (EECS), are behind UC Berkeley’s pilot run in the non-majors CS 10 class, an endeavor they’ve dubbed “A’s for All (as Time and Interest Allow).”
In a paper, the Berkeley team found that “one of the best predictors of student performance in our rigorous introductory CS courses is prior CS exposure.” Implementing mastery learning in an introductory course therefore has the potential to improve diversity in the program.
Read our full story.
sounds-of-some-day · 2 years
Text
Someone on Quora asked what courses Tony would have taken at MIT, and someone else on Quora gave an extremely in-depth answer. So here is that answer for those of you looking to write MIT-Tony based fic, specifically fic taking place during the ’80s (as classes on these topics have obviously changed somewhat over the decades). Short answer first, followed by the longer, more in-depth answer under the cut:
Summary of his coursework:
Definites:
8.01 Physics I: Classical Physics
8.02 Physics II: E&M
18.01 Calculus I: Single Variable
18.02 Calculus II: Multi-variable
18.03 Differential Equations
18.06 Linear Algebra
3.091 (more likely) Introduction to Solid State Chemistry or 5.11 Principles of Chemical Science or 5.40 General Chemistry or 5.41 Introduction to Structure, Bonding and Mechanism [the modern equivalent is 5.111]
Economics? concentration or minor (or STS or TCD)
6.001 Structure and Interpretation of Computer Programs
6.002 Circuits and Electronics
6.003 Signals and Systems
6.004 Computational Structures
EE lab (probably 6.115)
6.012 Microelectronic Devices
6.013 EM and Applications
6.014 Electrodynamics
Some Intro to Quantum
6.041 or 18.313 or 18.440 Probability
18.04 Complex Variables with Applications
8.321, 8.322 (graduate-level) Quantum Theory I, II (A)
8.511 (graduate-level) Solid State Physics/Theory of Solids I
8.613J (graduate-level) Intro to Plasma Physics I (A)
8.333 (graduate-level) Statistical Mechanics I (A)
2.70 Introduction to Design [nowadays known as 2.007]
Probables:
2.01 Mechanics of Solids
2.02 Introduction to Systems Dynamics
2.03J Dynamics
2.06J Mechanical Vibration
2.20 Fluid Mechanics
2.31/2.32 Mechanical Behavior of Materials I+II
2.40 Thermodynamics
2.51 Heat and Mass Transfer
~.~.~
From Marvel’s link (Iron Man (Anthony Stark)): Tony Stark would have been a Course 6–1 (Electrical Engineering). He was a graduate in the late 1980s [Class of 1987?] according to the movie (or the 1940s or ’50s according to the comic books). Let’s stick with the movie version, as I wouldn’t know the EE program of the 1940s/’50s (I can see the course catalog or catalogue, I just don’t know what topics would have been taught).
Key reference: Browsing MIT Course Catalogues by Issue Date (the 1983–1987 course catalogs), from the MIT Institute Archives, MIT Libraries
In the 1980s, Tony Stark would have done his 17 GIRs, or more likely passed Advanced Standing Exams for a number of them (back then: 18.01, 18.02, 8.01, 8.02, and 3.091/5.40/5.41/5.11, i.e., the old 5.111, or 5.60 or 7.01). He probably did his humanities concentration or minor in Economics, as there was no Management option (I can’t think of any other concentration that would fit his personality and interests better, although he may have done Science, Technology, and Society or Technology, Culture, and Development as his humanities concentration). Tony Stark would have also done 18.03 (Differential Equations) and 18.06 (Linear Algebra), both requirements for the 6–1 program, as well as 6.041 Probabilistic Systems Analysis and Applied Probability. I have no idea what Tony Stark would have taken as his other humanities courses (back then, one had to take three humanities distribution subjects).
Stark would have taken the 6.001–6.004 series (6.001 Structure and Interpretation of Computer Programs is no longer offered in the current curriculum and has been replaced by 6.01). 6.002 = Circuits and Electronics; 6.003 = Signals and Systems; 6.004 = Computational Structures.
Stark would have had to take at least one EECS lab (6.111 Intro to Digital Systems Laboratory or 6.101 Intro Electronics Laboratory or 6.115 Microcomputer Project Laboratory or 6.114 or 6.141 or 6.142 or 6.151, etc.). I think Stark would have also done the rest of the EE program requirements at the time (6.012 Microelectronic Devices and Circuits, 6.013 Electromagnetics and Applications, 6.014 Electrodynamics, and 6.017 or 8.211 Introduction to Quantum Physics).
Given that Tony Stark was highly mechanically oriented and very competitive, I’d imagine that he probably took a lot of the Mechanical Engineering core that didn’t have a Course 6 equivalent (2.01 Mechanics of Solids, 2.02 Introduction to Systems Dynamics, 2.03J Dynamics, 2.06J Mechanical Vibration, 2.20 Fluid Mechanics, 2.31/2.32 Mechanical Behavior of Materials I+II, 2.40 Thermodynamics, 2.51 Heat and Mass Transfer, 2.70 Introduction to Design [nowadays known as 2.007]). In fact, the Tony Stark (film) page on the Iron Man Wikia indicates that Tony Stark won a robotics competition at MIT; the main one would have been 2.70 (which is now 2.007).
According to Sanderson, Peter (w), Candelario, Harry (p), Cheung, Jim (i), Iron Man: The Legend 1 (September 1996), his two Master’s (SM) degrees from graduate work were in Physics and Electrical Engineering.
For his graduate work in EE, there are way too many classes that would have interested him (i.e., EECS AAGS | MIT EECS [AAGS = Approved Advanced Graduate Subjects]). Like at least half of those classes …
On the Physics side, Tony Stark would have been most interested in the Solid State, Laser, Plasma, and Atomic Physics research area. He would have likely taken graduate Quantum Mechanics (8.321, 8.322), Solid State Physics (8.511), Plasma Physics (8.613J), Statistical Mechanics (8.333) — I believe these would have been the first-year SM requirements for Physics.
He would have written three theses: one undergraduate for EE, then one for each of his Master’s degrees (Physics, Electrical Engineering). Anne Hunter (the undergraduate course administrator for Course 6) has mentioned to me that the first person to graduate with the M.Eng. was in 1994, so this would have been after the time that Tony Stark would have graduated from MIT.
Additionally, Stark would have passed his swim test and boat test, and probably taken martial arts Physical Education classes for the PE requirement (e.g., aikido, shotokan karate). He probably took sailing as well (didn’t he have a yacht in one of the comic books?).
As for grades, MIT does not award a “valedictorian” title or the Latin honors of summa/magna cum laude, but given that Marvel says Tony Stark graduated “summa cum laude,” I would assume he had a 5.0 GPA (straight A’s).
Source
mit · 1 year
Text
AI model speeds up high-resolution computer vision
The system could improve image quality in video streaming or help autonomous vehicles identify road hazards in real-time.
Adam Zewe | MIT News
Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere have developed a more efficient computer vision model that vastly reduces the computational complexity of semantic segmentation, the task of categorizing every pixel in an image. Their model can perform this segmentation accurately in real-time on a device with limited hardware resources, such as the on-board computers that enable an autonomous vehicle to make split-second decisions.
Recent state-of-the-art semantic segmentation models directly learn the interaction between each pair of pixels in an image, so their calculations grow quadratically as image resolution increases. Because of this, while these models are accurate, they are too slow to process high-resolution images in real time on an edge device like a sensor or mobile phone.
The MIT researchers designed a new building block for semantic segmentation models that achieves the same abilities as these state-of-the-art models, but with only linear computational complexity and hardware-efficient operations.
The result is a new model series for high-resolution computer vision that performs up to nine times faster than prior models when deployed on a mobile device. Importantly, the new models matched or exceeded the accuracy of these alternatives.
Not only could this technique be used to help autonomous vehicles make decisions in real-time, it could also improve the efficiency of other high-resolution computer vision tasks, such as medical image segmentation.
“While researchers have been using traditional vision transformers for quite a long time, and they give amazing results, we want people to also pay attention to the efficiency aspect of these models. Our work shows that it is possible to drastically reduce the computation so this real-time image segmentation can happen locally on a device,” says Song Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT-IBM Watson AI Lab, and senior author of the paper describing the new model.
He is joined on the paper by lead author Han Cai, an EECS graduate student; Junyan Li, an undergraduate at Zhejiang University; Muyan Hu, an undergraduate student at Tsinghua University; and Chuang Gan, a principal research staff member at the MIT-IBM Watson AI Lab. The research will be presented at the International Conference on Computer Vision.
A simplified solution
Categorizing every pixel in a high-resolution image that may have millions of pixels is a difficult task for a machine-learning model. A powerful new type of model, known as a vision transformer, has recently been applied to this task effectively.
Transformers were originally developed for natural language processing. In that context, they encode each word in a sentence as a token and then generate an attention map, which captures each token’s relationships with all other tokens. This attention map helps the model understand context when it makes predictions.
Using the same concept, a vision transformer chops an image into patches of pixels and encodes each small patch into a token before generating an attention map. In generating this attention map, the model uses a similarity function that directly learns the interaction between each pair of pixels. In this way, the model develops what is known as a global receptive field, which means it can access all the relevant parts of the image.
Since a high-resolution image may contain millions of pixels, chunked into thousands of patches, the attention map quickly becomes enormous. Because of this, the amount of computation grows quadratically as the resolution of the image increases.
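As a minimal sketch of why the cost is quadratic (generic softmax attention over patch tokens, not EfficientViT’s own code), note that the attention map is an N x N matrix over the N tokens:

    import torch

    N, d = 4096, 64                 # number of patch tokens, embedding dimension
    Q, K, V = (torch.randn(N, d) for _ in range(3))

    # Standard softmax attention materializes an N x N map: O(N^2) time and memory.
    attn = torch.softmax(Q @ K.T / d ** 0.5, dim=-1)
    out = attn @ V                  # doubling the image side ~4x the tokens, ~16x this cost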
In their new model series, called EfficientViT, the MIT researchers used a simpler mechanism to build the attention map — replacing the nonlinear similarity function with a linear similarity function. As such, they can rearrange the order of operations to reduce total calculations without changing functionality and losing the global receptive field. With their model, the amount of computation needed for a prediction grows linearly as the image resolution grows.
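The published EfficientViT modules have their own exact form; as a hedged sketch of the reordering idea, using a ReLU feature map (one common choice for linear attention), the N x N product disappears:

    import torch

    N, d = 4096, 64
    Q, K, V = (torch.randn(N, d) for _ in range(3))

    phi_q, phi_k = torch.relu(Q), torch.relu(K)   # linear similarity via a ReLU feature map
    kv = phi_k.T @ V                              # d x d summary of keys and values: O(N * d^2)
    z = phi_q @ phi_k.sum(dim=0, keepdim=True).T  # per-token normalizer (N x 1)
    out = (phi_q @ kv) / (z + 1e-6)               # total cost now grows linearly in N

The regrouping from (QK^T)V to Q(K^T V) is exact for any linear similarity; it is only softmax’s nonlinearity that forces the full N x N map to be materialized.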
“But there is no free lunch. The linear attention only captures global context about the image, losing local information, which makes the accuracy worse,” Han says.
To compensate for that accuracy loss, the researchers included two extra components in their model, each of which adds only a small amount of computation.
One of those elements helps the model capture local feature interactions, mitigating the linear function’s weakness in local information extraction. The second, a module that enables multiscale learning, helps the model recognize both large and small objects.
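The paper’s exact modules differ, but as a rough, assumed illustration of the two ideas: a small depthwise convolution is a standard, cheap way to restore local pixel interactions, and pooled parallel branches are a standard way to add multiscale context.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    x = torch.randn(1, 64, 128, 128)   # a feature map: batch, channels, height, width

    # Depthwise 3x3 conv: mixes each channel locally, compensating for the
    # linear attention's weak local-information extraction.
    local_mix = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64)

    y_local = local_mix(x)
    coarse = F.avg_pool2d(x, kernel_size=2)              # a second, coarser scale
    y_multi = F.interpolate(coarse, scale_factor=2.0)    # back to full resolution
    y = y_local + y_multi                                # combine local and multiscale cues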
“The most critical part here is that we need to carefully balance the performance and the efficiency,” Cai says.
They designed EfficientViT with a hardware-friendly architecture, so it could be easier to run on different types of devices, such as virtual reality headsets or the edge computers on autonomous vehicles. Their model could also be applied to other computer vision tasks, like image classification.
Keep reading.
Make sure to follow us on Tumblr!