|
Post by swamprat on Dec 31, 2019 16:45:03 GMT
Physics in the 2020s: what will happen over the decade ahead 31 Dec 2019 | Matin Durrani
Matin Durrani looks back at the successes in physics over the last 10 years and gazes into his crystal ball at what the future holds.
As the 2020s get underway, what can we expect to happen over the next 10 years? To get you in the mood, let’s first look back over the decade that’s now fading into the past. We kicked off the 2010s with Barack Obama one year into his science-loving presidency, CERN physicists poring over the first 7 TeV collisions from the Large Hadron Collider (LHC) and researchers wondering if the CDMS-II experiment in the US had really obtained the first direct evidence for dark matter (answer: probably not).
Many of the successes in physics over the last decade were honoured by Physics World through our annual Breakthrough of the Year. The award was a straightforward choice in some years, especially when there’d been a big breakthrough in particle physics or astronomy. In 2012 the prize went to CERN’s discovery of the Higgs boson, in 2013 to the IceCube detector in Antarctica spotting cosmic neutrinos, and in 2016 to LIGO’s momentous discovery of gravitational waves.
Quantum physics was a burgeoning area in the 2010s too, reflecting physicists’ growing ability to experimentally probe the deepest mysteries of the subject. In 2011 the award went to work on “weak measurement”, which broke the taboo that it’s impossible to gain knowledge of the paths taken by individual photons travelling through two slits to create an interference pattern. Four years later, the prize was awarded for “double” quantum teleportation, in which physicists simultaneously transferred both a photon’s spin and its orbital angular momentum to a distant photon.
One significant cultural change in physics during the 2010s was the growing realization that physicists need to do much more to root out inequalities in the field and make it more diverse. In fact, the decade saw some high-profile dismissals and resignations in the physics community – on the grounds of harassment of women and other groups – that in the past would have been unheard of and most likely swept under the carpet. Many of those changes came to light thanks to the openness wrought by the digital age.
One great hope for the decade, however, went unfulfilled: the discovery of “new” physics beyond the Standard Model. Despite 10 years of the LHC, there are still no signs of supersymmetric particles, forcing particle theorists to make progress with their mathematical wits alone. But with the LHC about to embark on an ambitious upgrade programme under the stewardship of Fabiola Gianotti, who just became the first person to be awarded a full second term as CERN boss, particle physicists will surely hope they can achieve in the next decade the dreams they held at the start of this.
The future is bright
I’m not sure for how long my own optimism will last, and for sure there will be plenty of downs as well as ups over the next 10 years. But as cognitive psychologist Steven Pinker argued in his 2018 book Enlightenment Now, the world is, overall, improving. Whether measured in terms of health, literacy, safety or prosperity, things are only getting better – and those advances are due, in no small part, to science.
Physicists often don’t get the credit, but their discoveries have transformed everyday life, not least in how we communicate. Powered by developments in semiconductor physics and optics, I can see smartphones continuing to be ever lighter, faster and more powerful over the next decade (LiFi phones anyone?). Quantum computing and communication will become mainstream, with quantum computers routinely accessed via the cloud. Physics experiments will generate ever more data, and analysing that information using artificial intelligence and machine learning will become “the new normal”.
I can see environmental concerns having a bigger influence on physics. Scientific lab equipment will become cleaner and greener. Particle-accelerator labs, which used to boast about how much electricity they consumed to perform collision experiments, will either have to be more coy or, better still, put energy-efficient technologies centre stage. The rising impact of climate change will see air travel increasingly frowned upon, with jet-setting physicists having to fight harder to justify those conference flights. (And before you ask, yes we are looking at changing the plastic wrappers used to post print issues of Physics World.)
Medical physics will continue to boom, from improved radiotherapy treatments to new imaging techniques. In astronomy, NASA’s James Webb Space Telescope should finally launch (though I wouldn’t bet against there being yet another delay beyond 2021). The ITER fusion experiment in France should create its “first plasma” by mid-decade – roughly at the same time as the high-luminosity upgrade to CERN’s LHC comes online. China’s underground gravitational-wave detector could be ready then too, joining a similar facility in Japan searching for these ripples in space–time, which should open this year.
Diversity matters
But perhaps the biggest change in physics over the next 10 years will be in terms of diversity and equality. Efforts under way in recent times to make physics open for all will finally pay off and, though I’m not convinced that the overall numbers studying physics will increase by much (or even at all), I do predict that those at the top of the field will, by 2030, be far more varied in background than now.
One thing is for sure: Physics World will be around to give you the liveliest and most thought-provoking coverage of the world of physics over the coming decade. So stay tuned for the ride ahead.
Matin Durrani is managing editor of Physics World magazine.
physicsworld.com/a/physics-in-the-2020s-what-will-happen-over-the-decade-ahead/
|
|
|
Post by swamprat on Feb 11, 2020 16:56:37 GMT
If you understand this, you are smarter than I am.....
Classical time crystals could exist in nature, say physicists 11 Feb 2020
Time after time: simulations suggest that classical discrete time crystals could exist in nature.
When time crystals were first proposed in 2012 by physicist Frank Wilczek they seemed like an exotic consequence of quantum mechanics in systems of many interacting particles. Wilczek argued that such systems broke symmetry in time, changing so as to return periodically to the same state just as ordinary crystals exhibit periodicity in space.
Subsequent experimental work has found that quantum time crystals can exist in systems maintained out of equilibrium by some driving force. Now Norman Yao of the University of California at Berkeley and colleagues suggest that time crystals can arise without the need for quantum physics at all. They argue that purely classical systems of oscillators such as coupled pendulums could have the same time-crystal order as their quantum counterparts. What is more, time crystals could be made experimentally, and might even exist in nature.
A non-equilibrium (or discrete) time crystal responds to a periodic driving force by showing some kind of oscillation in time with a period different to – generally some whole multiple of – the driver. In the 1830s Michael Faraday showed in theory that a class of periodically driven oscillators now known as parametric resonators can undergo “period-doubling”, meaning that they oscillate at half the driving frequency. This is precisely the kind of so-called subharmonic response that characterizes time crystals.
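The subharmonic response described above can be seen in a minimal numerical sketch (not from the paper – the equation and all parameters here are illustrative): a damped oscillator whose stiffness is modulated at twice its natural frequency responds at half the drive frequency, i.e. it "period-doubles".

```python
import numpy as np

# Parametrically driven damped oscillator (a classical "Faraday" resonator):
#   x'' + gamma*x' + w0^2 * (1 + eps*cos(w_drive*t)) * x = 0
# Driving the stiffness at w_drive = 2*w0 excites a response locked to w0,
# half the drive frequency. Parameters are illustrative, not from the paper.
w0, gamma, eps = 1.0, 0.02, 0.4
w_drive = 2.0 * w0

def deriv(t, y):
    x, v = y
    return np.array([v, -gamma * v - w0**2 * (1 + eps * np.cos(w_drive * t)) * x])

# Fixed-step RK4 integration from a tiny initial displacement
dt, steps = 0.01, 40000
t, y = 0.0, np.array([1e-3, 0.0])
xs = []
for _ in range(steps):
    k1 = deriv(t, y)
    k2 = deriv(t + dt / 2, y + dt / 2 * k1)
    k3 = deriv(t + dt / 2, y + dt / 2 * k2)
    k4 = deriv(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
    xs.append(y[0])

# Locate the dominant response frequency with an FFT
xs = np.array(xs)
freqs = np.fft.rfftfreq(steps, dt) * 2 * np.pi   # angular frequencies
peak = freqs[np.argmax(np.abs(np.fft.rfft(xs * np.hanning(steps))))]
ratio = peak / w_drive
print(f"response/drive frequency ratio = {ratio:.2f}")   # ≈ 0.50: period-doubled
```

The oscillation grows exponentially here because the model is linear and undamped relative to the parametric gain; in a real Faraday resonator nonlinearity and dissipation saturate the amplitude.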
Sparking discussions
This long history has sparked discussions of whether purely classical systems might show time-crystal behavior. Last year, a team from the Swiss Federal Institute of Technology (ETH) in Zurich showed experimentally that two coupled oscillating strings displayed period-doubling. They pointed out that there is a close analogy between this behaviour and that seen in quantum many-body time crystals.
But a true time crystal needs something more, says Yao’s team. Discrete time crystals (DTCs) are “open” systems that are kept out of equilibrium by some energy input from the environment. In general, this input causes the system to slowly heat up. Over time its temperature would rise without limit, eventually “melting” the time crystal so the periodic order disappears.
In quantum DTCs this heating is prevented by “many-body localization” (MBL), whereby disorder in the arrangement of component parts inhibits energy exchange between their energy levels, preventing the spread and equilibration of heat.
Heat bath
There is no known classical analogue of MBL and so it was not clear if classical DTCs would be stable against heating. One way to avoid heating classically is by dissipation: coupling the system to a heat bath, for example via friction for a mechanically oscillating system.
This is what happens in Faraday’s parametric resonators and the ETH vibrating strings. But Yao and colleagues point out that this adds noise to the system, and a crucial question is whether the time-crystal oscillations can withstand it. This is because true time crystals must also be stable against perturbations and noise in the driving – just as a space crystal is resilient to fluctuations.
The researchers have now identified a simple classical system that could have DTC behaviour in the presence of noise. It is a series of pendulums or oscillators, arranged in a row and connected to one another as if by springs.
Slightly nonlinear
The oscillators must be slightly nonlinear, which means that they do not undergo perfectly harmonic oscillation. Meanwhile, the dissipative coupling to the environment could be achieved by viscous friction.
“We argue that classical DTCs could exist in principle even if there’s noise”, says Yao’s Berkeley colleague Michael Zaletel. “That hadn’t been shown before.”
The team show that this classical DTC will crystallize from a time-symmetric (non-periodic) state as the noise is reduced and the strength of the coupling between pendulums is increased, in an abrupt phase transition. It is closely analogous to the way a space crystal freezes from a liquid as it is cooled (lowering the noise) and/or the intermolecular forces get stronger. In computer simulations, the researchers see their period-doubled time crystal “nucleating” out of the time-symmetric state like a crystal growing from a seed.
This behaviour does not persist indefinitely, however. At any finite temperature, it will decay very slowly and eventually “melt”; some quantity (noise or temperature) controls the lifetime of the time crystal. “We don’t have a true classical DTC,” Yao says. “Ours dies at very long times, we think.” But the system looks like one unless you watch it for long enough. The researchers call such a system an “activated time crystal”.
The surface waves studied by Faraday and the mechanical model studied experimentally by the ETH team, they say, were probably like this – but because the influence of thermal noise is so tiny for macroscopic oscillators, the DTC would have decayed only extremely slowly, making it last much longer than the experimental timescale.
Not a new phase of matter
Because this new time crystal is not infinitely long-lived, says Vedika Khemani at Stanford University in California, it cannot be considered a new phase of matter. “As far as we know, MBL quantum systems are still the only examples of many-body time-crystals”, she says.
Yao and colleagues argue, however, that an indefinitely persistent classical DTC might be possible if the oscillators or pendulums are coupled together in a more complicated way. That suggestion stems from their analysis of a different kind of system called a cellular automaton, made from many identical components whose states depend on those of their neighbours. Yao and Zaletel admit they have no rigorous proof of this yet – and that making a mechanical system governed by such rules could be “insanely complicated.”
Yao and colleagues believe the system they have simulated might occur in real systems such as coupled oscillating Josephson junctions, or quasi-classical excitations of electrons called charge-density waves. “Experiments on charge-density waves were done in the 1980s that showed what looks to the eye like period doubling”, says Zaletel. “It would be very interesting to go back to these experiments and check.”
The researchers speculate that systems like theirs in which time-crystal oscillations exist for long if not infinite times might even be found in living systems such as colonies of interacting cells. Such periodicity at a subharmonic frequency determined by the internal dynamics of the system might be useful in biology, they say – and their relatively simple prescription for an activated DTC could be the preferred one. “It’s definitely useful to get oscillations in biology”, says Zaletel, “and it’s usually enough to have them for finite but long times.”
The research is described in Nature Physics.
physicsworld.com/a/classical-time-crystals-could-exist-in-nature-say-physicists/
|
|
|
Post by swamprat on Feb 14, 2020 1:06:35 GMT
Let's take a look back in time: National MagLab Electricity and Magnetism Timeline
Our timeline guides you through the highlights of electricity and magnetism across the globe and across the centuries.
600 BC - 1599
Humans discover the magnetic lodestone as well as the attracting properties of amber. Advanced societies, in particular the Chinese and the Europeans, exploit the properties of magnets in compasses, a tool that makes possible exploration of the seas, “new worlds” and the nature of Earth’s magnetic poles.
1600 - 1699
The Scientific Revolution takes hold, facilitating the groundbreaking work of luminaries such as William Gilbert, who took the first truly scientific approach to the study of magnetism and electricity and wrote extensively of his findings.
1700 - 1749
Aided by tools such as static electricity machines and Leyden jars, scientists continue their experiments into the fundamentals of magnetism and electricity.
1750 - 1774
With his famous kite experiment and other forays into science, Benjamin Franklin advances knowledge of electricity, inspiring his English friend Joseph Priestley to do the same.
1775 - 1799
Scientists take important steps toward a fuller understanding of electricity, as well as some fruitful missteps, including an elaborate but incorrect theory on animal magnetism that sets the stage for a groundbreaking invention.
1800 - 1819
Alessandro Volta invents the first primitive battery, discovering that electricity can be generated through chemical processes; scientists quickly seize on the new tool to invent electric lighting. Meanwhile, a profound insight into the relationship between electricity and magnetism goes largely unnoticed.
1820 - 1829
Hans Christian Ørsted’s accidental discovery that an electrical current moves a compass needle rocks the scientific world; a spate of experiments follows, immediately leading to the first electromagnet and electric motor.
1830 - 1839
The first telegraphs are constructed and Michael Faraday produces much of his brilliant and enduring research into electricity and magnetism, inventing the first primitive transformer and generator.
1840 - 1849
The legendary Faraday forges on with his prolific research and the telegraph reaches a milestone when a message is sent between Washington, DC, and Baltimore, MD.
1850-1869
The Industrial Revolution is in full force, Gramme invents his dynamo and James Clerk Maxwell formulates his series of equations on electrodynamics.
1870-1879
The telephone and first practical incandescent light bulb are invented while the word "electron" enters the scientific lexicon.
1880-1889
Nikola Tesla and Thomas Edison duke it out over the best way to transmit electricity and Heinrich Hertz is the first person (unbeknownst to him) to broadcast and receive radio waves.
1890-1899
Scientists discover and probe x-rays and radioactivity, while inventors compete to build the first radio.
1900-1909
Albert Einstein publishes his special theory of relativity and his theory on the quantum nature of light, which he identified as both a particle and a wave. With ever new appliances, electricity begins to transform everyday life.
1910-1929
Scientists' understanding of the structure of the atom and of its component particles grows, the phone and radio become common, and the modern television is born.
1930-1939
New tools such as special microscopes and the cyclotron take research to higher levels, while average citizens enjoy novel amenities such as the FM radio.
1940-1959
Defense-related research leads to the computer, the world enters the atomic age and TV conquers America.
1960-1979
Computers evolve into PCs, researchers discover one new subatomic particle after another and the space age gives our psyches and science a new context.
1980-2003
Scientists explore new energy sources, the World Wide Web spins a vast network and nanotechnology is born.
If you go to this URL, you can click on each time period and read more:
nationalmaglab.org/education/magnet-academy/history-of-electricity-magnetism/timeline?fbclid=IwAR2Qwa93SLEfVxocC3HGt3IGI5guScpS8dNIY-mzecZ8klIVe2YE9Ux4D2Q
Now let's think about what may happen in the NEXT 2000 years!
|
|
|
Post by swamprat on Feb 14, 2020 20:39:56 GMT
US Navy and Boeing use manned jet to control drone Growlers By: David B. Larter | February 4, 2020
A U.S. Navy EA-18G Growler from Naval Air Station Whidbey Island takes off from Joint Base Elmendorf-Richardson, Alaska. Boeing announced a breakthrough test with unmanned Growlers on Feb. 4, 2020. (Staff Sgt. James Richardson/U.S. Air Force)
WASHINGTON — The U.S. Navy and Boeing demonstrated the ability to control unmanned aircraft with a manned jet, a capability that is critical for concepts intended to keep naval aviation relevant into the 21st century.
The Navy’s test wing out of Naval Air Station Patuxent River, Maryland, flew two unmanned EA-18G Growlers, with a third manned fighter acting as mission control for the drones, according to a Feb. 4 news release from Boeing.
The test “proved the effectiveness of technology allowing F/A-18 Super Hornets and EA-18G Growlers to perform combat missions with unmanned systems,” the release said.
“This demonstration allows Boeing and the Navy the opportunity to analyze the data collected and decide where to make investments in future technologies,” Tom Brandt, Boeing’s manned-unmanned teaming demonstration lead, said in the release. “It could provide synergy with other U.S. Navy unmanned systems in development across the spectrum and in other services.”
The test was conducted under the aegis of Navy Warfare Development Command as part of its fleet experiment exercise, the release said.
The Navy will increasingly rely on networked weapons and drones commanded by manned aircraft operating forward as part of an effort to extend the service’s fighting range and sharpen the teeth of its air wing. It’s a concept of operations that was detailed in a recent study by the Center for Strategic and Budgetary Assessments.
In the study, senior fellow Bryan Clark called for an unmanned combat air vehicle, or UCAV, with a range of up to 3,000 nautical miles without refueling and the ability to perform missions from anti-submarine and electronic warfare to anti-surface and strike.
But the study also called for retaining a manned fighter for command-and-control capabilities in environments where communications are jammed or nonexistent.
“There is still going to be a need for manned fighters to do close-air support, but mostly to do command and control of other platforms that are perhaps unmanned inside a comms-denied environment,” Clark said. “So if you send some loitering missiles or you send UCAVs up forward, you would expect them to be managed by someone who is able to maintain comms with them. That would be a human in a fighter that is able to remain close enough to them to stay in comms.”
For that, Clark points to a retooled F-35 fighter jet, one that switches out internal payload space for fuel.
“The F-35 folks, when you talk to them about what it would take to make it a longer-range command-and-control aircraft, they’re pretty optimistic because most of the challenge in doing these kinds of changes is in the software,” Clark said. “And the software isn’t dramatically different because it’s really just changing how it manages the fuel, not any of the other functions.”
The experiment seems to indicate that it isn’t just the F-35′s fancy communications suite that is up to the task. The test demonstrates the ability to increase the pilot’s situational awareness with multiple aircraft, said Brandt.
“This technology allows the Navy to extend the reach of sensors while keeping manned aircraft out of harm’s way,” Brandt said. “It’s a force multiplier that enables a single aircrew to control multiple aircraft without greatly increasing workload. It has the potential to increase survivability as well as situational awareness.”
www.c4isrnet.com/naval/2020/02/04/us-navy-and-boeing-demonstrate-controlling-unmanned-aircraft-with-a-manned-jet/
|
|
|
Post by swamprat on Feb 20, 2020 15:37:17 GMT
A broader range of experiments 20 Feb 2020 Margaret Harris
Anatole von Lilienfeld is a professor of physical chemistry at the University of Basel, Switzerland, and a project leader in the Swiss National Center for Computational Design and Discovery of Novel Materials (MARVEL). He is also the editor-in-chief of a new journal called Machine Learning: Science and Technology, which (like Physics World) is published by IOP Publishing. He spoke to Margaret Harris about the role that machine learning plays in science, the purpose of the journal and how he thinks the field will develop.
Conceptual advance: Anatole von Lilienfeld sees machine learning as a powerful tool for discoveries in science.
The term “machine learning” means different things to different people. What’s your definition?
It’s a term used by many communities, but in the context of physics I would stick to a rather technical definition. Machine learning can be roughly divided into two different domains, depending on the problems one wants to attack. One domain is called unsupervised learning, which is basically about categorizing data. This task can be nontrivial when you’re dealing with high-dimensional, heterogeneous data of varying fidelity. What unsupervised learning algorithms do is try to determine whether these data can be grouped into different clusters in a systematic way, without any bias or heuristic, and without introducing spurious artefacts.
Problems of this type are ubiquitous: all quantitative sciences encounter them in one way or another. But one example involves proteins, which fold in certain shapes that depend on their amino acid sequences. When you measure the X-ray spectra of protein crystals, you find something interesting: the number of possible folds is large, but finite. So if somebody gave you some sequences and their corresponding folds, a good unsupervised learning algorithm might be able to cluster new sequences to help you determine which of them are associated with which folds.
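The clustering idea described above can be sketched in a few lines of code. This is a toy illustration (the data, cluster count and initialization are all made up, and real protein-fold clustering uses far richer descriptors): k-means grouping synthetic 2D points into clusters without any labels.

```python
import numpy as np

# Toy unsupervised-learning sketch: k-means clustering of synthetic 2D
# "descriptors" into groups without any labels. Everything here is made up
# for illustration; real applications use high-dimensional descriptors.
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
# Three well-separated synthetic groups of 50 points each
X = np.vstack([c + rng.normal(0.0, 0.5, size=(50, 2)) for c in centers])

def kmeans(X, init, iters=50):
    """Lloyd's algorithm: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    cent = X[init].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - cent[None]) ** 2).sum(-1), axis=1)
        cent = np.array([X[labels == j].mean(axis=0) for j in range(len(cent))])
    return labels, cent

# Deterministic initialization (one seed point per group) for reproducibility
labels, cent = kmeans(X, init=[0, 50, 100])
print(np.bincount(labels))   # three clusters of 50 points each
```

With well-separated groups the algorithm recovers the three clusters exactly; the hard part in practice is high-dimensional, heterogeneous data where neither the distance metric nor the number of clusters is obvious.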
The second branch of machine learning is called supervised learning. In this case, rather than merely categorizing the data, the algorithms also try to predict values outside the dataset. An example from materials science would be that if you have a bunch of materials for which a property has been measured – the formation energy of some inorganic crystals, say – you can then ask, “I wonder what the formation energy would be of a new crystal?” Supervised learning can give you the statistically most likely estimate, based on the known properties of the existing materials in the dataset.
These are the two main branches of machine learning, and the thing they have in common is a need for data. There’s no machine learning without data. It’s a statistical approach, and this is sort of implied when you’re talking about machine learning: these techniques are mathematically rigorous ways to arrive at statistical statements in a quantitative manner.
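As a rough illustration of the supervised branch, here is a sketch using kernel ridge regression, a technique widely used for exactly this kind of property prediction. All the data are synthetic (the "descriptor" and "formation energy" are invented functions, not drawn from any real materials database): the model learns from known examples and estimates the property of an unseen point.

```python
import numpy as np

# Toy supervised-learning sketch: kernel ridge regression mapping a made-up
# 2D material "descriptor" to a made-up "formation energy". Synthetic data.
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(200, 2))               # known "materials"
y_train = np.sin(3 * X_train[:, 0]) + X_train[:, 1] ** 2  # known "property"

def gaussian_kernel(A, B, sigma=0.3):
    # Pairwise Gaussian (RBF) similarity between rows of A and rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# Fit: solve (K + lambda*I) alpha = y for the regression weights
K = gaussian_kernel(X_train, X_train)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(K)), y_train)

# Predict the property of a "new material" not in the training set
X_new = np.array([[0.4, 0.7]])
y_pred = float(gaussian_kernel(X_new, X_train) @ alpha)
y_true = np.sin(3 * 0.4) + 0.7**2
print(f"predicted {y_pred:.3f}, actual {y_true:.3f}")
```

The prediction is a statistically weighted combination of the known examples, which is the point made above: there is no machine learning without data, and the estimate is only as good as the training set's coverage of the new case.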
You’ve given a couple of examples of machine-learning applications within materials science. I know this is the subject closest to your heart, but the new journal you’re working on covers the whole of science. What are some applications in other fields?
Of course, I’m biased towards materials science, but other domains face similar problems. Here’s an example. One of the most important equations in materials science is the electronic Schrödinger equation. This differential equation is difficult to solve even with computers, but machine learning enables us to circumvent the need to solve it for new materials. Similarly, many scientific domains require solutions to the Navier-Stokes equations in various approximations. These equations can describe turbulent flow, which matters for combustion, for climate modelling, for engineering aerodynamics or for ship construction (among other areas). These equations are also hard to solve numerically, so this is a place where machine learning can be applied.
"We have a unique opportunity to give people from all these different domains a place to discuss developments of machine learning in their fields." Anatole von Lilienfeld
Another area of interest is medical imaging. The scanning techniques used to detect tumours and malignant tissues are good applications of unsupervised learning – you want to cluster healthy tissue versus unhealthy tissue. But if you think about it, there is hardly any quantitative domain within the physical sciences where machine learning cannot be applied.
With this journal, we have a unique opportunity to give people from all these different domains a place to discuss developments of machine learning in their fields. So if there’s a major advancement in image recognition of, say, lung tumours, maybe materials scientists will learn something from it that will help them interpret X-ray spectra, or vice versa. Traditionally, people would publish such work within their own disciplines, so it would be hidden from everyone else.
You talked about machine learning as an alternative to computation for finding solutions to equations. In your editorial for the first issue of Machine Learning: Science and Technology, you say that machine learning is emerging as a fourth pillar of science, alongside experimentation, theory and computation. How do you see these approaches fitting together?
Humans began doing experimentation very early. You could view the first tools as being the result of experiments. Theory developed later. Some would say the Greeks started it, but other cultures also developed theories; the Maya, for example, had theories of stellar movement and calendars. All this work culminated in the modern theories of physics, to which many brilliant scientists contributed.
But that wasn’t the end, because many of these brilliant theories had equations that could not be solved using pen and paper. There’s a famous quote from the physicist Paul Dirac where he says that all the equations predicting the behaviour of electrons and nuclei are known. The trouble was that no human could solve those equations. However, with some reasonable approximations, computers could. Because of this, simulation has gained tremendous traction over the last decades, and of course it helps that Moore’s Law has meant that you can buy an exponentially increasing amount of computing power for a constant number of dollars.
I think the next step is to use machine learning to build on theory, experiment and computation, and thus to make even better predictions about the systems we study. When you use computation to find numerical solutions to equations, you need a big computer. However, the outcome of that big computation can then feed into a dataset and be used for machine learning, and you can feed in experimental data alongside it.
Over the next few years, I think we’ll start to see datasets that combine experimental results with simulation results obtained at different levels of accuracy. Some of these datasets may be incredibly heterogeneous, with a lot of “holes” for unknown quantities and different uncertainties. Machine learning offers a way to integrate that knowledge, and to build a unifying model that enables us to identify areas where the holes are the largest or the uncertainties are the greatest. These areas could then be studied in more detail by experiments or by additional simulations.
What other developments should we expect to see in machine learning?
I think we’ll see a feedback loop develop, similar to the one we have now between experiment and theory. As experiments progress, they create an incentive for proposing hypotheses, and then you use that theory to make a prediction that you can verify experimentally. Historically, some experiments were excluded from that because the equations were too difficult to solve. But then computation arrived, and suddenly the scope of experimental design widened tremendously.
I think the same thing is going to happen with machine learning. We’re already seeing it in materials science, where – with the help of supervised learning – we’ve made predictions within milliseconds about how a new material will behave, whereas previously it would have taken hours to simulate on a supercomputer. I believe that will soon be true for all the physical sciences. I’m not saying we will be able to perform all possible experiments, but we’ll be able to design a much broader range of experiments than we could previously.
physicsworld.com/a/a-broader-range-of-experiments/
|
|
|
Post by HAL on Feb 20, 2020 19:45:13 GMT
It is vitally important that the people writing programs always differentiate between 'what is' and 'what may be'. And that the 'what may be' isn't allowed to become more important than the 'what is'.
We do seem to see a lot of speculation being pushed as probability when it comes to future things like space exploration.
We do need to remember that, so far, we are incapable of sending a human being more than a quarter of a Million miles from Earth.
HAL
|
|
|
Post by swamprat on Feb 24, 2020 2:05:10 GMT
Scientists Successfully Grow a Full-Sized Beating Heart Using Stem Cells June 26, 2019
This article is shared with permission from our friends at IFL Science.
Right now, 4,186 people are waiting for a heart transplant in the U.S., but with a huge donor shortage not all of these patients are likely to survive. Growing transplantable hearts in a laboratory has been a long-standing dream of the medical community, and a study in the journal Circulation Research has moved it one step closer to reality: A team of researchers has successfully grown a beating human heart in the laboratory using stem cells.
Previous research has shown how 3D printers can be used to manufacture 3D heart segments using the biological material. Although vacant of any actual heart cells, these structures provided the “scaffold” on which heart tissue could be grown.
Now, a team from both Massachusetts General Hospital (MGH) and Harvard Medical School has taken this scaffolding concept and combined it with stem cells for some truly spectacular results.
The main problem with heart transplants, other than a lack of donors, is that there’s a chance the recipient’s body will reject the new organ. Their immune system will often register the foreign tissue as a threat and proceed to attack and destroy it. The only way to stop this from happening is to use drugs that suppress the immune system, and even this is only successful in some cases.
The Study
For this study, 73 human hearts deemed unsuitable for transplantation were carefully immersed in solutions of detergent to strip them of any cells that would provoke this self-destructive response. What was left was a matrix (or “scaffold”) of a heart, complete with its intricate structures and vessels, providing a new foundation for new heart cells to be grown onto.
This is where pluripotent stem cells come in. These "primitive" stem cells have the ability to become almost any type of cell in the body – bone, nerve and muscle, including the muscle found in the heart.
For this research, human skin cells were reprogrammed into becoming pluripotent stem cells. They were then induced into becoming two types of heart cells, which were shown to readily develop and grow on the lab scaffold when bathed in a nutrient solution.
Roughly 610,000 people die from heart disease in the U.S. every year. Could this revolutionary technique one day save many of those lost to this killer?
After just two weeks, the networks of lab-grown heart cells already resembled immature but intricately structured hearts. The team gave them a burst of electricity, and the hearts actually started beating.
Significantly, any heart cells grown in this way would be recognized by the patient’s immune system as “friendly,” as long as the original skin cells were sourced from their own body in the first place. This means that these lab-grown hearts would not be rejected and, of course, there’s no donor to wait for.
“Among the next steps that we are pursuing are improving methods to generate even more cardiac cells,” said Jacques Guyette, a biomedical researcher at the MGH Center for Regenerative Medicine and lead author of the study, in a statement.
Although this study manufactured a whopping 500 million stem cell-derived heart cells for the procedure, regrowing a whole heart would actually take “tens of billions,” Guyette added.
So despite falling short of growing an entire, mature human heart in a laboratory from a patient’s own cells, this is the closest anyone has come to date to reaching this goal – and that in itself is a breathtaking achievement.
theheartysoul.com/human-hearts-grown-laboratory/
|
|
|
Post by SysConfig on Feb 25, 2020 5:11:39 GMT
Did ISS Live-Feed Accidentally Capture "Top Secret" Hypersonic Vehicle Test?
by Tyler Durden, Mon, 02/24/2020 - 21:50
Scott C. Waring, the founder of UFO Sightings Daily, claims he has come across a strange video recorded from the International Space Station's (ISS) NASA Space Cam that shows the moment an unidentified flying object rockets into orbit.
"I was watching the NASA live space station cam when I noticed the camera zooming in on a strange object coming from below the space station. At first I thought it was a capsule or satellite, but its speed increased and after 22 minutes it shot up and into deep space. I believed if it was a capsule it would have gone into low earth orbit then lower to land. But when this object shot upward into deep space, it literally blew my mind. This could be USAF top-secret alien tech fused craft, but I don't think so, the person on the camera seemed dismayed and unprepared for its sudden appearance," Waring said in a blog post.
Waring operates the YouTube channel ET Data Base and, judging by his past videos and commentaries, he's a tinfoil hat conspiracy theorist. He told the tabloid newspaper Daily Express that at one point the NASA live camera suddenly "notices something down there and begins to zoom in on it."
|
|
|
Post by swamprat on Feb 25, 2020 16:42:10 GMT
Study closely..... There'll be a quiz later..... Bernie says the ions should be freed..... Quantum computing from the ground up 25 Feb 2020 Margaret Harris
Taken from the February 2020 issue of Physics World.
Trapped-ion computing pioneer Chris Monroe describes how decades of experience in academic and government research led him to start his own quantum computing firm.
Start-up: Chris Monroe co-founded a company that is developing quantum computers using trapped ions. (Courtesy: Chris Monroe, IonQ)
Chris Monroe is a physicist at the University of Maryland, US, and the co-founder and chief scientist of IonQ, a start-up that is developing quantum computers using trapped ions as qubits. He recently spoke to Margaret Harris about the rise of quantum computing, and how his previous experiences – including stints in the labs of two physics Nobel laureates – fed into his decision to start the company in 2015.
How did you get interested in quantum computing? Because you really got into the field at the very beginning…
Yes, I’ve been in this field for more than 25 years, and I have to say it sort of landed in my lap. I did my PhD work on cold atomic gases at the University of Colorado, Boulder, in the group of Carl Wieman and Eric Cornell, who went on to share the physics Nobel prize in 2001 for making the first Bose–Einstein condensate. But I always knew that at some point I might want to get a “real job”, and atomic physics is good in that respect because it involves working with practical things like optics and lasers and photonics. There’s a lot of equipment involved, and I was attracted to the technical nature of the work.
After I got my PhD, I went on to do a postdoc. The postdoc system is a little stressful, because it’s a temporary job, and you’re in your late 20s, and everyone else you know is building their career. But postdocs are also wonderful opportunities to try something random, because there’s very little at stake if it doesn’t work out. And in my case, I didn’t have to move very far. I stayed in Boulder and went down the road to work with David Wineland at the National Institute of Standards and Technology (NIST).
In the early- to mid-1990s, Wineland’s lab was basically the atomic-clock division of the US government. He is an amazing researcher, and NIST allowed him to do academic-type research within this government lab. So instead of building the clocks that people use as a real time standard, we were doing research on how you might make better clocks. One of the crazy ideas we had was that by entangling multiple atoms or ions, we could make our clock run faster (and therefore more accurately), so we came up with a scheme to entangle two ions.
As it turns out, that meant we were building a quantum gate for a tiny quantum computer. But we didn’t know those terms at the time. I didn’t hear about quantum computing until the summer of 1994, when I learned of Peter Shor’s algorithm for factoring large numbers using a quantum computer.
When Wineland and I saw Shor’s article, it entirely changed our direction of research. We were still at NIST doing atomic clocks, but now we were also doing quantum computing, and government agencies got very interested in seeing what we needed to do to scale it up. Everything we did in that laboratory was ground-breaking, and Wineland went on to win the Nobel prize in 2012 largely based on his work in the 1990s. It was a pretty cool beginning to my career.
Several years passed from when you first heard about quantum computing to when you set up IonQ. What made you decide “This isn’t just a research topic anymore, I’m going to start a company”?
For the first 10 years or so, there was a lot of research to do. Picking out the best type of quantum gate. Deciding which atomic species to use. Working out how well the lasers perform and how big our quantum systems could be before they got killed by noise. It took a long time, not just for me, but for the whole community to do those experiments. And in terms of scaling things up from a handful of qubits or gates, really nothing happened for a long time apart from high-level proposals for scaling. Although we were starting to understand the limitations of the physics, we weren’t ready to do the engineering.
Beginning in 2010, though, we started to narrow things down and make decisions, and by 2014 or 2015 we had our first tiny quantum computer. And that was interesting, because after we initialized and calibrated the system in the morning, we stopped doing atomic physics in the afternoon. Once the system was seeded, we could stop tinkering with the lasers, go over to the PC that was controlling the experiment and run algorithms.
After that, a couple of things happened. I had a long-standing collaboration with a colleague from Duke University, Jungsang Kim. He’s an engineer, and we recognized that we kind of filled each other’s gaps. I’m a physicist, and I’d been in this field for a long time, but he has great experience in what’s called systems engineering, and he thinks differently about physical systems than I do. We realized that, together, we could do some amazing things.
Around that same time, in mid-2016, IBM built a five-qubit superconducting quantum computer and put it in the cloud so that people could use it. At first, that seemed a little goofy to us, because five qubits is really small – we’re not going to learn anything from that. But it was more than just a publicity stunt. I mean, it was a publicity stunt, but it also allowed anybody to use the system, which was huge – a genius move.
As it happened, the system we were building was also exactly five qubits, but in terms of performance it was much better than IBM’s. This is because atomic qubits are nearly perfect and exactly replicable; because we could connect a pair of the atom qubits with reconfigurable laser beams; and because we could run “deeper” circuits. So people started approaching us, saying that they’d tried to use the IBM cloud, but it didn’t work for what they wanted to do – could we help? Initially, it was more like a scientific collaboration: we started running applications and algorithms that other people would send us. But we realized that to go to the next step required such a serious dose of engineering that it probably couldn’t be done at a university. And that was the genesis of IonQ.
What do you know now that you wish you’d known when you started IonQ?
One thing I’ve learned has to do with the computer science aspect of our systems. Moving the operations around in our algorithms so they’re mapped to our system in an optimal way turns out to be incredibly powerful – much more powerful than I imagined. If I’d known that two or three years ago, I would have hired more computer-science theorists.
As an analogy, the first PC I ever used had four kilobytes of memory. Now we have hundreds of gigabytes, and that means we waste it – we take pictures that are way too high resolution and store them on our hard drive because memory is a commodity. It’s cheap, it’s easy. There’s no reason not to waste it. But at the early stages of any technology, including quantum computing, you have to squeeze out every ounce of efficiency you can, because it might mean the difference between running an application and not being able to. In 10 or 20 years, I hope that qubits and gates will be more of a commodity, and then we can be more wasteful with them. But to get there we have to extract as much efficiency as possible. That’s not really physics – it’s quantum computer science, and it’s a very rare skillset right now.
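Monroe's point about squeezing every ounce of efficiency out of early quantum hardware can be made concrete with a toy example. The sketch below is entirely hypothetical – it is not IonQ's compiler, just a minimal "peephole" pass that cancels adjacent self-inverse gates, the simplest kind of circuit optimization a quantum-computer-science toolchain performs to shorten circuits before noise kills them:

```python
# Toy peephole optimizer: repeatedly cancel adjacent self-inverse gates
# acting on the same qubits (H·H = X·X = CNOT·CNOT = identity).
# Hypothetical sketch only -- real compilers for trapped-ion machines also
# merge rotations, reorder commuting gates and route qubits.

SELF_INVERSE = {"H", "X", "CNOT"}

def optimize(circuit):
    """circuit: list of (gate_name, qubit_tuple) operations, in time order."""
    changed = True
    while changed:
        changed = False
        out = []
        i = 0
        while i < len(circuit):
            if (i + 1 < len(circuit)
                    and circuit[i] == circuit[i + 1]
                    and circuit[i][0] in SELF_INVERSE):
                i += 2          # the identical pair cancels to the identity
                changed = True
            else:
                out.append(circuit[i])
                i += 1
        circuit = out
    return circuit

demo = [("H", (0,)), ("H", (0,)), ("CNOT", (0, 1)), ("X", (1,)), ("X", (1,))]
print(optimize(demo))  # -> [('CNOT', (0, 1))]
```

Even this trivial pass cuts the demo circuit from five gates to one; on hardware where every gate adds error, that is the difference Monroe describes between an application running and not running.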
That leads nicely to my last question. Do you have any advice for today’s physics students?
With a physics degree, you can pretty much do anything. The challenge is that the doors are not open for you to the same extent as they are in some other fields. When you study engineering, for example, it’s almost like going to business school. You make connections, there are job fairs and the doors open for you to go work for these big engineering firms. You can still do that as a physicist. It just won’t come to you. You have to go find it.
So the advice I would give is to keep your options open. If you do a PhD, it may appear like you’re narrowing your options, because you’re working for several years on just one thing. But if you can solve a problem at the forefront of your field, even if it’s a narrow problem, you learn how to do that in any field. Going in-depth in physics will help you no matter what you want to do, even if it’s something unrelated, such as finance.
We all learn quantum physics as physics students, but in recent years this field has taken on a whole new life. It’s not an esoteric theory anymore, something that only describes tiny effects in extreme forms of matter. It’s going to form the basis for a whole new type of technology. So I think that, because physicists have a bit of a leg up in this area, they should go all-in.
physicsworld.com/a/quantum-computing-from-the-ground-up/
|
|
|
Post by swamprat on Mar 10, 2020 15:25:10 GMT
How a magnet could help boost understanding of superconductivity
Physicists unravel a mystery behind the strange behavior of electrons in a ferromagnet
Date: March 4, 2020
Source: Rutgers University
Summary:
Physicists have unraveled a mystery behind the strange behavior of electrons in a ferromagnet, a finding that could eventually help develop high temperature superconductivity.
Physicists have unraveled a mystery behind the strange behavior of electrons in a ferromagnet, a finding that could eventually help develop high temperature superconductivity.
A Rutgers co-authored study of the unusual ferromagnetic material appears in the journal Nature.
The Rutgers Center for Materials Theory, a world leader in the field, studies "quantum phase transitions." Ordinary phase transitions, such as ice melting, usually require heat to jiggle atoms and melt the crystal. Quantum phase transitions, by contrast, are driven by the jiggling of atoms and electrons that results from quantum fluctuations, which never cease even at the lowest temperatures.
A quantum phase transition can be achieved by tuning a material to enhance quantum fluctuations, either by applying a magnetic field or exposing it to intense pressure when the temperature is near absolute zero. In certain quantum phase transitions, the quantum fluctuations become infinitely intense, forming a "quantum critical point." These unusual states of matter are of great interest because of their propensity to form superconductors. Think of it as an electronic stem cell, a form of matter that can transform itself in many ways.
Meanwhile, in the weird world of quantum mechanics, "entanglement" allows something to be in two different states or places at the same time. The Austrian physicist Erwin Schrödinger's famous thought experiment, which features a cat that is simultaneously dead and alive, is an example of entanglement.
Inside materials with electrons moving through them, entanglement often involves the spin of electrons, which can be simultaneously up and down. Typically, only electrons near each other are entangled in quantum materials, but at a quantum critical point, the entanglement patterns can change abruptly, spreading out across the material and transforming it. Electrons, even distant ones, become entangled.
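The up-and-down spin entanglement described above can be written down explicitly. As an illustration (standard textbook notation, not from the article), the simplest such state for two electrons is the spin singlet:

```latex
% Two-electron spin singlet: neither electron has a definite spin on its
% own, but a measurement of one instantly fixes the other.
\[
  |\psi\rangle \;=\; \frac{1}{\sqrt{2}}\left( |{\uparrow\downarrow}\rangle \;-\; |{\downarrow\uparrow}\rangle \right)
\]
```

At a quantum critical point, correlations of this kind stop being confined to neighbouring electrons and spread abruptly across the whole material.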
Ferromagnets are an unlikely setting for studying quantum entanglement because the electrons moving through them align in one direction instead of spinning up and down. But physicists found that the ferromagnetism in "Cerge" (CeRh6Ge4) must involve a large amount of entanglement, with electrons that spin up and down and are connected with each other. That had never been seen in ferromagnets.
"We believe our work, connecting entanglement with the strange metal and ferromagnets, provides important clues for our efforts to understand superconductors that work at room temperature," said co-author Piers Coleman, a professor in the Department of Physics and Astronomy in the School of Arts and Sciences at Rutgers University-New Brunswick. "As we learn to understand how nature controls entanglement in matter, we hope we'll develop the skills to control quantum entanglement inside quantum computers and to design and develop new kinds of quantum matter useful for technology."
Rutgers scientists have used some of their findings to propose a new theory for a family of iron-based superconductors that were discovered about 10 years ago. "If we are right, these systems, like ferromagnets, are driven by forces that like to align electrons," Coleman said.
Yashar Komijani, a Rutgers post-doctoral associate, is one of three co-lead authors. Scientists at Zhejiang University in China, Max Planck Institute for Chemical Physics of Solids in Germany and Nanjing University in China contributed to the study.
Story Source:
Materials provided by Rutgers University. Note: Content may be edited for style and length.
Journal Reference:
1. Bin Shen, Yongjun Zhang, Yashar Komijani, Michael Nicklas, Robert Borth, An Wang, Ye Chen, Zhiyong Nie, Rui Li, Xin Lu, Hanoh Lee, Michael Smidman, Frank Steglich, Piers Coleman, Huiqiu Yuan. Strange-metal behaviour in a pure ferromagnetic Kondo lattice. Nature, 2020; 579 (7797): 51 DOI: 10.1038/s41586-020-2052-z
|
|
|
Post by swamprat on Mar 10, 2020 15:30:14 GMT
'It's like you have a hand again': An ultra-precise mind-controlled prosthetic Date: March 4, 2020
Source: University of Michigan
Summary:
In a major advance in mind-controlled prosthetics for amputees, researchers have tapped faint, latent signals from arm nerves and amplified them to enable real-time, intuitive, finger-level control of a robotic hand.
In a major advance in mind-controlled prosthetics for amputees, University of Michigan researchers have tapped faint, latent signals from arm nerves and amplified them to enable real-time, intuitive, finger-level control of a robotic hand.
To achieve this, the researchers developed a way to tame temperamental nerve endings, separate thick nerve bundles into smaller fibers that enable more precise control, and amplify the signals coming through those nerves. The approach involves tiny muscle grafts and machine learning algorithms borrowed from the brain-machine interface field.
"This is the biggest advance in motor control for people with amputations in many years," said Paul Cederna, who is the Robert Oneal Collegiate Professor of Plastic Surgery at the U-M Medical School, as well as a professor of biomedical engineering.
"We have developed a technique to provide individual finger control of prosthetic devices using the nerves in a patient's residual limb. With it, we have been able to provide some of the most advanced prosthetic control that the world has seen."
Cederna co-leads the research with Cindy Chestek, associate professor of biomedical engineering at the U-M College of Engineering. In a paper published March 4 in Science Translational Medicine, they describe results with four study participants using the Mobius Bionics LUKE arm.
Intuitive prosthetic control works on the first try
"You can make a prosthetic hand do a lot of things, but that doesn't mean that the person is intuitively controlling it. The difference is when it works on the first try just by thinking about it, and that's what our approach offers," Chestek said. "This worked the very first time we tried it. There's no learning for the participants. All of the learning happens in our algorithms. That's different from other approaches."
While study participants aren't yet allowed to take the arm home, in the lab, they were able to pick up blocks with a pincer grasp; move their thumb in a continuous motion, rather than have to choose from two positions; lift spherically shaped objects; and even play in a version of Rock, Paper, Scissors called Rock, Paper, Pliers.
"It's like you have a hand again," said study participant Joe Hamilton, who lost his arm in a fireworks accident in 2013. "You can pretty much do anything you can do with a real hand with that hand. It brings you back to a sense of normalcy."
Turning a tiny muscle graft into a nerve signal amplifier
One of the biggest hurdles in mind-controlled prosthetics is tapping into a strong and stable nerve signal to feed the bionic limb. Some research groups -- those working in the brain-machine interface field -- go all the way to the primary source, the brain. This is necessary when working with people who are paralyzed. But it's invasive and high-risk.
For people with amputations, peripheral nerves -- the network that fans out from the brain and spinal cord -- have been interesting, but they hadn't yet led to a long-term solution for a couple of reasons: The nerve signals they carry are small. And other approaches to picking up those signals involved probes that eavesdropped by force. These "nails in nerves," as researchers sometimes refer to them, lead to scar tissue, which muddles that already faint signal over time.
The U-M team came up with a better way. They wrapped tiny muscle grafts around the nerve endings in the participants' arms. These "regenerative peripheral nerve interfaces," or RPNIs, offer severed nerves new tissue to latch on to. This prevents the growth of nerve masses called neuromas that lead to phantom limb pain. And it gives the nerves a megaphone. The muscle grafts amplify the nerve signals. Two patients had electrodes implanted in their muscle grafts, and the electrodes were able to record these nerve signals and pass them on to a prosthetic hand in real time.
"To my knowledge, we've seen the largest voltage recorded from a nerve compared to all previous results," Chestek said. "In previous approaches, you might get 5 microvolts or 50 microvolts – very, very small signals. We've seen the first ever millivolt signals.
"So now we can access the signals associated with individual thumb movement, multidegree of freedom thumb movement, individual fingers. This opens up a whole new world for people who are upper limb prosthesis users."
And their interface has already lasted years. Others degrade within months due to scar tissue.
The future of prosthetics research and industry
The findings also open up new possibilities for the field, said Chestek, whose expertise is on real-time machine learning algorithms to translate neural signals into movement intent.
"What we found is now the nerve signals are good enough to apply the whole world of things we learned in brain control algorithms to nerve control," she said.
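The "brain control algorithms" Chestek refers to are, at their core, regression models that map recorded signal features to intended movement. The sketch below is purely illustrative – synthetic data and a plain ridge-regression decoder, not the study's actual pipeline – but it shows the shape of the idea: train a linear map offline, then apply it sample by sample in real time:

```python
# Hypothetical sketch of a linear neural decoder: ridge regression from
# nerve-signal features (one value per electrode channel) to a finger
# flexion angle. Channel counts, data and names are all illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 500 time steps, 8 electrode channels -> 1 angle.
X = rng.normal(size=(500, 8))                   # per-channel signal features
true_w = rng.normal(size=8)                     # unknown "ground truth" map
y = X @ true_w + 0.05 * rng.normal(size=500)    # noisy recorded finger angle

# Ridge regression: w = (X^T X + lambda*I)^-1 X^T y
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ y)

# "Real-time" use: decode one new feature vector per control-loop tick.
new_sample = rng.normal(size=8)
predicted_angle = float(new_sample @ w)
print(round(predicted_angle, 3))
```

The point of the millivolt-scale RPNI signals is that they make this kind of simple, fast decoder viable: with a clean signal, all the learning can live in the algorithm, which is why participants could control the hand on the first try.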
The approach generates signals for finer movements than what today's prosthetic hands are capable of.
"Other research groups have contributed to this as well, but we've leapfrogged the capabilities of the prosthetic hands that are currently available. I think this is strong motivation for further developments from prosthetic hand companies," said Philip Vu, a research fellow in biomedical engineering and first author of the paper.
A clinical trial is ongoing. The team is looking for participants.
"So many times, the things we do in a research lab add to the knowledge in the field, but you never actually get a chance to see how that impacts a person," Cederna said. "When you can sit and watch one person with a prosthetic device do something that was unthinkable 10 years ago, it is so gratifying. I'm so happy for our participants, and even more happy for all the people in the future that this will help."
Added Chestek, "It's going to be a ways from here, but we're not going to stop working on this until we can completely restore able-bodied hand movements. That's the dream of neuroprosthetics."
The paper is titled, "A regenerative peripheral nerve interface allows real-time control of an artificial hand in upper limb amputees." The research is funded by DARPA and the National Institutes of Health.
Story Source:
Materials provided by University of Michigan. Note: Content may be edited for style and length.
Journal Reference:
1. Philip P. Vu, Alex K. Vaskov, Zachary T. Irwin, Phillip T. Henning, Daniel R. Lueders, Ann T. Laidlaw, Alicia J. Davis, Chrono S. Nu, Deanna H. Gates, R. Brent Gillespie, Stephen W. P. Kemp, Theodore A. Kung, Cynthia A. Chestek, Paul S. Cederna. A regenerative peripheral nerve interface allows real-time control of an artificial hand in upper limb amputees. Science Translational Medicine, 2020; 12 (533): eaay2857 DOI: 10.1126/scitranslmed.aay2857
www.sciencedaily.com/releases/2020/03/200304141641.htm
|
|
|
Post by swamprat on Mar 22, 2020 14:39:26 GMT
And,,,,similarly.....
In a few years, you won't have to punch data into your computer. You can just THINK it! New brain reading technology could help the development of brainwave-controlled devices Date: March 20, 2020
Source: The Francis Crick Institute
Summary:
A new method to accurately record brain activity at scale has been developed. The technique could lead to new medical devices to help amputees, people with paralysis or people with neurological conditions such as motor neuron disease.
A new method to accurately record brain activity at scale has been developed by researchers at the Crick, Stanford University and UCL. The technique could lead to new medical devices to help amputees, people with paralysis or people with neurological conditions such as motor neuron disease.
The research in mice, published in Science Advances, developed an accurate and scalable method to record brain activity across large areas, including on the surface and in deeper regions simultaneously.
Using the latest in electronics and engineering techniques, the new device combines silicon chip technology with super-slim microwires, up to 15 times thinner than a human hair. The wires are so thin they can be placed deep in the brain without causing significant damage. Alongside its ability to accurately monitor brain activity, the device could also be used to inject electrical signals into precise areas of the brain.
"This technology provides the basis for lots of exciting future developments beyond neuroscience research. It could lead to tech that can pass a signal from the brain to a machine, for example helping those with amputations to control a prosthetic limb to shake a hand or stand up. It could also be used to create electrical signals in the brain when neurons are damaged and aren't firing themselves, such as in motor neuron disease," says Andreas Schaefer, group leader in the neurophysiology of behaviour laboratory at the Crick and professor of neuroscience at UCL.
When the device is connected to a brain, electrical signals from active neurons travel up the nearby microwires to a silicon chip, where the data is processed and analysed showing which areas of the brain are active.
The researchers ensured the design of the device allows it to be easily scaled to the size of the animal, from a few hundred wires for a mouse to over 100,000 for larger mammals. This is a key feature, as it means the device holds potential to be scaled for use with humans in the future.
Mihaly Kollo, co-lead author, postdoc at the Crick's neurophysiology of behaviour laboratory and senior research associate at UCL, says: "One of the great challenges in recording brain activity, especially in deeper regions, is how to get the wires, called electrodes, in position without causing a lot of tissue damage or bleeding. Our method overcomes this by using electrodes that are sufficiently thin.
"Another challenge is recording the activity of many neurons that are distributed in layers with complex shapes in three-dimensional space. Again, our method provides a solution, as the wires can be readily arranged into any 3D shape."
The technology described in the study is also the basis for a fully integrated brain–computer interface system that is being developed by Paradromics, a company founded by Matthew Angle, one of the authors of this paper. The Texas-based company is working to develop a medical device platform that will improve the lives of people with critical diseases, including paralysis, sensory impairment and drug resistant neuropsychiatric diseases.
Story Source:
Materials provided by The Francis Crick Institute. Note: Content may be edited for style and length.
Journal Reference:
1. Abdulmalik Obaid, Mina-Elraheb Hanna, Yu-Wei Wu, Mihaly Kollo, Romeo Racz, Matthew R. Angle, Jan Müller, Nora Brackbill, William Wray, Felix Franke, E. J. Chichilnisky, Andreas Hierlemann, Jun B. Ding, Andreas T. Schaefer, Nicholas A. Melosh. Massively parallel microwire arrays integrated with CMOS chips for neural recording. Science Advances, 2020; 6 (12): eaay2789 DOI: 10.1126/sciadv.aay2789
www.sciencedaily.com/releases/2020/03/200320192743.htm
|
|
|
Post by SysConfig on Apr 3, 2020 6:27:05 GMT
|
|
|
Post by swamprat on Apr 3, 2020 15:23:22 GMT
A 'cardiac patch with bioink' developed to repair heart
Date: March 30, 2020
Source: Pohang University of Science & Technology (POSTECH)
Summary:
Medical researchers have developed an 'in vivo priming' with heart-derived bioink. Using engineered stem cells and 3D bioprinting technology, they began developing medicines for cardiovascular diseases.
The heart is the driving force of the body's circulation, pumping blood to the entire body through continuous cycles of contraction and relaxation of the heart muscle. Human stem cells are used in clinical therapies for hearts that fail when a blood vessel becomes clogged or when all or part of the heart muscle is damaged. The clinical use of human bone marrow-derived mesenchymal stem cells (BM-MSCs) has expanded, but poor survival of the transplanted stem cells in the heart remains a problem. Recently, an international joint research team from POSTECH, Seoul St. Mary's Hospital, and City University of Hong Kong developed a 'cardiac patch with bioink' that enhanced the ability of stem cells to regenerate blood vessels, which in turn improved the area affected by myocardial infarction.
The joint research team consisted of Prof. Jinah Jang and Dr. Sanskrita Das of POSTECH Creative IT Engineering, Mr. Seungman Jung of the POSTECH School of Interdisciplinary Bioscience and Bioengineering, Prof. Hun-Jun Park, Mr. Bong-Woo Park, and Ms. Soo-Hyun Jung of The Catholic University, and Prof. Kiwon Ban and colleagues from City University of Hong Kong. The team mixed in genetically engineered stem cells (genetically engineered hepatocyte growth factor-expressing MSCs, HGF-eMSCs), developed by SL Bigen Co., Ltd., to make a bioink in the form of a patch, and introduced a new therapy by transplanting it onto a damaged heart. They called this new strategy 'in vivo priming', because the function of the mesenchymal stem cells is maximized and maintained in vivo through their exposure to the growth factor secreted by the genetically engineered stem cells.
The joint research team first genetically engineered the existing BM-MSCs to produce hepatocyte growth factor consistently, improving the therapeutic potential of the stem cells. The engineered stem cells (HGF-eMSCs) were then mixed with BM-MSCs to make the bioink, and the resulting cardiac patch was transplanted onto heart muscle affected by myocardial infarction. Given the limited number of cells that could be transferred, they used a heart-derived extracellular-matrix bioink to make the patch.
Cells implanted in the patch survived longer in vivo, and more cardiomyocytes survived, than in the experimental group transplanted with BM-MSCs alone. This was because the secretion of cytokines, which promote blood-vessel formation and cell growth, was maximized, delivering nutrients that supported vascular regeneration and enhanced the survival of the cardiomyocytes.
The research team anticipates that this new method could be a breakthrough treatment for myocardial infarction, since the stem cells implanted alongside the HGF-eMSCs ultimately enhanced vascular regeneration and improved the infarcted area.
"We can augment the function of adult stem cells approved by Ministry of Food and Drug Safety and FDA using this newly developed and promising 3D bioprinting technology with the engineered stem cells. It is our goal to develop a new concept of medicine for myocardial infarction in the near future," said Prof. Jinah Jang who led the research.
POSTECH began developing medicines for cardiovascular diseases based on this bioprinting method with the research team from The Catholic University in 2017. The approach is now being tested in animals for efficacy evaluation with Chonnam National University, and the technology has already been transferred to T&R Biofab, a company that develops 3D printers, software, and bioinks for printing cells.
Story Source:
Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.
Journal Reference:
1. Bong-Woo Park, Soo-Hyun Jung, Sanskrita Das, Soon Min Lee, Jae-Hyun Park, Hyeok Kim, Ji-Won Hwang, Sunghun Lee, Hyo-Jin Kim, Hey-Yon Kim, Seungman Jung, Dong-Woo Cho, Jinah Jang, Kiwon Ban, Hun-Jun Park. In vivo priming of human mesenchymal stem cells with hepatocyte growth factor–engineered mesenchymal stem cells promotes therapeutic potential for cardiac repair. Science Advances, 2020; 6 (13): eaay6994 DOI: 10.1126/sciadv.aay6994
|
|
|
Post by swamprat on Apr 14, 2020 15:59:52 GMT
This is the Morphoz from Renault, a fully electric SUV that transforms from a city car into a travel car within seconds. The front and back of the car extend by a combined 40 cm to allow an extra battery pack to be fitted, giving the car more range and more power for longer trips. The elongated version also provides more luggage space and legroom for rear passengers. The most beautiful car in the world!
SupercarBlondie
|
|