Differential equations, studying the unsolvable | DE1

Quoting Steven Strogatz, “Since Newton,
mankind has come to realize that the laws of physics are always expressed in the language
of differential equations.” Of course, this language is spoken well beyond the boundaries
of physics as well, and being able to speak it and read it adds a new color to how you
view the world around you. In the next few videos, I want to give a sort
of tour of this topic. The aim is to give a big picture view of what this part of math
is all about, while at the same time being happy to dig into the details of specific
examples as they come along. I’ll be assuming you know the basics of
calculus, like what derivatives and integrals are, and in later videos we’ll need some
basic linear algebra, but not much beyond that. Differential equations arise whenever it’s
easier to describe change than absolute amounts. It’s easier to say why population sizes
grow or shrink than it is to describe why they have the particular values they do at
some point in time. It may be easier to describe why your love for someone is changing than
why it happens to be where it is now. In physics, more specifically Newtonian mechanics, motion
is often described in terms of force. Force determines acceleration, which is a statement
about change. These equations come in two flavors: ordinary
differential equations, or ODEs, involving functions with a single input, often thought
of as time, and partial differential equations, or PDEs, dealing with functions that have
multiple inputs. Partial derivatives are something we’ll look at more closely in the next video;
you often think of them as involving a whole continuum of values changing with time, like
the temperature of every point in a solid body, or the velocity of a fluid at every
point in space. Ordinary differential equations, our focus for now, involve only a finite collection
of values changing with time. It doesn’t have to be time, per se; your
one independent variable could be something else, but things changing with time are the
prototypical and most common examples of differential equations.
Physics (simple)

Physics offers a nice playground for us here,
with simple examples to start with, and no shortage of intricacy and nuance as we delve
deeper. As a nice warmup, consider the trajectory
of something you throw in the air. The force of gravity near the surface of the earth causes
things to accelerate downward at 9.8 m/s per second. Now unpack what that really means:
If you look at some object free from other forces, and record its velocity every second,
these vectors will accrue an additional downward component of 9.8 m/s every second. We call
this constant 9.8 “g”. This gives an example of a differential equation,
albeit a relatively simple one. Focus on the y-coordinate as a function of time. Its
derivative gives the vertical component of velocity, whose derivative in turn gives the
vertical component of acceleration. For compactness, let’s write this first derivative as y-dot,
and the second derivative as y-double-dot. Our equation is simply y-double-dot=-g.
This is one you can solve by integrating, which is essentially working backwards. First, what is the velocity? What function has -g as a derivative? Well, -g*t. Or rather, -g*t + (the initial velocity). Notice that you have this degree of freedom, which is determined by an initial condition. Now what function has this as a derivative? -(½)g*t^2 + v_0 * t. Or rather, add in a constant based on whatever the initial position is.
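To see this concretely, here is a minimal sketch in Python (my own illustration, not code from the video) that steps the position and velocity forward in small increments and compares the result against the closed form -(½)g*t^2 + v_0*t + y_0; the initial values are arbitrary assumptions:

```python
# Minimal sketch: integrate y-double-dot = -g step by step, then compare
# against the closed form y(t) = y0 + v0*t - 0.5*g*t^2.
g = 9.8              # downward acceleration, m/s^2
y0, v0 = 0.0, 20.0   # illustrative initial position and velocity
dt = 0.001           # small time step
t_final = 2.0

y, y_dot = y0, v0
for _ in range(int(t_final / dt)):
    y += y_dot * dt    # position changes according to velocity
    y_dot += -g * dt   # velocity accrues -g per unit time

exact = y0 + v0 * t_final - 0.5 * g * t_final**2
print(y, exact)        # the two agree up to an error that shrinks with dt
```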
Things get more interesting when the forces acting on a body depend on where that body is. For example, studying the motion of planets,
stars and moons, gravity can no longer be considered a constant. Given two bodies, the
pull on one is in the direction of the other, with a strength inversely proportional to
the square of the distance between them. As always, the rate of change of position
is velocity, but now the rate of change of velocity is some function of position. The
dance between these mutually-interacting variables is mirrored in the dance between the mutually-interacting
bodies which they describe. So often in differential equations, the puzzles
you face involve finding a function whose derivative and/or higher order derivatives
are defined in terms of itself. In physics, it’s most common to work with
second order differential equations, which means the highest derivative you find in the
expression here is a second derivative. Higher order differential equations would be ones
with third derivatives, fourth derivatives and so on; puzzles with more intricate clues. The sensation here is one of solving an infinite
continuous jigsaw puzzle. In a sense you have to find infinitely many numbers, one for each
point in time, constrained by a very specific way that these values intertwine with their
own rate of change, and the rate of change of that rate of change. I want you to take some time digging into a deceptively simple example: a pendulum. How does this angle theta that it makes with the vertical change as a function of time? This is often given as an example in introductory
physics classes of harmonic motion, meaning it oscillates like a sine wave. More specifically,
one with a period of 2pi * sqrt(L/g), where L is the length of the pendulum, and g is the acceleration due to gravity. However, these formulas are actually lies.
Or, rather, approximations which only work in the realm of small angles. If you measured
an actual pendulum, you’d find that when you pull it out farther, the period is longer
than what that high-school physics formula would suggest. And when you pull it really
far out, the value of theta vs. time doesn’t even look like a sine wave anymore. First things first, let’s set up the
differential equation. We’ll measure its position as a distance x along this arc. If
the angle theta we care about is measured in radians, we can write x as L*theta, where
L is the length of the pendulum. As usual, gravity pulls down with acceleration
g, but because the pendulum constrains the motion of this mass, we have to look at the
component of this acceleration in the direction of motion. A little geometry exercise for
you is to show that this little angle here is the same as our theta. So the component
of gravity in the direction of motion, opposite this angle, will be -g*sin(theta). Here we’re considering theta to be positive
when the pendulum is swung to the right, and negative when it’s swung to the left, and
this negative sign in the acceleration indicates that it’s always pointed in the opposite
direction from displacement. So the second derivative of x, the acceleration, is -g*sin(theta).
Since x is L*theta, that means the second derivative of theta is -(g/L) * sin(theta).
To be somewhat more realistic, let’s add in a term to account for air resistance, which
perhaps we model as being proportional to the velocity. We write this as -mu * theta-dot, where mu is some constant determining how quickly the pendulum loses energy.
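Written out in code, the equation looks something like this; a minimal sketch in Python, where the particular values of g, L, and mu are illustrative assumptions:

```python
import math

g = 9.8    # gravitational acceleration, m/s^2
L = 2.0    # length of the pendulum, m (an assumed value)
mu = 0.1   # air resistance coefficient (an assumed value)

def theta_double_dot(theta, theta_dot):
    """Angular acceleration: the damping term plus the gravity term."""
    return -mu * theta_dot - (g / L) * math.sin(theta)
```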
This is a particularly juicy differential equation. Not easy to solve, but not so hard that we can’t reasonably get some meaningful
understanding of it. At first you might think that this sine function
relates to the sine wave pattern for the pendulum. Ironically, though, what you’ll eventually
find is that the opposite is true. The presence of the sine in this equation is precisely
why the real pendulum doesn’t oscillate with the sine wave pattern. If that sounds odd, consider the fact that
here, the sine function takes theta as an input, but the approximate solution has the
value theta itself oscillating as a sine wave. Clearly something fishy is afoot. One thing I like about this example is that
even though it’s comparatively simple, it exposes an important truth about differential
equations that you need to grapple with: they’re really freaking hard to solve. In this case, if we remove the damping term,
we can just barely write down an analytic solution, but it’s hilariously complicated,
involving all these functions you’ve probably never heard of, written in terms of integrals
and weird inverse integral problems. Presumably, the reason for finding a solution
is to then be able to make computations, and to build an understanding for whatever dynamics
you’re studying. In a case like this, those questions have just been punted off to figuring
out how to compute and understand these new functions. And more often, like if we add back this damping
term, there is not a known way to write down an exact solution analytically. Well, for
any hard problem you could just define a new function to be the answer to that problem.
Heck, even name it after yourself if you want. But again, that’s pointless unless it leads
you to being able to compute and understand the answer. So instead, in studying differential equations,
we often do a sort of short-circuit and skip the actual solution part, and go straight
to building understanding and making computations from the equations alone. Let me walk through
what that might look like with the pendulum.

Phase space
What do you hold in your head, or what visualization could you get some software to pull up for
you, to understand the many possible ways a pendulum governed by these laws might evolve
depending on its starting conditions? You might be tempted to try imagining the
graph of theta(t), and somehow interpreting how its position, slope, and curvature all
inter-relate. However, what will turn out to be both easier and more general is to start
by visualizing all possible states of the system in a 2d plane. The state of the pendulum can be fully described
by two numbers, the angle and the angular velocity. You can freely change either of these two values without necessarily changing the other, but the acceleration is purely a function
of these two values. So each point of this 2d plane fully describes the pendulum at a
given moment. You might think of these as all possible initial conditions of the pendulum.
If you know this initial angle and angular velocity, that’s enough to predict how the
system will evolve as time moves forward. If you haven’t worked with them, these sorts
of diagrams can take a little getting used to. What you’re looking at now, this inward
spiral, is a fairly typical trajectory for our pendulum, so take a moment to think carefully
about what’s being represented. Notice how at the start, as theta decreases, theta-dot
gets more negative, which makes sense because the pendulum moves faster in the leftward
direction as it approaches the bottom. Keep in mind, even though the velocity vector on
this pendulum is pointed to the left, the value of that velocity is being represented
by the vertical component of our space. It’s important to remind yourself that this state
space is abstract, and distinct from the physical space where the pendulum lives and moves. Since we’re modeling it as losing some energy
to air resistance, this trajectory spirals inward, meaning the peak velocity and displacement
each go down by a bit with each swing. Our point is, in a sense, attracted to the origin
where theta and theta-dot both equal 0. With this space, we can visualize a differential
equation as a vector field. Here, let me show you what I mean. The pendulum state is this vector, [theta,
theta-dot]. Maybe you think of it as an arrow, maybe as a point; what matters is that it
has two coordinates, each a function of time. Taking the derivative of that vector gives
you its rate of change; the direction and speed that it will tend to move in this diagram.
That derivative is a new vector, [theta-dot, theta-double-dot], which we visualize as being
attached to the relevant point in this space. Take a moment to interpret what this is saying. The first component for this rate-of-change
vector is theta-dot, so the higher up we are on the diagram, the more the point tends to
move to the right, and the lower we are, the more it tends to move to the left. The vertical
component is theta-double-dot, which our differential equation lets us rewrite entirely in terms
of theta and theta-dot. In other words, the first derivative of our state vector is some
function of that vector itself. Doing the same at all points of this space
will show how the state tends to change from any position, artificially scaling down the
vectors when we draw them to prevent clutter, but using color to loosely indicate magnitude.
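If you’d like to draw such a vector field yourself, here is a minimal sketch using numpy and matplotlib; the grid ranges and styling are my own choices, not necessarily what the video’s animations use:

```python
import numpy as np
import matplotlib.pyplot as plt

g, L, mu = 9.8, 2.0, 0.1   # same illustrative constants as before

# A grid of states: theta on the horizontal axis, theta-dot on the vertical.
theta, theta_dot = np.meshgrid(np.linspace(-2 * np.pi, 2 * np.pi, 25),
                               np.linspace(-4.0, 4.0, 25))

# The rate-of-change vector at each state: [theta-dot, theta-double-dot].
d_theta = theta_dot
d_theta_dot = -mu * theta_dot - (g / L) * np.sin(theta)

# Color each arrow by magnitude, loosely mirroring the video's style.
plt.quiver(theta, theta_dot, d_theta, d_theta_dot,
           np.hypot(d_theta, d_theta_dot))
plt.xlabel("theta"); plt.ylabel("theta-dot")
plt.show()
```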
Notice that we’ve effectively broken up a single second order equation into a system of two first order equations. You might even
give theta-dot a different name to emphasize that we’re thinking of two separate values,
intertwined via this mutual effect they have on one another’s rate of change. This
is a common trick in the study of differential equations, instead of thinking about higher
order changes of a single value, we often prefer to think of the first derivative of
a vector value. In this form, we have a nice visual way to
think about what solving our equation means: As our system evolves from some initial state,
our point in this space will move along some trajectory in such a way that at every moment,
the velocity of that point matches the vector from this vector field. Keep in mind, this
velocity is not the same thing as the physical velocity of our pendulum. It’s a more abstract
rate of change encoding the changes in both theta and theta-dot. You might find it fun to pause for a moment
and think through what exactly some of these trajectory lines say about possible ways the
pendulum evolves for different starting conditions. For example, in regions where theta-dot is
quite high, the vectors guide the point to travel to the right quite a ways before settling
down into an inward spiral. This corresponds to a pendulum with a high initial velocity,
fully rotating around several times before settling down into a decaying back and forth. Having a little more fun, when I tweak this
air resistance term mu, say increasing it, you can immediately see how this will result
in trajectories that spiral inward faster, which is to say the pendulum slows down faster.
Imagine you saw the equations out of context, not knowing they described a pendulum; it’s
not obvious just from looking at them that increasing the value of mu means the system tends towards
some attracting state faster, so getting some software to draw these vector fields for you
can be a great way to gain an intuition for how they behave. What’s wonderful is that any system of ordinary
differential equations can be described by a vector field like this, so it’s a very
general way to get a feel for them. Usually, though, they have many more dimensions.
For example, consider the famous three-body problem, which is to predict how three masses
in 3d space will evolve if they act on each other with gravity, and you know their initial
positions and velocities. Each mass has three coordinates describing
its position and three more describing its momentum, so the system has 18 degrees of
freedom, and hence an 18-dimensional space of possible states. It’s a bizarre thought,
isn’t it? A single point meandering through an 18-dimensional space we cannot visualize,
obediently taking steps through time based on whatever vector it happens to be sitting
on from moment to moment, completely encoding the positions and momenta of 3 masses in ordinary,
physical, 3d space. (In practice, by the way, you can reduce this
number of dimensions by taking advantage of the symmetries in your setup, but the point
of more degrees of freedom resulting in a higher-dimensional state space remains the
same).
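To make that dimension count concrete, here is one hedged way you might lay out the 18-dimensional state and its rate of change in Python; the value of G, the masses, and the ordering of the state vector are all illustrative assumptions:

```python
import numpy as np

G = 1.0                             # gravitational constant, in assumed units
masses = np.array([1.0, 1.0, 1.0])  # three illustrative masses

def state_derivative(state):
    """state: 18 numbers -- 9 position coordinates, then 9 momenta."""
    pos = state[:9].reshape(3, 3)   # one row of (x, y, z) per body
    mom = state[9:].reshape(3, 3)
    d_pos = mom / masses[:, None]   # dx/dt = p/m
    d_mom = np.zeros_like(mom)
    for i in range(3):
        for j in range(3):
            if i != j:              # inverse-square pull toward body j
                diff = pos[j] - pos[i]
                r = np.linalg.norm(diff)
                d_mom[i] += G * masses[i] * masses[j] * diff / r**3
    return np.concatenate([d_pos.ravel(), d_mom.ravel()])
```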
In math, we often call a space like this a “phase space”. You’ll hear me use the term broadly for spaces encoding all kinds
of states for changing systems, but you should know that in the context of physics, especially
Hamiltonian mechanics, the term is often reserved for a special case. Namely, a space whose
axes represent position and momentum. So a physicist would agree that the 18-dimensional
space describing the 3-body problem is a phase space, but they might ask that we make a couple
of modifications to our pendulum setup for it to properly deserve the term. For those
of you who watched the block collision videos, the planes we worked with there would happily
be called phase spaces by math folk, though a physicist might prefer other terminology.
Just know that the specific meaning may depend on your context. It may seem like a simple idea, depending
on how well indoctrinated you are into modern ways of thinking about math, but it’s worth
keeping in mind that it took humanity quite a while to really embrace thinking of dynamics
spatially like this, especially when the dimensions get very large. In his book Chaos, James Gleick
describes phase space as “one of the most powerful inventions of modern science.” One reason it’s powerful is that you can
ask questions not just about a single initial state, but a whole spectrum of initial states.
The collection of all possible trajectories is reminiscent of a moving fluid, so we call
it phase flow. To take one example of why phase flow is a
fruitful formulation, the origin of our space corresponds to the pendulum standing still;
and so does this point over here, representing when the pendulum is balanced upright. These
are called fixed points of the system, and one natural question to ask is whether they
are stable. That is, will tiny nudges to the system result in a state that tends back towards that fixed point, or away from it? Physical intuition for the pendulum makes the answer
here obvious, but how would you think about stability just by looking at the equations,
say if they arose from some completely different and less intuitive context? We’ll go over how to compute the answer
to a question like this in following videos, and the intuition for the relevant computations
is guided heavily by the thought of looking at a small region in this space around the
fixed point and asking about whether the flow contracts or expands its points. Speaking of attraction and stability, let’s
take a brief sidestep to talk about love. The Strogatz quote I referenced earlier comes
from a whimsical column in the New York Times on mathematical models of love, an example
well worth pilfering to illustrate that we’re not just talking about physics. Imagine you’ve been flirting with someone,
but there’s been some frustrating inconsistency to how mutual the affections seem. And perhaps
during a moment when you turn your attention towards physics to keep your mind off this
romantic turmoil, mulling over your broken-up pendulum equations, you suddenly understand
the on-again-off-again dynamics of your flirtation. You’ve noticed that your own affections
tend to increase when your companion seems interested in you, but decrease when they
seem colder. That is, the rate of change for your love is proportional to their feelings
for you. But this sweetheart of yours is precisely
the opposite: Strangely attracted to you when you seem uninterested, but turned off once
you seem too keen. The phase space for these equations looks
very similar to the center part of your pendulum diagram. The two of you will go back and forth
between affection and repulsion in an endless cycle. A metaphor of pendulum swings in your
feelings would not just be apt, but mathematically verified. In fact, if your partner’s feelings
were further slowed when they felt themselves too in love, let’s say out of a fear of
being made vulnerable, we’d have a term matching the friction of your pendulum, and
you two would be destined to an inward spiral towards mutual ambivalence. I hear wedding
bells already.
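If you’d like to play with this model yourself, here is a minimal sketch; the coefficients, and the extra damping term for that fear of vulnerability, are illustrative assumptions in the spirit of Strogatz’s column rather than equations from the video:

```python
# x = your affection, y = your companion's. Rates a, b, c are assumed values.
a, b, c = 1.0, 1.0, 0.2

def x_dot(x, y):
    return a * y            # your love grows when they seem interested

def y_dot(x, y):
    return -b * x - c * y   # they cool off when you seem keen, plus a
                            # damping term when they feel too in love
```

With c = 0 the trajectories are closed cycles, like the center of the pendulum diagram; making c positive turns them into inward spirals.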
The point is that two very different-seeming laws of dynamics, one from physics initially involving a single variable, and another from…er…chemistry
with two variables, actually have a very similar structure, easier to recognize when looking
at their phase spaces. Most notably, even though the equations are different, for example
there’s no sine in your companion’s equation, the phase space exposes an underlying similarity
nevertheless. In other words, you’re not just studying
a pendulum right now, the tactics you develop to study one case have a tendency to transfer
to many others. Okay, so phase diagrams are a nice way to
build understanding, but what about actually computing the answer to our equation? Well,
one way to do this is to essentially simulate what the world will do, but using finite time
steps instead of the infinitesimals and limits defining calculus. The basic idea is that if you’re at some
point on this phase diagram, take a step based on whatever vector you’re sitting on for some small time step, delta-t. Specifically, take a step of delta-t times that vector. Remember,
in drawing this vector field, the magnitude of each vector has been artificially scaled
down to prevent clutter. Do this repeatedly, and your final location will be an approximation
of theta(t), where t is the sum of all your time steps. If you think about what’s being shown right
now, and what that would imply for the pendulum’s movement, you’d probably agree it’s grossly
inaccurate. But that’s just because the timestep delta-t of 0.5 is way too big. If
we turn it down, say to 0.01, you can get a much more accurate approximation, it just
takes many more repeated steps is all. In this case, computing theta(10) requires a
thousand little steps. Luckily, we live in a world with computers, so repeating a simple
task 1,000 times is as simple as articulating that task with a programming language. In fact, let’s write a little python program
that computes theta(t) for us. It will make use of the differential equation, which returns
the second derivative of theta as a function of theta and theta-dot. You start by defining
two variables, theta and theta-dot, in terms of some initial values. In this case I’ll
choose pi / 3, which is 60 degrees, and 0 for the angular velocity. Next, write a loop which corresponds to many
little time steps between 0 and 10, each of size delta-t, which I’m setting to be 0.01
here. In each step of the loop, increase theta by theta-dot times delta-t, and increase theta-dot
by theta-double-dot times delta-t, where theta-double-dot can be computed based on the differential
equation. After all these little steps, simply return the value of theta.
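Here is a minimal version of that program; the steps follow the description above, though the specific names and structure are my own rather than a transcription of the code shown on screen:

```python
import math

g = 9.8      # gravitational acceleration
L = 2.0      # pendulum length (an assumed value)
mu = 0.1     # air resistance coefficient (an assumed value)

def theta_double_dot(theta, theta_dot):
    # The differential equation: angular acceleration from the current state.
    return -mu * theta_dot - (g / L) * math.sin(theta)

def theta_at(t, delta_t=0.01):
    theta = math.pi / 3   # initial angle: 60 degrees
    theta_dot = 0.0       # initial angular velocity
    for _ in range(int(t / delta_t)):
        theta_ddot = theta_double_dot(theta, theta_dot)
        theta += theta_dot * delta_t       # step theta by theta-dot
        theta_dot += theta_ddot * delta_t  # step theta-dot by theta-double-dot
    return theta

print(theta_at(10))   # 1,000 little steps
```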
This is called solving the differential equation numerically. Numerical methods can get way more sophisticated and intricate to better
balance the tradeoff between accuracy and efficiency, but this loop gives the basic
idea. So even though it sucks that we can’t always
find exact solutions, there are still meaningful ways to study differential equations in the
face of this inability. In the following videos, we will look at several
methods for finding exact solutions when it’s possible. But one theme I’d like to focus
on is how these exact solutions can also help us study the more general unsolvable
cases. But it gets worse. Just as there is a limit
to how far exact analytic solutions can get us, one of the great fields to have emerged
in the last century, chaos theory, has exposed that there are further limits on how well
we can use these systems for prediction, with or without exact solutions. Specifically,
we know that for some systems, small variations to the initial conditions, say the kind due
to necessarily imperfect measurements, result in wildly different trajectories. We’ve
even built some good understanding for why this happens. The three body problem, for
example, is known to have seeds of chaos within it. So looking back at that quote from earlier,
it seems almost cruel of the universe to fill its language with riddles that we either can’t
solve, or where we know that any solution would be useless for long-term prediction
anyway. It is cruel, but then again, that should be reassuring. It gives some hope that
the complexity we see in the world can be studied somewhere in the math, and that it’s
not hidden away in some mismatch between model and reality.

  1. So we are using matrix vectors to compactly encapsulate information about these rates of change of different orders, for easy analysis.

  2. You youngsters have it damned easy these days. When I learned these concepts, I did literally have to "do the math", as my only assistant was an 8 dollar scientific calculator from Radio Shack. It had no clue what a derivative was…. I think you folks don't really get a true grasp of the concepts when a computer does your work for you.

  3. TBH most engineers don't solve diff equations analytically but numerically, hopefully 3b1b makes a numerical analysis video soon

  4. Oh my god, 24:20, that’s the problem, that is the problem with most simulations, that needs to go, like, completely.

  5. Saw this video once cause I was curious, then my math for physics teacher got all crazy and jumped from derivatives to this and well, here we are again.
    (I'm in secondary school)

  6. Amazing video, thanks Grant! For those who might be interested, I managed to reproduce the phase space for the same differential equation with a small python script: https://gist.github.com/OmarAflak/c08e98a6d32c12231899f7ffd8c89b40

  7. Thank you for making these educational videos; they are mesmerizing! I appreciate your explanations, analogies, animations, and background music. You are making math accessible to more people! These videos are calming yet thrilling. I look forward to learning from you!

  8. Some notes on the intended use of this series. I was deliberate in using the phrase "tour of differential equations", as opposed to "introduction to" or "essence of". I think of the relationship between watching this series and taking a course as being analogous to the relationship between touring a city vs. living in it. You'll certainly see a lot less with the tour since you're spending less time overall, but the goal will be to walk around some of the most noteworthy monuments and town centers with helpful context given to you by a guide. And just as someone who lives in a city may very well have never gone to visit some of the historical sites of their town, despite living there for years, many differential equations students may not always get the chance to zoom out and appreciate the central cornerstones of the subject amidst all the computations they are learning.

    I hope you enjoy the tour, but at the same time know that it is, by design, very different from taking courses on the subject.

  9. There are currently 185 people who wasted their time and money on women's studies, fine art, communications, or psychology weighing in on this video's value. It's not too late, the STEM fields are open to everyone.

  10. @7:33, move the yellow gravity and pink vector such that the gravity vector aligns itself with the perpendicular line, you will see alternate angles forming.. and alternate angles are equal..

  11. I realize inertial/non-inertial points of reference are outside the scope of this discussion, but gravity isn't a "force." Maybe you could go into greater detail later (or reference an external link).

  12. I was watching a course on human behavior from Stanford, by Robert Sapolsky, and the professor recommended the book “Chaos”, by Gleick. I then started reading it and I got interested in learning more about chaos. And then suddenly I realized that dynamic systems and differential equations were important, so I started watching courses on differential equations. And then YouTube recommended this video (I already knew this channel but hadn’t watched this video yet). And then this video also recommends Gleick’s book!

    I enjoy these small world “coincidences”!! Of course it was an unfair coincidence, as these subjects are related and very probable to be found while I am interested in the subject. But you always get caught in them.

  13. If you’re wondering what the solution to the fox and rabbit equation is, there isn’t an explicit equation as a function of time. However, it can be plotted implicitly with x=rabbits, y=foxes. δx+βy-γln(x)-αln(y)=V, with V being constant for all t. When you plot it as a function of time, you get periodic functions repeating every 2π/√(αγ) years (or whatever unit of time you’re using). There also exist two equilibrium points where nothing ever changes: x=0, y=0 (extinction), or x=γ/δ, y=α/β. The first is a saddle point, the second is an elliptic. If you plot for x and y infinitesimally close to the second equilibrium, you’ll generate an x & y that circle around the equilibrium in an elliptical pattern. Technically, you could represent the exact path taken by x & y in any situation by using a Fourier transform, but I couldn’t really be bothered to go that far 🤷‍♂️

  14. I'm smiling right now, after finishing the video and searching the comments
    "Damn it."
    "You didn't give us the answer, 3blue1brown."
    "Of course."

  15. At the undergraduate level, it's harder to come up with the derivation than it is to do computations after you have the formula. At the graduate level, it seems harder to do computations after you have the formula than it is to come up with the derivation.

  16. All the teachers in my life weren't able to teach 10 percent of this DE material in the visual form you taught in this video. I started hating solving DEs as I wasn't able to analyse them. You deserve the highest prize in science for your presentations.

  17. At age 14, I wrote the same program on Applesoft Basic to guide a virtual missile to hit a virtual plane, it was a fiasco. Not knowing numerical methods and ODE's. And of course not understanding that missile guidance by always accelerating towards the current position of the target would never work (LOL). Instead of drawing that conclusion I concluded science is rubbish and (almost) quit.

  18. This is the first B&B video in which I've noticed the Pis, teacher and students, develop knees.
    I think it's about time they developed a Pulitzer Prize for computer animation in the news. Grant's superb videos would be an obvious contender, and another I've seen recently was the New York Times' superb, and very scary, animations of the problem facing the dying pilots of the doomed Boeing 737s.

  19. You just explained something to me in 27 minutes in a way that I understood it, what three professors failed to explain to me in cumulatively six semesters. I'm equal parts happy and sad now.

  20. That's it, Sir, the things for which I was just wandering around, such incredible visualisation and deep insight, the concepts are just making me fall in love with all the topics you taught. Thanks a lot…

  21. with the time-step example given, is there a way to figure out which side you are wrong on? I noticed that the arrows tended to estimate too far to the outward side, at least in the initial spiral: could you find some way to estimate and account for the error this induces?

  22. My biggest difficulty with differential equations is the lack of formal representation in writing that you mentioned.

  23. Unfortunately, I failed to understand the complete video. Your series are just awesome but this video is a bit higher level than my current understanding, I thought!
    But really Thanks, for your efforts!

  24. How Awesome. How if make differential and correction to solved problem of me Brain Cerebellum I am trust Differential Theory. [Rikki Pebirianto Dedy Max Simanjorang.]

  25. All of this is understandable to me, and almost even intuitive….except the freaking formulas. As soon as we move from visualization to numerical representation my brain just shuts down.

  26. holy crap, i just watched this video after a semester of differential equations and numerical analysis and i think i appreciate this video way more than what i would've had i not taken those modules. everything you said i already knew (not saying it was a bad video). So actually seeing the differential eq of a pendulum and its application was so awesome. i remember my lecturer in physics saying that our equation only works for small angles. nice video man

  27. how do you do this in manim

    from active_projects.ode.part1.shared_constructs import *
    ModuleNotFoundError: No module named 'active_projects'

  28. Can you make a video on physical significance of residue in complex integral?
    Is it related to divergence and curl of vector field? If yes, how?

  29. Bruh these hoes so difficult to understand you really gotta take Maths 3 in college and a Computer modelling class to understand em

  30. I am watching this pendulum example and wondering when you will actually use the word "phase space" for this imaginary two dimensional plane which contains all the possible initial conditions for position and velocity and fully defines the dynamics of the system. And everyone is used to them.

    The answer is 18:26. Such a relief.

  31. On computers one might think that simulations of physical systems like pendula should calculate sines and cosines for each frame of animation, but that is too slow to model complex systems, since calculating a sine requires several arithmetic operations to provide sufficient precision.

    There is an interesting mathematical area devoted to efficient yet accurate calculation of iterative approximate integrations. Basically, with sufficient precision it is possible to do an accurate integration step with one addition (after all, integration is just addition of very tiny function slices with the slice size approaching zero). The result is that high-speed animations of systems requiring many separate integrations of various orders are not difficult to program.

    Some of my first professional software, in 1966, supported the rotation in displayed realtime 3D stereo pairs of experimental data plots on a simple computer (the LINC) with a very tiny memory that had no way to compute sines or cosines. I used high-precision addition and subtraction, along with a table of 90 sine values to convert from degrees to a more natural internal angle representation.

  32. While thinking about the phase space that describes how the "two lovers" situation should evolve I noticed something. The origin of the graph, which 3b1b refers to as a "situation of mutual ambivalence", is actually a situation where both the lovers don't love each other any more, as both the variables (❤️1 and ❤️2) are 0. I think this tells a lot about long term relationships. Sorry for the deliberately expressed pessimism, 🙂 . 22:16

  33. 10:46 Anyone here know when to just skip the solution part? I know mathematicians do it when it gets hard, but… when does it get complicated?

  34. Hey, I wasn’t able to understand what he actually did in the ‘python programming’.
    I will be grateful if anyone can explain it to me a little bit.


  35. I feel like there has to be some guidance around picking a step size (e.g. delta-t) for the numerical solution. He's already hinted about the Fourier series and we do have the Nyquist sampling theorem that sets a minimum sampling frequency for your desired resolution but I haven't quite put those together yet. It also occurs to me that in PDE's, the step sizes for each of your inputs can be different.
