Tesla's FSD Beta 10.69 release notes focus on unprotected left turns, smoother driving, a new "deep lane guidance" model, and several safety improvements.
Elon Musk posted on Twitter to let Tesla owners know that FSD Beta 10.69 was rolling out to testers over the weekend, with improvements to left turns, object detection, driving smoothness, and safety.
As you may know, on 19 July Tesla began rolling out FSD Beta 10.13, which focused on animal detection, left/right turns, and speed limits. Tesla's AI director, Andrej Karpathy, also discussed the 10.13 version on Twitter, and vocal members of the Tesla community shared results showing improvements in the 10.13 update, such as animal detection and speed-limit handling.
On Monday, Musk announced that Tesla had started rolling out the new version, 10.69, which handles complex turns more easily and detects relevant objects much better than the previous version.
In 10.69, the 69 is the version sub-number: Musk jumped straight from 10.13 to 10.69, skipping 56 intermediate sub-versions, joking that 69 is his favorite number. The update ships as firmware 2022.16.3.10.
As noted above, the 10.69 version is available only to Tesla's beta testers and internal testing employees; roughly 1,000 Tesla owners received the update, and once this testing phase is complete, Tesla will enable it for another 1,000 customers.
The first build went out over the 20–21 August weekend; Musk appeared to time the release for a date joke, since 8/20 = 2 x 4/20.
The 10.69 version updates Tesla's core architecture with a new module, "deep lane guidance," which improves left turns, driving smoothness, and relevant-object detection.
Tesla also confirmed that the new "deep lane guidance" module achieves a 44% lower error rate on lane topology compared to the previous model, enables smoother control before lanes and their connectivities become visually apparent, handles left/right turns better, and improves lane-connectivity predictions.
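Tesla has not published the network itself, but the core idea — fusing video-stream features with coarse map data such as lane counts and connectivity — can be sketched in a few lines. Everything below (feature sizes, the one-hot encoding, fusion by simple concatenation) is an assumption for illustration, not Tesla's actual design.

```python
# Minimal sketch of "fuse vision features with coarse map data".
# Tesla's real network is unpublished; the encoding and fusion scheme
# here are illustrative assumptions only.

def encode_coarse_map(lane_count: int, connectivity: list[int]) -> list[float]:
    """Turn coarse map data (lane count + which lanes connect through the
    intersection) into a fixed-size numeric embedding."""
    MAX_LANES = 8  # assumed cap so the embedding has a fixed size
    one_hot = [1.0 if i < lane_count else 0.0 for i in range(MAX_LANES)]
    conn = [1.0 if i in connectivity else 0.0 for i in range(MAX_LANES)]
    return one_hot + conn

def fuse_features(vision_features: list[float], lane_count: int,
                  connectivity: list[int]) -> list[float]:
    """Concatenate video-stream features with the map embedding, giving the
    lane decoder a prior before lane geometry is visually apparent."""
    return vision_features + encode_coarse_map(lane_count, connectivity)
```

The point of the map prior is that the planner can anticipate, say, a three-lane fork before the cameras can resolve it, which is what "smoother control before lanes become visually apparent" refers to.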
Tesla listed 20 points in the release notes for FSD Beta 10.69, which we will discuss in detail with some examples.
Testers also posted about FSD Beta 10.69 on Twitter. Dirty Tesla wrote: "Well #FSDBeta 10.69 downloading now. Video later today 🙂"
Trajectory planner – To improve smoothness, Tesla's updated trajectory planner now independently accounts for the latency from steering commands to actual steering actuation, as well as from acceleration and brake commands to actuation.
This produces a trajectory that more accurately models how the vehicle will actually drive, which enables better downstream controller tracking, improves smoothness, and gives a more accurate response during harsh maneuvers.
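The latency-compensation idea can be sketched simply: instead of planning from the car's current state, forward-predict where the car will be when each command actually actuates, using separate latencies for steering and accel/brake. The latency values and the kinematic model below are invented for illustration — Tesla's planner is not public.

```python
from dataclasses import dataclass
import math

# Illustrative latencies (NOT Tesla's actual values).
STEER_LATENCY_S = 0.15   # steering command -> steering actuation
ACCEL_LATENCY_S = 0.30   # accel/brake command -> actuation

@dataclass
class VehicleState:
    x: float        # longitudinal position (m)
    y: float        # lateral position (m)
    heading: float  # rad
    speed: float    # m/s

def predict_state(state: VehicleState, accel: float,
                  yaw_rate: float, dt: float) -> VehicleState:
    """Forward-simulate a simple kinematic model for dt seconds."""
    return VehicleState(
        x=state.x + state.speed * math.cos(state.heading) * dt,
        y=state.y + state.speed * math.sin(state.heading) * dt,
        heading=state.heading + yaw_rate * dt,
        speed=max(0.0, state.speed + accel * dt),
    )

def plan_with_latency(state: VehicleState, current_accel: float,
                      current_yaw_rate: float):
    """Plan from the state the car WILL be in when each command takes
    effect, accounting separately for steering and accel/brake latency."""
    accel_state = predict_state(state, current_accel, current_yaw_rate,
                                ACCEL_LATENCY_S)
    steer_state = predict_state(state, current_accel, current_yaw_rate,
                                STEER_LATENCY_S)
    return accel_state, steer_state
```

Planning against these predicted states rather than the measured state is what keeps the commanded trajectory from lagging the vehicle during harsh maneuvers.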
Smoothness – Tesla also increased smoothness for protected right turns by better associating traffic lights with slip lanes versus yield signs with slip lanes. This reduces false slowdowns (see points 7 and 8 in the notes) when no relevant object is present and improves the yielding position when one is.
UFOs (point 4) – Point 4 adds control for arbitrary low-speed moving volumes from the Occupancy Network, so the car can brake for oddly shaped, slow-moving objects that a standard cuboid cannot represent — what the notes jokingly call slow-moving UFOs. This builds on Tesla's move away from radar-based braking, which Musk has discussed since 2017; FSD Beta now relies entirely on the vision-based approach.
@DirtyTesla also made a video explaining the improvements in the FSD Beta.
Forward creeping – Creeping for visibility is now enabled at any intersection where objects might cross ego's path, regardless of the presence of traffic controls.
Red-light runners – Tesla also enabled faster identification of red-light runners by evaluating a vehicle's current kinematic state against its expected braking profile.
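One way to read "evaluating kinematic state against an expected braking profile": compute the deceleration a crossing vehicle would need to stop at the line, and flag it as a likely runner if that exceeds any plausible braking while the vehicle isn't actually braking. The thresholds and logic below are illustrative assumptions, not Tesla's implementation.

```python
# Illustrative sketch (not Tesla's code): flag a likely red-light runner
# by comparing the deceleration the vehicle WOULD need to stop at the
# line against an assumed "expected braking profile".

COMFORT_DECEL = 3.0  # m/s^2, typical comfortable braking (assumed)
HARD_DECEL = 6.0     # m/s^2, near the limit of normal braking (assumed)

def required_decel(speed_mps: float, dist_to_line_m: float) -> float:
    """Constant deceleration needed to stop exactly at the stop line:
    v^2 / (2d)."""
    if dist_to_line_m <= 0.0:
        return float("inf")  # at/past the line while still moving
    return (speed_mps ** 2) / (2.0 * dist_to_line_m)

def classify_red_light_runner(speed_mps: float, dist_to_line_m: float,
                              measured_decel_mps2: float) -> bool:
    """True if the vehicle's kinematic state is inconsistent with stopping:
    stopping would require more than hard braking, yet the vehicle is not
    even braking comfortably hard."""
    needed = required_decel(speed_mps, dist_to_line_m)
    return needed > HARD_DECEL and measured_decel_mps2 < COMFORT_DECEL
```

Making this call early — before the runner actually enters the intersection — is what lets the planner yield sooner instead of reacting at the last moment.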
Tesla FSD 10.69 release notes
- Added a new “deep lane guidance” module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.
- Improved overall driving smoothness, without sacrificing latency, through better modeling of system and actuation latency in trajectory planning. The trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh maneuvers.
- Improved unprotected left turns with a more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high-speed cross traffic (“Chuck Cook style” unprotected left turns). This was done by allowing an optimizable initial jerk, to mimic the harsh pedal press by a human when required to go in front of high-speed objects. Also improved lateral profile approaching such safety regions to allow for a better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modeling of their future intent.
- Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.
- Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables the prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increased the dataset size by 2.4x.
- Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one-tenth of the compute.
- Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves the yielding position when they are present.
- Reduced false slowdowns near crosswalks. This was done with an improved understanding of pedestrian and bicyclist intent based on their motion.
- Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.
- Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.
- Improved recall of animals by 34% by doubling the size of the auto-labeled training set.
- Enabled creeping for visibility at any intersection where objects might cross ego’s path, regardless of the presence of traffic controls.
- Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.
- Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.
- Improved velocity error for pedestrians and bicyclists by 17%, especially when the ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.
- Improved recall of object detection, eliminating 26% of missing detections for far away crossing vehicles by tuning the loss function used during training and improving label quality.
- Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego’s lane, especially in intersections or cut-in scenarios.
- Improved speed when entering highways by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.
- Reduced latency when starting from a stop by accounting for lead vehicle jerk.
- Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.
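As a worked illustration of point 4 above (controlling for low-speed moving volumes, with velocity predicted at every 3D voxel), a planner can forward-project each occupied voxel by its predicted velocity and check for conflicts with ego's path. The grid format, horizon, and clearance below are invented for the example; Tesla's Occupancy Network interface is not public.

```python
# Illustrative sketch of using per-voxel velocity from an occupancy grid.
# Data layout and thresholds are assumptions for the example.

def project_occupied_voxels(voxels, horizon_s):
    """voxels: list of (x, y, z, occupied, vx, vy) tuples. Return the
    predicted (x, y) positions of occupied voxels after horizon_s seconds."""
    return [
        (x + vx * horizon_s, y + vy * horizon_s)
        for (x, y, z, occupied, vx, vy) in voxels
        if occupied
    ]

def path_blocked(ego_path, voxels, horizon_s=2.0, clearance_m=1.0):
    """True if any occupied voxel is predicted to come within clearance_m
    of any waypoint on ego's planned path."""
    predicted = project_occupied_voxels(voxels, horizon_s)
    for (px, py) in ego_path:
        for (ox, oy) in predicted:
            if (px - ox) ** 2 + (py - oy) ** 2 <= clearance_m ** 2:
                return True
    return False
```

Because the check operates on voxels rather than fitted boxes, an object with an irregular shape (a trailer, a slow-moving "UFO") is handled the same way as a neatly cuboid car.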
The 10.69 update was first rolled out to a Tesla Model 3 in Oregon, United States; after that, it will become available to the next 1,000 owners.