Add pre-commit hooks (#1629)

commit d035394ce7 (parent e6cf5883b1) on commatwo_master
old-commit-hash: ab83e48ec4
Authored by Adeeb, committed by GitHub 5 years ago
26 changed files:

   4  .pre-commit-config.yaml
   4  SAFETY.md
   6  common/transformations/README.md
  16  selfdrive/camerad/cameras/camera_qcom2.c
   4  selfdrive/camerad/cameras/sensor2_i2c.h
  12  selfdrive/camerad/imgproc/conv.cl
   4  selfdrive/camerad/imgproc/pool.cl
   2  selfdrive/camerad/transforms/rgb_to_yuv.c
   4  selfdrive/common/buffering.c
   2  selfdrive/common/framebuffer.cc
   2  selfdrive/locationd/paramsd.cc
   2  selfdrive/loggerd/frame_logger.h
   2  selfdrive/loggerd/loggerd.cc
   2  selfdrive/modeld/modeld.cc
   4  selfdrive/modeld/thneed/debug/main.cc
   4  selfdrive/modeld/thneed/debug/microbenchmark/gemm_image.cl
   4  selfdrive/modeld/thneed/debug/microbenchmark/go.c
   2  selfdrive/ui/text/text.c
   6  tools/lib/vidindex/vidindex.c
   2  tools/nui/FileReader.cpp
   4  tools/nui/Unlogger.cpp
  18  tools/nui/main.cpp
   1  tools/nui/nui
  10  tools/sim/README.md
   2  tools/sim/start_carla.sh
  70  tools/webcam/README.md

.pre-commit-config.yaml

@@ -6,6 +6,10 @@ repos:
   - id: check-json
   - id: check-xml
   - id: check-yaml
+  - id: check-merge-conflict
+  - id: check-symlinks
+  - id: trailing-whitespace
+    exclude: '^(pyextra)|(external)|(cereal)|(rednose)|(panda)|(laika)|(opendbc)|(laika_repo)|(rednose_repo)|(phonelibs)|(lib_mpc_export)/'
 - repo: https://github.com/pre-commit/mirrors-mypy
   rev: master
   hooks:

SAFETY.md

@@ -1,7 +1,7 @@
openpilot Safety
======
openpilot is an Adaptive Cruise Control (ACC) and Automated Lane Centering (ALC) system.
Like other ACC and ALC systems, openpilot is a failsafe passive system and it requires the
driver to be alert and to pay attention at all times.
@@ -22,7 +22,7 @@ hardware-in-the-loop and in-vehicle tests before each software release.
Following Hazard and Risk Analysis and FMEA, at a very high level, we have designed openpilot
ensuring two main safety requirements.
1. The driver must always be capable to immediately retake manual control of the vehicle,
by stepping on either pedal or by pressing the cancel button.
2. The vehicle must not alter its trajectory too quickly for the driver to safely
react. This means that while the system is engaged, the actuators are constrained

common/transformations/README.md

@@ -25,14 +25,14 @@ by generating a rotation matrix and multiplying.
Orientation Conventions
------
Quaternions, rotation matrices and Euler angles are three
equivalent representations of orientation and all three are
used throughout the code base.
For Euler angles the preferred convention is [roll, pitch, yaw]
which corresponds to rotations around the [x, y, z] axes. All
Euler angles should always be in radians or radians/s except
for plotting or display purposes. For quaternions the Hamilton
notation is preferred, which is [q<sub>w</sub>, q<sub>x</sub>, q<sub>y</sub>, q<sub>z</sub>]. All quaternions
should always be normalized with a strictly positive q<sub>w</sub>. **These
quaternions are a unique representation of orientation whereas Euler angles
@@ -49,7 +49,7 @@ EONs are not all mounted in the exact same way. To compensate for the effects of
Example
------
To transform global Mesh3D positions and orientations (positions_ecef, quats_ecef) into the local frame described by the
first position and orientation from Mesh3D one would do:
```
ecef_from_local = rot_from_quat(quats_ecef[0])
```
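
As an aside on the convention above, here is a minimal standalone C sketch of normalizing a quaternion to have a strictly positive q<sub>w</sub> (illustrative only; the helper name is not from the openpilot code base):

```
#include <math.h>

/* Hypothetical helper: normalize a quaternion stored as [qw, qx, qy, qz]
 * and flip its sign if needed so qw ends up strictly positive, matching
 * the convention in common/transformations/README.md. q and -q describe
 * the same rotation, so the flip does not change the orientation. */
static void quat_normalize_positive_w(double q[4]) {
  double norm = sqrt(q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3]);
  for (int i = 0; i < 4; i++) q[i] /= norm;
  if (q[0] < 0) {
    for (int i = 0; i < 4; i++) q[i] = -q[i];
  }
}
```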

selfdrive/camerad/cameras/camera_qcom2.c

@@ -261,7 +261,7 @@ void sensors_init(int video0_fd, int sensor_fd, int camera_num) {
power->power_settings[2].power_seq_type = 2; // digital
power->power_settings[3].power_seq_type = 8; // reset low
power = (void*)power + (sizeof(struct cam_cmd_power) + (power->count-1)*sizeof(struct cam_power_settings));
unconditional_wait = (void*)power;
unconditional_wait->cmd_type = CAMERA_SENSOR_CMD_TYPE_WAIT;
unconditional_wait->delay = 5;
@@ -424,7 +424,7 @@ void config_isp(struct CameraState *s, int io_mem_handle, int fence, int request
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0};
memcpy(buf2, tmp, sizeof(tmp));
if (io_mem_handle != 0) {
@@ -610,7 +610,7 @@ static void camera_open(CameraState *s, VisionBuf* b) {
acquire_dev_cmd.handle_type = CAM_HANDLE_USER_POINTER;
acquire_dev_cmd.num_resources = 1;
acquire_dev_cmd.resource_hdl = (uint64_t)&isp_resource;
isp_resource.resource_id = CAM_ISP_RES_ID_PORT;
isp_resource.length = sizeof(struct cam_isp_in_port_info) + sizeof(struct cam_isp_out_port_info)*(1-1);
isp_resource.handle_type = CAM_HANDLE_USER_POINTER;
@@ -643,7 +643,7 @@ static void camera_open(CameraState *s, VisionBuf* b) {
in_port_info->test_pattern = 0x2; // 0x3?
in_port_info->usage_type = 0x0;
in_port_info->left_start = 0x0;
in_port_info->left_stop = FRAME_WIDTH - 1;
in_port_info->left_width = FRAME_WIDTH;
@@ -664,10 +664,10 @@ static void camera_open(CameraState *s, VisionBuf* b) {
in_port_info->num_out_res = 0x1;
in_port_info->data[0] = (struct cam_isp_out_port_info){
.res_type = CAM_ISP_IFE_OUT_RES_RDI_0,
//.format = CAM_FORMAT_MIPI_RAW_12,
.format = CAM_FORMAT_MIPI_RAW_10,
.width = FRAME_WIDTH,
.height = FRAME_HEIGHT,
.comp_grp_id = 0x0, .split_point = 0x0, .secure_mode = 0x0,
};
@@ -700,7 +700,7 @@ static void camera_open(CameraState *s, VisionBuf* b) {
CAM_SENSOR_PACKET_OPCODE_SENSOR_CONFIG);
sensors_i2c(s, start_reg_array, sizeof(start_reg_array)/sizeof(struct i2c_random_wr_payload),
CAM_SENSOR_PACKET_OPCODE_SENSOR_STREAMON);
sensors_i2c(s, stop_reg_array, sizeof(stop_reg_array)/sizeof(struct i2c_random_wr_payload),
CAM_SENSOR_PACKET_OPCODE_SENSOR_STREAMOFF);
// config csiphy
@@ -817,7 +817,7 @@ void cameras_open(DualCameraState *s, VisionBuf *camera_bufs_rear, VisionBuf *ca
s->rear.device_iommu = s->front.device_iommu = s->wide.device_iommu = device_iommu;
s->rear.cdm_iommu = s->front.cdm_iommu = s->wide.cdm_iommu = cdm_iommu;
// subscribe
LOG("-- Subscribing");
static struct v4l2_event_subscription sub = {0};
sub.type = 0x8000000;

selfdrive/camerad/cameras/sensor2_i2c.h

@@ -302,8 +302,8 @@ struct i2c_random_wr_payload init_array_ar0231[] = {
};
struct i2c_random_wr_payload poke_array_ov7750[] = {
{0x3208, 0x0}, {0x380e, 0x1a}, {0x380f, 0xf0}, {0x3500, 0x0}, {0x3501, 0x0}, {0x3502, 0x10}, {0x350a, 0x0}, {0x350b, 0x10}, {0x3208, 0x10}, {0x3208, 0xa0},
//{0x3208, 0x0}, {0x380e, 0x1a}, {0x380f, 0xf0}, {0x3500, 0x0}, {0x3501, 0x0}, {0x3502, 0x10}, {0x350a, 0x0}, {0x350b, 0x10}, {0x3208, 0x10}, {0x3208, 0xa0},
};
struct i2c_random_wr_payload preinit_array_ov7750[] = {

selfdrive/camerad/imgproc/conv.cl

@@ -3,7 +3,7 @@
// convert input rgb image to single channel then conv
__kernel void rgb2gray_conv2d(
const __global uchar * input,
__global short * output,
__constant short * filter,
__local uchar3 * cached
@@ -23,8 +23,8 @@ __kernel void rgb2gray_conv2d(
// pad
if (
get_global_id(0) < HALF_FILTER_SIZE ||
get_global_id(0) > IMAGE_W - HALF_FILTER_SIZE - 1 ||
get_global_id(1) < HALF_FILTER_SIZE ||
get_global_id(1) > IMAGE_H - HALF_FILTER_SIZE - 1
)
@@ -32,11 +32,11 @@ __kernel void rgb2gray_conv2d(
barrier(CLK_LOCAL_MEM_FENCE);
return;
}
else
{
int localColOffset = -1;
int globalColOffset = -1;
// cache extra
if ( get_local_id(0) < HALF_FILTER_SIZE )
{
@@ -51,7 +51,7 @@ __kernel void rgb2gray_conv2d(
{
localColOffset = get_local_id(0) + TWICE_HALF_FILTER_SIZE;
globalColOffset = HALF_FILTER_SIZE;
cached[ myLocal + HALF_FILTER_SIZE ].x = input[ my * 3 + HALF_FILTER_SIZE * 3 ];
cached[ myLocal + HALF_FILTER_SIZE ].y = input[ my * 3 + HALF_FILTER_SIZE * 3 + 1];
cached[ myLocal + HALF_FILTER_SIZE ].z = input[ my * 3 + HALF_FILTER_SIZE * 3 + 2];

selfdrive/camerad/imgproc/pool.cl

@@ -1,6 +1,6 @@
// calculate variance in each subregion
__kernel void var_pool(
const __global char * input,
__global ushort * output // should not be larger than 128*128 so uint16
)
{
@@ -11,7 +11,7 @@ __kernel void var_pool(
float fsum = 0;
char mean, max;
for (int i = 0; i < size; i++) {
int x_offset = i % X_PITCH;
int y_offset = i / X_PITCH;

selfdrive/camerad/transforms/rgb_to_yuv.c

@@ -43,7 +43,7 @@ void rgb_to_yuv_queue(RGBToYUVState* s, cl_command_queue q, cl_mem rgb_cl, cl_me
err = clSetKernelArg(s->rgb_to_yuv_krnl, 1, sizeof(cl_mem), &yuv_cl);
assert(err == 0);
const size_t work_size[2] = {
(size_t)(s->width + (s->width % 4 == 0 ? 0 : (4 - s->width % 4))) / 4,
(size_t)(s->height + (s->height % 4 == 0 ? 0 : (4 - s->height % 4))) / 4
};
cl_event event;
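
The work_size arithmetic above is round-up (ceiling) division by 4 written out with a conditional; the classic `(n + d - 1) / d` idiom computes the same thing. A small self-contained C check (illustrative, not part of this commit):

```
#include <assert.h>
#include <stddef.h>

/* Ceiling division by 4: the smallest number of 4-wide work items
 * covering n pixels. Equivalent to the conditional form used above. */
static size_t div_round_up4(size_t n) {
  return (n + 3) / 4;
}

int main(void) {
  assert(div_round_up4(1164) == 291);  // 1164 = 4*291, no rounding needed
  assert(div_round_up4(874) == 219);   // 874 = 4*218 + 2, rounds up
  return 0;
}
```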

selfdrive/common/buffering.c

@@ -70,7 +70,7 @@ void tbuffer_dispatch(TBuffer *tb, int idx) {
efd_write(tb->efd);
pthread_cond_signal(&tb->cv);
pthread_mutex_unlock(&tb->lock);
}
int tbuffer_acquire(TBuffer *tb) {
@@ -344,7 +344,7 @@ void pool_push(Pool *s, int idx) {
for (int i=0; i<POOL_MAX_QUEUES; i++) {
PoolQueue *c = &s->queues[i];
if (!c->inited) continue;
pthread_mutex_lock(&c->lock);
if (((c->head+1) % c->num) == c->tail) {
// queue is full. skip for now
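
For context, the `((c->head+1) % c->num) == c->tail` test above is the standard circular-queue discipline that keeps one slot open so that full and empty are distinguishable. A minimal sketch of that invariant (illustrative only, not the buffering.c API):

```
#include <stdbool.h>

/* Circular queue indices over num slots; one slot stays open so that
 * head == tail means empty and (head+1) % num == tail means full. */
typedef struct { int head, tail, num; } RingIdx;

static bool ring_empty(const RingIdx *r) { return r->head == r->tail; }
static bool ring_full(const RingIdx *r)  { return (r->head + 1) % r->num == r->tail; }

static bool ring_push(RingIdx *r) {  // returns false when full
  if (ring_full(r)) return false;
  r->head = (r->head + 1) % r->num;
  return true;
}
```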

selfdrive/common/framebuffer.cc

@@ -98,7 +98,7 @@ extern "C" FramebufferState* framebuffer_init(
assert(success);
printf("egl version %d.%d\n", s->egl_major, s->egl_minor);
EGLint num_configs;
success = eglChooseConfig(s->display, attribs, &s->config, 1, &num_configs);
assert(success);

selfdrive/locationd/paramsd.cc

@@ -138,7 +138,7 @@ int main(int argc, char *argv[]) {
}
if (sm.updated("cameraOdometry")){
localizer.handle_log(sm["cameraOdometry"]);
}
}
return 0;
}

selfdrive/loggerd/frame_logger.h

@@ -15,7 +15,7 @@ public:
int LogFrame(uint64_t ts, const uint8_t *y_ptr, const uint8_t *u_ptr, const uint8_t *v_ptr, int *frame_segment) {
std::lock_guard<std::recursive_mutex> guard(lock);
if (opening) {
Open(next_path);
opening = false;

selfdrive/loggerd/loggerd.cc

@@ -729,7 +729,7 @@ int main(int argc, char** argv) {
for (auto s : socks){
delete s;
}
delete poller;
delete s.ctx;
return 0;

selfdrive/modeld/modeld.cc

@@ -44,7 +44,7 @@ void* live_thread(void *arg) {
while (!do_exit) {
if (sm.update(10) > 0){
auto extrinsic_matrix = sm["liveCalibration"].getLiveCalibration().getExtrinsicMatrix();
Eigen::Matrix<float, 3, 4> extrinsic_matrix_eigen;
for (int i = 0; i < 4*3; i++){

selfdrive/modeld/thneed/debug/main.cc (Git LFS pointer)

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bf5043514cf5b79912e54da6550f8a1bf3f378644827154c47ea7fd31de4093a
-size 24549
+oid sha256:d967ffcacdd6c1262ca9769e464976cf772320be1bf1cae2b3c8cd8717cfb7a0
+size 24541

selfdrive/modeld/thneed/debug/microbenchmark/gemm_image.cl (Git LFS pointer)

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5c0fcfcf3c7a136900d9a02b57596dc3e78b6bd33feaf3e4ee9c7b5f7702603b
-size 1818
+oid sha256:1ad38feef90e767d5edf723f38dc65666149ebd6692923c8c8aa80a9328b2845
+size 1816

selfdrive/modeld/thneed/debug/microbenchmark/go.c (Git LFS pointer)

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:79b7f818b5609c5caa83fae3922a3f4126c4abd3e9a1752e11b18e2e55ed338a
-size 9608
+oid sha256:6bf800f2584a7320bd99d2fee3a1868e194913d47ac51d6ecdd6e0399c0c945f
+size 9604

selfdrive/ui/text/text.c

@@ -81,7 +81,7 @@ assert(font >= 0);
float lineh;
nvgTextMetrics(vg, NULL, NULL, &lineh);
// nvgTextBox strips leading whitespace. We have to reimplement
char * next = strtok(text, "\n");
while (next != NULL){

tools/lib/vidindex/vidindex.c

@@ -18,7 +18,7 @@ static uint32_t read24be(const uint8_t* ptr) {
}
static void write32le(FILE *of, uint32_t v) {
uint8_t va[4] = {
v & 0xff, (v >> 8) & 0xff, (v >> 16) & 0xff, (v >> 24) & 0xff
};
fwrite(va, 1, sizeof(va), of);
}
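
write32le stores the value least-significant byte first, mirroring the big-endian read24be above it. For symmetry, a sketch of the matching reader (no read32le appears in this diff, so this is purely illustrative):

```
#include <stdint.h>

/* Illustrative inverse of write32le: reassemble a little-endian u32
 * from four bytes. Counterpart to the big-endian read24be above. */
static uint32_t read32le(const uint8_t *ptr) {
  return (uint32_t)ptr[0]
       | ((uint32_t)ptr[1] << 8)
       | ((uint32_t)ptr[2] << 16)
       | ((uint32_t)ptr[3] << 24);
}
```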
@@ -135,7 +135,7 @@ static void hevc_index(const uint8_t *data, size_t file_size, FILE *of_prefix, F
bs_get(&bs, 1);
}
uint32_t slice_type = bs_ue(&bs);
// write the index
write32le(of_index, slice_type);
write32le(of_index, ptr - data);
@@ -244,7 +244,7 @@ static void h264_index(const uint8_t *data, size_t file_size, FILE *of_prefix, F
uint32_t pic_parameter_set_id = bs_ue(&bs);
uint32_t frame_num = bs_get(&bs, sps_log2_max_frame_num_minus4+4);
if (first_mb_in_slice == 0) {
write32le(of_index, slice_type);
write32le(of_index, ptr - data);

tools/nui/FileReader.cpp

@@ -71,7 +71,7 @@ LogReader::LogReader(const QString& file, Events *events_, QReadWriteLock* event
while (1) {
mergeEvents(cdled.get());
}
});
}
void LogReader::mergeEvents(int dled) {

tools/nui/Unlogger.cpp

@@ -4,7 +4,7 @@
#include <capnp/dynamic.h>
#include <capnp/schema.h>
// include the dynamic struct
#include "cereal/gen/cpp/car.capnp.c++"
#include "cereal/gen/cpp/log.capnp.c++"
@@ -24,7 +24,7 @@ static inline uint64_t nanos_since_boot() {
}
Unlogger::Unlogger(Events *events_, QReadWriteLock* events_lock_, QMap<int, FrameReader*> *frs_, int seek)
: events(events_), events_lock(events_lock_), frs(frs_) {
ctx = Context::create();
YAML::Node service_list = YAML::LoadFile("../../cereal/service_list.yaml");

tools/nui/main.cpp

@@ -44,7 +44,7 @@ class Window : public QWidget {
QMap<int, LogReader*> lrs;
QMap<int, FrameReader*> frs;
// cache the bar
QPixmap *px = NULL;
@@ -72,7 +72,7 @@ Window::Window(QString route_, int seek, int use_api_) : route(route_), use_api(
file.open(QIODevice::ReadOnly | QIODevice::Text);
settings = file.readAll();
file.close();
QJsonDocument sd = QJsonDocument::fromJson(settings.toUtf8());
qWarning() << sd.isNull(); // <- print false :)
QJsonObject sett2 = sd.object();
@@ -97,7 +97,7 @@ bool Window::addSegment(int i) {
lrs.insert(i, new LogReader(fn, &events, &events_lock, &unlogger->eidx));
} else {
QString log_fn = this->log_paths.at(i).toString();
lrs.insert(i, new LogReader(log_fn, &events, &events_lock, &unlogger->eidx));
}
@@ -114,8 +114,8 @@ bool Window::addSegment(int i) {
QString camera_fn = this->camera_paths.at(i).toString();
frs.insert(i, new FrameReader(qPrintable(camera_fn)));
}
return true;
}
return false;
@@ -193,9 +193,9 @@ void Window::paintEvent(QPaintEvent *event) {
tt.drawLine(lt, 300-lvv, rt, 300-vv);
if (enabled) {
tt.setPen(Qt::green);
} else {
tt.setPen(Qt::blue);
}
tt.drawLine(rt, 300, rt, 600);
@@ -237,7 +237,7 @@ int main(int argc, char *argv[]) {
QApplication app(argc, argv);
QString route(argv[1]);
int use_api = QString::compare(QString("use_api"), route, Qt::CaseInsensitive) == 0;
int seek = QString(argv[2]).toInt();
printf("seek: %d\n", seek);
@@ -251,7 +251,7 @@ int main(int argc, char *argv[]) {
}
Window window(route, seek, use_api);
window.resize(1920, 800);
window.setWindowTitle("nui unlogger");
window.show();

tools/nui/nui

@@ -9,4 +9,3 @@ if [ $# -gt 0 ]; then
else
echo "Please Enter a Route"
fi

tools/sim/README.md

@@ -10,24 +10,24 @@ git clone https://github.com/commaai/openpilot.git
# Add export PYTHONPATH=$HOME/openpilot to your bashrc
# Have a working tensorflow+keras in python3.7.3 (with [packages] in openpilot/Pipfile)
```
## Install (in tab 1)
```
cd ~/openpilot/tools/sim
./start_carla.sh # install CARLA 0.9.7 and start the server
```
## openpilot (in tab 2)
```
cd ~/openpilot/selfdrive/
PASSIVE=0 NOBOARD=1 ./manager.py
```
## bridge (in tab 3)
```
# links carla to openpilot, will "start the car" according to manager
cd ~/openpilot/tools/sim
./bridge.py
```
## Controls
Now you can control the simulator with the keys:
1: Cruise up 5 mph

tools/sim/start_carla.sh

@@ -15,4 +15,4 @@ if [ ! -d carla ]; then
fi
cd carla
./CarlaUE4.sh

tools/webcam/README.md

@@ -1,49 +1,49 @@
Run openpilot with webcam on PC/laptop
=====================
What's needed:
- Ubuntu 16.04
- Python 3.7.3
- GPU (recommended)
- Two USB webcams, at least 720p and 78 degrees FOV (e.g. Logitech C920/C615)
- [Car harness](https://comma.ai/shop/products/comma-car-harness) w/ black panda (or the outdated grey panda/giraffe combo) to connect to your car
- [Panda paw](https://comma.ai/shop/products/panda-paw) (or USB-A to USB-A cable) to connect panda to your computer
- Tape, Charger, ...
That's it!
## Clone openpilot and install the requirements
```
cd ~
git clone https://github.com/commaai/openpilot.git
```
- Follow [this readme](https://github.com/commaai/openpilot/tree/master/tools) to install the requirements
- Add the line "export PYTHONPATH=$HOME/openpilot" to your ~/.bashrc
- You also need to install tensorflow 2.2 and nvidia drivers: nvidia-xxx/cuda10.0/cudnn7.6.5
- Install [OpenCL Driver](http://registrationcenter-download.intel.com/akdlm/irc_nas/vcp/15532/l_opencl_p_18.1.0.015.tgz)
- (Note: the code assumes platform 0 to be the GPU and platform 1 the CPU when running clinfo; if reversed, change the -1 to -2 in selfdrive/modeld/modeld.cc#L130; helping us refactor this mess is encouraged. A quick way to check the order is sketched after this list)
- Install [OpenCV4](https://www.pyimagesearch.com/2018/08/15/how-to-install-opencv-4-on-ubuntu/) (ignore the Python part)
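
As referenced in the note above, a minimal C sketch for checking the OpenCL platform order (illustrative, not part of this commit; assumes the OpenCL headers and ICD loader are installed):

```
#include <stdio.h>
#include <CL/cl.h>

// Print OpenCL platforms in index order, to verify that the GPU
// platform really is index 0 as modeld assumes.
int main(void) {
  cl_platform_id platforms[8];
  cl_uint n = 0;
  clGetPlatformIDs(8, platforms, &n);
  for (cl_uint i = 0; i < n; i++) {
    char name[256];
    clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
    printf("%u: %s\n", i, name);
  }
  return 0;
}
```

Build with something like `gcc check_cl.c -lOpenCL` and compare against the clinfo output.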
## Build openpilot for webcam
```
cd ~/openpilot
```
- check out selfdrive/camerad/cameras/camera_webcam.cc lines 72 and 146 before building if any camera is upside down
```
scons use_webcam=1
touch prebuilt
```
## Connect the hardware
- Connect the road-facing camera first, then the driver-facing camera
- (default indexes are 1 and 2; can be modified in selfdrive/camerad/cameras/camera_webcam.cc)
- Connect your computer to panda
## GO
```
cd ~/openpilot/tools/webcam
./accept_terms.py # accept the user terms so that thermald can detect the car started
cd ~/openpilot/selfdrive
PASSIVE=0 NOSENSOR=1 WEBCAM=1 ./manager.py
```
- Start the car, then the UI should show the road webcam's view
- Adjust and secure the webcams (you can run tools/webcam/front_mount_helper.py to help mount the driver camera)
- Finish calibration and engage!
