Below is the error I faced. The training command I used is:
!python3 /content/model_main.py \
    --pipeline_config_path=/content/ssd_mobilenet_v2_coco.config \
    --model_dir=training/
error:
Traceback (most recent call last):
  File "/content/model_main.py", line 109, in <module>
    tf.app.run()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/content/model_main.py", line 71, in main
    FLAGS.sample_1_of_n_eval_on_train_examples))
  File "/content/drive/My Drive/object_detection/models/research/object_detection/model_lib.py", line 617, in create_estimator_and_inputs
    pipeline_config_path, config_override=config_override)
  File "/content/drive/My Drive/object_detection/models/research/object_detection/utils/config_util.py", line 104, in get_configs_from_pipeline_file
    text_format.Merge(proto_str, pipeline_config)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/text_format.py", line 693, in Merge
    allow_unknown_field=allow_unknown_field)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/text_format.py", line 760, in MergeLines
    return parser.MergeLines(lines, message)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/text_format.py", line 785, in MergeLines
    self._ParseOrMerge(lines, message)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/text_format.py", line 807, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/text_format.py", line 875, in _MergeField
    name = tokenizer.ConsumeIdentifierOrNumber()
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/text_format.py", line 1343, in ConsumeIdentifierOrNumber
    raise self.ParseError('Expected identifier or number, got %s.' % result)
google.protobuf.text_format.ParseError: 2:1 : '%%writefile {model_pipline}': Expected identifier or number, got %.
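From the last line of the traceback, it looks like the parser hits the literal text '%%writefile {model_pipline}' at line 2 of the config file it reads. To double-check what actually got written to disk, I was going to print the first few lines of the file from another Colab cell with something like this (just a sanity-check sketch; the path is the one I pass to --pipeline_config_path):

# Sanity check: print the first lines of the config that model_main.py reads,
# to see whether the %%writefile magic line ended up inside the file itself.
config_path = '/content/ssd_mobilenet_v2_coco.config'  # same as --pipeline_config_path
with open(config_path) as f:
    for _ in range(5):
        print(f.readline(), end='')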
My config file (written from a Colab cell with %%writefile):
%%writefile object_detection/samples/configs/ssd_mobilenet_v2_coco.config
model {
  ssd {
    num_classes: 14 # number of classes to be detected
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
      }
    }
    # all images will be resized to the below W x H.
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
    box_predictor {
      convolutional_box_predictor {
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        #use_dropout: false
        use_dropout: true # to counter overfitting; you can also try tweaking its probability below
        dropout_keep_probability: 0.8
        kernel_size: 1
        box_code_size: 4
        apply_sigmoid_to_scores: false
        conv_hyperparams {
          activation: RELU_6,
          regularizer {
            l2_regularizer {
              # weight: 0.00004
              weight: 0.001 # higher regularization to counter overfitting
            }
          }
          initializer {
            truncated_normal_initializer {
              stddev: 0.03
              mean: 0.0
            }
          }
          batch_norm {
            train: true,
            scale: true,
            center: true,
            decay: 0.9997,
            epsilon: 0.001,
          }
        }
      }
    }
    feature_extractor {
      type: 'ssd_mobilenet_v2'
      min_depth: 16
      depth_multiplier: 1.0
      conv_hyperparams {
        activation: RELU_6,
        regularizer {
          l2_regularizer {
            # weight: 0.00004
            weight: 0.001 # higher regularization to counter overfitting
          }
        }
        initializer {
          truncated_normal_initializer {
            stddev: 0.03
            mean: 0.0
          }
        }
        batch_norm {
          train: true,
          scale: true,
          center: true,
          decay: 0.9997,
          epsilon: 0.001,
        }
      }
    }
    loss {
      classification_loss {
        weighted_sigmoid {
        }
      }
      localization_loss {
        weighted_smooth_l1 {
        }
      }
      hard_example_miner {
        num_hard_examples: 3000
        iou_threshold: 0.95
        loss_type: CLASSIFICATION
        max_negatives_per_positive: 3
        min_negatives_per_image: 3
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    normalize_loss_by_num_matches: true
    post_processing {
      batch_non_max_suppression {
        score_threshold: 1e-8
        iou_threshold: 0.6
        # adjust this to the max number of objects per class.
        # e.g., in my case, I have one pistol in most of the images,
        # but there are some images with more than one, up to 16.
        max_detections_per_class: 100
        # max number of detections among all classes. I have 1 class only, so
        max_total_detections: 300
      }
      score_converter: SIGMOID
    }
  }
}

train_config: {
  batch_size: 16 # training batch size
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.003
          decay_steps: 800720
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  # the path to the pretrained model.
  fine_tune_checkpoint: "/content/drive/My Drive/object_detection/models/research/pretrained_model/model.ckpt"
  fine_tune_checkpoint_type: "detection"
  # Note: The below line limits the training process to 200K steps, which we
  # empirically found to be sufficient enough to train the pets dataset. This
  # effectively bypasses the learning rate schedule (the learning rate will
  # never decay). Remove the below line to train indefinitely.
  num_steps: 200000
  # data augmentation is done here; you can remove or add more.
  # It will help the model generalize, but the training time will increase
  # greatly by using more data augmentation.
  # Check this link to add more image augmentation:
  # https://github.com/tensorflow/models/blob/master/research/object_detection/protos/preprocessor.proto
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    random_adjust_contrast {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
}

train_input_reader: {
  tf_record_input_reader {
    # path to the training TFRecord
    input_path: "/content/drive/My Drive/object_detection/data/train_labels.record"
  }
  # path to the label map
  label_map_path: "/content/drive/My Drive/object_detection/data/label_map.pbtxt"
}

eval_config: {
  # the number of images in your "testing" data (was 600 but we removed one above :) )
  num_examples: 6
  # the number of images to display in Tensorboard while training
  num_visualizations: 20
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  #max_evals: 10
}

eval_input_reader: {
  tf_record_input_reader {
    # path to the testing TFRecord
    input_path: "/content/drive/My Drive/object_detection/data/test_labels.record"
  }
  # path to the label map
  label_map_path: "/content/drive/My Drive/object_detection/data/label_map.pbtxt"
  shuffle: false
  num_readers: 1
}
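To understand the error message itself, I also tried to reproduce the parse failure in isolation. This is only a minimal sketch, assuming the object_detection protos are importable in my Colab session; it feeds text_format.Merge (the same call made by config_util.py in the traceback) a config whose second line is the magic line quoted in the error:

from google.protobuf import text_format
from object_detection.protos import pipeline_pb2

pipeline = pipeline_pb2.TrainEvalPipelineConfig()
# A line starting with '%' is not valid protobuf text format, so this raises
# a ParseError ('Expected identifier or number, got %'), like the traceback above.
text_format.Merge('model {}\n%%writefile {model_pipline}\n', pipeline)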
Someone please help me! I get this error when running the training command and I don't know where it is going wrong. Is it a problem with the quotation marks? How should I fix it? model_main.py and the config file are uploaded in Colab; I also faced the problem when mounting Google Drive, but that is temporarily solved for now. I am a newbie in this field, thank you in advance!