Dear Reviewer,
-> First, we would like to give a short overview of all changes; the individual answers follow directly after this text. All additions to the text are highlighted in blue. Words or text passages that the reviewers suggested removing are highlighted in red. As you will see, the abstract and the introduction were completely revised to better highlight the novel contributions. We were able to implement many of the reviewers' suggestions. The transition section was substantially extended to make the method easier to understand. We have also added a detailed description of how and by what means our system is installed in a building, which also leads to a better description of the experimental setup.
We added a completely new section evaluating the activity recognition. Additionally, you will find many smaller changes and additions throughout the paper, as well as further improvements to the writing. In the following, our answers are marked with "->".
Overall:
From my point of view, it is not clear that this paper represents a novel contribution. Almost all the foundations and the formulation have been presented in your previous papers (IPIN 2016, FUSION 2016, ISPRS International Journal of Geo-Information 2017). Only the KDL optimization and the trials in a new environment are novel, and they obtained better results due to the floor adaptation of the WAF model.
-> To clarify the contributions of this work, we revised several major sections of the paper, especially the abstract and the introduction. To give a quick overview, we added a listing starting at line 89. We hope this helps to provide a better overview.
Detailed comments:
line 187: Is the smoothing Monte Carlo filter the same as the one you proposed at the IPIN 2016 conference? It is not clear why you are referring to it as Condensation. Are you using a concrete Condensation implementation (OpenCV, Matlab, ...), and is this the explanation?
The Condensation filter is used in the field of visual tracking because the researchers do not have access to the agent information. In your case you have access to the phone sensors; therefore, the concept is a Monte Carlo Localization with transition detection based on step and orientation detection.
Using MCL, you do not need to mix observations and actions in the same concept (Eq. 3); you should divide them into observations and transitions.
From my point of view, the formulation of the transition model T should be tackled using actions (steps), and the observations (s_wifi) should be used as an observation model V (described as "Evaluation" in Section 5).
-> Within this work we do not incorporate a smoothing step, as smoothing requires high computational power. The basic sequential Monte Carlo scheme (particle filter) is, however, the same as in all our previous works. As we have a pattern recognition background, we often refer to the CONDENSATION algorithm instead of equivalent methods such as the bootstrap particle filter. Although CONDENSATION is often used in the field of visual tracking, it is based on the same assumptions as the bootstrap filter and MCL, and is thus equivalent. All three assume that the proposal distribution (sometimes called the importance distribution) of the general sampling importance resampling (SIR) particle filter is the state transition p(q_t | q_{t-1}). In order to incorporate previous observations, we extend this transition to p(q_t | q_{t-1}, o_{t-1}). The validity of this statement can be easily proven, as sketched below.
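A minimal sketch of the argument, written in LaTeX for this letter; the symbols (\pi for the proposal distribution, w for the importance weights) are chosen here and do not necessarily match the paper's notation. The general SIR weight update is

    w_t^{(i)} \propto w_{t-1}^{(i)} \,
        \frac{ p(o_t \mid q_t^{(i)}) \; p(q_t^{(i)} \mid q_{t-1}^{(i)}, o_{t-1}) }
             { \pi(q_t^{(i)} \mid q_{t-1}^{(i)}, o_{1:t}) } .

Choosing the proposal \pi equal to the extended transition p(q_t \mid q_{t-1}, o_{t-1}) cancels the transition term in the numerator, so the update reduces to w_t^{(i)} \propto w_{t-1}^{(i)} \, p(o_t \mid q_t^{(i)}), i.e. the familiar bootstrap/CONDENSATION/MCL weighting by the observation likelihood alone.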
We are not sure what exactly you mean by "the researchers do not have access to the agent information". In the end, sensor data as well as preprocessed data (e.g., the detected activity) can be seen as part of the observation. The MCL you are referring to is strongly tied to control theory and thus to robot motion. We, however, maintain a more probabilistic view of particle filtering, which allows for a more general formulation. Of course, it would also be possible to introduce a control (action) command, as, for example, Sebastian Thrun does in "Probabilistic Robotics". However, this results in the same probability density, as the control (action) would carry the same data as the observation mentioned earlier. We highly appreciate your suggestion of reformulating the particle filter, but we refrain from pursuing it. Finally, we do not use any out-of-the-box implementation from OpenCV or MATLAB. As stated at the beginning of our experiments, we developed a C++ backend for localization, running on both desktop and smartphone.
line 226: What is z_t? Is it an observation o_t? The nomenclature should be unified throughout the whole paper.
-> We completely changed the formulation within the transition chapter. z_t is the z-component at time t, which belongs to the state q_t. It is given by the triangles of the navigation mesh, which represent the building's floors (see Eq. 4). z_t stays constant as long as the floor is flat and only changes when stairs are involved.
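To illustrate how a z-component can follow from a navigation mesh, here is a minimal C++ sketch; it is not taken from our backend, and the names (Vec3, Triangle, zFromTriangle) are purely illustrative. It interpolates the height at a 2D position from the vertex heights of the containing mesh triangle:

    struct Vec3 { double x, y, z; };
    struct Triangle { Vec3 a, b, c; };

    // Barycentric weights of the point (px, py) with respect to the
    // triangle's 2D projection; the vertex heights then yield z_t.
    double zFromTriangle(const Triangle& t, double px, double py) {
        const double d  = (t.b.y - t.c.y) * (t.a.x - t.c.x)
                        + (t.c.x - t.b.x) * (t.a.y - t.c.y);
        const double wA = ((t.b.y - t.c.y) * (px - t.c.x)
                         + (t.c.x - t.b.x) * (py - t.c.y)) / d;
        const double wB = ((t.c.y - t.a.y) * (px - t.c.x)
                         + (t.a.x - t.c.x) * (py - t.c.y)) / d;
        const double wC = 1.0 - wA - wB;
        // On a flat floor all three vertex heights are equal, so z stays
        // constant; on stairs the interpolation yields the changing height.
        return wA * t.a.z + wB * t.b.z + wC * t.c.z;
    }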
line 319: Are these thresholds suitable for all pedestrians? Have you tried them with different actors and behaviors?
-> Thank you very much for this suggestion. Based on it, we have added an evaluation of the activity recognition to the experiments; see Chapter 7.4. We used the same measurement series (walks 0-3) as before. They were recorded by 4 different testers using 3 different devices.
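For context, thresholds of this kind typically act as simple cut-offs on derived quantities such as step frequency or relative pressure change. The following C++ sketch only illustrates that general idea; it is not the classifier used in the paper, and all names and threshold values are hypothetical:

    #include <cmath>

    enum class Activity { Standing, Walking, Stairs };

    // Hypothetical cut-offs; real values would have to be tuned for
    // different pedestrians and devices.
    Activity classify(double stepsPerSecond, double pressureDeltaHPa) {
        const double kStepThreshold     = 0.5;  // steps/s: below -> standing
        const double kPressureThreshold = 0.1;  // hPa: larger change -> stairs
        if (stepsPerSecond < kStepThreshold)
            return Activity::Standing;
        if (std::fabs(pressureDeltaHPa) > kPressureThreshold)
            return Activity::Stairs;
        return Activity::Walking;
    }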
line 410: Why 10.000 samples in the building? Should this depend on the building size, Wi-Fi noise, etc.?
-> Thanks for pointing that out; this was misformulated and poorly explained. We revised the paragraph accordingly. Please see lines 502 to 519.
line 480: In line 410 you propose 10.000 particles, but in the experiments you use 1.000. Why?
-> As mentioned before, this was a bad formulation. In addition, the 1.000 particles is a typo; we actually deployed 5.000 particles. Thanks again.
line 508: The results shown in Figure 3 are not clearly presented. From my point of view, the proposal seems to be worse than the previous one; there are many more outliers (blue color).
-> We are sorry, but did you perhaps look at the wrong side of the figure? On the left, our newly proposed method is presented, and on the right the older graph-based solution can be seen. In our opinion, it is clearly visible that the mesh produces more continuous results in both a) and c), as well as a far better estimation after 180 steps (c). In contrast, the discrete structure of the graph is easy to spot, as the particles appear to stand in line and file. We added some hints to the description, as this might clarify the figure.
line 512: typo "prober"
-> Fixed. We also improved the overall writing and spelling throughout the paper.
Figure 5: The connections between the ground floor and the first floor are not clear; is there a typo, or are the figures misplaced?
-> Thank you very much for pointing that out. As the building has undergone various construction measures since the 13th century, its architecture is rather hard to visualize. We revised the figure (now Figure 8) by adding numbers to the stairs involved. We hope this gives a better understanding of the building's respective floors.
Figure 6: You use the expression "Monte Carlo"; are you referring to Condensation?
-> Yes, we are referring to a single run of the CONDENSATION particle filter. We fixed it accordingly.
Results section: The results and comments are ad hoc for this environment, and it is not demonstrated that they could be applied in a more general context.
-> We incorporated this suggestion into the conclusion section of this work. Please see lines 959 to 961. However, we have decided not to discuss the general usage of this approach in the experiments, since this work explicitly deals with a historical building.
-> Again, thank you very much for your time and the detailed suggestions. They further improved this work.