Volume 16 Number 4
August 2019
Bing-Xing Wu, Suat Utku Ay and Ahmed Abdel-Rahim. Pedestrian Height Estimation and 3D Reconstruction Using Pixel-resolution Mapping Method Without Special Patterns. International Journal of Automation and Computing, vol. 16, no. 4, pp. 449-461, 2019. doi: 10.1007/s11633-019-1170-2

Pedestrian Height Estimation and 3D Reconstruction Using Pixel-resolution Mapping Method Without Special Patterns

Author Biography:
  • Bing-Xing Wu received the B. Eng. degree in electrical engineering from Shanghai University of Electric Power, China in 2012, and the M. Eng. and Ph. D. degrees in electrical engineering from the University of Idaho, USA in 2014 and 2018, respectively. He is currently a lecturer and electron microscopy specialist in the Department of Electrical and Computer Engineering, University of Idaho, USA. His research interests include CMOS image sensor design, digital image processing, machine vision, and vision-based traffic detection. E-mail: bingxing@uidaho.edu; wu1130@vandals.uidaho.edu ORCID iD: 0000-0001-8738-2344

    Suat Utku Ay received the M. Sc. and Ph. D. degrees in electrical engineering from the University of Southern California (USC), USA in 1997 and 2005, respectively. His Ph. D. thesis involved designing large-format scientific CMOS image sensors for space applications. From September 1997 to July 2007, he worked in industry as a VLSI design engineer specializing in mixed-signal very large scale integration (VLSI) design and CMOS image sensors. He was with Photobit Corporation, which became Micron Technology Inc.'s Imaging Division in 2001, Aptina Imaging in 2008, and ON Semiconductor in 2015. He joined the Department of Electrical and Computer Engineering, University of Idaho, USA, in August 2007 as an assistant professor and became an associate professor in 2013. He is a member of the IEEE Solid-State Circuits, IEEE Circuits and Systems, and IEEE Electron Devices societies, and of the Society of Photo-Optical Instrumentation Engineers (SPIE). His research interests include VLSI analog and mixed-signal integrated circuit (IC) design techniques for a new class of baseband and radio frequency (RF) circuits and systems, intelligent sensor systems with emphasis on reconfigurable, secure, flexible electro-optical circuits and devices, and self-sustained smart CMOS sensors for remote wireless networks and systems. E-mail: suatay@uidaho.edu (Corresponding author) ORCID iD: 0000-0001-7640-4253

    Ahmed Abdel-Rahim is a professor in the Civil & Environmental Engineering Department at the University of Idaho, USA, and the director of the University's National Institute for Advanced Transportation Technology (NIATT), USA. He earned his doctorate in transportation engineering from Michigan State University, USA in 1998. He has published more than 50 refereed publications and has over 20 years of experience managing research projects. He is the lead principal investigator and director of the University of Idaho's University Transportation Center, USA, which focuses on Transportation for Livability by Integrating the Vehicle and the Environment (TranLIVE). His research interests include connected vehicle applications, modeling the environmental impact of vehicle operations, intelligent transportation systems (ITS), traffic operations and control technology, traffic modeling, security and survivability of transportation networks, and highway traffic safety. E-mail: ahmed@uidaho.edu ORCID iD: 0000-0001-9756-554X

  • Received: 2018-09-29
  • Accepted: 2019-01-01
  • Published Online: 2019-03-27
  • Abstract: Extracting three-dimensional (3D) information, including the location and height of a pedestrian, is important for vision-based intelligent traffic monitoring systems. This paper addresses the relationship between a pixel's actual size and its spatial resolution through a new method named pixel-resolution mapping (P-RM). The proposed P-RM method derives the equations for the pixels' spatial resolutions (XY-direction) and the object's height (Z-direction) in the real world, while introducing new tilt angle and mounting height calibration methods that do not require special calibration patterns placed in the real world. Both controlled laboratory and real-world experiments were performed and reported. Tests of 3D mensuration using the proposed P-RM method showed better than 98.7% overall accuracy in laboratory environments and better than 96% accuracy in real-world pedestrian height estimation. 3D reconstructed images of the measured points were also obtained with the proposed P-RM method, showing that it provides a general algorithm for 3D information extraction.
    [1] C. Setchell, E. L. Dagless. Vision-based road-traffic monitoring sensor. IEE Proceedings – Vision, Image and Signal Processing, vol. 148, no. 1, pp. 78–84, 2001. DOI: 10.1049/ip-vis:20010077.
    [2] C. C. C. Pang, S. S. Xie, S. C. Wong, K. Choi. Generalized camera calibration model for trapezoidal patterns on the road. Optical Engineering, vol. 52, no. 1, Article number 017006, 2013.
    [3] A. Criminisi, I. Reid, A. Zisserman. Single view metrology. International Journal of Computer Vision, vol. 40, no. 2, pp. 123–148, 2000. DOI: 10.1023/A:1026598000963.
    [4] Z. Zhang. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000. DOI: 10.1109/34.888718.
    [5] P. K. Sinha. Image Acquisition and Preprocessing for Machine Vision Systems, Bellingham, USA: Society of Photo-Optical Instrumentation Engineers, 2012.
    [6] L. Lee, R. Romano, G. Stein. Monitoring activities from multiple video streams: Establishing a common coordinate frame. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 758–767, 2000. DOI: 10.1109/34.868678.
    [7] S. Khan, M. Shah. Consistent labeling of tracked objects in multiple cameras with overlapping fields of view. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 10, pp. 1355–1360, 2003. DOI: 10.1109/TPAMI.2003.1233912.
    [8] G. S. K. Fung, N. H. C. Yung, G. K. H. Pang. Camera calibration from road lane markings. Optical Engineering, vol. 42, no. 10, pp. 2967–2977, 2003. DOI: 10.1117/1.1606458.
    [9] J. Shao, S. K. Zhou, R. Chellappa. Robust height estimation of moving objects from uncalibrated videos. IEEE Transactions on Image Processing, vol. 19, no. 8, pp. 2221–2232, 2010. DOI: 10.1109/TIP.2010.2046368.
    [10] J. C. Liu, R. T. Collins, Y. X. Liu. Surveillance camera autocalibration based on pedestrian height distributions. In Proceedings of the British Machine Vision Conference, Dundee, UK, 2011.
    [11] S. W. Park, T. E. Kim, J. S. Choi. Real-time estimation of trajectories and heights of pedestrians. In Proceedings of International Conference on Information Science and Applications, IEEE, Jeju Island, South Korea, 2011.
    [12] D. Xu, H. W. Wang, Y. F. Li, M. Tan. A new calibration method for an inertial and visual sensing system. International Journal of Automation and Computing, vol. 9, no. 3, pp. 299–305, 2012. DOI: 10.1007/s11633-012-0648-y.
    [13] H. J. Song, Y. Z. Chen, Y. Y. Gao. Velocity calculation by automatic camera calibration based on homogenous fog weather condition. International Journal of Automation and Computing, vol. 10, no. 2, pp. 143–156, 2013. DOI: 10.1007/s11633-013-0707-z.
    [14] F. A. Andaló, G. Taubin, S. Goldenstein. Efficient height measurements in single images based on the detection of vanishing points. Computer Vision and Image Understanding, vol. 138, no. 2, pp. 51–60, 2015. DOI: 10.1016/j.cviu.2015.03.017.
    [15] J. Jung, H. Kim, I. Yoon, J. Paik. Human height analysis using multiple uncalibrated cameras. In Proceedings of IEEE International Conference on Consumer Electronics, IEEE, Las Vegas, USA, pp. 213–214, 2016.
    [16] J. Jung, I. Yoon, S. Lee, J. Paik. Object detection and tracking-based camera calibration for normalized human height estimation. Journal of Sensors, vol. 2016, Article number 8347841, 2016.
    [17] L. Y. Xu, Z. Q. Cao, P. Zhao, C. Zhou. A new monocular vision measurement method to estimate 3D positions of objects on floor. International Journal of Automation and Computing, vol. 14, no. 2, pp. 159–168, 2017. DOI: 10.1007/s11633-016-1047-6.
    [18] J. W. Li, W. Gao, Y. H. Wu. Elaborate scene reconstruction with a consumer depth camera. International Journal of Automation and Computing, vol. 15, no. 4, pp. 443–453, 2018. DOI: 10.1007/s11633-018-1114-2.
    [19] B. X. Wu, S. U. Ay, A. Abdel-Rahim. Trapezoid pixel array complementary metal oxide semiconductor image sensor with simplified mapping method for traffic monitoring applications. Optical Engineering, vol. 57, no. 9, Article number 093106, 2018.
    [20] B. X. Wu, A. Abdel-Rahim, S. U. Ay. A trapezoid CMOS image sensor with 2% detection accuracy for traffic monitoring. In Proceedings of the 60th International Midwest Symposium on Circuits and Systems, IEEE, Boston, USA, pp. 1154–1158, 2017.
    [21] F. Rameau, A. Habed, C. Demonceaux, D. Sidibé, D. Fofi. Self-calibration of a PTZ camera using new LMI constraints. In Proceedings of the 11th Asian Conference on Computer Vision, Springer, Daejeon, Korea, pp. 297–308, 2012.
    [22] Y. T. Li, J. Zhang, W. W. Hu, J. W. Tian. Method for pan-tilt camera calibration using single control point. Journal of the Optical Society of America A, vol. 32, no. 1, pp. 156–163, 2015. DOI: 10.1364/JOSAA.32.000156.
    [23] J. Nakamura. Image Sensors and Signal Processing for Digital Still Cameras, Boca Raton, USA: Taylor & Francis Group, 2006.
    [24] L. A. Klein, M. K. Mills, D. R. P. Gibson. Traffic Detector Handbook, Volume II, 3rd ed., FHWA-HRT-06-139, USDOT, Washington, USA, 2006.
    [25] A. Elgammal, R. Duraiswami, D. Harwood, L. S. Davis. Background and foreground modeling using nonparametric kernel density estimation for visual surveillance. Proceedings of the IEEE, vol. 90, no. 7, pp. 1151–1163, 2002. DOI: 10.1109/JPROC.2002.801448.
    [26] N. Kanopoulos, N. Vasanthavada, R. L. Baker. Design of an image edge detection filter using the Sobel operator. IEEE Journal of Solid-State Circuits, vol. 23, no. 2, pp. 358–367, 1988. DOI: 10.1109/4.996.
    • Vision systems are widely used in security, surveillance, and traffic detection applications because they are easy to install and maintain[1, 2]. These systems are often required to extract real-time parameters from the observed objects. For instance, visual pedestrian surveillance needs to retrieve human features such as location, moving speed, and height in real time[3]. Accordingly, it is essential to determine the relationship between the two-dimensional (2D) image captured by the camera and the three-dimensional (3D) features of a human in the camera's field of view (FOV), so that the required pedestrian information can be extracted. However, existing methods require some form of camera calibration.

      Camera calibration determines the intrinsic and extrinsic parameters of the camera system, including the geometric and optical characteristics of the camera, so that the captured scene can be reconstructed in 3D[3-5]. Various camera calibration approaches have been proposed[6-18]. These can be classified into two categories: single-view methods based on vanishing points/lines, and multiple-camera methods.

      Vanishing points and vanishing lines (planes), which encode the direction of lines and the orientation of planes, were extracted in [8-10, 14-16] to recover the affine properties of the perspective structure and thereby relate 2D image coordinates to 3D world coordinates. The length of straight lines could then be estimated from the known height of a reference object in the scene. Although these methods are mature, an image with few geometric cues may not contain enough information to generate vanishing points or vanishing lines. In [13], the flat background extracted from video frames was used as a vanishing plane for calibration, with the camera height known in advance; the calibration method was essentially Fung's model[8]. In [11], a rectangular marker of known size on the reference plane was used instead of generating vanishing points or lines, and a back-projection method was introduced to obtain the height information. In [17], a chessboard was required for calibration before 3D position estimation with a monocular vision system could begin. These techniques require a specific pattern in the 3D world, which is not always practical for real-world applications.

      Multiple-camera methods generate 3D reconstructions from several aligned cameras[6, 7] or depth cameras[12, 18] that observe a single planar coordinate frame, using approximate values of the intrinsic camera parameters. They cover a larger surveillance area than a single-view camera, but require extra computation to calibrate all camera coordinates with respect to a common reference plane. Typically the ground plane is used as the reference, and the resulting calibration is nonlinear and sensitive to noise and environmental variations. Furthermore, these methods require iterative computational procedures, expensive moving-camera systems, and tedious setup of several cameras.

      In [19, 20], a unique low-resolution complementary metal oxide semiconductor (CMOS) image sensor with a trapezoid pixel array (TZOID) was proposed. It was designed according to the pixel-resolution (spatial resolution) mapping (P-RM) method for vehicle position and speed detection. This method relates a pixel's size directly to its spatial resolution and was verified with measurements of vehicle location, 2D size, and speed.

      In this paper, a height estimation method based on the P-RM method for a monocular vision system is proposed as an alternative way of reconstructing the scene without the disadvantages of the existing methods. It assumes that the target object, a human, is walking or standing on the ground plane and is perpendicular to the ground. In addition, three parameters must be known for the pixel-resolution mapping (P-RM) method to work: the size and the number of pixels in the image sensor, and the focal length of the lens used in the camera system. Using these parameters, the proposed P-RM method relates the sensor's 2D pixels directly to their spatial resolution in the 3D world, which gives the cue for estimating a pedestrian's height. Feature extraction is straightforward and requires no iterative procedures, so the method is computationally inexpensive.

      The rest of the paper is organized as follows. Section 2 describes the camera model on which the proposed method is developed and explains the basic assumptions and geometry used. Section 3 explains the proposed pixel-resolution mapping (P-RM) method for location and height estimation. Testing procedures and measurement results are presented in Section 4. Conclusions and future work are given in Section 5. Detailed mathematical derivations are included in the Appendix.

    • A basic camera model, Fung's model, which is based on the pin-hole model, was illustrated in [8]. The camera is mounted at a certain height, defined as the perpendicular distance from the center of the camera lens to the XY-plane, and looks down with a certain tilt angle (ϕ), pan angle (θ), and swing angle (ψ); the camera lens has a focal length f. Pan-tilt cameras, which can rotate around the Xc-axis for tilting and the Zc-axis for panning with a fixed swing angle of 0°, are commonly used in surveillance[21, 22]. Consequently, the proposed P-RM method assumes a swing angle of 0° and that the center of the imager lies on the optical axis. Under these preconditions, a modified camera model including the pixel array is obtained as shown in Fig. 1. The origin of the world coordinates is the projection of the optical center onto the world plane (XW-YW). The YW-axis coincides with the projection of the optical axis onto the world plane. The optical center, Oc, is at world coordinates (0, 0, H), where H is the height of the camera. Unlike the traditional pin-hole camera model, the modified model directly expresses the relationship between the size of each pixel and its spatial resolution.

      Figure 1.  Modified camera model for P-RM method

      Fig. 1 shows that the detection region of the camera is fixed once the size of the image sensor, the focal length of the lens, and the mounting position are determined. The distance from the origin to point A is the minimum detection distance, dymin, and the distance from the origin to point B is the maximum detection distance, dymax. The distance between A and B gives the detection range of the image sensor. The image sensor is usually a 2D rectangular array of square pixels (M pixels per row times N pixels per column) designed and fabricated by the integrated circuit (IC) industry[23]. Because the sensor plane is tilted in the camera, each pixel has its own spatial resolution in both the x and y directions. The resolution of a pixel may vary from row to row or from column to column, depending on the mounting orientation, lens type, and pixel size. Indeed, the parameters needed for pedestrian monitoring, such as the location, height, and moving speed of a human, can be extracted from the captured image if each pixel's resolution is known. Furthermore, the connection between an image captured by a mounted camera and the real-world scene is established once each pixel's actual resolution in the scene is known. As a result, the conventional 3D reconstruction problem of monocular vision can be transformed into determining each pixel's resolution, which leads to the development of the P-RM method.

    • The proposed P-RM method, which reconstructs the image for mensuration in pedestrian monitoring applications, works by identifying each pixel's resolution on the image plane. In other words, the actual size of objects in the 3D world can be recovered from each pixel's 2D spatial resolution. A diagrammatic procedure of the P-RM method is shown in Fig. 2, which illustrates the concept intuitively. The object in the field of view is captured by the camera, producing a square-shaped raw image as shown in the middle of Fig. 2. The distortion caused by the 3D perspective projection of the original scene is then removed using the resolution of each pixel. Consequently, knowing each pixel's resolution is central to the P-RM method: it provides the cue for location and speed detection and leads to height estimation for a person.

      Figure 2.  Diagrammatic procedure to reconstruct an image by P-RM method

    • The resolution of each pixel consists of two parts: the resolution in the row (Xi) direction and the resolution in the column (Yi) direction, denoted Rx and Ry, respectively. Before explaining Rx and Ry as shown in Figs. 3 and 4, the following conditions are assumed. The camera is mounted at height H and tilted by ϕ. The camera lens has focal length f. The pixel size is lpy in the column direction and lpx in the row direction. The size of the image sensor is Lix by Liy, i.e., the total length of a row by the total length of a column. The array contains N rows and M columns of pixels. The projection of the optical center onto the image sensor plane is assumed to be at the geometric center of the pixel array, and lens distortion is ignored.

      Figure 3.  Side view of modified camera model for determining pixel resolution in Yi direction, Ry

      Figure 4.  Geometry for modeling pixel resolution in Xi direction, Rx

    • The side-view geometry of the modified camera model is shown in Fig. 3. It illustrates the model of the pixel resolution in the Yi direction, Ry, together with the achievable maximum and minimum detection distances. Point A is the closest and point B the furthest point that the sensor can view. Pixels at the bottom of the pixel array [row (N–1)] observe the furthest area (reaching up to point B), dymax, while the closest region (point A), dymin, is observed by the pixels at the top of the array [row (0)]. The distance between the furthest and closest detection lines is the detection range, dyrange, which is determined by (1).

      $\begin{split} d{y_{range}} =\; & d{y_{\max }} - d{y_{\min }} = \\ &\left( {\tan \left( {\varPhi + {\rm{ta{n}}^{ - 1}}\left( {\displaystyle\frac{{{l_{py}} \times N}}{{2f}}} \right)} \right) - } \right.\\ & \left. {\tan \left( {\varPhi - {\rm{ta{n}}^{ - 1}}\left( {\displaystyle\frac{{{l_{py}} \times N}}{{2f}}} \right)} \right)} \right) \times H. \end{split}$

      (1)

      The resolution in the Yi direction of a pixel at row(n), Ry(n), is determined using a similar approach, but instead of the pixels at the bottom and top of the array, the bottom and top edges of each individual pixel are used. The detailed derivation is given in the Appendix, and the result is given in (2).

    • Fig. 4 shows the geometry for obtaining the resolution in the Xi direction, Rx, which is denoted Rx(n) for the pixels at row(n). The two triangles ΔabOc and ΔABOc are similar. Rx is determined from these two triangles, which give the basic equation for deriving Rx(n) in (3). The details of the derivation are given in the Appendix.

    • The height estimation method is based on the pixel's spatial resolution in the column direction, Ry. The location of the pedestrian is determined first by using (2) after the pedestrian's feet are detected. Under the precondition that the human is perpendicular to the ground plane, the geometry for pedestrian height analysis is shown in Fig. 5; it contains two similar triangles, ΔCOOc and ΔABO, which lead to the expression for the human height, h, given in (4).

      Figure 5.  Geometry for pedestrian height estimation, h

      Equations (2) and (3) give the general expressions for a pixel's spatial resolutions in a camera, and (4) gives the expression for estimating the height directly; all of them can be evaluated if N, lpy, lpx, f, ϕ, and H are known. The first three parameters can be obtained from the commercial image sensor's datasheet, which lists the size and number of pixels in the sensor. The focal length, f, can be determined from the specifications of a commercial off-the-shelf lens. H and ϕ depend on how the camera is mounted and can be measured with a laser altimeter and a digital clinometer; camera systems used for traffic applications are typically mounted on a traffic light pole at 8.84 m (29 ft) above the ground[24]. Furthermore, the tilt angle and mounting height can be calibrated by the P-RM method itself in case ϕ or H is not available.
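
      For readers who prefer code to formulas, the following minimal Python sketch evaluates the ground back-projection dy(n) of (A9), the per-row resolutions Ry(n) and Rx(n) of (2) and (3), and the height expression (4), written here in the form obtained by rearranging (5) for h. It assumes uniform pixels and no lens distortion, as stated above; the function names (dy_ground, ry, rx, estimate_height) are illustrative, not the authors'.

      import math

      def dy_ground(n, H, phi, f, l_py, N):
          """Ground distance dy(n) back-projected from the edge above pixel row n, per (A9).
          Lengths share one unit (e.g., metres); phi is the tilt angle in radians.
          Row indices follow the sensor convention of Fig. 3 (row 0 sees the nearest point)."""
          L_iy = l_py * N                                    # total array length, L_iy = l_py * N
          omega = phi - math.atan(L_iy / (2 * f))            # (A3)
          theta = math.atan(2 * f / L_iy)                    # Theta used in (A9)
          L_n = n * l_py                                     # distance from the array's top edge
          alpha = math.asin(L_n * math.sin(theta) / math.hypot(L_iy / 2 - L_n, f))
          return math.tan(omega + alpha) * H

      def ry(n, H, phi, f, l_py, N):
          """Column-direction resolution R_y(n) of (2), i.e., dy(n+1) - dy(n) per (A8)."""
          return dy_ground(n + 1, H, phi, f, l_py, N) - dy_ground(n, H, phi, f, l_py, N)

      def rx(n, H, phi, f, l_py, l_px, N):
          """Row-direction resolution R_x(n) of (3), via (A11)-(A13) at the pixel centre."""
          L_iy = l_py * N
          dy_mid = dy_ground(n + 0.5, H, phi, f, l_py, N)
          slant = math.hypot(H, dy_mid)                        # dy'(n+1/2), (A12)
          f_eff = math.hypot(f, L_iy / 2 - (n + 0.5) * l_py)   # f'(n+1/2), (A13)
          return slant * l_px / f_eff

      def estimate_height(n_foot, n_head, H, phi, f, l_py, N):
          """Pedestrian height h per (4): n_foot is the row of the foot (bottom) edge,
          n_head the row of the head (top) edge, whose ground back-projection lies
          farther away than the foot (dy_top > dy_bottom)."""
          dy_bottom = dy_ground(n_foot, H, phi, f, l_py, N)
          dy_top = dy_ground(n_head, H, phi, f, l_py, N)
          return H * (dy_top - dy_bottom) / dy_top

      Because H enters dy(n) only as a scale factor, the ratio (dy_top − dy_bottom)/dy_top is independent of H; the calibration procedures described next exploit this property.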

    • The tilt angle, ϕ, may be difficult to determine because it can change with environmental conditions such as time of day, temperature, and vibration due to wind. It is therefore necessary to recalibrate ϕ periodically. The proposed P-RM method achieves this simply by capturing two images, without requiring special patterns or dedicated correction or calibration blocks.

      $\begin{split} {R_{y(n)}} = & \tan \left[ {\varPhi - {\rm{ta{n}}^{ - 1}}\left( {\dfrac{{{l_{py}} \times N}}{{2f}}} \right) + {\rm{si{n}}^{ - 1}}\left( {\dfrac{{(n + 1) \times {l_{py}} \times \sin \left( {{\rm{ta{n}}^{ - 1}}\left( {\dfrac{{2f}}{{{l_{py}} \times N}}} \right)} \right)}}{{\sqrt {{{\left( {\dfrac{{{l_{py}} \times N}}{2} - (n + 1) \times {l_{py}}} \right)}^2} + {f^2}} }}} \right)} \right] \times H -\\ & { \tan \left[ {\varPhi - {\rm{ta{n}}^{ - 1}}\left( {\dfrac{{{l_{py}} \times N}}{{2f}}} \right) + {\rm{si{n}}^{ - 1}}\left( {\dfrac{{n \times {l_{py}} \times \sin \left( {{\rm{ta{n}}^{ - 1}}\left( {\dfrac{{2f}}{{{l_{py}} \times N}}} \right)} \right)}}{{\sqrt {{{\left( {\dfrac{{{l_{py}} \times N}}{2} - n \times {l_{py}}} \right)}^2} + {f^2}} }}} \right)} \right] \times H.} \end{split}$

      (2)

      ${R_{x(n)}} = \dfrac{{\sqrt {1 + {{\left( {\tan \left( {\Phi - {\rm{ta{n}}^{ - 1}}\left( {\dfrac{{{l_{py}} \times N}}{{2f}}} \right) +{\rm{ si{n}}^{ - 1}}\left( {\dfrac{{(n + \dfrac{1}{2}) \times {l_{py}} \times \sin \left( {{\rm{ta{n}}^{ - 1}}\left( {\dfrac{{2f}}{{{l_{py}}\times N}}} \right)} \right)}}{{\sqrt {{{\left( {\dfrac{{{l_{py}} \times N}}{2} - (n + \dfrac{1}{2}) \times {l_{py}}} \right)}^2} + {f^2}} }}} \right)} \right)} \right)}^2}} }}{{\sqrt {{f^2} + {{\left( {\dfrac{{{L_{iy}}}}{2} - (n + \dfrac{1}{2}) \times {l_{py}}} \right)}^2}} }} \times H \times {l_{px}}.$

      (3)

      The best calibration target in a pedestrian monitoring application is a person. The height of a moving person does not change as the person's location changes. For instance, the person in the white T-shirt shown in Fig. 6 can be used as a calibration target: the height h is measured with (4), where dy(B) = dy(BottomEdge) and dy(C) = dy(TopEdge). Consequently, hA and hB, expressed in terms of H and ϕ, can be determined for frame nA and frame nB. Setting hA equal to hB then yields a trigonometric equation with one unknown variable, ϕ (0° < ϕ < 90°), from which the tilt angle is computed.

      Figure 6.  Two images capturing a walking person for camera′s tilting angle calibration
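
      The trigonometric equation in ϕ can be solved numerically, for example by the bracketing search sketched below. Because dy(n) scales linearly with H, the per-frame ratio (dy_top − dy_bottom)/dy_top, and hence the condition hA = hB, depends only on ϕ, so H need not be known at this stage. This is our own illustrative solver (coarse scan plus bisection), not the paper's; dy_ground is the same helper as in the previous sketch and is repeated so the snippet runs on its own.

      import math

      def dy_ground(n, H, phi, f, l_py, N):
          """Ground back-projection of the edge above pixel row n, per (A9) (as above)."""
          L_iy = l_py * N
          omega = phi - math.atan(L_iy / (2 * f))
          theta = math.atan(2 * f / L_iy)
          L_n = n * l_py
          return math.tan(omega + math.asin(L_n * math.sin(theta) /
                                            math.hypot(L_iy / 2 - L_n, f))) * H

      def height_ratio(phi, n_foot, n_head, f, l_py, N):
          """h / H for one frame: (dy_top - dy_bottom) / dy_top, evaluated with H = 1."""
          dy_b = dy_ground(n_foot, 1.0, phi, f, l_py, N)
          dy_t = dy_ground(n_head, 1.0, phi, f, l_py, N)
          return (dy_t - dy_b) / dy_t

      def calibrate_tilt(frame_a, frame_b, f, l_py, N,
                         lo=math.radians(5.0), hi=math.radians(85.0)):
          """Solve h_A(phi) = h_B(phi) for the tilt angle phi (radians).
          frame_a and frame_b are (n_foot, n_head) row pairs of the same person
          captured at two positions; the bracket [lo, hi] should straddle the true tilt."""
          def g(phi):
              return (height_ratio(phi, *frame_a, f, l_py, N) -
                      height_ratio(phi, *frame_b, f, l_py, N))
          steps = 2000                                   # coarse scan for a sign change
          grid = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
          for a, b in zip(grid[:-1], grid[1:]):
              if g(a) * g(b) <= 0:
                  for _ in range(60):                    # bisection refinement
                      m = 0.5 * (a + b)
                      a, b = (a, m) if g(a) * g(m) <= 0 else (m, b)
                  return 0.5 * (a + b)
          raise ValueError("no sign change found; widen the bracket")

      A closed-form solution of the same equation is also possible; the bracketed search simply keeps the sketch short.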

    • The mounting height can be obtained easily from a reference object of known height in the field of view. For instance, a person whose height is known can serve as the reference target for mounting height calibration, which is quite common in pedestrian height estimation applications. The mounting height, H, is derived as follows:

      $H = {h_{ref}} \times \left( {\dfrac{{d{y_{(TopEdge)}}}}{{d{y_{(TopEdge)}} - d{y_{{(BottomEdge)}}}}}} \right) $

      (5)

      where href is the height of the reference object.
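
      A one-line sketch of (5) follows; the dy arguments can be evaluated with any provisional mounting height (e.g., dy_ground with H = 1), because the scale cancels in the ratio. The function name is illustrative.

      def calibrate_mounting_height(h_ref, dy_top, dy_bottom):
          """Mounting height H per (5): h_ref is the reference object's known height,
          dy_top and dy_bottom the ground distances back-projected from its top and
          bottom edges (computed with any provisional H, since the scale cancels)."""
          return h_ref * dy_top / (dy_top - dy_bottom)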

    • The proposed method was tested using standard rectangular image sensors. A Canon EOS 100D (Rebel SL1) camera with an APS-C format (22.3 mm × 14.9 mm) CMOS image sensor and 4.31 μm pixels was used in the experiments, together with a Canon EF 40 mm f/2.8 lens. Measurements were carried out both in a controlled laboratory environment and outdoors.

    • First, tilt angle and mounting height calibration was performed and the resulting errors were determined under indoor laboratory settings. Subsequently, the dimensions of a standard chessboard pattern, which was perpendicular to the ground, were determined by the P-RM method.

    • The tilt angle can be calibrated by capturing a moving object and using (4). The camera was mounted at a height of 296 mm (H = 0.296 m), and the 2.5 M pixel mode (1 920 × 1 280) was used. The tilt angle measured by a clinometer, 40.10°, was taken as the ground-truth value.

      A wooden cube with a height of 53.88 mm was used as the moving object for the tilt angle calibration test; it was moved from P0 to P17 as shown in Fig. 7(a). Any two of these positions can serve as a pair for tilt angle calibration, giving a total of 153 test pairs. The root mean square (RMS) value of the calibrated tilt angle was determined after repeating the measurement for each test pair one hundred times. The experimental results are shown in Fig. 7(b). The standard deviation over all calibrated tilt angle values is 0.232°, with an RMS value of 40.141°, which is 0.041° (0.11% error) larger than the ground-truth value of 40.10°; the maximum error in the tests is –1.47%.

      Figure 7.  Tilt angle calibration testing

    • The height of the wooden cube was used as the reference height, href, and the mounting height, H, was calculated by (5). The 18 test positions (P0 to P17) were also used for the mounting height calibration test. One hundred images were captured at each position and used to obtain the RMS value of the mounting height determined at each test position. The testing results, shown in Fig. 8, illustrate the mounting height calibration accuracy: better than 99.6%, with a maximum error of –0.38% at P0. The standard deviation of the 1 800 calibrated height values is 0.729 mm, with an RMS of 296.08 mm, which is 0.08 mm larger than the true value of 296 mm (0.024% error). The calibration results for the indoor tests are listed in Table 1.

      Figure 8.  Achieved mounting height with related error in mounting height calibration testing

      Result          Tilt angle    Mounting height
      Average value   40.140°       296 mm
      Average error   0.11%         0.024%

      Table 1.  Calibration results for indoor laboratory tests

    • Two wooden cubes were placed on a ground plane covered by a printed chessboard pattern sheet, as shown in Fig. 9(a). The P-RM method was used for feature extraction after substituting the calibrated tilt angle and mounting height values. The pixels' spatial resolutions in the column and row directions (Ry, Rx) on the ground plane were then found by (2) and (3), and are plotted in Fig. 9(c). In Fig. 9(b), a total of 18 end points defining 25 lengths were labeled and measured using edge detection to verify the mensuration accuracy of the proposed method. The RMS value of the computed length and the corresponding error of each test line, obtained from one hundred repeated static images, are listed in Table 2. The results show that the proposed method produced accurate 2D length and 3D height estimates, with mean accuracies of 99.47% and 99.30%, respectively, and an overall accuracy better than 98.7%.

      Figure 9.  Indoor experiment on feature extraction

      Line   End points   Actual length (mm)   Computed length (mm)   Error     Accuracy
      I1     ab           113.5                113.81                 0.27%     99.73%
      I2     bc           28.1                 27.95                  –0.53%    99.47%
      I3     cd           113.0                113.45                 0.40%     99.60%
      I4     da           27.9                 27.96                  0.22%     99.78%
      I5     fe           28.0                 28.35                  0.89%     99.11%
      I6     fg           114.5                114.50                 0.00%     100.00%
      I7     hi           18.9                 19.11                  1.10%     98.90%
      I8     ij           28.6                 28.35                  –0.86%    99.14%
      I9     jk           18.9                 18.67                  –1.35%    98.76%
      I10    kh           28.6                 28.60                  0.02%     99.98%
      I11    fm           111.8                111.27                 –0.47%    99.53%
      I12    rm           130.0                129.16                 –0.65%    99.35%
      I13    mo           64.0                 63.85                  –0.23%    99.77%
      I14    op           170.0                170.69                 0.41%     99.59%
      I15    pq           160.0                161.44                 0.90%     99.10%
      I16    qr           170.0                169.51                 –0.29%    99.71%
      I17    ro           160.0                159.36                 –0.40%    99.60%
      I18    oq           233.45               232.17                 –0.55%    99.45%
      I19    rp           233.45               234.71                 0.54%     99.46%
      h1     af           18.8                 18.61                  –1.02%    98.98%
      h2     de           18.9                 18.65                  –1.30%    98.70%
      h3     bg           18.9                 18.61                  –1.55%    98.45%
      h4     hm           53.8                 53.81                  0.02%     99.98%
      h5     kl           53.9                 53.84                  –0.12%    99.88%
      h6     in           53.7                 53.80                  0.19%     99.81%

      Table 2.  Extracted lengths and heights for indoor test

      The bird's-eye view of the two cubes' edges in Fig. 9(a) is reconstructed in Fig. 9(d), which includes the reconstructed edge points with corner-point coordinates and a restored ground plane/background image with correct dimensions. Additionally, 3D views of the reconstructed scene are shown in Figs. 9(e) and 9(f), including the 3D reconstruction of the camera location, the positions of the two wooden cubes, and the ground plane.
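
      One plausible way to produce such a bird's-eye reconstruction from the P-RM quantities is to map every detected ground-level pixel to world coordinates and plot the result. The snippet below is our own reading of the geometry (longitudinal coordinate from dy(n + 1/2), lateral coordinate from the column offset times Rx(n)), not the authors' code.

      import math

      def pixel_to_ground(n, m, H, phi, f, l_py, l_px, N, M):
          """Map image pixel (row n, column m) to bird's-eye ground coordinates (X, Y).
          Y is dy(n + 1/2) from (A9); X is the signed column offset from the optical
          axis scaled by R_x(n) of (3). Assumes the pixel images a ground-plane point
          (e.g., a foot point or a background/chessboard point)."""
          L_iy = l_py * N
          omega = phi - math.atan(L_iy / (2 * f))
          theta = math.atan(2 * f / L_iy)
          L_n = (n + 0.5) * l_py
          alpha = math.asin(L_n * math.sin(theta) / math.hypot(L_iy / 2 - L_n, f))
          y = math.tan(omega + alpha) * H
          r_x = math.hypot(H, y) * l_px / math.hypot(f, L_iy / 2 - L_n)
          x = (m + 0.5 - M / 2) * r_x
          return x, y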

      Gaussian noise with a standard deviation of 0 to 4 pixels was added to the edge detection measurements to investigate the performance of the P-RM method under adverse conditions such as blurry images, soft-focus regions, and unstable edge detection algorithms.

      Fig. 10 shows the noise test results. The errors of the computed tilt angle and mounting height versus the added noise are plotted in Fig. 10(a): an uncertainty of 4 pixels introduces a maximum error of 2.8% in the tilt angle and 1% in the mounting height. The computed mounting height error versus the computed tilt angle error is shown in Fig. 10(b). The 2D measurement and height estimation errors versus the tilt angle and mounting height calibration errors are given in Figs. 10(c) and 10(d), respectively. The 2D measurement error is proportional to the mounting height error, since the estimated object height is proportional to the camera's mounting height as given in (4). The accuracy of 2D mensuration depends strongly on the tilt angle, whereas the height estimation is less sensitive to the accuracy of the tilt angle calibration.

      Figure 10.  Noise testing results

    • The Canon camera with the 40 mm f/2.8 lens, operating in 2 M pixel mode (1 920 × 1 080), was used in the outdoor experiment. The camera was mounted 8.378 m (true value) above the ground with a 52.50° (true value) tilt angle, viewing a lobby hall. Both the tilt angle and mounting height calibration test and the dimension mensuration (pedestrian height estimation) test were performed.

      Three student volunteers, labeled PA (1.590 m), PB (1.840 m), and PC (1.745 m), walked or stood in the FOV of the camera, as shown in the example frames in Figs. 11(a) and 11(b). The volunteers served as reference targets for calibration and as detection targets for height estimation. The foot edge (bottom edge) and head edge (top edge) of each volunteer were acquired by background subtraction using the kernel density estimation method[25] together with the Sobel edge detection operator[26], as shown in Figs. 11(c) and 11(d); these edges are essential for calibration and for extracting dimension information.

      Figure 11.  Sample frames from outdoor test
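
      A rough OpenCV sketch of this edge-extraction step is shown below. It substitutes OpenCV's built-in MOG2 background subtractor for the kernel density estimator of [25], which OpenCV does not provide directly, and uses the Sobel operator of [26]; the thresholds, kernel sizes, and single-pedestrian simplification are ours.

      import cv2
      import numpy as np

      subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=True)

      def head_foot_rows(frame):
          """Return (top_row, bottom_row): the top-most and bottom-most foreground edge
          rows in the frame (a single-pedestrian simplification). Depending on the
          read-out orientation, these image rows may need to be flipped (n -> N-1-n)
          before being used with the dy(n) model of Fig. 3."""
          fg = subtractor.apply(frame)
          fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]            # drop shadow labels
          fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
          edges = cv2.convertScaleAbs(cv2.Sobel(fg, cv2.CV_16S, 0, 1, ksize=3))  # vertical gradient
          rows, _cols = np.nonzero(edges)
          if rows.size == 0:
              return None
          return int(rows.min()), int(rows.max())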

    • The three volunteers were used in turn as the calibration target and as reference-height objects for tilt angle and mounting height calibration, as described above.

      Twenty pairs of frames were randomly selected for computing the tilt angle with PA, PB, and PC as the reference target, respectively. The average computed tilt angles obtained with PA, PB, and PC, which were subsequently used in the mounting height calibration test, are listed in Table 3. The individual tilt angle results are shown in Fig. 12(a), indicating that the maximum computed error over the whole test is 2.36%, with an average value of 52.83° (0.63% error) and a standard deviation of 0.46°.

      Figure 12.  Calibration test results

      Target       Computed mean ϕ (°)   Actual ϕ (°)   Accuracy   Computed mean H (m)   Actual H (m)   Accuracy
      1  PA        52.768                52.50          99.49%     8.407                 8.378          99.65%
      2  PB        52.805                52.50          99.42%     8.417                 8.378          99.53%
      3  PC        52.913                52.50          99.21%     8.362                 8.378          99.81%
      Overall      52.827                52.50          99.38%     8.395                 8.378          99.79%

      Table 3.  Computed tilt angle, ϕ, and mounting height, H, for outdoor test

      Twenty frames containing the three volunteers were processed for the mounting height calibration test after the tilt angle calibration. The results are shown in Fig. 12(b). The average calibrated mounting height is 8.395 m, which deviates 0.21% from the true value, with a standard deviation of 0.079 m; the maximum error of 2.26% occurs at frame #321.

      The tilt angle and mounting height calibration results are summarized in Table 3, which gives the mean computed values and accuracies for each volunteer as calibration target. The proposed method produced accurate calibration with a mean accuracy of about 99.38% and an overall accuracy greater than 97.64%.

    • The computed values above, 52.83° and 8.395 m, were used as the tilt angle and mounting height for pedestrian height estimation. The three volunteers walked randomly in the FOV while the camera captured frames for height information extraction.

      As shown in Figs. 11(c) and 11(d), the profiles of the volunteers were extracted by detecting their top/head and bottom/foot edges, so that each person's height could be estimated by the proposed P-RM method. Examples of 3D reconstructed images for Figs. 11(a) and 11(b) are shown in Figs. 11(e) and 11(f). Fig. 13 presents the height estimation results for the three volunteers over one hundred different frames, and the summarized results are listed in Table 4, which shows that the overall height estimation accuracy is better than 96%.

      Figure 13.  Pedestrian height estimation results on three volunteers from one hundred different frames

      Volunteer    Actual height (m)   Estimated height mean (m)   Standard deviation (m)   Error mean   Error max   Accuracy
      1  PA        1.590               1.608                       0.022                    1.15%        3.31%       96.69%
      2  PB        1.840               1.852                       0.021                    0.63%        2.98%       97.02%
      3  PC        1.745               1.763                       0.025                    0.77%        3.48%       96.52%

      Table 4.  Height estimation results for outdoor test

      The proposed P-RM method calibrated the mounting information correctly and recovered the 3D information from a 2D image with better than 96% detection accuracy. Note that a smaller pixel size, a larger total number of pixels in the array, and more refined image processing algorithms could yield more accurate detection results and more robust 3D reconstructions.

    • In this work, the relationship between a pixel's physical size and its spatial resolution is formulated through the P-RM method, which models a 2D digital imager and the related 3D scene. The proposed work obtains height information without specific patterns or multiple (or moving) cameras, in addition to acquiring 2D features of the world scene; in other words, 3D information can be recovered properly by the P-RM method. A commercial camera was used as a pan-tilt camera to verify the effectiveness of the P-RM method. Both the tilt angle and the mounting height of the camera were calibrated correctly in laboratory and outdoor experiments. Three-dimensional mensuration tests on objects and persons showed better than 98.7% accuracy in the laboratory and better than 96% accuracy in actual pedestrian height estimation. Furthermore, reconstructed 3D images of the detected points were successfully obtained with the proposed method.

      Additionally, a unique customized CMOS image sensor was fabricated based on the P-RM method, which also provided generalized pixel design equations for a multi-resolution pixel array with less unimportant data, high speed, low power, and a controllable distribution of pixel resolution[19]. The P-RM method also simplified calibration and speed detection for traffic applications (2D features) with better than 97% accuracy, and it can be used efficiently on both standard rectangular arrays and customized arrays, as verified in the experiments[19, 20].

      In conclusion, pedestrian position and height can be estimated correctly using the P-RM method without special patterns, and the method is flexible enough for any 2D monocular digital image sensor. The P-RM method can be embedded in a CMOS image sensor specifically designed for vehicle and pedestrian detection with low computation power and high speed. Consequently, the future work of this research is to integrate the whole P-RM algorithm into a single CMOS image sensor, the next generation of TZOID[19, 20], with an integrated pixel-merging function for smart pedestrian and vehicle detection.

    • According to Fig. 3, dymax and dymin can be obtained from trigonometric relations as shown in (A1) and (A2), respectively, where ω, which depends on the sensor size and focal length, is given by (A3).

      $d{y_{\max }} = \tan (\omega + {\varphi _{\max }}) \times H \tag{A1}$

      (A1)

      $d{y_{\min }} = \tan (\omega ) \times H \tag{A2}$

      (A2)

      $\omega = \phi - \arctan \left(\dfrac{{\dfrac{{L_{iy}}}{2}}}{f}\right)\tag{A3}.$

      (A3)

      The parameter Liy in (A3) is the total length of the pixel array, which, for an array of uniform pixel size, equals the pixel length times the total number of rows (Liy = lpy × N).

      In (A1), φmax can be derived through the following steps:

      $\dfrac{{\sin {\varphi _{\max }}}}{{{L_{iy}}}} = \dfrac{{\sin \varTheta }}{{\sqrt {{{\left( {\dfrac{{{L_{iy}}}}{2}-{L_{iy}}} \right)}^2} + {f^2}} }}\tag{A4}$

      (A4)

      $\sin {\varphi _{\max }} = {L_{iy}} \times \dfrac{{\sin \varTheta }}{{\sqrt {{{\left( {\dfrac{{{L_{iy}}}}{2}{\rm{ - }}{L_{iy}}} \right)}^2} + {f^2}} }}\tag{A5}$

      (A5)

      ${\varphi _{\max }} = {\rm{si{n}}^{ - 1}}\left( {\dfrac{{\sin \left( {{\rm{ta{n}}^{ - 1}}\left( {\dfrac{{2f}}{{{L_{iy}}}}} \right)} \right)}}{{\sqrt {{{\left( {\dfrac{1}{2}} \right)}^2} + {{\left( {\dfrac{f}{{{L_{iy}}}}} \right)}^2}} }}} \right)\tag{A6}$

      (A6)

      where Θ = tan–1$\left(\dfrac{2f}{{L}_{iy}}\right)$ so that φmax could be found as (A6).

      The detection range, dyrange, is the distance between dymax and dymin; it is obtained by substituting (A3) and (A6) into (A1) and (A2), as shown in (A7).

      $\begin{split} d{y_{range}} & = d{y_{\max }} - d{y_{\min }}=\\ & \left(\tan \left( {\phi + {\rm{ta{n}}^{ - 1}}\left( {\dfrac{{{l_{py}} \times N}}{{2f}}} \right)} \right)\right.- \\ & \left. \tan \left( {\phi - {\rm{ta{n}}^{ - 1}}\left( {\dfrac{{{l_{py}} \times N}}{{2f}}} \right)} \right)\right) \times H \end{split}\tag{A7}.$

      (A7)

      The resolution in the Yi direction of a pixel at row(n), Ry(n), is obtained using a similar approach, but with the bottom and top edges of each individual pixel instead of the pixels at the bottom and top of the array, which gives

      ${R_{y(n)}} = d{y_{(n + 1)}} - d{y_{(n)}}\tag{A8}$

      (A8)

      and dy(n) and dy(n+1) could be achieved by

      $d{y_{(n)}} \!=\! \tan \left( \!{{\rm{\omega }} \!+\! {{\sin }^{ - 1}}\left(\! {\dfrac{{{L_{iy(n)}} \times \sin \left(\! \varTheta \! \right)}}{{\sqrt {{{\left(\! {\dfrac{{{L_{iy}}}}{2} \!-\! {L_{iy(n)}}} \!\right)}^2} \!+\! {f^2}} }}} \!\right)}\! \right) \times {{H}}\tag{A9}$

      (A9)

      and

      $d{y_{(n + 1)}} \!=\! \tan\! \left(\! {{\rm{\omega }} \!+\! {{\sin }^{ - 1}}\!\left(\! {\dfrac{{{L_{iy(n + 1)}} \!\times\! \sin \left(\! \varTheta \!\right)}}{{\sqrt {{{\left(\! {\dfrac{{{L_{iy}}}}{2} \!-\! {L_{iy(n \!+\! 1)}}} \!\right)}^2} \!+\! {f^2}} }}} \!\right)} \!\right) \!\times \!{{H}}\tag{A10}$

      (A10)

      where Liy(n) and Liy(n+1) are the distances from the top edge of the pixel array to the current pixel's top edge and bottom edge, respectively. For an array of uniform pixel size, Liy(n) = n × lpy and Liy(n+1) = (n+1) × lpy. Consequently, the equation for Ry(n) in (2) is obtained by substituting (A9) and (A10) into (A8), where Θ = tan–1$\left(\dfrac{2f}{{L}_{iy}}\right)$.
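
      As a quick numerical sanity check on these derivations, the per-row resolutions Ry(n) = dy(n+1) − dy(n) must telescope to the detection range dyrange of (A7). The short script below verifies this for an arbitrary, hypothetical camera configuration; the parameter values are not those of the paper.

      import math

      def dy(n, H, phi, f, l_py, N):
          """dy(n) of (A9): ground distance back-projected from the edge above pixel row n."""
          L_iy = l_py * N
          omega = phi - math.atan(L_iy / (2 * f))
          theta = math.atan(2 * f / L_iy)
          L_n = n * l_py
          return math.tan(omega + math.asin(L_n * math.sin(theta) /
                                            math.hypot(L_iy / 2 - L_n, f))) * H

      # Hypothetical setup: H = 3 m, phi = 40 deg, f = 8 mm, 4.8 um pixels, 1080 rows.
      H, phi, f, l_py, N = 3.0, math.radians(40.0), 8e-3, 4.8e-6, 1080

      sum_ry = sum(dy(n + 1, H, phi, f, l_py, N) - dy(n, H, phi, f, l_py, N)
                   for n in range(N))
      dy_range = (math.tan(phi + math.atan(l_py * N / (2 * f))) -
                  math.tan(phi - math.atan(l_py * N / (2 * f)))) * H          # (A7)
      assert abs(sum_ry - dy_range) < 1e-6 * dy_range                          # agree to round-off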

      The geometry for obtaining the resolution in the Xi direction, Rx, denoted Rx(n) for the pixels at row(n), is shown in Fig. 4. The two triangles ΔabOc and ΔABOc in Fig. 4 are similar, and Rx is determined from them, which gives the basic equation for deriving Rx(n) in (A11). Note that the median position of a pixel $\left(n+\dfrac{1}{2}\right)$ is used for the corresponding spatial resolution.

      In (A11), $d{y'_{(n + {\textstyle{1 \over 2}})}}$ and ${f'}_{(n + {\textstyle{1 \over 2}})}$ are determined by (A12) and (A13), respectively. Substituting (A12) and (A13) into (A11) eventually yields Rx(n) as shown in (3).

      ${R_{x(n)}} = dy_{(n + {\textstyle{1 \over 2}})}'\dfrac{{{l_{px}}}}{{{f'}{{_{(n + {\textstyle{1 \over 2}})}}^{}}}}\tag{A11}$

      (A11)

      $d{y'_{(n + {\textstyle{1 \over 2}})}} = \sqrt {{H^2} + {{\left(d{y_{(n + {\textstyle{1 \over 2}})}}\right)}^2}}\tag{A12}$

      (A12)

      ${f'}_{(n + {\textstyle{1 \over 2}})} = \sqrt {{f^2} + {{\left(\dfrac{{{L_{iy}}}}{2} - (n + {\textstyle{1 \over 2}}) \times {l_{py}}\right)}^2}}\tag{A13}.$

      (A13)