Search results for: measurement errors
971 Causes of Deteriorations of Flexible Pavement, Its Condition Rating and Maintenance
Authors: Pooja Kherudkar, Namdeo Hedaoo
Abstract:
There are various causes of asphalt pavement distresses, which can develop prematurely or with aging in service. These causes are not limited to aging of the bitumen binder but include poor-quality materials and construction, inadequate mix design, inadequate pavement structure design for the traffic, and lack of preventive maintenance. There is physical evidence available for each type of pavement distress. Distress in asphalt pavements can be categorized into different distress modes such as fracture (cracking and spalling), distortion (permanent deformation and slippage), and disintegration (raveling and potholes). This study shows the importance of determining the severity of distresses for the selection of an appropriate preventive maintenance treatment. Distress analysis of the deteriorated roads was carried out. Four urban flexible pavement roads from Pune city were selected as a case study. The roads were surveyed to detect the types of distress and to measure the severity and extent of the distresses, and the causes of distresses were investigated. The pavement condition rating values of the roads were calculated; the rating ranges were as follows: 1 for a poor-condition road, 1.1 to 2 for a fair-condition road, and 2.1 to 3 for a good-condition road. Of the four roads, two were found to be in fair condition and the other two in good condition. From the various preventive maintenance treatments, such as crack seal, fog seal, slurry seal, microsurfacing, surface dressing and thin hot mix/cold mix bituminous overlays, the effective maintenance treatments with respect to the surface condition and severity levels of the existing pavement were recommended.
Keywords: distress analysis, pavement condition rating, preventive maintenance treatments, surface distress measurement
Procedia PDF Downloads 198
970 The Clinical Use of Ahmed Valve Implant as an Aqueous Shunt for Control of Uveitic Glaucoma in Dogs
Authors: Khaled M. Ali, M. A. Abdel-Hamid, Ayman A. Mostafa
Abstract:
Objective: The safety and efficacy of Ahmed glaucoma valve implantation for the management of uveitis-induced glaucoma were evaluated in five dogs with uncontrollable glaucoma. Materials and Methods: The Ahmed Glaucoma Valve (AGV®; New World Medical, Rancho Cucamonga, CA, USA) is a flow-restrictive, non-obstructive, self-regulating valve system. Preoperative ocular evaluation included direct ophthalmoscopy and measurement of the intraocular pressure (IOP). The implant was examined and primed prior to implantation. The selected site of the valve implantation was the superior quadrant between the superior and lateral rectus muscles. A fornix-based incision was made through the conjunctiva and Tenon’s capsule, and a pocket was formed by blunt dissection of Tenon’s capsule from the episclera. The body of the implant was inserted into the pocket with the leading edge of the device around 8-10 mm from the limbus. Results: No postoperative complications were detected in the operated eyes except a persistent corneal edema occupying the upper half of the cornea in one case. Hyphaema was very mild, seen only in two cases, and resolved quickly two days after surgery. Endoscopic evaluation of the operated eyes revealed a normal ocular fundus with clearly visible optic papilla, tapetum and retinal blood vessels. No evidence of hemorrhage, infection, adhesions or retinal abnormalities was detected. Conclusion: The Ahmed glaucoma valve is a safe and effective implant for the treatment of uveitic glaucoma in dogs.
Keywords: Ahmed valve, endoscopy, glaucoma, ocular fundus
Procedia PDF Downloads 587
969 Impact of Paint Occupational Exposure on Reproductive Markers: A Case Study in North East Algeria
Authors: Amina Merghad, Cherif Abdennour
Abstract:
Solvents are widely used in the paint industry, where workers are highly exposed, especially through inhalation. This case study describes how paint exposure affects reproductive markers and the health of workers. Sixty-four subjects were chosen and divided into two groups: a control group and an exposed group. A questionnaire was given to male workers of similar socio-economic status to record their age, working conditions, clinical symptoms, working period, smoking history, shift, medical history and nutrition. Blood was withdrawn from volunteers in the morning, and blood testosterone and prolactin concentrations were then measured. Results showed that the ages of the two groups were similar, reaching up to 47 and 43 years, respectively. The period of employment was 17 years and 14 years for the control and the exposed workers, respectively. Concerning clinical symptoms, the frequencies of neuropsychological symptoms in the two groups are presented: memory loss and headaches were the most frequent among exposed workers, followed by poor coordination, poor concentration and insomnia, while the symptom frequency in the control group was lower than in the exposed group. Testosterone concentration decreased significantly in group 2 (4.61±2.005 ng/ml) and group 3 (4.25±1.67 ng/ml) of exposed workers. On the other hand, prolactin concentration was higher in group 3 compared to the other groups. To conclude, work in the paint industry has disturbed reproductive markers and produced a high frequency of neuropsychological symptoms.
Keywords: blood, paint, prolactin, occupational exposure, organic solvent, reproductive toxicity, testosterone
Procedia PDF Downloads 366
968 Carotid Intima-Media Thickness and Ankle-Brachial Index as Predictors of the Severity of Coronary Artery Disease
Authors: Ali Kassem, Yaser Kamal, Mohamed Abdel Wahab, Mohamed Hussen
Abstract:
Introduction: Atherosclerosis is one of the leading causes of death all over the world. Recently, there has been increasing interest in carotid intima-media thickness (CIMT) and the ankle-brachial index (ABI) as non-invasive tools for identifying subclinical atherosclerosis. We aim to examine the role of CIMT and ABI as predictors of the severity of angiographically documented coronary artery disease (CAD). Methods: A cross-sectional study was conducted on 60 patients investigated by coronary angiography at Sohag University Hospital, Egypt. CIMT: after the carotid arteries were located by transverse scans, the probe was rotated 90° to obtain and record longitudinal images of the bilateral carotid arteries. ABI: each patient was evaluated in the supine position after resting for 5 min, and ABI was measured in each leg using Doppler ultrasound while the patient remained in the same position. The lowest ABI obtained for either leg was taken as the ABI measurement for the patient. Results: Patients with carotid mean IMT ≥ 0.9 mm had significantly more severe coronary artery disease than patients without thickening (mean IMT < 0.9 mm). Similarly, patients with low ABI (< 0.9) had significantly more severe coronary artery disease than patients with ABI ≥ 0.9. When the patients were divided into 4 groups (group A, n = 15, mean IMT < 0.9 mm, ABI ≥ 0.9; group B, n = 25, mean IMT < 0.9 mm, low ABI; group C, n = 5, mean IMT ≥ 0.9 mm, ABI ≥ 0.9; group D, n = 19, mean IMT ≥ 0.9 mm, low ABI), the prevalence of significant coronary stenosis (> 50%) differed significantly among the groups (group A, n = 5 (33.3%); group B, n = 11 (52.4%); group C, n = 4 (60%); group D, n = 15 (78.9%); P = 0.001). Conclusion: CIMT and ABI provide useful information on the severity of CAD. Early and aggressive intervention should be considered in patients with CAD and abnormalities in one or both of these non-invasive modalities.
Keywords: ankle brachial index, carotid intima media thickness, coronary artery disease, predictors of severity
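For reference, the ABI protocol described above can be written compactly. The per-leg ratio of ankle to brachial systolic pressure is the standard definition and is stated here as an assumption, since the abstract gives only the measurement procedure:

\[ \mathrm{ABI}_{\mathrm{leg}} = \frac{P^{\mathrm{systolic}}_{\mathrm{ankle}}}{P^{\mathrm{systolic}}_{\mathrm{brachial}}}, \qquad \mathrm{ABI}_{\mathrm{patient}} = \min\left(\mathrm{ABI}_{\mathrm{left}}, \mathrm{ABI}_{\mathrm{right}}\right), \]

with ABI_patient < 0.9 classified as low, matching the cut-off used in the study.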
Procedia PDF Downloads 232
967 Three Dimensional Large Eddy Simulation of Blood Flow and Deformation in an Elastic Constricted Artery
Authors: Xi Gu, Guan Heng Yeoh, Victoria Timchenko
Abstract:
In the current work, a three-dimensional geometry of a 75% stenosed blood vessel is analysed. Large eddy simulation (LES) with a dynamic subgrid-scale Smagorinsky model is applied to model the turbulent pulsatile flow. The geometry, the transmural pressure, and the properties of the blood and the elastic boundary were based on clinical measurement data. For the flexible wall model, a thin solid region is constructed around the 75% stenosed blood vessel, and the deformation of this solid region was modelled as a deforming boundary to reduce the computational cost of the solid model. Fluid-structure interaction is realised via a two-way coupling between the blood flow modelled via LES and the deforming vessel. The flow pressure and wall motion information was exchanged continually during the cycle by an arbitrary Lagrangian-Eulerian method, with the boundary condition of the current time step depending on previous solutions. The fluctuation of the velocity in the post-stenotic region was analysed in the study; the axial velocity at the normalised position Z=0.5 shows a negative value near the vessel wall. The displacement of the elastic boundary was also examined; in particular, the wall displacements at systole and diastole were compared. The negative displacement at the stenosis indicates a collapse at the maximum velocity and during the deceleration phase.
Keywords: Large Eddy Simulation, Fluid Structural Interaction, constricted artery, Computational Fluid Dynamics
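As background to the subgrid-scale closure named above, the Smagorinsky model computes an eddy viscosity from the resolved strain rate; in the dynamic variant used here, the coefficient C_s is computed during the simulation rather than fixed. The standard form (the dynamic procedure itself is omitted) is:

\[ \nu_t = (C_s \Delta)^2 \lvert \bar{S} \rvert, \qquad \lvert \bar{S} \rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad \bar{S}_{ij} = \frac{1}{2}\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right), \]

where \(\Delta\) is the filter width and \(\bar{u}\) the resolved velocity.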
Procedia PDF Downloads 293
966 Co-Administration Effects of Conjugated Linoleic Acid and L-Carnitine on Weight Gain and Biochemical Profile in Diet Induced Obese Rats
Authors: Maryam Nazari, Majid Karandish, Alihossein Saberi
Abstract:
Obesity, as a global health challenge, motivates pharmaceutical industries to produce anti-obesity drugs; however, the effectiveness of these agents remains unclear. Because of the popularity of dietary supplements, the aim of this study was to investigate the effects of conjugated linoleic acid (CLA) and L-carnitine (LC) on serum glucose, triglyceride, cholesterol and weight changes in diet-induced obese rats. Forty-eight male Wistar rats were randomly divided into two groups: normal-fat diet (n=8) and high-fat diet (HFD) (n=32). After eight weeks, the second group, which was maintained on the HFD until the end of the study, was subdivided into four categories: a) 500 mg corn oil (as control group), b) 500 mg CLA, c) 200 mg LC, d) 500 mg CLA + 200 mg LC. All doses were given per kg body weight and administered by oral gavage for four weeks. Body weights were measured and recorded weekly by means of a digital scale. At the end of the study, blood samples were collected for measurement of biochemical markers. SPSS version 16 was used for statistical analysis. At the end of the 8th week, a significant difference in weight was observed between the HFD and NFD groups. After 12 weeks, LC significantly reduced weight gain by 4.2%, while the trend of weight gain in the CLA and CLA+LC groups was insignificantly decelerated. CLA+LC reduced the triglyceride level significantly, but only CLA had a significant influence on total cholesterol and an insignificant decreasing effect on FBS. Our results showed that an obesogenic diet over a relatively short time led to obesity and dyslipidemia, which can be modified by LC and CLA to some extent.
Keywords: conjugated linoleic acid, high fat diet, L-Carnitine, obesity
Procedia PDF Downloads 157
965 A Robust Spatial Feature Extraction Method for Facial Expression Recognition
Authors: H. G. C. P. Dinesh, G. Tharshini, M. P. B. Ekanayake, G. M. R. I. Godaliyadda
Abstract:
This paper presents a new spatial feature extraction method based on principal component analysis (PCA) and Fisher discriminant analysis (FDA) for facial expression recognition. It not only extracts reliable features for classification but also reduces the feature-space dimensionality of the pattern samples. In this method, each grayscale image is first considered in its entirety as the measurement matrix. Then, the principal components (PCs) of the row vectors of this matrix, and the variance of these row vectors along the PCs, are estimated; this ensures the preservation of the spatial information of the facial image. Afterwards, by incorporating the spectral information of the eigen-filters derived from the PCs, a feature vector is constructed for a given image. Finally, FDA is used to define a set of basis vectors in a reduced-dimension subspace such that optimal clustering is achieved. FDA defines an inter-class scatter matrix and an intra-class scatter matrix to enhance the compactness of each cluster while maximizing the distance between cluster marginal points. To match the test image with the training set, a cosine-similarity-based Bayesian classification was used. The proposed method was tested on the Cohn-Kanade database and the JAFFE database. It was observed that the proposed method, which incorporates spatial information to construct an optimal feature space, outperforms the standard PCA- and FDA-based methods.
Keywords: facial expression recognition, principal component analysis (PCA), Fisher discriminant analysis (FDA), eigen-filter, cosine similarity, Bayesian classifier, f-measure
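To illustrate the general PCA-plus-FDA pipeline underlying the method, a minimal sketch in Python follows. It uses flattened image vectors and generic scatter matrices; the authors' spatial row-vector variant, eigen-filters, and Bayesian step are not reproduced, so treat this as a simplified baseline rather than the proposed method:

    import numpy as np

    def pca(X, k):
        # X: (n_samples, n_features); return mean and top-k PC basis
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        return mu, Vt[:k].T

    def fda(Z, y):
        # Fisher basis: maximise between-class scatter Sb against
        # within-class scatter Sw in the PCA-reduced space
        classes, d = np.unique(y), Z.shape[1]
        mu = Z.mean(axis=0)
        Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
        for c in classes:
            Zc = Z[y == c]
            mc = Zc.mean(axis=0)
            Sw += (Zc - mc).T @ (Zc - mc)
            diff = (mc - mu)[:, None]
            Sb += len(Zc) * (diff @ diff.T)
        evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
        order = np.argsort(-evals.real)[: len(classes) - 1]
        return evecs.real[:, order]

    def classify_cosine(f, class_means):
        # nearest class mean under cosine similarity
        sims = {c: f @ m / (np.linalg.norm(f) * np.linalg.norm(m))
                for c, m in class_means.items()}
        return max(sims, key=sims.get)

A test image would be projected through the same PCA and FDA bases before being matched against the training-class means.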
Procedia PDF Downloads 425
964 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique
Authors: Satyasen Panda, Urmila Bhanja
Abstract:
In this paper, we present and analyze three-dimensional (3-D) wavelength/time/space code matrices for optical code division multiple access (OCDMA) networks with a NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3-D OCDMA system is based on measurement of the signal-to-noise ratio (SNR), BER and eye diagram for different numbers of simultaneous users; various types of noise and multiple access interference (MAI) effects were also considered in the analysis. The results obtained with the NAND detection technique were compared with those obtained with the OR and AND subtraction techniques. The comparison showed that the NAND detection technique with the 3-D MQC/MP code can accommodate a greater number of simultaneous users over longer fiber distances with minimum BER compared to the OR and AND subtraction techniques. The received optical power was also measured at various levels of BER to analyze the effect of attenuation.
Keywords: Cross Correlation (CC), Three dimensional Optical Code Division Multiple Access (3-D OCDMA), Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA), Multiple Access Interference (MAI), Phase Induced Intensity Noise (PIIN), Three Dimensional Modified Quadratic Congruence/Modified Prime (3-D MQC/MP) code
Procedia PDF Downloads 412
963 The Effects of Menstrual Phase on Upper and Lower Body Anaerobic Performance in College-Aged Women
Authors: Kelsey Scanlon
Abstract:
Introduction: With the number of female collegiate and professional athletes on the rise in recent decades, fluctuation in physical performance in relation to the menstrual cycle is an important area of study. Purpose: The purpose of this research was to compare differences in upper- and lower-body maximal anaerobic capacities across a single menstrual cycle. Methods: Participants (n=11) met a total of four times: once for familiarization, and again on day 1 of menses (follicular phase), day 14 (ovulation), and day 21 (luteal phase), respectively. Upper-body power was assessed using a bench press weight of ~50% of the participant’s predetermined 1-repetition maximum (1-RM) on a ballistic measurement system, and variables included peak force (N), mean force (N), peak power (W), mean power (W), and peak velocity (m/s). Lower-body power output was collected using a standard Wingate test; the variables of interest were anaerobic capacity (W/kg), peak power (W), mean power (W), fatigue index (W/s), and total work (J). Results: Statistical significance was not observed (p > 0.05) in any of the aforementioned variables after completing multiple one-way analyses of variance (ANOVAs) with repeated measures on time. Conclusion: Within the parameters of this research, neither female upper- nor lower-body power output differed across the menstrual cycle when analyzed using a 50% of one-repetition-maximum (1RM) bench press and the 30-second maximal-effort cycle ergometer Wingate test. Therefore, researchers should not alter their subject populations due to the incorrect assumption that power output may be influenced by the menstrual cycle.
Keywords: anaerobic, athlete, female, power
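As an illustration of the statistical test described (a one-way repeated-measures ANOVA with menstrual phase as the within-subject factor), a minimal sketch using the statsmodels package; the data values and column names are hypothetical:

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical long-format data: one row per participant per phase
    df = pd.DataFrame({
        "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3],
        "phase": ["follicular", "ovulation", "luteal"] * 3,
        "peak_power": [812, 799, 805, 745, 760, 751, 880, 872, 869],  # W
    })

    # Does peak power differ across cycle phases?
    result = AnovaRM(df, depvar="peak_power", subject="subject",
                     within=["phase"]).fit()
    print(result)  # p > 0.05 would indicate no phase effect, as reported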
Procedia PDF Downloads 146
962 Triose Phosphate Utilisation at the (Sub)Foliar Scale Is Modulated by Whole-plant Source-sink Ratios and Nitrogen Budgets in Rice
Authors: Zhenxiang Zhou
Abstract:
The triose phosphate utilisation (TPU) limitation to leaf photosynthesis is a biochemical process concerning the sub-foliar carbon sink-source (im)balance, in which photorespiration-associated amino acid exports provide an additional outlet for carbon and increase the leaf photosynthetic rate. However, whether this process is regulated by whole-plant sink-source relations and nitrogen budgets remains unclear. We address this question by model analyses of gas-exchange data measured on leaves at three growth stages of rice plants grown at two nitrogen levels, where three approaches (leaf-colour modification, adaxial vs abaxial measurements, and panicle pruning) were explored to alter source-sink ratios. Higher specific leaf nitrogen (SLN) resulted in higher rates of TPU and also led to the TPU limitation occurring at a lower intercellular CO2 concentration. Photorespiratory nitrogen assimilation was greater in higher-nitrogen leaves but became smaller in cases associated with yellower-leaf modification, abaxial measurement, or panicle pruning. The feedback inhibition of panicle pruning on rates of TPU was not always observed, because panicle pruning blocked nitrogen remobilisation from leaves to grains and the increased SLN masked the feedback inhibition. The (sub)foliar TPU limitation can thus be modulated by whole-plant source-sink ratios and nitrogen budgets during rice grain filling, suggesting a close link between sub-foliar and whole-plant sink limitations.
Keywords: triose phosphate utilization, sink limitation, panicle pruning, oryza sativa
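For background, one widely used formulation of TPU-limited net assimilation in the Farquhar-type framework makes the role of photorespiratory amino acid export explicit; it is given here as context and is not necessarily the exact model fitted by the authors:

\[ A_p = \frac{3 T_p \left(C_c - \Gamma^{*}\right)}{C_c - \left(1 + 3\alpha\right)\Gamma^{*}} - R_d, \]

where \(T_p\) is the TPU rate, \(C_c\) the chloroplast CO2 concentration, \(\Gamma^{*}\) the CO2 compensation point, \(R_d\) day respiration, and \(\alpha\) the fraction of photorespiratory glycolate carbon exported as amino acids; a larger \(\alpha\) relieves the triose phosphate bottleneck, which is the "additional outlet for carbon" referred to above.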
Procedia PDF Downloads 90
961 Geochemical and Geostructural Characteristics of the Groundwater System and the Role of Faults in Groundwater Movement at the Hammamet Basin, Tebessa Area (Northeast of Algeria)
Authors: Iklass Hamaili, Fehdi Chemseddine
Abstract:
Morphostructural, hydrogeological and hydrochemical approaches were applied in this study to characterize the groundwater system of the Hammamet Plain, in the eastern part of Algeria, and its potential for exploitation. The analysis of the fractures in the mountains forming the natural boundaries of the Hammamet plain, with faults of markedly different sizes and joints measured at 21 stations, demonstrates the presence of two principal fracture directions (NNW-SSE and NNE-SSW). From a hydrogeological standpoint, these mountains constitute a unit limited by faults oriented ENE-WSW, NNW-SSE and NNE-SSW; fractures of the latter two directions influence the compartmentalization and the hydrogeological functioning of this unit. According to the degree of fracturing and/or karstification, two basic types of aquifer behavior have been distinguished: fissured aquifers (Essen Mountain and Troubia Mountain) and porous aquifers (Hammamet basin). After sampling and measurement operations, the concentrations of the chemical components were determined. The hydrochemical characteristics of the groundwater show on Piper’s diagram that the majority of samples are mainly of the HCO₃⁻-Ca²⁺ water type. The ionic speciation and mineral dissolution/precipitation were calculated with the PHREEQC software package. The chemical composition of the water is influenced by dissolution and/or precipitation processes during water-rock interaction and by cationic exchange reactions between groundwater and alluvial sediments. The high content of CO₂ in the water samples suggests that they circulate in a geochemically open system.
Keywords: aquifer, hydrogeology, hydrochemistry, Hammamet, Tebessa, Algeria
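For context, the dissolution/precipitation state that PHREEQC reports for each mineral is the saturation index,

\[ \mathrm{SI} = \log_{10}\frac{\mathrm{IAP}}{K_{sp}}, \]

where IAP is the ion activity product of the dissolved species and \(K_{sp}\) the solubility product: SI < 0 indicates undersaturation (dissolution possible), SI = 0 equilibrium, and SI > 0 supersaturation (precipitation possible).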
Procedia PDF Downloads 18
960 Design and Testing of Electrical Capacitance Tomography Sensors for Oil Pipeline Monitoring
Authors: Sidi M. A. Ghaly, Mohammad O. Khan, Mohammed Shalaby, Khaled A. Al-Snaie
Abstract:
Electrical capacitance tomography (ECT) is a valuable, non-invasive technique used to monitor multiphase flow processes, especially within industrial pipelines. This study focuses on the design, testing, and performance comparison of ECT sensors configured with 8, 12, and 16 electrodes, aiming to evaluate their effectiveness in imaging accuracy, resolution, and sensitivity. Each sensor configuration was designed to capture the spatial permittivity distribution within a pipeline cross-section, enabling visualization of phase distribution and flow characteristics such as oil and water interactions. The sensor designs were implemented and tested in closed pipes to assess their response to varying flow regimes. Capacitance data collected from each electrode configuration were reconstructed into cross-sectional images, enabling a comparison of image resolution, noise levels, and computational demands. Results indicate that the 16-electrode configuration yields higher image resolution and sensitivity to phase boundaries than the 8- and 12-electrode setups, making it more suitable for complex flow visualization. However, the 8- and 12-electrode sensors demonstrated advantages in processing speed and lower computational requirements. This comparative analysis provides critical insights into optimizing ECT sensor design based on specific industrial requirements, from high-resolution imaging to real-time monitoring needs.
Keywords: capacitance tomography, modeling, simulation, electrode, permittivity, fluid dynamics, imaging sensitivity measurement
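One driver of the resolution-versus-speed trade-off reported above is purely combinatorial: an N-electrode ECT sensor yields one independent capacitance reading per unordered electrode pair, so the data volume per frame grows quadratically with electrode count. A short illustration:

    # Independent inter-electrode capacitance measurements per frame:
    # M = N * (N - 1) / 2. More pairs give finer image detail but more
    # data to acquire and invert in real time.
    for n in (8, 12, 16):
        m = n * (n - 1) // 2
        print(f"{n:2d} electrodes -> {m:3d} measurements")
    # 8 -> 28, 12 -> 66, 16 -> 120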
Procedia PDF Downloads 10
959 Undergraduates Learning Preferences: A Comparison of Science, Technology and Social Science Academic Disciplines in Relations to Teaching Designs and Strategies
Authors: Salina Budin, Shaira Ismail
Abstract:
Students learn effectively in a learning environment with a teaching approach that matches their learning preferences. The main objective of the study is to examine the learning preferences of students in the Science and Technology (S&T) and Social Science (SS) fields of study at Universiti Teknologi Mara (UiTM), Pulau Pinang. The measurement instrument is based on the Dunn and Dunn learning styles model, which measures five elements of learning style: environmental, sociological, emotional, physiological and psychological. Questionnaires were distributed among undergraduates in the Faculty of Mechanical Engineering and the Faculty of Business Management. The respondents comprise 131 diploma students of the Faculty of Mechanical Engineering and 111 degree students of the Faculty of Business Management. The results indicate that both S&T and SS students share similar learning preferences in the environmental aspect, emotional preferences, motivational level, learning responsibility, persistence in learning and learning structure. Most of the S&T students were found to be analytical learners, and the majority of SS students global learners. Both S&T and SS students were found to be visual learners who prefer active mobility in a relaxed and enjoyable mode, with some light refreshments during the learning process, and who exhibit reflective characteristics in learning. The S&T students are considered left-brain dominant, whereas the SS students are right-brain dominant. The findings highlight that both categories of students exhibit similar learning preferences except in psychological preferences.
Keywords: learning preferences, Dunn and Dunn learning style, teaching approach, science and technology, social science
Procedia PDF Downloads 245
958 Influence of Specimen Geometry (10*10*40), (12*12*60) and (5*20*120), on Determination of Toughness of Concrete Measurement of Critical Stress Intensity Factor: A Comparative Study
Authors: M. Benzerara, B. Redjel, B. Kebaili
Abstract:
The cracking of concrete has become a more crucial problem with the development of the complex structures that accompany technological progress. Advances in knowledge of the fracture process now make better prevention of fracture risk possible. The brittle fracture strength of a quasi-brittle material such as concrete, called toughness, is measured by the critical value of the stress intensity factor K1C at which a crack propagates; it is an intrinsic property of the material. Many studies reported in the literature on concrete were carried out on specimens that are in fact inadequate with respect to the intrinsic characteristic to be identified. Starting from this observation, and in order to compare the evolution of the toughness parameter K1C, an ordinary concrete was tested using specimens of three different prismatic geometries, (10*10*40) cm³, (12*12*60) cm³ and (5*20*120) cm³, containing side notches of various depths simulating cracks. The notches were produced using triangular pyramidal plates manufactured from coated sheet metal, placed at the centre of the specimens at the time of casting and then withdrawn to leave the trace of a crack. The tests were carried out in three-point bending in mode I fracture, using fracture mechanics techniques. The toughness parameter K1C measured with the three specimen geometries gives almost the same results; they are acceptable and fall within the range of results reported by various researchers (the toughness of ordinary concrete is around 1 MPa√m). These results also point to an economy at the level of the (5*20*120) cm³ specimen geometry, suggesting the later use of plate specimens if one wants to master the toughness of this complex, surprising, yet always essential material that is concrete.
Keywords: concrete, fissure, specimen, toughness
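For reference, a standard expression for the mode I stress intensity factor of a single-edge-notched beam in three-point bending is

\[ K_{I} = \frac{3 P S \sqrt{\pi a}}{2 B W^{2}}\, f\!\left(\frac{a}{W}\right), \]

where P is the applied load, S the span, B the specimen thickness, W the depth, a the notch depth, and f(a/W) a dimensionless geometry function; \(K_{1C}\) is the value of \(K_I\) at crack propagation. The abstract does not state which geometry function the authors used, so this form is given only as a common convention.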
Procedia PDF Downloads 298
957 Study of Age-Dependent Changes of Peripheral Blood Leukocytes Apoptotic Properties
Authors: Anahit Hakobjanyan, Zdenka Navratilova, Gabriela Strakova, Martin Petrek
Abstract:
Aging has a suppressive influence on human immune cells, and apoptosis may play an important role in age-dependent immunosuppression and lymphopenia. Prevention of apoptosis may occur in a BCL2-dependent or BCL2-independent manner. BCL2 is an antiapoptotic factor that plays an antioxidative role by localizing glutathione at the mitochondria and repressing oxidative stress. STAT3 may suppress apoptosis in a BCL2-independent manner and promote cell survival by blocking cytochrome-c release and reducing ROS production. The aim of our study was to estimate the influence of aging on BCL2-dependent and BCL2-independent prevention of apoptosis via measurement of BCL2 and STAT3 mRNA expression. The study was done on an Armenian population (2 groups: 37 healthy young (mean age±SE; min/max age; male/female: 37.6±1.1; 20/54; 15/22) and 28 healthy aged (66.7±1.5; 57/85; 12/16) subjects). mRNA expression in peripheral blood leukocytes (PBL) was determined by RT-PCR using PSMB2 as the reference gene. Statistical analysis was done with GraphPad Prism 5; P < 0.05 was considered significant. The expression of BCL2 mRNA was lower in the aged group (0.199) compared with the young group (0.643) (p < 0.01); decreased expression was also recorded for the female and male subgroups (p < 0.01). The expression level of STAT3 mRNA increased during aging (young, 0.228; aged, 0.428) (p < 0.05), in the whole aged group and in the male/female subgroups. The decreased level of BCL2 mRNA may indicate suppression of BCL2-dependent prevention of apoptosis during aging in peripheral blood leukocytes, while the increased level of STAT3 may suggest activation of BCL2-independent prevention of apoptosis during aging.
Keywords: BCL2, STAT3, aging, apoptosis
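The abstract does not state the quantification formula behind the reported expression values; a common choice with a single reference gene such as PSMB2 is the 2^-dCt method, sketched below with hypothetical cycle-threshold values:

    def relative_expression(ct_target, ct_reference):
        # 2^-dCt relative quantification against the reference gene:
        # yields unitless expression values of the kind quoted above
        delta_ct = ct_target - ct_reference
        return 2 ** (-delta_ct)

    # Hypothetical Ct values for BCL2 in one aged-group sample
    print(relative_expression(ct_target=26.3, ct_reference=24.0))  # ~0.20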
Procedia PDF Downloads 326
956 Observation of the Orthodontic Tooth's Long-Term Movement Using Stereovision System
Authors: Hao-Yuan Tseng, Chuan-Yang Chang, Ying-Hui Chen, Sheng-Che Chen, Chih-Han Chang
Abstract:
Orthodontic tooth treatment has demonstrated a high success rate in clinical studies. It is agreed that orthodontic tooth movement is based on the ability of the surrounding bone and periodontal ligament (PDL) to react to a mechanical stimulus with remodeling processes. However, the mechanism of tooth movement is still unclear: recent studies focus on the simple compression-tension principle, while few studies directly measure tooth movement. Tracking tooth movement during orthodontic treatment is therefore very important in clinical practice. The aim of this study is to investigate the mechanical responses of tooth movement during orthodontic treatment. A stereovision system was applied to track the tooth movement of a patient with stamp brackets. The system was established with two cameras whose relative positions were calibrated, and the orthodontic force was measured on a 3D-printed model with a six-axis load cell to determine the initial force application. The results show that the measurement accuracy of the stereovision system presents a maximum error of less than 2%. In the patient-tracking study, the incisor moved about 0.9 mm during 60 days of tracking, and half of the movement occurred in the first few hours. After the orthodontic force was removed at 100 hours, the distance between the before and after positions of the incisor decreased by 0.5 mm, consistent with a relapse phenomenon. Using the stereovision system, the three-dimensional position of the teeth can be accurately located, and superposition of all data in a common 3D coordinate system allows the complex tooth movement to be integrated.
Keywords: orthodontic treatment, tooth movement, stereovision system, long-term tracking
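For background on how a calibrated two-camera system recovers a 3D tooth position, a minimal linear (DLT) triangulation sketch follows; the projection matrices come from the calibration step, and the pixel coordinates are hypothetical:

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        # P1, P2: 3x4 projection matrices of the calibrated cameras;
        # x1, x2: (u, v) pixel coordinates of the same marker in each view
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]  # dehomogenize to (x, y, z)

Triangulating the same bracket marker at each session and differencing the positions in a common coordinate frame yields the displacement history described above.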
Procedia PDF Downloads 421
955 Development of Nondestructive Imaging Analysis Method Using Muonic X-Ray with a Double-Sided Silicon Strip Detector
Authors: I-Huan Chiu, Kazuhiko Ninomiya, Shin’ichiro Takeda, Meito Kajino, Miho Katsuragawa, Shunsaku Nagasawa, Atsushi Shinohara, Tadayuki Takahashi, Ryota Tomaru, Shin Watanabe, Goro Yabu
Abstract:
In recent years, a nondestructive elemental analysis method based on muonic X-ray measurements has been developed and applied to various samples. Muonic X-rays are emitted after the formation of a muonic atom, which occurs when a negatively charged muon is captured into a muonic atomic orbit around a nucleus. Because muonic X-rays have higher energy than electronic X-rays due to the muon mass, they can be measured without being absorbed by the material. Thus, estimating the two-dimensional (2D) elemental distribution of a sample becomes possible using an X-ray imaging detector. In this work, we report a non-destructive imaging experiment using muonic X-rays at the Japan Proton Accelerator Research Complex. The irradiated target consisted of polypropylene material, and a double-sided silicon strip detector, developed as an imaging detector for astronomical observation, was employed. A peak corresponding to muonic X-rays from the carbon atoms in the target was clearly observed in the energy spectrum at an energy of 14 keV, and 2D visualizations were successfully reconstructed to reveal the projection image of the target. This result demonstrates the potential of the non-destructive elemental imaging method based on muonic X-ray measurement. To obtain a higher position resolution for imaging smaller targets, a new detector system will be developed to improve the statistical analysis in further research.
Keywords: DSSD, muon, muonic X-ray, imaging, non-destructive analysis
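The 14 keV assignment can be checked with a hydrogen-like estimate, assuming the observed line is the n = 3 to n = 2 transition of muonic carbon (an assumption, since the abstract does not name the transition):

\[ E_{n_2 \to n_1} \approx 13.6\,\mathrm{eV} \times \frac{\mu}{m_e} \times Z^{2}\left(\frac{1}{n_1^{2}} - \frac{1}{n_2^{2}}\right), \]

with reduced mass \(\mu \approx 205\, m_e\) for the muon-carbon system and Z = 6, the 3 to 2 transition gives 13.6 eV × 205 × 36 × (1/4 − 1/9) ≈ 13.9 keV, consistent with the reported peak.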
Procedia PDF Downloads 205
954 Imputation of Incomplete Large-Scale Monitoring Count Data via Penalized Estimation
Authors: Mohamed Dakki, Genevieve Robin, Marie Suet, Abdeljebbar Qninba, Mohamed A. El Agbani, Asmâa Ouassou, Rhimou El Hamoumi, Hichem Azafzaf, Sami Rebah, Claudia Feltrup-Azafzaf, Nafouel Hamouda, Wed a.L. Ibrahim, Hosni H. Asran, Amr A. Elhady, Haitham Ibrahim, Khaled Etayeb, Essam Bouras, Almokhtar Saied, Ashrof Glidan, Bakar M. Habib, Mohamed S. Sayoud, Nadjiba Bendjedda, Laura Dami, Clemence Deschamps, Elie Gaget, Jean-Yves Mondain-Monval, Pierre Defos Du Rau
Abstract:
In biodiversity monitoring, large datasets are becoming more and more widely available and are increasingly used globally to estimate species trends and conservation status. These large-scale datasets challenge existing statistical analysis methods, many of which are not adapted to their size, incompleteness and heterogeneity. The development of scalable methods to impute missing data in incomplete large-scale monitoring datasets is crucial to balance sampling in time or space and thus better inform conservation policies. We developed a new method based on penalized Poisson models to impute and analyse incomplete monitoring data in a large-scale framework. The method allows parameterization of (a) space and time factors, (b) the main effects of predictor covariates, as well as (c) space-time interactions. It also benefits from robust statistical and computational capability in large-scale settings. The method was tested extensively on both simulated and real-life waterbird data, with the findings revealing that it outperforms six existing methods in terms of missing-data imputation errors. Applying the method to 16 waterbird species, we estimated their long-term trends for the first time at the entire North African scale, a region where monitoring data suffer from many gaps in space and time series. This new approach opens promising perspectives to increase the accuracy of species-abundance trend estimations. We have made it freely available in the R package ‘lori’ (https://CRAN.R-project.org/package=lori) and recommend its use for large-scale count data, particularly in citizen science monitoring programmes.
Keywords: biodiversity monitoring, high-dimensional statistics, incomplete count data, missing data imputation, waterbird trends in North-Africa
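A schematic of the model class described above (the exact penalty and estimation details are in the paper, not the abstract, so the nuclear-norm choice below is illustrative):

\[ y_{ij} \sim \mathrm{Poisson}\!\left(\exp\!\left(\alpha_i + \beta_j + x_{ij}^{\top}\gamma + \theta_{ij}\right)\right), \qquad \widehat{\Theta} = \operatorname*{arg\,min}_{\Theta}\; -\ell\!\left(\Theta \mid y_{\mathrm{obs}}\right) + \lambda \lVert \Theta \rVert_{*}, \]

where \(\alpha_i\) and \(\beta_j\) are site (space) and year (time) effects, \(\gamma\) the covariate main effects, and \(\Theta = (\theta_{ij})\) the space-time interaction matrix; penalizing \(\Theta\) (here by a nuclear norm) makes the model estimable from incomplete count tables, so missing cells can be imputed from the fitted intensities.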
Procedia PDF Downloads 156
953 The Effectiveness of Summative Assessment in Practice Learning
Authors: Abdool Qaiyum Mohabuth, Syed Munir Ahmad
Abstract:
Assessment enables students to focus on their learning: it engages them to work hard and motivates them to devote time to their studies. Student learning is directly influenced by the type of assessment involved in the programme. Summative assessment aims at providing a measurement of student understanding; in fact, it is argued that summative assessment is used for reporting and reviewing, besides providing an overall judgement of achievement. While summative assessment is a well-defined process for learning that takes place in the classroom environment, its application within the practice environment is still being researched. This paper discusses findings from a mixed-method study exploring the effectiveness of summative assessment in practice learning. A survey questionnaire was designed to explore the perceptions of mentors and students about summative assessment in practice learning. The questionnaire was administered to University of Mauritius students and to the mentors who supervised them during their Work-Based Learning (WBL) practice at the respective placement settings. Some students who had undertaken their WBL practice were interviewed to capture their views and experiences of the application of summative assessment in practice learning. Semi-structured interviews were also conducted with three experienced mentors who have assessed students on practice learning. The findings reveal that, though learning in the workplace is entirely different from learning at the university, most students had positive experiences of their summative assessments in practice learning. They felt comfortable and confident being assessed by their mentors in their placement settings and wished the effort and time they devoted to their learning to be recognised and valued. Mentors, on their side, confirmed that the summative assessment is valid and reliable, enabling them to better monitor and coach students to achieve the expected learning outcomes.
Keywords: practice learning, judgement, summative assessment, knowledge, skills, workplace
Procedia PDF Downloads 341
952 Preparation of Fe3Si/Ferrite Micro-and Nano-Powder Composite
Authors: Radovan Bures, Madgalena Streckova, Maria Faberova, Pavel Kurek
Abstract:
A composite material based on Fe3Si micro-particles and Mn-Zn nano-ferrite was prepared using powder metallurgy technology. A sol-gel process followed by autocombustion was used for the synthesis of the Mn0.8Zn0.2Fe2O4 ferrite. 3 wt.% of mechanically milled ferrite was mixed with the Fe3Si powder alloy. The mixed micro-nano powder system was homogenized by resonant acoustic mixing using a Resodyn LabRAM mixer; this non-invasive homogenization technique was used to preserve the spherical morphology of the Fe3Si powder particles. Uniaxial cold pressing in a closed die at a pressure of 600 MPa was applied to obtain a compact sample. Microwave sintering of the green compact was carried out at 800°C for 20 minutes in air. The density of the powders and composite was measured by He pycnometry. The impulse excitation method was used to measure the elastic properties of the sintered composite. Mechanical properties were evaluated by measurement of the transverse rupture strength (TRS) and Vickers hardness (HV), and resistivity was measured by the 4-point probe method. The ferrite phase distribution in the volume of the composite was documented by metallographic analysis. It was found that nano-ferrite particles distributed among the micro-particles of the Fe3Si powder alloy led to a high relative density (~93%) and suitable mechanical properties (TRS > 100 MPa, HV ~1 GPa, E-modulus ~140 GPa) of the composite. The high electrical resistivity (R ~6.7 ohm.cm) of the prepared composite indicates its potential application as a soft magnetic material at medium and high frequencies.
Keywords: micro- and nano-composite, soft magnetic materials, microwave sintering, mechanical and electric properties
Procedia PDF Downloads 364
951 Agile Smartphone Porting and App Integration of Signal Processing Algorithms Obtained through Rapid Development
Authors: Marvin Chibuzo Offiah, Susanne Rosenthal, Markus Borschbach
Abstract:
Certain research projects in computer science often involve research on existing signal processing algorithms and developing improvements on them. Research budgets are usually limited, hence there is limited time for implementing the algorithms from scratch. It is therefore common practice to use implementations provided by other researchers as a template. These are most commonly provided in a rapid-development, i.e. 4th-generation, programming language, usually Matlab. Rapid development is a common method in computer science research for quickly implementing and testing newly developed algorithms, and is also a common task within agile project organization. The growing relevance of mobile devices in the computer market also gives rise to the need to demonstrate the successful executability and performance measurement of these algorithms on a mobile device operating system and processor, particularly on a smartphone. Open mobile systems such as Android are most suitable for this task, which is to be performed as efficiently as possible. Furthermore, efficiently implementing an interaction between the algorithm and a graphical user interface (GUI) that runs exclusively on the mobile device is necessary in cases where the project’s goal statement also includes such a task. This paper examines different proposed solutions for porting computer algorithms obtained through rapid development into a GUI-based smartphone Android app and evaluates their feasibility. Accordingly, the feasible methods are tested and a short success report is given for each tested method.
Keywords: SMARTNAVI, Smartphone, App, Programming languages, Rapid Development, MATLAB, Octave, C/C++, Java, Android, NDK, SDK, Linux, Ubuntu, Emulation, GUI
Procedia PDF Downloads 478
950 Process Assessment Model for Process Capability Determination Based on ISO/IEC 20000-1:2011
Authors: Harvard Najoan, Sarwono Sutikno, Yusep Rosmansyah
Abstract:
Most enterprises now use information technology services as assets to support their business objectives. These services are provided by an internal service provider (inside the enterprise) or an external service provider (outside the enterprise). To deliver quality information technology services, the service provider (hereafter called the 'organization'), whether internal or external, must have a standard for its service management system. At present, the standard recognized as best practice for an organization's service management system is the international standard ISO/IEC 20000:2011. The most important part of this international standard is the first part, ISO/IEC 20000-1:2011 Service Management System Requirements, because it contains 22 organizational processes required to be implemented in an organizational environment in order to build, manage and deliver quality services to the customer. Assessing organizational management processes is the first step towards implementing ISO/IEC 20000:2011 in the organization's management processes. This assessment needs a Process Assessment Model (PAM) as an assessment instrument. A PAM comprises two parts: a Process Reference Model (PRM) and a Measurement Framework (MF). The PRM is built by transforming the 22 processes of ISO/IEC 20000-1:2011, and the MF is based on ISO/IEC 33020. This assessment instrument was designed to assess the capability of service management processes in Divisi Teknologi dan Sistem Informasi (Information Systems and Technology Division), an internal organization of PT Pos Indonesia. The results of this assessment model can be used to propose improvements to the capability of the service management system.
Keywords: ISO/IEC 20000-1:2011, ISO/IEC 33020:2015, process assessment, process capability, service management system
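For context, ISO/IEC 33020-style measurement frameworks rate each process attribute on the ordinal N-P-L-F scale from its achievement percentage. A sketch follows; the threshold values are quoted from common presentations of the standard and should be verified against the official text:

    def rate_attribute(achievement_pct):
        # N-P-L-F rating of a process attribute, ISO/IEC 33020 style;
        # boundary percentages are an assumption to check against the standard
        if achievement_pct <= 15:
            return "N (Not achieved)"
        elif achievement_pct <= 50:
            return "P (Partially achieved)"
        elif achievement_pct <= 85:
            return "L (Largely achieved)"
        return "F (Fully achieved)"

    print(rate_attribute(72))  # L (Largely achieved)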
Procedia PDF Downloads 465
949 An Adaptive Back-Propagation Network and Kalman Filter Based Multi-Sensor Fusion Method for Train Location System
Authors: Yu-ding Du, Qi-lian Bao, Nassim Bessaad, Lin Liu
Abstract:
The Global Navigation Satellite System (GNSS) is regarded as an effective approach for replacing the large number of track-side balises used in modern train localization systems. This paper describes a method based on the data fusion of a GNSS receiver sensor and an odometer sensor that can significantly improve positioning accuracy. A digital track map is needed as another sensor to project the two-dimensional GNSS position onto a one-dimensional along-track distance, due to the fact that the train’s position can only be constrained to the track. A model trained by a BP neural network is used to estimate the trend positioning error, which is related to the specific location and the proximate processing of the digital track map. Considering that under some conditions satellite signal failure will increase the GNSS positioning error, a detection step for the GNSS signal is applied. An adaptive weighted fusion algorithm is presented to reduce the standard deviation of the train speed measurement. Finally, an Extended Kalman Filter (EKF) is used for the fusion of the projected 1-D GNSS positioning data and the 1-D train speed data to obtain the position estimate. Experimental results suggest that the proposed method performs well and can reduce the positioning error notably.
Keywords: multi-sensor data fusion, train positioning, GNSS, odometer, digital track map, map matching, BP neural network, adaptive weighted fusion, Kalman filter
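A minimal linear sketch of the final fusion stage is shown below: a one-dimensional along-track state [distance, speed] is predicted with a constant-velocity model and corrected with a map-projected GNSS distance and an odometer speed. The paper's EKF, adaptive weighting and BP error model are omitted, and all noise values are hypothetical:

    import numpy as np

    dt = 1.0                               # update interval [s]
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
    H = np.eye(2)                          # both states measured directly
    Q = np.diag([0.5, 0.1])                # process noise (hypothetical)
    R = np.diag([25.0, 0.04])              # GNSS distance / odometer speed noise

    def kf_step(x, P, z):
        # predict along the track, then correct with the measurements
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(2) - K @ H) @ P_pred
        return x_new, P_new

    x, P = np.array([0.0, 20.0]), np.eye(2) * 10.0
    z = np.array([21.5, 19.8])             # projected GNSS pos [m], speed [m/s]
    x, P = kf_step(x, P, z)
    print(x)                               # fused [distance, speed] estimate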
Procedia PDF Downloads 252
948 Strategic Public Procurement: A Lever for Social Entrepreneurship and Innovation
Authors: B. Orser, A. Riding, Y. Li
Abstract:
To inform government about how gender gaps in SME (small and medium-sized enterprise) contracting might be redressed, the research question was: what are the key obstacles to, and response strategies for, increasing the engagement of women business owners among SME suppliers to the Government of Canada? Thirty-five interviews were conducted with senior policymakers, supplier diversity organization executives, and expert witnesses to the Canadian House of Commons Standing Committee on Government Operations and Estimates, and the qualitative data were analysed using NVivo 11 software. High-order response categories included: (a) SME risk mitigation strategies, (b) SME procurement program design, and (c) performance measures. The primary obstacles cited were government red tape and long and complicated requests for proposals (RFPs); the majority of 'common' complaints occur when SMEs have questions about the federal procurement process. Witness responses included the use of outcome-based rather than prescriptive procurement practices, more agile procurement, simplified RFPs, and making payment within 30 days a procurement priority. Risk mitigation strategies included the provision of procurement officers to assess risks and opportunities for businesses and the development of more agile procurement procedures and processes. Recommendations to enhance program design included: improved definitional consistency of qualifiers and selection criteria; better coordination across agencies; clarification about how SME suppliers benefit from federal contracting; goal setting; specification of categories that are most suitable for women-owned businesses; and increasing primary contractor awareness about the importance of subcontract relationships. Recommendations also included third-party certification of eligible firms and the need to enhance SMEs’ financial literacy to reduce financial errors. Finally, there remains a need for clear and consistent pre-program statistics to establish baselines (by sector and issuing department), performance measures, and targets based on the percentage of contracts granted, the value of contracts, the percentage of target employees (women, indigenous), and community benefits including the hiring of local employees. The study advances strategies to enhance federal procurement programs to facilitate socio-economic policy objectives.
Keywords: procurement, small business, policy, women
Procedia PDF Downloads 113
947 Study on the Mechanical Properties of Bamboo Fiber-Reinforced Polypropylene Based Composites: Effect of Gamma Radiation
Authors: Kamrun N. Keya, Nasrin A. Kona, Ruhul A. Khan
Abstract:
Bamboo fiber (BF) reinforced polypropylene (PP) based composites were fabricated by a conventional compression molding technique. In this investigation, the bamboo composites were manufactured with different fiber percentages, varying from 25% to 65% of the total weight of the composites. Both untreated and treated fibers were used to fabricate the BF/PP composites. A systematic study was done to observe the physical, mechanical, and interfacial behavior of the composites; mechanical properties such as tensile, impact, and bending properties were observed precisely. The maximum tensile strength (TS) and bending strength (BS) were found for the 50 wt% fiber composites, 65 MPa and 85.5 MPa respectively, whereas the highest tensile modulus (TM) and bending modulus (BM) were found to be 5.73 GPa and 7.85 GPa, respectively. The BF/PP based composites were irradiated under gamma radiation (source strength 50 kCi cobalt-60) at various doses (10, 20, 30, 40, 50 and 60 kGy; kGy = kilogray, the gray being the unit of absorbed radiation dose). The effect of gamma radiation on the composites was investigated, and it was found that the 30.0 kGy gamma dose yielded better mechanical properties than the other doses. After flexural testing, the fracture surfaces of both the untreated and treated composites were studied by scanning electron microscopy (SEM). The SEM results of the treated BF/PP based composites showed better fiber-matrix adhesion and interfacial bonding than the untreated BF/PP based composites. Water uptake and soil degradation tests of the untreated and treated composites were also carried out.
Keywords: bamboo fiber, polypropylene, compression molding technique, gamma radiation, mechanical properties, scanning electron microscope
Procedia PDF Downloads 133
946 Force Measurement for E-Cadherin-Mediated Intercellular Adhesion Probed by Protein Micropattern and Traction Force Microscopy
Authors: Chieh-Chung Tsou, Chun-Min Lo, Yeh-Shiu Chu
Abstract:
Cells' mechanical forces provide important physical cues in the regulation of proper cellular functions, such as cell differentiation, proliferation and migration. It is believed that the adhesive forces generated by cell-cell interaction are transmitted to the interior of the cell through the filamentous cortical cytoskeleton. Prominent among membrane receptors, cadherins are prototypical adhesive molecules able to generate remarkable forces that regulate intercellular adhesion. However, the mechanistic steps of mechanotransduction in cadherin-mediated adhesion remain very controversial. We are interested in understanding how cadherin protein complexes enable force generation and transmission at cell-cell contacts in the initial stage of intercellular adhesion. To provide better control of time, space, and substrate stiffness, this study uses a combination of protein micropatterning, micropipette manipulation, and traction force microscopy. Pair micropatterns of different shapes confine the cell spreading area, and gaps in the pairs varying from 2 to 8 microns are applied to monitor the forces that cell pairs generate, measured by traction force microscopy. Moreover, cell clones obtained from epithelial cells that have undergone genome editing are used to score the importance of known components of cadherin complexes for force generation. We believe that the results from this combinatory mechanobiological method will provide deep insights into the biophysical principles governing mechanotransduction in cadherin-mediated intercellular adhesion.
Keywords: cadherin, intercellular adhesion, protein micropattern, traction force microscopy
Procedia PDF Downloads 251
945 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends
Authors: Zheng Yuxun
Abstract:
This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to the adoption of advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices, underscoring the necessity for their precise identification. The narrative transitions to the technological evolution in defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for rapid processing that aligns with mass production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocities. Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.
Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis
Procedia PDF Downloads 51
944 Enhancing Cybersecurity Protective Behaviour: Role of Information Security Competencies and Procedural Information Security Countermeasure Awareness
Authors: Norshima Humaidi, Saif Hussein Abdallah Alghazo
Abstract:
Cybersecurity threats have become a serious issue recently, and one of the causes is human error, which usually consists of carelessness, ignorance, and failure to practice cybersecurity behaviour adequately. Using data from a quantitative survey, Partial Least Squares-Structural Equation Modelling (PLS-SEM) analysis was used to determine the factors that affect cybersecurity protective behaviour (CPB). This study adapts a cybersecurity protective behaviour model by focusing on two constructs that can enhance CPB: managers' information security competencies (MISI) and procedural information security countermeasure (PCM) awareness. The theory of leadership competencies was adapted to measure users' perception of competencies among security managers/leaders in the organization. Confirmatory factor analysis (CFA) testing shows that all the measurement items of each construct were individually adequate in their validity based on their factor loading values; moreover, each construct is valid based on its parameter estimates and statistical significance. The quantitative research findings show that PCM awareness influences CPB more strongly than MISI, while MISI was significantly related to PCM awareness. This study believes that the research findings can contribute to studies of human behaviour in information systems and are particularly beneficial to policymakers in improving organizations' strategic plans for information security, especially in this new era. Most organizations spend time and resources to provide and establish strategic plans for information security; however, if employees are not willing to comply and practice information security behaviour appropriately, then these efforts are in vain.
Keywords: cybersecurity, protection behaviour, information security, information security competencies, countermeasure awareness
Procedia PDF Downloads 95
943 21st Century Business Dynamics: Acting Local and Thinking Global through Extensive Business Reporting Language (XBRL)
Authors: Samuel Faboyede, Obiamaka Nwobu, Samuel Fakile, Dickson Mukoro
Abstract:
In the present dynamic business environment of corporate governance and regulations, financial reporting is an inevitable and extremely significant process for every business enterprise. Several financial elements such as annual reports, quarterly reports, ad-hoc filings, and other statutory/regulatory reports provide vital information to investors and regulators, and establish trust and rapport between the internal and external stakeholders of an organization. Investors today are very demanding and place great emphasis on the authenticity, accuracy, and reliability of financial data. For many companies, the Internet plays a key role in communicating business information, internally to management and externally to stakeholders. Despite the high prominence attached to external reporting, it is disconnected in most companies, which generate their external financial documents manually, resulting in a high degree of errors and prolonged cycle times. Chief Executive Officers and Chief Financial Officers are increasingly susceptible to endorsing error-laden reports, filing reports late, and non-compliance with regulatory acts. There is a lack of a common platform to manage the sensitive information, internally and externally, in financial reports. The Internet financial reporting language known as eXtensible Business Reporting Language (XBRL) continues to develop in the face of challenges and has now reached the point where much of its promised benefit is available. This paper looks at the emergence of this revolutionary twenty-first-century language of digital reporting. It posits that the world is today on the brink of an Internet revolution that will redefine the 'business reporting' paradigm. The new Internet technology, XBRL, is already being deployed and used across the world. The paper finds that XBRL is an eXtensible Markup Language (XML) based information format that places self-describing tags around discrete pieces of business information; once tags are assigned, it is possible to extract only the desired information, rather than having to download or print an entire document. XBRL is platform-independent and will work on any current or recent operating system, on any computer, and will interface with virtually any software. The paper concludes that corporate stakeholders and the government cannot afford to ignore XBRL. It therefore recommends that all must act locally and think globally now via the adoption of XBRL, which is changing the face of worldwide business reporting.
Keywords: XBRL, financial reporting, internet, internal and external reports
Procedia PDF Downloads 286
942 Development of a Practical Screening Measure for the Prediction of Low Birth Weight and Neonatal Mortality in Upper Egypt
Authors: Prof. Ammal Mokhtar Metwally, Samia M. Sami, Nihad A. Ibrahim, Fatma A. Shaaban, Iman I. Salama
Abstract:
Objectives: Reducing neonatal mortality by 2030 is still a challenging goal in developing countries, and low birth weight (LBW) is a significant contributor to this, especially where weighing newborns is not routinely possible. The present study aimed to determine simple, easy and reliable anthropometric measure(s) that can predict LBW and neonatal mortality. Methods: In a prospective cohort study, 570 babies born in districts of El Menia governorate, Egypt (where most deliveries occurred at home) were examined at birth. Newborn weight, length, and head, chest, mid-arm, and thigh circumferences were measured. The examined neonates were followed up during their first four weeks of life to record any mortality. The most predictive anthropometric measures were determined using the SPSS statistical package, and multiple logistic regression analysis was performed. Results: Head and chest circumferences with cut-off points < 33 cm and ≤ 31.5 cm, respectively, were the significant predictors of LBW. They carried the best combination of the highest sensitivity (89.8% and 86.4%) and the lowest false-negative predictive value (1.4% and 1.7%). Chest circumference with a cut-off point ≤ 31.5 cm was the significant predictor of neonatal mortality, with 83.3% sensitivity and a 0.43% false-negative predictive value. Conclusion: Using chest circumference with a cut-off point ≤ 31.5 cm is recommended as a single, simple anthropometric measurement for the prediction of both LBW and neonatal mortality. The proposed measure could act as a substitute for weighing newborns in communities where scales are not routinely available.
Keywords: low birth weight, neonatal mortality, anthropometric measures, practical screening
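To make the reported metrics concrete, the sketch below computes sensitivity and the false-negative predictive value from a 2x2 screening table. 'False-negative predictive value' is interpreted here as the share of screen-negative newborns who are actually LBW (1 − NPV), and the counts are hypothetical, chosen only to reproduce figures of the same order as those reported:

    def screening_metrics(tp, fn, tn, fp):
        # tp: screen-positive & LBW, fn: screen-negative & LBW,
        # tn: screen-negative & normal, fp: screen-positive & normal
        sensitivity = tp / (tp + fn)
        fnpv = fn / (fn + tn)  # 1 - negative predictive value
        return sensitivity, fnpv

    # Hypothetical counts for a 570-newborn cohort
    sens, fnpv = screening_metrics(tp=53, fn=6, tn=423, fp=88)
    print(f"sensitivity = {sens:.1%}, FN predictive value = {fnpv:.1%}")
    # sensitivity = 89.8%, FN predictive value = 1.4%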
Procedia PDF Downloads 99