Search results for: measurement accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6127

5017 Design and Development of Optical Sensor Based Ground Reaction Force Measurement Platform for GAIT and Geriatric Studies

Authors: K. Chethana, A. S. Guru Prasad, S. N. Omkar, B. Vadiraj, S. Asokan

Abstract:

This paper describes the ab-initio design, development and calibration of an Optical Sensor Ground Reaction Force Measurement Platform (OSGRFP) for gait and geriatric studies. The developed system employs an array of FBG sensors to measure the ground reaction forces along all three mutually perpendicular axes (X, Y and Z). The novelty of this work is twofold: first, its ability to resolve the tri-axial resultant forces during stance into the respective pure-axis loads; second, the applicability of inherently advantageous FBG sensors, which are well suited to biomechanical instrumentation. To validate the response of the FBG sensors installed in the OSGRFP and to measure the cross-sensitivity to forces applied in other directions, load sensors with indicators are used. Further, relevant mathematical formulations are presented for extracting the respective ground reaction forces from the wavelength shifts/strains of the FBG sensors on the OSGRFP. The results from this device have implications for understanding foot function, identifying issues in the gait cycle and measuring discrepancies between the left and right foot. The device also provides a method to quantify and compare the relative postural stability of different subjects under test, which has implications in post-surgical rehabilitation, geriatrics and optimizing training protocols for sports personnel.
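
The paper's exact formulations are not reproduced here, but the chain they describe, from Bragg wavelength shift to strain to per-axis force, can be sketched as below. The gauge factor is a typical textbook value for silica FBGs, and the strain-to-force calibration constant is hypothetical; the real constants come from the platform's calibration against the reference load sensors.

```python
# Hedged sketch: FBG wavelength shift -> strain -> axis load.
# gauge_factor ~ 0.78 is the usual (1 - p_e) value for silica fibers;
# calib_n_per_strain is a made-up per-axis calibration constant.

def fbg_strain(delta_lambda_nm: float, lambda0_nm: float = 1550.0,
               gauge_factor: float = 0.78) -> float:
    """Strain from the relative Bragg shift: dL/L0 = gauge_factor * strain."""
    return (delta_lambda_nm / lambda0_nm) / gauge_factor

def axis_force(strain: float, calib_n_per_strain: float = 2.0e6) -> float:
    """Force on one axis via a calibration constant (hypothetical value)."""
    return strain * calib_n_per_strain

eps = fbg_strain(0.6)  # a 0.6 nm shift on, say, the Z-axis sensor
print(f"strain = {eps:.2e}, Fz = {axis_force(eps):.1f} N")
```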

Keywords: balance and stability, gait analysis, FBG applications, optical sensor ground reaction force platform

Procedia PDF Downloads 404
5016 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics

Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty

Abstract:

Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. A literature survey reveals that reported methods rely on solid-phase extraction and liquid-liquid extraction, which are highly variable, time-consuming, costly and laborious. The present work aims to develop a simple, highly sensitive, precise and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples that can be utilized for preclinical studies. Method: Reverse-phase high-performance liquid chromatography. Results: Selectivity: Aminophylline and the internal standard were well separated from the co-eluting components, and there was no interference from endogenous material at the retention times of the analyte and the internal standard. The LLOQ measurable with acceptable accuracy and precision for the analyte was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL. The coefficient of determination was greater than 0.9967, indicating the linearity of this method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium and high quality control concentrations of aminophylline in plasma were within the acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all 3 QC levels. The mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions and that it is also stable for about 30 days when stored at -80 °C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following its oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating the drug concentration in rat plasma. The method described in our article includes a simple protein precipitation extraction technique with ultraviolet detection for quantification. The present method is simple and robust for fast high-throughput sample analysis with low analysis cost for analyzing aminophylline in biological samples. In this proposed method, no interfering peaks were observed at the elution times of aminophylline and the internal standard. The method also had sufficient selectivity, specificity, precision and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underlining the simplicity of the presented method.
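
As an illustration of the validation arithmetic, a minimal sketch of the intra-day precision (%CV) and accuracy (%bias) computation at one QC level, with invented replicate values; the usual bioanalytical acceptance window is ±15%:

```python
import statistics

def precision_accuracy(measured, nominal):
    """%CV (precision) and %bias (accuracy) for one QC level."""
    mean = statistics.mean(measured)
    cv = 100 * statistics.stdev(measured) / mean
    bias = 100 * (mean - nominal) / nominal
    return cv, bias

# Hypothetical low-QC replicates (µg/mL) against a 1.5 µg/mL nominal value
cv, bias = precision_accuracy([1.42, 1.55, 1.47, 1.51, 1.49], 1.5)
print(f"CV = {cv:.2f}%, bias = {bias:.2f}%")
```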

Keywords: Aminophylline, preclinical pharmacokinetics, rat plasma, RP-HPLC

Procedia PDF Downloads 223
5015 Synthesis of ZnO Nanoparticles with Varying Calcination Temperature for Photocatalytic Degradation of Ethylbenzene

Authors: Darlington Ashiegbu, Herman Johannes Potgieter

Abstract:

The increasing utilization of zinc oxide (ZnO) as a better alternative to TiO₂ has been attributed to its wide bandgap (3.37 eV), lower production cost, ability to absorb over a larger range of the UV spectrum and, in some cases, higher efficiency. ZnO nanoparticles were synthesized via a sol-gel process and calcined at 400 °C, 500 °C, and 650 °C. The as-synthesized nanoparticles were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDS), and Brunauer–Emmett–Teller (BET) surface area measurement. Scanning electron micrographs revealed pseudo-spherical and rod-like morphologies and a high rate of agglomeration for the sample calcined at 650 °C; the BET surface area was highest for the sample calcined at 500 °C; EDS confirmed the purity of the samples, as only Zn and O were detected; and XRD revealed the crystalline hexagonal wurtzite structure of the ZnO nanoparticles. All three samples were utilized in the degradation of ethylbenzene, and a UV-Vis spectrophotometer was used to monitor the degradation. The sample calcined at 500 °C had the highest surface area for reaction, the lowest agglomeration and the highest photocatalytic activity in the degradation of ethylbenzene. This reveals calcination temperature as a very important factor in achieving improved photocatalytic activity.
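
Since degradation was followed by UV-Vis absorbance, one common way to compare the three samples quantitatively is a pseudo-first-order fit, ln(C0/Ct) = kt, with absorbance standing in for concentration via Beer-Lambert. The readings below are invented; the abstract itself reports only the qualitative ranking.

```python
import numpy as np

# Hypothetical absorbance readings at the ethylbenzene peak
# (by Beer-Lambert, absorbance is proportional to concentration)
t = np.array([0, 15, 30, 45, 60])             # irradiation time, min
a = np.array([0.82, 0.61, 0.44, 0.33, 0.24])  # absorbance

y = np.log(a[0] / a)        # ln(C0/Ct)
k = np.polyfit(t, y, 1)[0]  # slope of the least-squares line = rate constant
print(f"apparent rate constant k = {k:.4f} 1/min")
```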

Keywords: ethylbenzene, pseudo-spherical, sol-gel, zinc oxide

Procedia PDF Downloads 164
5014 A Numerical Study of the Tidal Currents in the Persian Gulf and Oman Sea

Authors: Fatemeh Sadat Sharifi, A. A. Bidokhti, M. Ezam, F. Ahmadi Givi

Abstract:

This study focuses on tidal oscillations and their associated currents with the aim of establishing a general pattern in these seas. The purpose of the analysis is to find the amplitude and phase of several important tidal components. To this end, the Regional Ocean Modeling System (ROMS) was employed to assess the correlation and accuracy of this pattern. Finding the tidal harmonic components allows us to predict the tide in this region. Better prediction of these tides supports the design of standard platforms and suitable wave breakers, and assists coastal construction, navigation, fisheries, port management and tsunami research. The results show fair accuracy in the sea surface height (SSH). They reveal that tidal currents are highest in the Strait of Hormuz and in the narrow, shallow region around Kish Island. To investigate the flow patterns of the region, the results of a limited-area FVCOM model were utilized. Many features of the present-day view of ocean circulation have some precedent in tidal and long-wave studies: tidal waves are categorized among the long waves, so tidal current studies indeed inform subsequent studies of sea and ocean circulation.
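
The harmonic analysis step, extracting amplitude and phase for major constituents from a modeled or observed sea-surface-height series, can be sketched with a plain least-squares fit. The constituent speeds below are standard values; the hourly SSH series itself would come from ROMS output or tide-gauge data.

```python
import numpy as np

# Angular speeds of four major tidal constituents, degrees per hour
SPEEDS = {"M2": 28.9841042, "S2": 30.0, "K1": 15.0410686, "O1": 13.9430356}

def harmonic_fit(t_hours, ssh):
    """Least-squares amplitude and phase per constituent from an SSH series."""
    cols = [np.ones_like(t_hours, dtype=float)]
    for w in SPEEDS.values():
        wt = np.deg2rad(w) * t_hours
        cols += [np.cos(wt), np.sin(wt)]
    A = np.column_stack(cols)
    x, *_ = np.linalg.lstsq(A, ssh, rcond=None)
    result = {}
    for i, name in enumerate(SPEEDS):
        c, s = x[1 + 2 * i], x[2 + 2 * i]
        result[name] = (np.hypot(c, s), np.rad2deg(np.arctan2(s, c)))
    return result  # {constituent: (amplitude, phase in degrees)}
```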

Keywords: barotropic tide, FVCOM, numerical model, OTPS, ROMS

Procedia PDF Downloads 236
5013 Land Cover Classification Using Sentinel-2 Image Data and Random Forest Algorithm

Authors: Thanh Noi Phan, Martin Kappas, Jan Degener

Abstract:

The recently launched Sentinel-2 (S2) satellite (June 2015) brings great potential and opportunities for land use/cover mapping applications, owing to its fine spatial resolution, multispectral coverage and high temporal resolution. So far, only a handful of studies have used real S2 data for land cover classification, and in northern Vietnam, to the best of our knowledge, no studies have used S2 data for land cover mapping. The aim of this study is to provide a preliminary result of land cover classification using Sentinel-2 data with a rising state-of-the-art classifier, Random Forest. A case study with heterogeneous land use/cover east of the Hanoi capital, Vietnam, was chosen. All ten spectral bands of 10 and 20 m pixel size of the S2 images were used, with the 10 m bands resampled to 20 m. Among several classification algorithms, the supervised Random Forest (RF) classifier was applied because it has been reported as one of the most accurate methods for satellite image classification. The results showed that the red-edge and shortwave infrared (SWIR) bands play an important role in the classification results. A very high overall accuracy above 90% was achieved.
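
A minimal scikit-learn sketch of the classification step described above, assuming the ten S2 bands have been flattened into a pixel-by-band matrix with training labels (random data stands in for the real samples here); the feature importances are what reveal the prominent role of the red-edge and SWIR bands.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# X: (n_pixels, 10) band reflectances after resampling to 20 m; y: class labels
rng = np.random.default_rng(0)
X, y = rng.random((5000, 10)), rng.integers(0, 6, 5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, rf.predict(X_te)))
print("band importances:", rf.feature_importances_.round(3))
```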

Keywords: classification algorithm, classification, land cover, random forest, Sentinel-2, Vietnam

Procedia PDF Downloads 389
5012 A Real Time Ultra-Wideband Location System for Smart Healthcare

Authors: Mingyang Sun, Guozheng Yan, Dasheng Liu, Lei Yang

Abstract:

Driven by the demand for intelligent monitoring in rehabilitation centers and hospitals, a high-accuracy real-time location system based on UWB (ultra-wideband) technology is proposed. The system measures the precise location of a specific person, traces their movement and visualizes their trajectory on screen for doctors or administrators. Doctors can therefore view the position of a patient at any time and locate them immediately and exactly when an emergency occurs. In the design process, different algorithms were discussed and their errors analyzed. In addition, we discuss a simple but effective way of correcting the antenna delay error. By choosing the best algorithm and correcting errors with the corresponding methods, the system attained good accuracy. Experiments indicated that the ranging error of the system is lower than 7 cm, the locating error is lower than 20 cm, and the refresh rate exceeds 5 times per second. In future work, by embedding the system in wearable IoT (Internet of Things) devices, it could provide not only physical parameters but also the activity status of the patient, which would greatly help doctors in delivering healthcare.
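
One standard locating step such a system can use is linearized least-squares multilateration from tag-anchor range measurements; the paper compares several algorithms and adds antenna-delay correction, so this is only an illustrative baseline with invented anchor positions.

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Linearized least-squares 3D position from >= 4 anchor ranges."""
    x0, r0 = anchors[0], ranges[0]
    A = 2 * (anchors[1:] - x0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0, 0], [6, 0, 0], [0, 6, 0], [6, 6, 2.5]])
true_pos = np.array([2.0, 3.0, 1.2])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + 0.03  # ~3 cm noise
print(multilaterate(anchors, ranges))
```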

Keywords: intelligent monitoring, ultra-wideband technology, real-time location, IoT devices, smart healthcare

Procedia PDF Downloads 141
5011 An Intelligent Traffic Management System Based on WiFi and Bluetooth Sensing

Authors: Hamed Hossein Afshari, Shahrzad Jalali, Amir Hossein Ghods, Bijan Raahemi

Abstract:

This paper introduces an automated clustering solution that applies to WiFi/Bluetooth sensing data and is later used for traffic management applications. The paper first summarizes a number of clustering approaches and then shows their performance for noise removal. In this context, clustering is used to recognize WiFi and Bluetooth MAC addresses that belong to passengers traveling on a public urban transit bus. The main objective is to build an intelligent system that automatically filters out MAC addresses belonging to persons located outside the bus, for different routes in the city of Ottawa. The proposed intelligent system eliminates the need to define restrictive thresholds, which would otherwise reduce the accuracy as well as the range of applicability of the solution across different routes. The paper also discusses the performance benefits of the presented clustering approaches in terms of accuracy, time and space complexity, and ease of use. The clustering results can further be used for origin-destination estimation of individual passengers, traffic load prediction, and intelligent management of urban bus schedules.
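
A sketch of the clustering idea with one of the usual density-based algorithms, assuming each MAC address is summarized by features such as detection count, dwell time and signal strength along a route; the features and numbers are hypothetical, and the paper evaluates several clustering approaches.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# One row per MAC address: [detection_count, dwell_time_min, median_rssi]
rng = np.random.default_rng(1)
on_bus = rng.normal([40, 25, -55], [5, 4, 4], (60, 3))   # dense cluster
outside = rng.normal([3, 1, -80], [2, 1, 5], (200, 3))   # sparse noise
X = StandardScaler().fit_transform(np.vstack([on_bus, outside]))

labels = DBSCAN(eps=0.6, min_samples=10).fit_predict(X)
print("labels found:", set(labels))  # -1 marks noise, i.e. likely off-bus MACs
```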

Keywords: WiFi-Bluetooth sensing, cluster analysis, artificial intelligence, traffic management

Procedia PDF Downloads 242
5010 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly

Authors: Alex Eldo Simon, Abhishek Yadav

Abstract:

This study aimed to evaluate the utility of postmortem computed tomography (PMCT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death of cardiac origin by comparing the results of these two diagnostic methods. The study retrospectively analyzed PMCT data from 54 cases of sudden natural death and compared the findings with those of autopsy. It involved measuring the cardiothoracic ratio (CTR) from coronal CT images and determining the actual cardiac weight by weighing the heart during autopsy. The inclusion criteria were cases of sudden death suspected to be caused by cardiac pathology, while the exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and cases of decomposition. Sensitivity, specificity, and diagnostic accuracy were calculated, and receiver operating characteristic (ROC) curves were generated to evaluate the accuracy of using the CTR to detect an enlarged heart. The CTR is a radiological tool used to assess cardiomegaly by measuring the maximum cardiac diameter in relation to the maximum transverse diameter of the chest wall. The clinically used CTR criterion has been modified from 0.50 to 0.57 for use in postmortem settings, where abnormalities can be detected by comparing CTR values to this threshold. A CTR value of 0.57 or higher is suggestive of hypertrophy but not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a CTR between 0.50 and 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was 0.52 ± 0.06, and the mean heart weight was 369.4 ± 99.9 grams. Twelve cases were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy at traditional autopsy. The sensitivity of the hypertrophy test was 55.56% (95% CI: 26.66-81.12), the specificity was 84.44% (95% CI: 71.22-92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1-88.23). A limitation of the study was the low sample size of only 54 cases, which may limit the generalizability of the findings. The comparison of the cardiothoracic ratio with heart weight in this study suggests that PMCT may serve as a screening tool for medico-legal autopsies when performed by forensic pathologists. However, the low sensitivity of the test (55.56%) may limit its diagnostic accuracy, and further studies with larger sample sizes and more diverse populations are therefore needed to validate these findings.
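
The reported figures imply the 2x2 table TP = 5, FP = 7, FN = 4, TN = 38 (12 PMCT-positive, 9 autopsy-positive, 54 cases in total); a short sketch reproduces the sensitivity, specificity and accuracy from those counts.

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity and accuracy from a 2x2 confusion table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + fn + tn)
    return sens, spec, acc

# Counts implied by the abstract: CTR > 0.57 as the test,
# heart weight > 450 g at autopsy as the reference standard.
sens, spec, acc = diagnostic_stats(tp=5, fp=7, fn=4, tn=38)
print(f"sensitivity {sens:.2%}, specificity {spec:.2%}, accuracy {acc:.2%}")
# -> sensitivity 55.56%, specificity 84.44%, accuracy 79.63%
```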

Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio

Procedia PDF Downloads 81
5009 Single Atom Manipulation with a Four-Probe Scanning Tunneling Microscope

Authors: Jianshu Yang, Delphine Sordes, Marek Kolmer, Christian Joachim

Abstract:

Nanoelectronic devices, such as calculating circuits integrating molecule-scale logic gates and atomic-scale circuits, have recently been constructed and investigated. A major challenge is characterizing their functional properties, because of the difficulty of connecting the atomic scale to the micrometer scale. New experimental instruments and processes have therefore been proposed. To achieve precise measurement at the atomic scale while connecting to a micrometer-scale electrical integration controller, instrument techniques keep improving. Our new machine, a low-temperature ultra-high-vacuum four-probe scanning tunneling microscope, custom-built by Omicron GmbH, is expected to scale down to atomic-scale characterization. Here, we present our first verified results on the performance of this new instrument. The sample we selected is the Au(111) surface. The measurements were taken at 4.2 K. The atomic-resolution surface structure was observed with each of the four scanners, with a noise level better than 3 pm. With the tip-sample distance calibrated by I-z spectra, the sample conductance was derived from locally resolved atomic I-V spectra. Furthermore, the surface conductance was measured using two methods: (1) by landing two STM tips on the surface with the sample floating; and (2) with the sample floating and one of the landed tips grounded. In addition, single atom manipulation has been achieved with a modified tip design, giving performance comparable to a conventional LT-STM.

Keywords: low temperature ultra-high vacuum four scanning tunneling microscope, nanoelectronics, point contact, single atom manipulation, tunneling resistance

Procedia PDF Downloads 280
5008 The Reliability of Management Earnings Forecasts in IPO Prospectuses: A Study of Managers’ Forecasting Preferences

Authors: Maha Hammami, Olfa Benouda Sioud

Abstract:

This study investigates the reliability of management earnings forecasts with reference to two ingredients: verifiability and neutrality. Specifically, we examine the biasedness (or accuracy) of management earnings forecasts and the company-specific characteristics that can be associated with accuracy. Based on a sample of 102 IPO prospectuses published for admission on NYSE Euronext Paris from 2002 to 2010, we find that these forecasts are on average optimistic and that two of the five test variables, earnings variability and financial leverage, are significant in explaining ex post bias. Acknowledging the possibility that the bias results from managers' forecasting behavior, we then examine whether managers decide to under-predict, over-predict or forecast accurately for self-serving purposes. Explicitly, we examine the role of financial distress, operating performance, ownership by insiders and the state of the economy in influencing managers' forecasting preferences. We find that managers of distressed firms seem to over-predict future earnings, and that when managers are given more stock options, they tend to under-predict future earnings. Finally, we conclude that management earnings forecasts are affected by an intentional bias due to managers' forecasting preferences.

Keywords: intentional bias, management earnings forecasts, neutrality, verifiability

Procedia PDF Downloads 235
5007 3D Reconstruction of Human Body Based on Gender Classification

Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo

Abstract:

SMPL-X is a powerful parametric human body model that includes male, neutral, and female variants, with significant gender differences between the three. During 3D human body reconstruction, the correct selection of the standard template is crucial for obtaining accurate results. To address this issue, we developed an efficient gender classification algorithm to automatically select the appropriate template for 3D human body reconstruction. The key to this algorithm is the precise analysis of human body features: using the SMPL-X model, it detects and identifies gender features of the human body, thereby determining which standard template should be used. The accuracy of this algorithm makes the 3D reconstruction process more accurate and reliable, as model parameters can be adjusted to individual gender differences. SMPL-X and the related gender classification algorithm have brought important advances to the field of 3D human body reconstruction. By accurately selecting standard templates, they improve the accuracy of reconstruction and have broad potential in various application fields. These technologies continue to drive the development of the 3D reconstruction field, providing more realistic and accurate human body models.

Keywords: gender classification, joint detection, SMPL-X, 3D reconstruction

Procedia PDF Downloads 70
5006 Microfabrication of Three-Dimensional SU-8 Structures Using Positive SPR Photoresist as a Sacrificial Layer for Integration of Microfluidic Components on Biosensors

Authors: Su Yin Chiam, Qing Xin Zhang, Jaehoon Chung

Abstract:

Complementary metal-oxide-semiconductor (CMOS) integrated circuits (ICs) have attracted increased attention in the biosensor community because CMOS technology provides cost-effective, high-performance signal processing at a mass-production level. To supply biological samples and reagents effectively to the sensing elements, there is increasing demand for seamless integration of microfluidic components onto fabricated CMOS wafers by post-processing. Although PDMS microfluidic channels replicated from a separately prepared silicon mold can typically be aligned and bonded onto the CMOS wafers, this remains challenging owing to the inherently limited alignment accuracy (> ±10 μm) between the two layers. Here we present a new post-processing method to create three-dimensional microfluidic components using two photoresists of different polarity: an epoxy-based negative SU-8 photoresist and a positive SPR220-7 photoresist. The positive photoresist serves as a sacrificial layer, and the negative photoresist is utilized as a structural material to generate three-dimensional structures. Because both photoresists are patterned using standard photolithography, the dimensions of the structures can be controlled effectively, and the alignment accuracy is dramatically improved (< ±2 μm), so the approach can be adopted as an alternative post-processing method. To validate the proposed method, we applied this technique to build cell-trapping structures. The SU-8 photoresist was mainly used to generate the structures, and the SPR photoresist was used as a sacrificial layer to generate a sub-channel in the SU-8, allowing fluid to pass through. The sub-channel generated by etching the sacrificial layer works as a cell-capturing site. The well-controlled dimensions enabled single-cell capture on each site, and the high-accuracy alignment ensured that cells were trapped exactly on the sensing units of the CMOS biosensors.

Keywords: SU-8, microfluidic, MEMS, microfabrication

Procedia PDF Downloads 523
5005 Synthetic Aperture Radar Remote Sensing Classification Using the Bag of Visual Words Model for Land Cover Studies

Authors: Reza Mohammadi, Mahmod R. Sahebi, Mehrnoosh Omati, Milad Vahidi

Abstract:

Classification of high-resolution polarimetric Synthetic Aperture Radar (PolSAR) images plays an important role in land cover and land use management. Recently, classification algorithms based on the Bag of Visual Words (BOVW) model have attracted significant interest among scholars and researchers in and beyond the field of remote sensing. In this paper, the BOVW model with pixel-based low-level features has been implemented to classify a subset of a San Francisco Bay PolSAR image acquired by RADARSAT-2 in C-band. We used a segment-based decision-making strategy and compared the result with that of a traditional Support Vector Machine (SVM) classifier. The 90.95% overall accuracy of the classification shows that the proposed algorithm is comparable with state-of-the-art methods. In addition to increasing the classification accuracy, the proposed method decreased the undesirable speckle effect of SAR images.
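
A compact sketch of a BOVW pipeline of the kind described: low-level features are quantized against a k-means codebook, each segment becomes a visual-word histogram, and an SVM classifies the histograms (segment-based decision-making). Shapes and data are placeholders, not the paper's actual features.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bovw_histogram(features, codebook):
    """Normalized histogram of nearest visual words for one segment."""
    words = codebook.predict(features)
    h = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return h / h.sum()

# segments: one (n_pixels_i, n_features) array of low-level features each
rng = np.random.default_rng(2)
segments = [rng.random((rng.integers(50, 200), 8)) for _ in range(100)]
labels = rng.integers(0, 4, 100)  # dummy land-cover class per segment

codebook = KMeans(n_clusters=64, n_init=4, random_state=0)
codebook.fit(np.vstack(segments))
H = np.array([bovw_histogram(s, codebook) for s in segments])
clf = SVC(kernel="rbf").fit(H, labels)
```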

Keywords: Bag of Visual Words (BOVW), classification, feature extraction, land cover management, Polarimetric Synthetic Aperture Radar (PolSAR)

Procedia PDF Downloads 213
5004 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS

Authors: Eunsu Jang, Kang Park

Abstract:

In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or survivability) of the AGCV against an enemy's attack is a very important step. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which can damage internal components or injure the crew. The penetration equations are derived from penetration experiments, which require a long time and great effort; moreover, they usually hold only for the specific target material and bullet type used in the experiments. Penetration simulation using ANSYS can therefore be another option for calculating penetration depth. However, it is very important to model the targets and select the input parameters appropriately in order to obtain an accurate penetration depth. This paper performs a sensitivity analysis of the ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS for penetration analysis: maximizing the accuracy of calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters was performed and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary conditions, material properties and target diameter, were tested and selected to minimize the error between the simulation results and the experimental data from papers on the penetration equation. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted for optimized overall performance. The analysis found the following: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase. 2) As the target diameter decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress of the target material decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives a greater penetration depth than that with both the side and rear surfaces fixed. Using these findings, the input parameters can be tuned to minimize the error between simulation and experiment. With delicately tuned input parameters, penetration analysis can be done in ANSYS on a computer without actual experiments. Data from penetration experiments are usually hard to obtain for security reasons, and published papers provide them only for a limited set of target materials. The next step of this research is to generalize this approach to anticipate the penetration depth by interpolating the known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the early design stages of an AGCV at the modelling and simulation stage.
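
The tuning loop described above reduces, for each candidate parameter set, to an RMS error between simulated and experimental penetration depths; a small sketch with placeholder values:

```python
import math

def rms_error(simulated, experimental):
    """RMS error between simulated and experimental penetration depths."""
    return math.sqrt(sum((s - e) ** 2 for s, e in zip(simulated, experimental))
                     / len(simulated))

# Hypothetical depths (mm) for one candidate mesh size / boundary condition
sim = [31.5, 44.2, 58.9]
exp = [30.0, 45.0, 60.0]
print(f"RMS error = {rms_error(sim, exp):.2f} mm")
```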

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis

Procedia PDF Downloads 402
5003 A Method for Precise Vertical Position of the Implant When Using Computerized Surgical Guides and Bone Reduction

Authors: Abraham Finkelman

Abstract:

Computerized surgical guides (CSGs) have been proven to be a predictable way to place dental implants, with relatively high accuracy in comparison to the treatment plan. A bone-supported CSG allows the necessary changes to the hard tissue to be made before and after implant placement. The CSG gives an accurate position for the drilling, and during implant placement it allows the vertical position of the implant to be altered, which changes the final position of the abutment and avoids the risk of damage to adjacent anatomical structures. Any changes required to the bone level can be made prior to the fixation of the CSG using a reduction guide, which incurs extra surgical fees and requires a second surgical guide. Any changes to the bone level after implant placement risk damaging the implant neck surface. The technique consists of a universal system that allows excess bone to be removed around the implant sockets prior to implant placement, which then enables the implant to be placed in the vertical position accurately, as planned with the CSG. The system consists of hollow pins of different sizes and diameters, depending on the implant system in use: lengths from 6 mm to 16 mm and diameters from 2.6 mm to 4.8 mm. Upon completion of the drilling, the pin is inserted into the implant socket using the insertion tool. Once the insertion tool has been unscrewed from the pin, the bone reduction can continue. The bone reduction can be done using conventional methods after removal of all the excess bone around the pin. The insertion tool is then screwed into the pin and the pin is removed. The new bone level is now at the crest of the implant socket, which marks the vertical position of the implant. In some cases, when the implant is located very close to anatomical structures, any deviation of the vertical position of the implant during surgery can cause damage to those structures, creating irreversible harm such as paresthesia or dysesthesia of the mandibular nerve. If immediate loading is planned and the temporary restoration has been made based on the computerized plan, deviation in the vertical position of the implant will affect the position of the abutment, compromising the fit of the temporary prosthesis and extending the working time until the prosthesis is adapted to the new position.

Keywords: bone reduction, computer aided navigation, dental implant placement, surgical guides

Procedia PDF Downloads 332
5002 Immunization Data Quality in Public Health Facilities in the Pastoralist Communities: A Comparative Study with Evidence from Afar and Somali Regional States, Ethiopia

Authors: Melaku Tsehay

Abstract:

The Consortium of Christian Relief and Development Associations (CCRDA) and the CORE Group Polio Partners (CGPP) Secretariat have been working with the Global Alliance for Vaccines and Immunization (GAVI) to improve immunization data quality in the Afar and Somali Regional States. The main aim of this study was to compare the quality of immunization data before and after these interventions in health facilities in the pastoralist communities in Ethiopia. To this end, a comparative cross-sectional study was conducted on 51 health facilities. The baseline data were collected in May 2019 and the endline data in August 2021. The WHO data quality self-assessment (DQS) tool was used to collect data. A significant improvement was seen in the accuracy of pentavalent vaccine (PT1) data (p = 0.012) at the health posts (HPs), and in PT3 (p = 0.010) and measles (p = 0.020) data at the health centers (HCs). Moreover, a highly significant improvement was observed in the accuracy of tetanus toxoid (TT2) data at the HPs (p < 0.001). The level of over- or under-reporting was found to be < 8% at the HPs and < 10% at the HCs for PT3. Data completeness also increased from 72.09% to 88.89% at the HCs. Nearly 74% of the health facilities reported their immunization data on time, much better than at baseline (7.1%) (p < 0.001). These findings may provide some hints for policies and programs targeting the improvement of immunization data quality in pastoralist communities.

Keywords: data quality, immunization, verification factor, pastoralist region

Procedia PDF Downloads 125
5001 Emotion Classification Using Recurrent Neural Network and Scalable Pattern Mining

Authors: Jaishree Ranganathan, MuthuPriya Shanmugakani Velsamy, Shamika Kulkarni, Angelina Tzacheva

Abstract:

Emotions play an important role in everyday life. Analyzing these emotions or feelings from social media platforms like Twitter, Facebook, blogs, and forums, based on user comments and reviews, is important for various applications, including brand monitoring, marketing strategy, reputation, and competitor analysis. The opinions or sentiments mined from such data help in understanding the current state of the user, but they do not directly provide intuitive insights into what actions should be taken to benefit the end user or business. Actionable pattern mining provides suggestions or actionable recommendations on what changes or actions need to be taken in order to benefit the end user. In this paper, we propose automatic classification of emotions in Twitter data using a recurrent neural network with gated recurrent units (GRU). We achieve a training accuracy of 87.58% and a validation accuracy of 86.16%. We also extract action rules with respect to the user emotion, which help to provide actionable suggestions.
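
A minimal Keras sketch of a GRU tweet-emotion classifier in the spirit of the paper; the vocabulary size, sequence length, layer sizes and six emotion classes are placeholder choices, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAXLEN, CLASSES = 20000, 40, 6  # placeholder hyper-parameters

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAXLEN,)),    # integer-encoded tweet tokens
    layers.Embedding(VOCAB, 128),
    layers.GRU(64),                     # gated recurrent unit layer
    layers.Dropout(0.3),
    layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, validation_split=0.1, epochs=5)
```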

Keywords: emotion mining, twitter, recurrent neural network, gated recurrent unit, actionable pattern mining

Procedia PDF Downloads 168
5000 Measurement of Project Success in Construction Using Performance Indices

Authors: Annette Joseph

Abstract:

Background: The construction industry is dynamic in nature owing to increasing uncertainties in technology, budgets, and development processes, making projects more complex. Thus, predicting project performance and the chances of success has become difficult. The goal of all parties involved in construction projects is to complete them successfully on schedule, within the planned budget, with the highest quality and in the safest manner. However, the concept of project success has remained ambiguously defined in the minds of construction professionals. Purpose: This paper aims to analyze a project in terms of its performance and to measure its success. Methodology: The parameters for evaluating project success and the indices for measuring the success/performance of a project were identified through a literature study. Through questionnaire surveys aimed at project stakeholders, data were collected from two live case studies (one ongoing and one completed project) on their overall performance in terms of success/failure. Finally, with the help of the SPSS tool, the survey data were analyzed and applied to the selected performance indices. Findings: The score calculated using the indices and models helps in assessing the overall performance of the project and in interpreting whether the project will be a success or a failure. This study acts as a reference for firms to carry out performance evaluation and success measurement on a regular basis, helping projects identify the areas which are performing well and those that require improvement. Originality & Value: The study shows that by measuring project performance, a project's deviation towards success or failure can be assessed, helping to suggest early remedial measures to bring it back on track and ensure that the project is completed successfully.

Keywords: project, performance, indices, success

Procedia PDF Downloads 192
4999 Short-Term Load Forecasting Based on Variational Mode Decomposition and Least Square Support Vector Machine

Authors: Jiangyong Liu, Xiangxiang Xu, Bote Luo, Xiaoxue Luo, Jiang Zhu, Lingzhi Yi

Abstract:

To address the non-linearity and high randomness of the original power load sequence, which degrade power load forecasting accuracy, a short-term load forecasting method is proposed. The method is based on a least-squares support vector machine optimized by the improved sparrow search algorithm proposed in this paper and combined with variational mode decomposition. Variational mode decomposition decomposes the raw power load data into a series of intrinsic mode function components, which reduces the complexity and instability of the raw data while overcoming mode mixing; the proposed improved sparrow search algorithm solves the difficulty of selecting the learning parameters of the least-squares support vector machine. Finally, comparison experiments show that the method can effectively improve prediction accuracy.
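
For reference, the least-squares SVM at the core of the method has a closed-form training step: with kernel matrix K and regularization γ, solve [[0, 1ᵀ], [1, K + I/γ]]·[b; α] = [0; y], and predict with f(x) = Σᵢ αᵢK(x, xᵢ) + b. A numpy sketch follows; the VMD decomposition and the sparrow-search tuning of γ and the kernel width are omitted, and the toy data is invented.

```python
import numpy as np

def rbf(A, B, sigma):
    """RBF kernel matrix between row sets A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Closed-form LS-SVM regression (Suykens formulation)."""
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:], M[1:, 0] = 1.0, 1.0
    M[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf(Xq, X, sigma) @ alpha + b

# Toy use: predict the next load value from the previous 24 lagged values
rng = np.random.default_rng(3)
X = rng.random((200, 24))
y = X.sum(axis=1) + 0.1 * rng.standard_normal(200)
predict = lssvm_fit(X, y)
print(predict(X[:3]), y[:3])
```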

Keywords: load forecasting, variational mode decomposition, improved sparrow search algorithm, least square support vector machine

Procedia PDF Downloads 111
4998 Monitoring the Drying and Grinding Process during Production of Celitement through a NIR-Spectroscopy Based Approach

Authors: Carolin Lutz, Jörg Matthes, Patrick Waibel, Ulrich Precht, Krassimir Garbev, Günter Beuchle, Uwe Schweike, Peter Stemmermann, Hubert B. Keller

Abstract:

Online measurement of product quality is a challenging task in cement production, especially in the production of Celitement, a novel environmentally friendly hydraulic binder. The mineralogy and chemical composition of clinker in ordinary Portland cement production are measured by X-ray diffraction (XRD) and X-ray fluorescence (XRF), where only crystalline constituents can be detected. Only a small part of the Celitement components can be measured via XRD, however, because most constituents have an amorphous structure. This paper describes the development of algorithms suitable for on-line monitoring of the final processing step of Celitement based on NIR data. For calibration, intermediate products were dried at different temperatures and ground for variable durations. The products were analyzed using XRD and thermogravimetric analysis together with NIR spectroscopy to investigate the dependency between the drying and milling processes on the one side and the NIR signal on the other. As a result, different characteristic parameters have been defined. A short overview of the Celitement process and the challenging tasks of online measurement and evaluation of product quality will be presented. Subsequently, methods for the systematic development of near-infrared calibration models and the determination of the final calibration model will be introduced. The application of the model to experimental data illustrates that NIR spectroscopy allows a quick and sufficiently exact determination of crucial process parameters.
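
NIR calibration models of this kind are commonly partial least squares regressions from spectra to reference values (here the XRD/TGA-derived parameters); a hedged scikit-learn sketch with random stand-in data:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# X: NIR spectra (samples x wavelength channels); y: a reference process
# parameter, e.g. a TGA-derived water content. Random stand-in data here.
rng = np.random.default_rng(4)
X = rng.random((80, 350))
y = X[:, 40:60].mean(axis=1) + 0.05 * rng.standard_normal(80)

pls = PLSRegression(n_components=6)  # component count chosen by CV in practice
print("cross-validated R^2:", cross_val_score(pls, X, y, cv=5).mean().round(3))
```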

Keywords: calibration model, celitement, cementitious material, NIR spectroscopy

Procedia PDF Downloads 501
4997 Fabrication Characteristics and Mechanical Behaviour of Fly Ash-Alumina Reinforced Zn-27Al Alloy Matrix Hybrid Composite Using Stir-Casting Technique

Authors: Oluwagbenga B. Fatile, Felix U. Idu, Olajide T. Sanya

Abstract:

This paper reports the viability of developing Zn-27Al alloy matrix hybrid composites reinforced with alumina, graphite and fly ash (a solid waste byproduct of coal combustion in thermal power plants). This research work was aimed at developing a low-cost, high-performance Zn-27Al matrix composite with low density. Alumina particulates (Al₂O₃) and graphite, with 0, 2, 3, 4, and 5 wt% fly ash, were utilized to prepare a 10 wt% reinforcing phase with the Zn-27Al alloy as the matrix, using a two-step stir-casting method. Density measurement, estimated percentage porosity, tensile testing, microhardness measurement, and optical microscopy were used to assess the performance of the composites produced. The results show that the hardness, ultimate tensile strength, and percent elongation of the hybrid composites decrease with increasing fly ash content. The maximum decreases in hardness and ultimate tensile strength, of 13.72% and 15.25% respectively, were observed for the composite grade containing 5 wt% fly ash. The percentage elongation of the composite sample without fly ash is 8.9%, comparable with that of the sample containing 2 wt% fly ash (8.8%). The fracture toughness of the fly-ash-containing composites was, however, superior to that of the composites without fly ash, with the 5 wt% fly ash composite exhibiting the highest fracture toughness. The results show that fly ash can be utilized as a complementary reinforcement in ZA-27 alloy matrix composites to reduce cost.

Keywords: fly ash, hybrid composite, mechanical behaviour, stir-cast

Procedia PDF Downloads 335
4996 Audio-Lingual Method and the English-Speaking Proficiency of Grade 11 Students

Authors: Marthadale Acibo Semacio

Abstract:

Speaking skill is a crucial part of English language teaching and learning, which shows the great importance of this skill in English language classes. Through speaking, ideas and thoughts are shared with other people, and smooth interaction between people takes place. The study examined the speaking proficiency levels of the control and experimental groups on pronunciation, grammatical accuracy, and fluency. As a quasi-experimental study, it also determined the presence or absence of significant changes in speaking proficiency, in terms of pronouncing words correctly, grammatical accuracy and fluency, given the two methods applied to the groups of English language students: the traditional and audio-lingual methods. Descriptive and inferential statistics were employed according to the stated specific problems. The study employed a video presentation with prior information about it: in the video, the teacher acts as a model, giving instructions on what is to be done, and the students then perform the activity. The students were paired purposively based on their learning capabilities. Observing proper ethics, their performance was audio-recorded to help the researcher assess each learner using the modified speaking rubric. The study revealed that those under the traditional method were more fluent than those under the audio-lingual method. With respect to the way each method deals with the feelings of the student, the audio-lingual method fails to provide a principle relating to this area and follows the assumption that the students' intrinsic motivation to learn the target language will spring from their interest in the structure of the language. However, the speaking proficiency levels of the students were remarkably reinforced in reading different words through the aid of aural media with their teachers. The study concluded that the audio-lingual method is not a stand-alone method but an aid that helps teachers improve students' speaking proficiency in the English language. Hence, the audio-lingual approach is encouraged in teaching the English language, on top of the chalk-and-talk or traditional method, to improve the speaking proficiency of students.

Keywords: audio-lingual, speaking, grammar, pronunciation, accuracy, fluency, proficiency

Procedia PDF Downloads 70
4995 An Experimental Study for Assessing Email Classification Attributes Using Feature Selection Methods

Authors: Issa Qabaja, Fadi Thabtah

Abstract:

Email phishing classification is one of the vital problems in the online security research domain and has attracted several scholars due to its impact on the payments users perform online daily. One way to achieve good performance with detection algorithms on the email phishing problem is to identify the minimal set of features that significantly impact the phishing detection rate. This paper investigates three known feature selection methods, Information Gain (IG), Chi-square and Correlation Features Set (CFS), on the email phishing problem to separate highly influential features from less influential ones in phishing detection. We measure the degree of influence by applying four data mining algorithms to a large set of features, and we compare the accuracy of these algorithms on the complete feature set before and after feature selection. The experiments show that 12 common significant features were chosen among the considered features by the feature selection methods. Further, the average detection accuracy achieved by the data mining algorithms on the reduced 12-feature set was only slightly affected compared with that achieved on the 47-feature set.
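
A sketch of the scoring step using scikit-learn stand-ins: mutual information approximates IG, and the chi-square score is available directly (CFS has no direct scikit-learn equivalent and is omitted here). The phishing feature matrix is simulated.

```python
import numpy as np
from sklearn.feature_selection import chi2, mutual_info_classif

# X: 47 binary phishing features per email; y: phish (1) or legitimate (0)
rng = np.random.default_rng(5)
X = rng.integers(0, 2, (1000, 47))
y = rng.integers(0, 2, 1000)

ig = mutual_info_classif(X, y, discrete_features=True)  # stands in for IG
chi, _ = chi2(X, y)
top12_ig = set(np.argsort(ig)[::-1][:12])    # 12 highest-scoring features
top12_chi = set(np.argsort(chi)[::-1][:12])
print("features picked by both:", sorted(top12_ig & top12_chi))
```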

Keywords: data mining, email classification, phishing, online security

Procedia PDF Downloads 433
4994 A Character Detection Method for Ancient Yi Books Based on Connected Components and Regressive Character Segmentation

Authors: Xu Han, Shanxiong Chen, Shiyu Zhu, Xiaoyu Lin, Fujia Zhao, Dingwang Wang

Abstract:

Character detection is an important issue in character recognition for ancient Yi books, and the accuracy of detection directly affects the recognition results. Considering the complex layout, the lack of standard typesetting and the mixed arrangement of images and text, we propose a character detection method for ancient Yi books based on connected components and regressive character segmentation. First, the scanned images of ancient Yi books are preprocessed with non-local means filtering, and a modified locally adaptive threshold binarization algorithm is then used to obtain binary images that separate the foreground from the background. Second, non-text areas are removed by a method based on connected components. Finally, single characters in the ancient Yi books are segmented by our method. The experimental results show that the method can effectively separate text areas from non-text areas in ancient Yi books, achieves high accuracy and recall in the character detection experiments, and effectively solves the problem of character detection and segmentation in character recognition of ancient books.
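
A sketch of the connected-component filtering step with OpenCV, assuming a binarized page in which text pixels are white; the area thresholds are placeholders to be tuned to the page resolution.

```python
import cv2
import numpy as np

def remove_non_text(binary, min_area=20, max_area=5000):
    """Keep connected components whose area is plausible for a character."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    out = np.zeros_like(binary)
    for i in range(1, n):  # label 0 is the background
        if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area:
            out[labels == i] = 255
    return out

# binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
#                                cv2.THRESH_BINARY_INV, 31, 10)
# text_only = remove_non_text(binary)
```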

Keywords: CCS concepts, computing methodologies, interest point, salient region detection, image segmentation

Procedia PDF Downloads 132
4993 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network

Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

We consider a database that records average traffic speeds measured at five-minute intervals for all the links in the traffic network of a metropolitan city. While models learned from this data that can predict future traffic speeds would benefit applications such as car navigation systems, building predictive models for every link becomes a nontrivial job if the number of links in a given network is huge. An advantage of adopting k-nearest neighbor (k-NN) as the predictive model is that it does not require any explicit model building. On the other hand, k-NN takes a long time to make a prediction because it needs to search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale behind this is to look mainly at recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, namely the current and past traffic speeds of the target link and of the neighboring links up- and downstream. The performance of these models is compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
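
A sketch of the experiment's core idea: a k-NN regressor over vectors of recent link speeds, where shrinking the searched database to the most recent records trades a little accuracy for much faster queries. The feature layout and window sizes are placeholders.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(6)
# One row per time step: current + past speeds of the target link and of
# up/down-stream neighbor links (random stand-in data).
X = rng.random((100_000, 18))
y = rng.random(100_000)  # speed of the target link 5 minutes ahead

for window in (100_000, 20_000, 5_000):  # search only the most recent rows
    knn = KNeighborsRegressor(n_neighbors=10).fit(X[-window:], y[-window:])
    pred = knn.predict(X[:100])  # query time shrinks with the window size
```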

Keywords: big data, k-NN, machine learning, traffic speed prediction

Procedia PDF Downloads 364
4992 Using Bidirectional Encoder Representations from Transformers to Extract Topic-Independent Sentiment Features for Social Media Bot Detection

Authors: Maryam Heidari, James H. Jones Jr.

Abstract:

Millions of online posts about different topics and products are shared on popular social media platforms. One use of this content is to provide crowd-sourced information about a specific topic, event or product. However, this use raises an important question: what percentage of the information available through these services is trustworthy? In particular, might some of it be generated by a machine, i.e., a bot, instead of a human? Bots can be, and often are, purposely designed to generate enough volume to skew an apparent trend or position on a topic, yet the consumer of such content cannot easily distinguish a bot post from a human post. In this paper, we introduce a model for social media bot detection which uses Bidirectional Encoder Representations from Transformers (Google BERT) for sentiment classification of tweets to identify topic-independent features. Our use of a natural language processing approach to derive topic-independent features for our new bot detection model distinguishes this work from previous bot detection models. We achieve 94% accuracy in classifying content as generated by a bot or a human, where the most accurate prior work achieved an accuracy of 92%.
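
A sketch of extracting a fixed-length representation of a tweet from a pretrained BERT encoder with Hugging Face Transformers; feeding such vectors to a downstream bot/human classifier follows the spirit of the paper, though the pooling and model choice here are assumptions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def tweet_embedding(text: str) -> torch.Tensor:
    """Mean-pooled last hidden state as a feature vector for a classifier."""
    batch = tok(text, return_tensors="pt", truncation=True, max_length=64)
    with torch.no_grad():
        out = bert(**batch).last_hidden_state  # (1, seq_len, 768)
    return out.mean(dim=1).squeeze(0)          # (768,)

vec = tweet_embedding("Best product ever, buy now!!!")
print(vec.shape)  # torch.Size([768])
```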

Keywords: bot detection, natural language processing, neural network, social media

Procedia PDF Downloads 116
4991 Comparison of Deep Convolutional Neural Networks Models for Plant Disease Identification

Authors: Megha Gupta, Nupur Prakash

Abstract:

Identification of plant diseases has been performed using machine learning and deep learning models on datasets containing images of healthy and diseased plant leaves. The current study evaluates several deep learning models based on convolutional neural network (CNN) architectures for the identification of plant diseases. For this purpose, the publicly available New Plant Diseases Dataset, an augmented version of the PlantVillage dataset available on the Kaggle platform and containing 87,900 images, has been used. The dataset contains images of 26 diseases of 14 different plants and images of 12 healthy plants. The CNN models selected for this study are AlexNet, ZFNet, VGGNet (four models), GoogLeNet, and ResNet (three models). The selected models were trained using PyTorch, an open-source machine learning library, on Google Colaboratory. A comparative study was carried out to analyze the high degree of accuracy achieved using these models. The highest test accuracy and F1-score, of 99.59% and 0.996 respectively, were achieved using GoogLeNet with a mini-batch momentum-based gradient descent learning algorithm.
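
A sketch of how one of the compared models can be fine-tuned in PyTorch. ResNet-18 is used here for brevity (the study's best result came from GoogLeNet); the 38 output classes match the 26 diseased plus 12 healthy leaf categories.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 38  # 26 disease + 12 healthy leaf categories

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head

# Mini-batch momentum-based gradient descent, as in the paper
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)  # dummy batch of leaf images
loss = criterion(model(x), torch.randint(0, NUM_CLASSES, (8,)))
optimizer.zero_grad(); loss.backward(); optimizer.step()
```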

Keywords: comparative analysis, convolutional neural networks, deep learning, plant disease identification

Procedia PDF Downloads 200
4990 Review of Research on Effectiveness Evaluation of Technology Innovation Policy

Authors: Xue Wang, Li-Wei Fan

Abstract:

Technology innovation has become the driving force of social and economic development and transformation, and the guidance and support of public policies are an important condition for achieving technology innovation goals. Policy effectiveness evaluation is instructive for policy learning and adjustment. This paper reviews existing studies that systematically evaluate the effectiveness of policy-driven technological innovation. We used 167 articles from the WOS and CNKI databases as samples to clarify the measurement of technological innovation indicators and to analyze the classification and application of policy evaluation methods. In general, technology innovation input and technological output are the two main aspects of technological innovation index design, among which patents are the focus of research: the number of patents reflects the scale of technological innovation, while the quality of patents reflects the value of innovation from multiple aspects. As for policy evaluation methods, statistical analysis methods are applied to the formulation, selection and ex post evaluation of policies to analyze the effect of policy implementation qualitatively and quantitatively. Bibliometric methods are mainly based on public policy texts, discriminating inter-governmental relationships and the multi-dimensional value of policies. Decision analysis focuses on the establishment and measurement of comprehensive evaluation index systems for public policy. Economic analysis methods focus on the performance and output of technological innovation to test policy effects. Finally, this paper puts forward prospects for future research directions.

Keywords: technology innovation, index, policy effectiveness, evaluation of policy, bibliometric analysis

Procedia PDF Downloads 72
4989 Diabetes Diagnosis Model Using Rough Set and K-Nearest Neighbor Classifier

Authors: Usiobaifo Agharese Rosemary, Osaseri Roseline Oghogho

Abstract:

Diabetes is a complex group of diseases with a variety of causes; it is a disorder of the body's metabolism in the digestion of carbohydrate foods. The application of machine learning in the field of medical diagnosis has been the focus of many researchers, and the use of recognition and classification models as decision support tools has helped medical experts in the diagnosis of diseases. Considering the large volume of medical data, which requires special techniques, experience, and high diagnostic skill, an artificial intelligence system to assist medical personnel and enhance their efficiency and accuracy in diagnosis would be an invaluable tool. This study proposes a diabetes diagnosis model using rough sets and a k-nearest neighbor classifier. The system consists of two modules, the feature extraction module and the predictor module: rough set theory is used to preprocess the attributes, while the k-nearest neighbor classifier is used to classify the given data. The dataset used for this model was taken from the University of Benin Teaching Hospital (UBTH) database. Half of the data was used for training, while the other half was used for testing the system. The proposed model achieved over 80% accuracy.

Keywords: classifier algorithm, diabetes, diagnostic model, machine learning

Procedia PDF Downloads 336
4988 Study on Carbon Nanostructures Influence on Changes in Static Friction Forces

Authors: Rafał Urbaniak, Robert Kłosowiak, Michał Ciałkowski, Jarosław Bartoszewicz

Abstract:

The Chair of Thermal Engineering at Poznan University of Technology has been conducting research on the possibilities of using carbon nanostructures in energy and mechanics applications for several years. Those studies have produced results in the form of cooperation with foreign research centres, numerous publications and patent applications. The authors of this paper have studied the influence of multi-walled carbon nanostructures on changes in the static friction arising when steel surfaces are moved. Tests were made using an original test stand consisting of an automatically controlled inclined plane driven by precise stepper motors. A computer program created in the LabVIEW environment was responsible for monitoring the stand operation and measurement accuracy and for archiving the obtained results. Such a solution enabled high accuracy and repeatability in all conducted experiments. Tests and analysis of the obtained results allowed us to determine how additional layers of carbon nanostructures influenced the static friction coefficients. At the same time, we analyzed the potential for applying the nanostructures under consideration in mechanics.
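
On an inclined-plane stand of this kind, the static friction coefficient follows directly from the tilt angle at which the sample first slips, μs = tan(θc); a small sketch with invented slip angles:

```python
import math

def static_friction_coefficient(theta_slip_deg: float) -> float:
    """mu_s from the critical tilt angle at which the sample starts to slide."""
    return math.tan(math.radians(theta_slip_deg))

# Hypothetical slip angles for bare steel vs. a nanostructure-coated surface
for label, theta in [("bare steel", 16.7), ("nanostructure layer", 14.2)]:
    print(f"{label}: mu_s = {static_friction_coefficient(theta):.3f}")
```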

Keywords: carbon nanotubes, static friction, dynamic friction

Procedia PDF Downloads 315