Search results for: academic speed and accuracy
6280 Computer-Aided Diagnosis System Based on Multiple Quantitative Magnetic Resonance Imaging Features in the Classification of Brain Tumor
Authors: Chih Jou Hsiao, Chung Ming Lo, Li Chun Hsieh
Abstract:
Brain tumors do not have a high incidence rate, but their high mortality rate and poor prognosis make them a serious concern. On clinical examination, the grading of brain tumors depends on pathological features. However, histopathological analysis has weaknesses that can cause misgrading. For example, interpretations can vary in the absence of well-established definitions. Furthermore, the heterogeneity of malignant tumors makes it challenging to extract representative tissue during surgical biopsy. With the development of magnetic resonance imaging (MRI), tumor grading can be accomplished by a noninvasive procedure. To further improve diagnostic accuracy, this study proposed a computer-aided diagnosis (CAD) system based on MRI features to provide suggestions for tumor grading. Gliomas are the most common type of malignant brain tumor (about 70%). This study collected 34 glioblastomas (GBMs) and 73 lower-grade gliomas (LGGs) from The Cancer Imaging Archive. After defining the regions of interest in the MRI images, multiple quantitative morphological features, such as region perimeter, region area, compactness, the mean and standard deviation of the normalized radial length, and moment features, were extracted from the tumors for classification. As a result, two of the five morphological features and three of the four image moment features achieved p values of <0.001, and the remaining moment feature had a p value of <0.05. The CAD system using the combination of all features achieved an accuracy of 83.18% in classifying the gliomas into LGG and GBM, with a sensitivity of 70.59% and a specificity of 89.04%. The proposed system can serve as a second reader for radiologists on clinical examinations.
Keywords: brain tumor, computer-aided diagnosis, gliomas, magnetic resonance imaging
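As a rough illustration of the morphological features named above, the following is a minimal sketch (not the authors' implementation; the boundary representation and feature definitions are simplified assumptions) of compactness and the normalized radial length (NRL) statistics computed from an ordered tumor outline:

```python
import math

def morphological_features(boundary):
    """Simple shape features from an ordered ROI boundary.

    boundary: list of (x, y) vertices of the tumor outline.
    Returns perimeter, area (shoelace formula), compactness, and the
    mean and standard deviation of the normalized radial length (NRL).
    """
    n = len(boundary)
    perimeter = sum(math.dist(boundary[i], boundary[(i + 1) % n]) for i in range(n))
    area = abs(sum(boundary[i][0] * boundary[(i + 1) % n][1]
                   - boundary[(i + 1) % n][0] * boundary[i][1]
                   for i in range(n))) / 2.0
    # Compactness: 1.0 for a perfect circle, larger for irregular shapes.
    compactness = perimeter ** 2 / (4.0 * math.pi * area)
    cx = sum(x for x, _ in boundary) / n
    cy = sum(y for _, y in boundary) / n
    radii = [math.dist((cx, cy), p) for p in boundary]
    nrl = [r / max(radii) for r in radii]       # radial lengths in [0, 1]
    mean_nrl = sum(nrl) / n
    std_nrl = math.sqrt(sum((v - mean_nrl) ** 2 for v in nrl) / n)
    return perimeter, area, compactness, mean_nrl, std_nrl
```

A near-circular outline yields compactness close to 1 and NRL standard deviation close to 0, while an irregular GBM-like outline pushes both upward.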
Procedia PDF Downloads 266
6279 Long-Term Subcentimeter-Accuracy Landslide Monitoring Using a Cost-Effective Global Navigation Satellite System Rover Network: Case Study
Authors: Vincent Schlageter, Maroua Mestiri, Florian Denzinger, Hugo Raetzo, Michel Demierre
Abstract:
Precise landslide monitoring with differential global navigation satellite systems (GNSS) is well established, but technical or economic constraints limit its adoption by geotechnical companies. This study demonstrates the reliability and usefulness of Geomon (Infrasurvey Sàrl, Switzerland), a stand-alone and cost-effective rover network. The system permits deploying up to 15 rovers, plus one reference station for differential GNSS. A dedicated radio link connects all the modules to a base station, where an embedded computer automatically computes all the relative positions (L1 phase, open-source RTKLIB software) and populates an Internet server. Each measurement also contains information from an internal inclinometer, the battery level, and position quality indices. Contrary to standard GNSS survey systems, which suffer from a limited number of beacons that must be placed in areas with good GSM signal, Geomon offers greater flexibility and provides a true overview of the whole landslide with good spatial resolution. Each module is powered by solar panels, ensuring autonomous long-term recording. In this study, we tested the system on several sites in the Swiss mountains, setting up to 7 rovers per site, for an 18-month-long survey. The aim was to assess the robustness and accuracy of the system under different environmental conditions. In one case, we ran forced blind tests (vertical movements of a given amplitude) and compared various session parameters (durations from 10 to 90 minutes). The other cases were surveys of real landslide sites using fixed, optimized parameters. Sub-centimeter accuracy with few outliers was obtained using the best parameters (session duration of 60 minutes, baseline of 1 km or less), with the noise level on the horizontal component half that of the vertical one. The performance (percentage of aborted solutions, outliers) was reduced with sessions shorter than 30 minutes.
The environment also had a strong influence on the percentage of aborted solutions (ambiguity search problem), due to multiple reflections or satellites obstructed by trees and mountains. The length of the baseline (reference-to-rover distance, single-baseline processing) reduced the accuracy above 1 km but had no significant effect below this limit. In critical weather conditions, the system's robustness was limited: snow, avalanches, and frost covered some rovers, including the antennas and vertically oriented solar panels, leading to data interruptions, and strong wind damaged a reference station. The ability to change session parameters remotely was very useful. In conclusion, the rover network tested provided the expected sub-centimeter accuracy while delivering a landslide survey with dense spatial resolution. The ease of implementation and the fully automatic long-term survey were time-saving. Performance strongly depends on the surrounding conditions, but short preliminary measurements should allow moving a rover to a better final placement. The system offers a promising hazard mitigation technique. Improvements could include data post-processing for alerts and automatic adjustment of the duration and number of sessions based on battery level and rover displacement velocity.
Keywords: GNSS, GSM, landslide, long-term, network, solar, spatial resolution, sub-centimeter
Procedia PDF Downloads 115
6278 Millimeter-Wave Silicon Power Amplifiers for 5G Wireless Communications
Authors: Kyoungwoon Kim, Cuong Huynh, Cam Nguyen
Abstract:
Exploding demands for more data, faster data transmission speeds, less interference, more users, more wireless devices, and more reliable service, far exceeding what the current mobile communications networks provide in the RF spectrum below 6 GHz, have led the wireless communication industry to focus on higher, previously unallocated spectrum. High frequencies near 28 GHz or within the millimeter-wave regime are the logical solution to meet these demands. This high-frequency RF spectrum is increasingly important for wireless communications due to its large available bandwidths, which facilitate applications requiring large-data, high-speed transmission of vast amounts of information, reaching up to multiple gigabits per second. It also resolves the traffic congestion problems of signals from many wireless devices operating in the current RF spectrum (below 6 GHz), hence handling more traffic. Consequently, the wireless communication industry is moving towards 5G (fifth generation) for next-generation communications such as mobile phones, autonomous vehicles, virtual reality, and the Internet of Things (IoT). The U.S. Federal Communications Commission (FCC) approved three frequency bands for 5G, around 28, 37, and 39 GHz, on 14 July 2016. We present several silicon-based RFIC power amplifiers (PAs) for possible implementation in 5G wireless communications around 28, 37, and 39 GHz. The 16.5-28 GHz PA exhibits a measured gain of more than 34.5 dB and a very flat output power of 19.4±1.2 dBm across 16.5-28 GHz. The 25.5/37-GHz PA exhibits gains of 21.4 and 17 dB, and maximum output powers of 16 and 13 dBm at 25.5 and 37 GHz, respectively, in the single-band mode. In the dual-band mode, the maximum output power is 13 and 9.5 dBm at 25.5 and 37 GHz, respectively. The 10-19/23-29/33-40 GHz PA has maximum output powers of 15, 13.3, and 13.8 dBm at 15, 25, and 35 GHz, respectively, in the single-band mode.
When this PA is operated in dual-band mode, it has maximum output powers of 11.4/8.2 dBm at 15/25 GHz, 13.3/3 dBm at 15/35 GHz, and 8.7/6.7 dBm at 25/35 GHz. In the tri-band mode, it exhibits 8.8/5.4/3.8 dBm maximum output power at 15/25/35 GHz. Acknowledgement: This paper was made possible by NPRP grant # 6-241-2-102 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
Keywords: microwaves, millimeter waves, power amplifier, wireless communications
Procedia PDF Downloads 193
6277 Forecasting Residential Water Consumption in Hamilton, New Zealand
Authors: Farnaz Farhangi
Abstract:
Many people in New Zealand believe that access to water is inexhaustible, a belief that comes from a history of virtually unrestricted access to it. For a region like Hamilton, one of New Zealand’s fastest-growing cities, it is crucial for policy makers to know about future water consumption and the implementation of rules and regulations such as universal water metering. Hamilton residents use water freely, and they have little idea of how much water they use. Hence, one of the proposed objectives of this research is forecasting water consumption using different methods. The residential water consumption time series exhibits seasonal and trend variations. Seasonality is the pattern caused by repeating events such as weather conditions in summer and winter, public holidays, etc. The problem with this seasonal fluctuation is that it dominates the other time series components and makes it difficult to determine other variations (such as the effect of educational campaigns, regulation, etc.) in the time series. Apart from seasonality, a stochastic trend is also combined with the seasonality and affects the forecasting results in different ways. According to the forecasting literature, preprocessing (de-trending and de-seasonalization) is essential for better forecasting results, while other researchers argue that seasonally non-adjusted data should be used. Hence, I address the question: is pre-processing essential? A wide range of forecasting methods exists, with different pros and cons. In this research, I apply double seasonal ARIMA and an Artificial Neural Network (ANN), considering diverse elements such as seasonality and calendar effects (public and school holidays), and combine their results to find the best predicted values. My hypothesis is examined by comparing the results of the combined method (hybrid model) and the individual methods in terms of accuracy and robustness. In order to use ARIMA, the data should be stationary.
Also, ANN has had successful forecasting applications for seasonal and trend time series. Using a hybrid model is a way to improve the accuracy of the individual methods. Because water demand is dominated by different seasonalities, I combine different methods in order to find their sensitivity to weather conditions, calendar effects, and other seasonal patterns. The advantage of this combination is the reduction of errors by averaging the individual models. It is also useful when we are not sure about the accuracy of each forecasting model, and it can ease the problem of model selection. Using daily residential water consumption data from January 2000 to July 2015 in Hamilton, I show how predictions vary across methods. ANN produces more accurate forecasts than the other methods, and preprocessing is essential when using seasonal time series. Using the hybrid model reduces average forecasting errors and increases performance.
Keywords: artificial neural network (ANN), double seasonal ARIMA, forecasting, hybrid model
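The error-cancellation argument for the hybrid model can be sketched with toy numbers (illustrative only; the actual component models are a double seasonal ARIMA and an ANN, which are not shown here):

```python
def mape(actual, forecast):
    """Mean absolute percentage error (%)."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def hybrid(forecast_a, forecast_b, w=0.5):
    """Combine two forecasts by weighted averaging (weight w on forecast_a)."""
    return [w * fa + (1.0 - w) * fb for fa, fb in zip(forecast_a, forecast_b)]

# Toy daily-consumption example: one model tends to overshoot, the other to
# undershoot, so averaging cancels part of each model's error.
actual     = [100, 110, 105, 120]
arima_like = [ 98, 114, 100, 126]
ann_like   = [104, 107, 109, 116]
combined   = hybrid(arima_like, ann_like)
```

Here `mape(actual, combined)` is smaller than the MAPE of either individual forecast, which is the averaging benefit described above; when the component errors are correlated rather than opposing, the gain shrinks.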
Procedia PDF Downloads 342
6276 An Intelligent Prediction Method for Annular Pressure Driven by Mechanism and Data
Authors: Zhaopeng Zhu, Xianzhi Song, Gensheng Li, Shuo Zhu, Shiming Duan, Xuezhe Yao
Abstract:
Accurate calculation of wellbore pressure is of great significance for preventing wellbore risk during drilling. Traditional mechanism models require many iterative solving procedures, which reduces calculation efficiency and makes it difficult to meet the demands of dynamic wellbore pressure control. In recent years, many scholars have introduced artificial intelligence algorithms into wellbore pressure calculation, significantly improving its efficiency and accuracy. However, due to the ‘black box’ nature of intelligent algorithms, existing intelligent wellbore pressure models generalize poorly outside the scope of their training data and overreact to data noise, often producing abnormal calculation results. In this study, the multiphase flow mechanism is embedded into the objective function of a neural network model as a constraint condition, and an intelligent prediction model of wellbore pressure under this constraint is established based on more than 400,000 sets of measurement of pressure while drilling (MPD) data. The multiphase flow constraint makes the predictions of the neural network model more consistent with the distribution law of wellbore pressure, which overcomes the black-box nature of the neural network model to some extent. In particular, the accuracy on an independent test data set is further improved, and abnormal calculated values largely disappear. This method, driven jointly by MPD data and the multiphase flow mechanism, is the main way to predict wellbore pressure accurately and efficiently in the future.
Keywords: multiphase flow mechanism, pressure while drilling data, wellbore pressure, mechanism constraints, combined drive
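One common way to embed a mechanism into a neural network objective is a penalized loss; the following is a minimal sketch of that idea only (the penalty form and the `weight` knob are assumptions for illustration, not the authors' exact formulation):

```python
def constrained_loss(pred, measured, mechanism, weight=1.0):
    """Data misfit plus a penalty for deviating from the mechanism model.

    pred      -- wellbore pressures predicted by the neural network
    measured  -- MPD (measurement of pressure while drilling) values
    mechanism -- pressures computed by the multiphase-flow mechanism model
    weight    -- strength of the mechanism constraint (hypothetical knob)
    """
    n = len(pred)
    data_term = sum((p - m) ** 2 for p, m in zip(pred, measured)) / n
    mech_term = sum((p - q) ** 2 for p, q in zip(pred, mechanism)) / n
    return data_term + weight * mech_term
```

Minimizing this objective pulls predictions toward the measurements while the second term discourages solutions that violate the multiphase-flow distribution law, which is how the constraint suppresses abnormal outputs on unseen inputs.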
Procedia PDF Downloads 177
6275 One-Hit Multiple Instance Logistic Regression for Binary Classification and Its Application to Atomic Force Microscopy Images for Bladder Cancer Determination
Authors: Eugene Demidenko, John Seigne, Igor Sokolov
Abstract:
Multiple instance classification is a known machine learning technique in which only a bag of features is labeled. The method of binary multiple instance classification, termed multiple instance logistic regression (LR), has received the most attention as a well-defined statistical model. This algorithm is implemented in several computer languages, including R (milr) and MATLAB. This work suggests an improvement of this model, called the one-hit multiple instance LR. Unlike the existing approach, where unknown labels are treated as missing observations, our model directly implements the ML approach. As such, it is methodologically straightforward and computationally stable, especially when features are highly correlated and/or bags are heterogeneous. Since the one-hit LR admits a closed form for the log-likelihood function, an efficient Fisher scoring algorithm applies, with the variances of the regression coefficients computed through the inverse of the Fisher information matrix at the final iteration. Numerical experiments demonstrate the superiority of the one-hit LR in terms of regression coefficients and classification accuracy. Another advantage of our approach is the derivation of the optimal probability threshold for classification (the traditional threshold equals 0.5). The one-hit LR is illustrated with noninvasive bladder cancer identification, where each patient, a ‘bag’ in the multiple instance terminology, contributes feature images of multiple cells from a urine sample of the same individual. We show that the one-hit LR with two Atomic Force Microscopy (AFM) image features leads to a perfect (AUC=1) or almost perfect (AUC=0.978) classification of normal and cancer patients among 20 individuals. The p-value of 0.0018 confirms that the latter AUC is unlikely to be obtained by chance.
Keywords: AUC, classification accuracy, classification p-value, Fisher information, ML, ROC curve
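The "one-hit" rule can be sketched directly: a bag (patient) is positive if at least one instance (cell image) is, so under instance independence the bag probability has a closed form. This is a simplified illustration with a made-up coefficient vector; the Fisher scoring fit and variance estimation are not shown:

```python
import math

def instance_prob(x, beta):
    """Logistic probability that a single instance (cell image) is positive."""
    return 1.0 / (1.0 + math.exp(-sum(b * xi for b, xi in zip(beta, x))))

def bag_prob(bag, beta):
    """One-hit rule: P(bag = 1) = 1 - prod_i (1 - p_i) over the bag's instances."""
    prob_all_negative = 1.0
    for x in bag:
        prob_all_negative *= 1.0 - instance_prob(x, beta)
    return 1.0 - prob_all_negative
```

A single strongly positive cell drives the patient-level probability toward 1, and adding instances can only raise the one-hit probability, which is why the classification threshold generally differs from the traditional 0.5.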
Procedia PDF Downloads 8
6274 Application of Federated Learning in the Health Care Sector for Malware Detection and Mitigation Using Software-Defined Networking Approach
Authors: A. Dinelka Panagoda, Bathiya Bandara, Chamod Wijetunga, Chathura Malinda, Lakmal Rupasinghe, Chethana Liyanapathirana
Abstract:
This research builds on the concepts of Federated Learning and Software-Defined Networking (SDN) to introduce an efficient malware detection technique and a mitigation mechanism, producing a resilient, automated healthcare network system with extended privacy preservation. With new malware attacks against hospital Integrated Clinical Environments (ICEs) emerging daily, the healthcare industry faces constant uncertainty about its operational continuity. The risks that accompany the indispensable opportunities offered daily by new medical device inventions and their connected coordination are not yet entirely understood by most healthcare operators and patients. This solution involves four clients, in the form of hospital networks, in a federated learning experimental architecture with different geographical participation, to reach the most reasonable accuracy rate with privacy preservation. While logistic regression with a cross-entropy loss performs the detection, SDN comes in handy in the second half of the research to build up the initial development phases of the system with malware mitigation based on policy implementation. The overall evaluation results in a system that demonstrates good accuracy with added privacy. It is no longer necessary to continue with traditional centralized systems that offer almost everything except privacy.
Keywords: software-defined network, federated learning, privacy, integrated clinical environment, decentralized learning, malware detection, malware mitigation
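The federated aggregation step can be sketched as FedAvg-style weighted parameter averaging (a minimal illustration under the assumption of FedAvg-like aggregation; the four-client logistic regression training loop itself is omitted):

```python
def fed_avg(client_weights, client_sizes):
    """Average client model parameters, weighted by local data size.

    client_weights -- one parameter vector per hospital-network client
    client_sizes   -- number of local training samples at each client
    Raw patient data never leaves a client; only parameters are shared,
    which is the privacy-preservation property described above.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
            for j in range(dim)]
```

Each round, every hospital trains the detection model on its own ICE traffic, sends only the updated parameters, and receives the aggregate back.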
Procedia PDF Downloads 192
6273 Effects of Modified Low-Dye Taping on First Ray Mobility Test and Sprint Time
Authors: Yu-Ju Tsai, Ching-Chun Wang, Wen-Tzu Tang, Huei-Ming Chai
Abstract:
A pronated foot is frequently associated with a hypermobile first ray, which can develop into more severe foot problems. Low-Dye taping with athletic tape has been widely used to restrict excessive first ray motion and rebuild the height of the medial longitudinal arch in the general population with pronated feet. This is not the case, however, for sprinters, who find that it restricts foot motion too much. Currently, kinesio tape, which is more elastic than athletic tape, is widely used to readjust joint positions. It is therefore of interest whether modified low-Dye taping using kinesio tape can alter first ray mobility while still giving enough arch support. The purpose of this study was to investigate the effect of modified low-Dye taping on the first ray mobility test and 60-m sprint time for sprinters with pronated feet. The significance of this study is that it provides new insight into modified low-Dye taping as a treatment alternative for sprinters with pronated feet. Ten young male sprinters, aged 20.8±1.6 years, with pronated feet were recruited for this study. A pronated foot was defined as one with a navicular drop greater than 1.0 cm. Three optic shutters were placed at the start, 30-m, and 60-m marks to record sprint times. All participants were asked to complete 3 trials of the 60-m dash under both taping and non-taping conditions in random order. The low-Dye taping was applied using the method postulated by Ralph Dye in 1939, except that kinesio tape was used instead. All outcome variables were recorded for the taping and non-taping conditions. Paired t-tests were used to compare all outcome variables between the 2 conditions. Although there were no statistically significant differences in dorsal and plantar mobility between the taping and non-taping conditions, a statistically significant difference was found in the total range of motion (dorsiflexion plus plantarflexion angle) of the first ray when modified low-Dye taping was applied (p < 0.05).
The time to complete the 60-m sprint was significantly increased with low-Dye taping (p < 0.05), while no significant difference was found for the time to 30 m, indicating that modified low-Dye taping changed the maximum sprint speed of the 60-m dash. In conclusion, modified low-Dye taping was capable of increasing first ray mobility and further altered maximum sprint speed.
Keywords: first ray mobility, kinesio taping, pronated foot, sprint time
Procedia PDF Downloads 278
6272 Bias-Corrected Estimation Methods for Receiver Operating Characteristic Surface
Authors: Khanh To Duc, Monica Chiogna, Gianfranco Adimari
Abstract:
With three diagnostic categories, assessment of the performance of diagnostic tests is achieved by the analysis of the receiver operating characteristic (ROC) surface, which generalizes the ROC curve for binary diagnostic outcomes. The volume under the ROC surface (VUS) is a summary index usually employed for measuring the overall diagnostic accuracy. When the true disease status can be exactly assessed by means of a gold standard (GS) test, unbiased nonparametric estimators of the ROC surface and VUS are easily obtained. In practice, unfortunately, disease status verification via the GS test could be unavailable for all study subjects, due to the expensiveness or invasiveness of the GS test. Thus, often only a subset of patients undergoes disease verification. Statistical evaluations of diagnostic accuracy based only on data from subjects with verified disease status are typically biased. This bias is known as verification bias. Here, we consider the problem of correcting for verification bias when continuous diagnostic tests for three-class disease status are considered. We assume that selection for disease verification does not depend on disease status, given test results and other observed covariates, i.e., we assume that the true disease status, when missing, is missing at random. Under this assumption, we discuss several solutions for ROC surface analysis based on imputation and re-weighting methods. In particular, verification bias-corrected estimators of the ROC surface and of VUS are proposed, namely, full imputation, mean score imputation, inverse probability weighting and semiparametric efficient estimators. Consistency and asymptotic normality of the proposed estimators are established, and their finite sample behavior is investigated by means of Monte Carlo simulation studies. Two illustrations using real datasets are also given.
Keywords: imputation, missing at random, inverse probability weighting, ROC surface analysis
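For reference, the unbiased nonparametric VUS estimator on fully verified data counts correctly ordered score triples. This sketch ignores ties (standard treatments give them fractional credit) and does not include the paper's bias-corrected estimators:

```python
def vus(class1, class2, class3):
    """Nonparametric volume under the ROC surface for three ordered classes.

    Fraction of triples (x1, x2, x3), one test score drawn from each
    disease class, satisfying x1 < x2 < x3. A useless test gives about
    1/6; a perfect test gives 1.
    """
    correct = sum(1 for x1 in class1 for x2 in class2 for x3 in class3
                  if x1 < x2 < x3)
    return correct / (len(class1) * len(class2) * len(class3))
```

Computing this only on verification-selected subjects is exactly where verification bias enters; the imputation and inverse-probability-weighting corrections discussed above replace or re-weight the unverified contributions.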
Procedia PDF Downloads 421
6271 Methodology and Credibility of Unmanned Aerial Vehicle-Based Cadastral Mapping
Authors: Ajibola Isola, Shattri Mansor, Ojogbane Sani, Olugbemi Tope
Abstract:
The cadastral map is the rationale behind city management, planning, and development. For years, cadastral maps have been produced by ground and photogrammetric platforms. Recent advances in photogrammetry and remote sensing sensors have spurred the use of Unmanned Aerial Vehicle systems (UAVs) for cadastral mapping. Despite the time savings and multi-dimensional cost-effectiveness of the UAV platform, issues related to cadastral map accuracy hinder the wide applicability of UAV cadastral mapping. This study presents an approach for generating UAV cadastral maps and assessing their credibility. Different sets of Red, Green, and Blue (RGB) photos were obtained from a Tarot 680 hexacopter UAV platform flown over the Universiti Putra Malaysia campus sports complex at altitudes of 70 m, 100 m, and 250 m. Before flying the UAV, twenty-eight ground control points were evenly established in the study area with a real-time kinematic differential global positioning system. The second phase of the study utilizes an image-matching algorithm for photo alignment, wherein the camera calibration parameters and ten of the established ground control points were used to estimate the inner, relative, and absolute orientations of the photos. The resulting orthoimages were exported to ArcGIS software for digitization. Visual, tabular, and graphical assessments of the resulting cadastral maps showed different levels of accuracy. The results of the study show a stepwise approach for generating UAV cadastral maps and that the cadastral map acquired at 70 m altitude produced the best results.
Keywords: aerial mapping, orthomosaic, cadastral map, flying altitude, image processing
Procedia PDF Downloads 89
6270 Application of the Sufficiency Economy Philosophy to Integrated Instructional Model of In-Service Teachers of Schools under the Project Initiated by H.R.H Princess in Maha Chakri Sirindhorn, Nakhonnayok Educational Service Area Office
Authors: Kathaleeya Chanda
Abstract:
The schools under the Project Initiated by H.R.H. Princess Maha Chakri Sirindhorn in the Nakhonnayok Educational Service Area Office are small schools situated in remote and undeveloped areas. Thus, school-age youth have had few or no opportunities to study at the higher education level, which can lead to many social and economic problems. This study aims to address these educational issues through the development of teachers, so that teachers can develop the teaching and learning system, with the ultimate goals of increasing students’ academic achievement, increasing the educational opportunities for youth in the area, and helping them learn happily. 154 in-service teachers from 22 schools in 4 different districts of Nakhonnayok participated in the teacher training. Most teachers were satisfied with the training content and the trainer. Thereafter, the teachers were given a test to assess their skills and knowledge after training. Most of the teachers earned a score higher than 75%. Accordingly, it can be concluded that after attending the training, the teachers had a clear understanding of the content. After the training session, the teachers had to write a lesson plan integrated with or adapted to the Sufficiency Economy Philosophy. The teachers could adopt either intradisciplinary or interdisciplinary integration according to the actual teaching conditions in their school. Two weeks after the training session, the researchers went to the schools to discuss with the teachers and follow up on the assigned integrated lesson plans.
It was revealed that progress on the integrated lesson plans could be divided into 3 groups: 1) teachers who had completed the integrated lesson plan but were concerned about its accuracy and consistency; 2) teachers who had almost completed the lesson plan or made great progress but were still concerned or confused in some aspects and had not filled in the details of the plan; and 3) teachers who had made little progress, were uncertain and confused in many aspects, and may have been overloaded with tasks from their school. However, the follow-up procedure led to the teachers’ commitment to complete their lesson plans. Regarding student learning assessment, in experimental teaching, most of the students earned a score higher than 50%, a rate higher than that from their regular teaching. In addition, the teachers assessed that the students were happy, enjoyed learning, and cooperated well in teaching activities. Interviews with the students about the new lesson plans showed that they were happy with them, willing to learn, and able to apply such knowledge in daily life. Integrated lesson plans can increase the educational opportunities for youth in the area.
Keywords: sufficiency, economy, philosophy, integrated education syllabus
Procedia PDF Downloads 191
6269 Fin Efficiency of Helical Fin with Fixed Fin Tip Temperature Boundary Condition
Authors: Richard G. Carranza, Juan Ospina
Abstract:
The fin efficiency for a helical fin with a fixed (or arbitrary) fin tip temperature boundary condition is presented. Firstly, the temperature profile throughout the fin is determined via an energy balance around the fin itself. Secondly, the fin efficiency is formulated by integrating across the entire surface of the helical fin. An analytical expression for the fin efficiency is presented and compared with the literature for accuracy.
Keywords: efficiency, fin, heat, helical, transfer
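For the classical one-dimensional fin the same two steps can be written out explicitly. The following is a sketch under the usual constant-property assumptions (uniform cross-section $A$, perimeter $P$, conductivity $k$, convection coefficient $h$, fin length $L$); the helical geometry of the paper enters through the effective $P$, $A$, and $L$, which are not derived here:

```latex
% Excess temperature \theta(x) = T(x) - T_\infty, with m^2 = hP/(kA).
% The energy balance on a fin element gives
\frac{d^2\theta}{dx^2} - m^2\,\theta = 0 .
% With base condition \theta(0) = \theta_b and fixed tip \theta(L) = \theta_L:
\theta(x) = \frac{\theta_L \sinh(mx) + \theta_b \sinh\!\big(m(L-x)\big)}{\sinh(mL)} .
% Fin heat rate from Fourier's law at the base, q = -kA\,\theta'(0):
q = kAm\,\frac{\theta_b \cosh(mL) - \theta_L}{\sinh(mL)} .
% Fin efficiency: actual heat rate over the ideal rate if the whole
% surface (area PL) were at the base temperature:
\eta = \frac{q}{hPL\,\theta_b} .
```

Setting $\theta_L = 0$ recovers the familiar prescribed-tip-temperature limit, and the insulated-tip result $\eta = \tanh(mL)/(mL)$ follows from the analogous derivation with $\theta'(L) = 0$.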
Procedia PDF Downloads 688
6268 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement
Authors: Hu Zhenxing, Gao Jianxin
Abstract:
Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for both 3D shape and surface deformation measurement of a component, which has found increasing applications in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, stereo-matching, distortion, etc. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors. The stereo-matching errors depend on the speckle quality and the matching algorithm, which can only be controlled within a limited range. And the distortion is nonlinear, particularly in a complex imaging acquisition system. Thus, distortion correction should be carefully considered. Moreover, the distortion function is difficult to formulate with conventional models in a complex imaging acquisition system, such as when microscopes and other complex lenses are involved. The errors of the distortion correction will propagate to the reconstructed 3D coordinates. To address the problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions are used to convert the distorted coordinates into an ideal plane without distortions. This approach is suitable for any image acquisition distortion model. It is used as a prior process to convert the distorted coordinates to ideal positions, which enables the camera to conform to the pin-hole model. A procedure of this approach is presented for stereo-based DIC. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of both the conventional method and the proposed approach.
Keywords: distortion, stereo-based digital image correlation, B-spline, 3D, 2D
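A minimal sketch of the 2D B-spline mapping idea, fitted by least squares on a synthetic radial distortion in place of a real calibration target (the knot placement, spline degree, and distortion model are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def bspline_basis(t, knots, degree):
    """B-spline basis matrix at points t via the Cox-de Boor recursion."""
    t = np.asarray(t, dtype=float)
    # degree 0: indicator functions of the knot spans
    B = np.zeros((len(t), len(knots) - 1))
    for i in range(len(knots) - 1):
        B[:, i] = (knots[i] <= t) & (t < knots[i + 1])
    for k in range(1, degree + 1):
        Bk = np.zeros((len(t), len(knots) - k - 1))
        for i in range(len(knots) - k - 1):
            d1 = knots[i + k] - knots[i]
            d2 = knots[i + k + 1] - knots[i + 1]
            left = (t - knots[i]) / d1 * B[:, i] if d1 > 0 else 0.0
            right = (knots[i + k + 1] - t) / d2 * B[:, i + 1] if d2 > 0 else 0.0
            Bk[:, i] = left + right
        B = Bk
    return B  # shape: (len(t), len(knots) - degree - 1)

def design(xd, yd, knots, degree=2):
    """Tensor-product B-spline design matrix for distorted image coordinates."""
    Bx = bspline_basis(xd, knots, degree)
    By = bspline_basis(yd, knots, degree)
    return np.einsum('pi,pj->pij', Bx, By).reshape(len(xd), -1)

# Demo: recover the ideal plane from a synthetic radial (barrel-like) distortion.
g = np.linspace(0.0, 1.0, 12)
X, Y = np.meshgrid(g, g)                      # ideal (undistorted) grid
r2 = (X - 0.5) ** 2 + (Y - 0.5) ** 2
Xd = 0.5 + (X - 0.5) * (1.0 + 0.1 * r2)       # known synthetic distortion
Yd = 0.5 + (Y - 0.5) * (1.0 + 0.1 * r2)
xd, yd = Xd.ravel(), Yd.ravel()

degree = 2
knots = np.concatenate([[-0.1] * degree, np.linspace(-0.1, 1.1, 6), [1.1] * degree])
A = design(xd, yd, knots, degree)
# One least-squares mapping per ideal coordinate: (xd, yd) -> x and (xd, yd) -> y
coef_x, *_ = np.linalg.lstsq(A, X.ravel(), rcond=None)
coef_y, *_ = np.linalg.lstsq(A, Y.ravel(), rcond=None)
err = max(np.abs(A @ coef_x - X.ravel()).max(), np.abs(A @ coef_y - Y.ravel()).max())
```

Because the mapping is fitted point-wise rather than derived from a parametric lens model, the same procedure applies unchanged to microscope or other complex optics; the corrected coordinates then satisfy the pin-hole assumption for the stereo reconstruction.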
Procedia PDF Downloads 503
6267 Role of Artificial Intelligence in Nano Proteomics
Authors: Mehrnaz Mostafavi
Abstract:
Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95–97%). To address potential challenges in clinical samples, where post-translational modifications affecting labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach in real experimental conditions, resolving functionally similar proteins. The theoretical analysis, protein labeler program, finite difference time domain calculation of plasmonic fields, and simulation of nanopore-based optical sensing are detailed in the methods section. 
The study anticipates further exploration of the temporal distributions of protein translocation dwell-times and of their impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification, with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence
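As a loose illustration of the partial-labeling question, the toy sketch below replaces the paper's CNN and simulator with a nearest-neighbor match on three-residue count fingerprints; the labeled residue set, proteome size, and random-sequence model are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
AA = list("ACDEFGHIKLMNPQRSTVWY")
LABELED = ("K", "C", "M")  # hypothetical tri-color target residues

# Toy proteome: random sequences stand in for real proteins.
proteome = ["".join(rng.choice(AA, size=500)) for _ in range(50)]
ref = np.array([[p.count(a) for a in LABELED] for p in proteome], float)

def identify(observed):
    """Nearest reference fingerprint by Euclidean distance."""
    return int(np.argmin(np.linalg.norm(ref - observed, axis=1)))

def accuracy(label_eff):
    hits = 0
    for i, p in enumerate(proteome):
        # Each labelable residue is tagged with probability label_eff.
        obs = np.array([rng.binomial(p.count(a), label_eff) for a in LABELED])
        hits += identify(obs / label_eff) == i  # rescale to undo thinning
    return hits / len(proteome)

acc_full, acc_partial = accuracy(1.0), accuracy(0.7)
```

Even this crude matcher degrades gracefully rather than failing outright as labeling efficiency drops, which is the qualitative behavior the paper quantifies with its CNN.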
Procedia PDF Downloads 1116266 Modeling Atmospheric Correction for Global Navigation Satellite System Signal to Improve Urban Cadastre 3D Positional Accuracy Case of: TANA and ADIS IGS Stations
Authors: Asmamaw Yehun
Abstract:
“TANA” is the name of an International GNSS Service (IGS) Global Positioning System (GPS) station located at the Institute of Land Administration (ILA) of Bahir Dar University; the station is named after Lake Tana, one of the largest lakes in Africa. The ILA is part of Bahir Dar University, located in Bahir Dar, the capital of the Amhara National Regional State, and is the first institute of its kind in East Africa. The station was installed through the cooperation of the ILA and funding support from the Swedish International Development Agency (SIDA). A Continuously Operating Reference Station (CORS) network provides global navigation satellite system data supporting three-dimensional positioning, meteorology, space weather, and geophysical applications throughout the globe. TANA has operated as a CORS since 2013; such sites are independently owned and operated by governments, research and education facilities, and others. The data collected by the reference station can be downloaded over the Internet for post-processing by interested parties who carry out GNSS measurements and want to achieve higher accuracy. We made the first observations at the TANA monitoring station on May 29th, 2013, using Leica 1200 receivers and AX1202GG antennas, observing from 11:30 until 15:20, about 3 h 50 min. The data were processed with CSRS-PPP, the automatic post-processing service of Natural Resources Canada (NRCan), on June 27th, 2013, about 30 days after observation, so the precise ephemerides were available. We found Latitude (ITRF08): 11 34 08.6573 (dms) / 0.008 (m), Longitude (ITRF08): 37 19 44.7811 (dms) / 0.018 (m), and Ellipsoidal Height (ITRF08): 1850.958 (m) / 0.037 (m). We compared this result with GAMIT/GLOBK-processed data and found very close agreement. TANA became the second IGS station in Ethiopia in 2015 and provides data to civilian users, researchers, and governmental and non-governmental users.
TANA is equipped with a very advanced choke-ring antenna and a Leica GR25 receiver, and the site has very good satellite visibility. To assess the effect of the hydrostatic and wet zenith delays on positional data quality, we used GAMIT/GLOBK and found that TANA is the most accurate IGS station in East Africa. Owing to the lower tropospheric zenith and ionospheric delays, the TANA and ADIS IGS stations have 3D positional accuracies of 2 and 1.9 meters, respectively.Keywords: atmosphere, GNSS, neutral atmosphere, precipitable water vapour
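Assuming the three per-component sigmas reported for the CSRS-PPP solution are independent, a combined 1-sigma 3D position uncertainty can be sketched by adding them in quadrature:

```python
import math

# 1-sigma uncertainties (m) reported for the TANA CSRS-PPP solution.
sigma_lat, sigma_lon, sigma_h = 0.008, 0.018, 0.037

# Assuming independent error components, combine them in quadrature.
sigma_3d = math.sqrt(sigma_lat**2 + sigma_lon**2 + sigma_h**2)
# The height component dominates, as is typical for GNSS positioning.
```

This gives roughly 4 cm for the PPP solution itself; the meter-level "3D positional accuracy" quoted for the stations is a separate, application-level figure.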
Procedia PDF Downloads 1006265 Evaluation of the Impact of Reducing the Traffic Light Cycle for Cars to Improve Non-Vehicular Transportation: A Case of Study in Lima
Authors: Gheyder Concha Bendezu, Rodrigo Lescano Loli, Aldo Bravo Lizano
Abstract:
In the big urbanized cities of Latin America, motor vehicles have priority over non-motorized vehicles and pedestrians. This is an important problem that affects people's health and quality of life; the lack of inclusion of pedestrians makes it difficult for them to move smoothly and safely, since the city has been planned around motor vehicle transit. Faced with the new trend toward sustainable and economical transport, the city is forced to develop infrastructure that incorporates pedestrians and users of non-motorized vehicles into the transport system. The present research studies the influence of non-motorized vehicles on an avenue and optimizes the traffic light cycle, based on simulation in Synchro software, to improve the flow of non-motorized vehicles. The evaluation is of the microscopic type; for this reason, field data such as vehicular, pedestrian, and non-motorized vehicle demand were collected. With the values of speed and travel time, the current scenario containing the existing problem is represented. These data are used to build a microsimulation model in Vissim software, which is then calibrated and validated so that its behavior is similar to reality. The results of this model are compared with the efficiency parameters of the proposed model: the queue length, the travel speed, and, mainly, the travel times of users at this intersection. The results show a 27% reduction in travel time, that is, an improvement of the proposed model over the current one for this major avenue. The queue length of motor vehicles is also reduced by 12.5%, a considerable improvement. All this represents an improvement in the level of service and in the quality of life of users.Keywords: bikeway, microsimulation, pedestrians, queue length, traffic light cycle, travel time
Procedia PDF Downloads 1796264 Assessing Knowledge and Compliance of Motor Riders on Road Safety Regulations in Hohoe Municipality of Ghana: A Cross-Sectional Quantitative Study
Authors: Matthew Venunye Fianu, Jerry Fiave, Ebenezer Kye-Mensah, Dacosta Aboagye, Felix Osei-Sarpong
Abstract:
Introduction: Road traffic accidents (RTAs) involving motorbikes are a priority public health concern in Ghana. While there are local initiatives to address this public health challenge, little is known about motor riders’ knowledge of and compliance with road safety regulations (RSRs) and their association with RTAs. The aim of this study was therefore to assess motorbike riders’ knowledge of and compliance with RSRs. Methodology: Motorbike riders in Hohoe Municipality were randomly sampled in a cross-sectional study in June 2022. Data were collected from 237 riders using a questionnaire designed in KoboCollect and administered by ten research assistants. A score of 70% or less was considered low for both knowledge and compliance. The data were exported to Excel and imported into STATA 17 for analysis. A chi-square test was performed to generate descriptive and inferential statistics and establish the association between independent and dependent variables. Results: All 237 respondents were male, and each of them completed the questionnaire, a 100% response rate. Participants knowledgeable about speed limits on different road segments numbered 59 (24.9%); about helmet use, 124 (52.3%); and about alcohol use, 152 (64.1%). Participants who complied with regulations on speed limits, helmet use, and alcohol use were 108 (45.6%), 179 (75.5%), and 168 (70.8%), respectively. Riders with at least junior high school education were 2.43 times more likely to adhere to RSRs [cOR = 2.43 (95% CI = 1.15-6.33), p = 0.023] than those with less education. Similarly, riders with high knowledge of RSRs were 2.07 times more likely to comply with RSRs than those with less knowledge [AOR = 2.07 (95% CI = 0.34-0.97), p = 0.038]. Conclusion: Motor riders in the Hohoe Municipality had low knowledge of, as well as low compliance with, road safety regulations, which could be a contributor to road traffic accidents.
It is therefore recommended that road safety regulatory authorities and relevant stakeholders enhance the enforcement of RSRs. There should also be country-specific efforts to increase awareness among all motor riders, especially those with less than junior high school education.Keywords: compliance, motor riders, road safety regulations, road traffic accident
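The reported cOR/AOR figures come from STATA; a minimal sketch of how an odds ratio and its Woolf-type 95% CI are computed from a 2x2 table is shown below, with purely hypothetical counts, since the abstract does not give the raw cell counts:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
    a = exposed compliant, b = exposed non-compliant,
    c = unexposed compliant, d = unexposed non-compliant."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Hypothetical counts: compliance by education level.
or_, lo, hi = odds_ratio_ci(50, 30, 40, 60)
```

A CI that excludes 1 corresponds to a statistically significant association at the 5% level, which is how the education finding above should be read.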
Procedia PDF Downloads 956263 The Nexus between Social Entrepreneurship and Youth Empowerment
Authors: Aaron G. Laylo
Abstract:
This paper assumes that social entrepreneurship contributes significantly to youth empowerment, i.e., work and community engagement. Two questions are raised in order to examine this hypothesis: 1) How does social entrepreneurship contribute to youth empowerment? and 2) Why is social entrepreneurship significantly incremental to youth empowerment? This research aims a) to investigate the social aspect of entrepreneurship; b) to explore challenges in youth empowerment, particularly with respect to work and community engagement; and c) to inquire into whether social enterprises have truly served as a catalyst for, and thus an effective response to, youth empowerment. It must be emphasized that young people, who number 1.8 billion in a world of seven billion, are an asset; how to maximize that potential is crucial. Using an exploratory research design, the paper endeavors to generate new ideas regarding both components, develop tentative theories on social entrepreneurship, and refine certain issues under observation that merit scholarly attention: a rather emerging phenomenon vis-à-vis the challenge of empowering a significant cluster of society. Case studies are used as an approach to comparatively analyze youth-driven social enterprises in the Philippines that have been widely recognized as successful insofar as social impact is concerned. As most scholars attest, social entrepreneurship is still in its infancy. Youth empowerment, meanwhile, is still a vast area to explore insofar as academic research is concerned. Programs and projects that advocate the pursuit of these components abound; however, academic research has yet to be undertaken to see and understand their social and economic relevance.
This research is also an opportunity for scholars to explore, understand, and make sense of the promise that lies in social entrepreneurship research and of how it can serve as a catalyst for youth empowerment. Youth-driven social enterprises can be an influential tool in sustaining development across the globe, as they intend to provide opportunities for optimal economic productivity that recognize social inclusion. Ultimately, this study should contribute to both the research and development-in-practice communities for the greater good of society. By establishing the nexus between these two components, the research may foster greater exploration of the benefits that both may yield for human progress, as well as of the gaps that various policy stakeholders relevant to these units have yet to fill.Keywords: social entrepreneurship, youth, empowerment, social inclusion
Procedia PDF Downloads 3096262 From Cascade to Cluster School Model of Teachers’ Professional Development Training Programme: Nigerian Experience, Ondo State: A Case Study
Authors: Oloruntegbe Kunle Oke, Alake Ese Monica, Odutuyi Olubu Musili
Abstract:
This research explores the relative effectiveness of the cascade and cluster models in professional development programs for educators in Ondo State, Nigeria. The cascade model emphasizes a top-down approach, in which training cascades from expert trainers down through successive levels of teachers. In contrast, the cluster model, a bottom-up approach, fosters collaborative learning among teachers within specific clusters. Through a review of the literature and an empirical study of implementations of the cascade model over two academic sessions, followed by the cluster model over another two, the study examined their effects on teacher development, productivity, and student achievement. The study also drew a comparative analysis of the strengths and weaknesses associated with each model, considering factors such as scalability, cost-effectiveness, adaptability to various contexts, and sustainability. Under the cascade model, 2,500 teachers from Ondo State primary schools received intensive training in five zones for one week each in two academic sessions. Under the cluster model, 1,980 and 1,663 teachers, in 52 and 34 clusters, participated in the first and the following session, respectively. The cascade program consisted of one week of rigorous training of teachers by facilitators, while the cluster program comprised four components (sit-in observation, a need-based assessment workshop, pre-cluster meetings, and the actual cluster meetings), in addition to sensitization, and took place one day a week for ten weeks. A validated Cluster Impact Survey Instrument (CISI) and a Teacher’s Assessment Questionnaire (TAQ) were administered to ascertain the effectiveness of the models during and after implementation. The findings from the literature detailed the specific effectiveness, strengths, and limitations of each approach, especially the potential for inconsistencies and resistance to change. Findings from the data collected revealed the superiority of the cluster model.
Responses to the TAQ equally showed content-knowledge and skill updates under both models, but these were more sustained under the cluster model. Overall, the study contributes to the ongoing discourse on effective strategies for improving teacher training and enhancing student outcomes, offering practical recommendations for the development and implementation of future professional development projects.Keywords: cascade model, cluster model, teachers’ development, productivity, students’ achievement
Procedia PDF Downloads 516261 A Simple Model for Solar Panel Efficiency
Authors: Stefano M. Spagocci
Abstract:
The efficiency of photovoltaic panels can be calculated with software packages such as RETScreen, which allow design engineers to take financial as well as technical considerations into account. RETScreen is interfaced with meteorological databases, so that efficiency calculations can be carried out realistically. The author has recently contributed to the development of solar modules with accumulation capability and an embedded water purifier, aimed at off-grid users such as those in developing countries. The software packages examined do not allow ancillary equipment to be taken into account, hence the decision to implement a technical and financial model of the system. The author realized that, rather than re-implementing the quite sophisticated model of RETScreen (a mathematical description of which is in any case not publicly available), it was possible to simplify it drastically, including the meteorological factors which, in RETScreen, are presented in numerical form. The day-by-day efficiency of a photovoltaic solar panel was parametrized by the product of factors expressing, respectively, daytime duration, solar right-ascension motion, solar declination motion, cloudiness, and temperature. For the sun-motion-dependent factors, positional astronomy formulae, simplified by the author, were employed. Meteorology-dependent factors were fitted by simple trigonometric functions, employing numerical data supplied by RETScreen. The accuracy of the model was tested by comparing it to the predictions of RETScreen; agreement was within 11%. In conclusion, our study resulted in a model that can be easily implemented in a spreadsheet, and thus managed by non-specialist personnel, or in more sophisticated software packages.
The model was used in a number of design exercises concerning photovoltaic solar panels and ancillary equipment, such as the above-mentioned water purifier.Keywords: clean energy, energy engineering, mathematical modelling, photovoltaic panels, solar energy
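A minimal sketch of the product-of-factors idea is given below; the declination and day-length expressions are standard positional-astronomy formulae (Cooper's equation and the sunset hour angle), while the seasonal, cloudiness, and temperature factor forms are illustrative stand-ins for the author's fitted functions:

```python
import math

def declination_deg(day):
    """Cooper's formula for solar declination (degrees), day = 1..365."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + day) / 365.0))

def day_length_h(lat_deg, day):
    """Daylight hours from the sunset hour angle cos(w) = -tan(phi)tan(delta)."""
    phi = math.radians(lat_deg)
    delta = math.radians(declination_deg(day))
    cos_w = -math.tan(phi) * math.tan(delta)
    cos_w = max(-1.0, min(1.0, cos_w))  # clamp for polar day/night
    return 2.0 * math.degrees(math.acos(cos_w)) / 15.0

def daily_efficiency(nominal_eff, lat_deg, day, cloudiness, temp_factor):
    """Product-of-factors sketch: daytime duration (normalized to 12 h),
    an illustrative declination-driven seasonal factor, cloudiness,
    and temperature."""
    season = math.cos(math.radians(lat_deg - declination_deg(day)))
    return (nominal_eff * (day_length_h(lat_deg, day) / 12.0)
            * season * cloudiness * temp_factor)

# Example: mid-latitude panel near the June solstice.
eff = daily_efficiency(0.20, 45.0, 172, cloudiness=0.7, temp_factor=0.95)
```

Because each factor is a simple closed-form expression, the whole model fits naturally into a single spreadsheet row per day, which is the deployment route described above.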
Procedia PDF Downloads 776260 Determination of Gold in Microelectronics Waste Pieces
Authors: S. I. Usenko, V. N. Golubeva, I. A. Konopkina, I. V. Astakhova, O. V. Vakhnina, A. A. Korableva, A. A. Kalinina, K. B. Zhogova
Abstract:
Gold can be determined in natural objects and manufactured articles of different origins. The current status of research and the problems of determining high gold levels in alloys and manufactured articles are described in detail in the literature. No less important is the determination of this metal in minerals, process products, and waste pieces. The latter, as objects of chemical analysis for gold content, are the hardest to study, for two reasons: the high accuracy required of the analysis results and the variation in chemical and phase composition. As a rule, such objects are characterized by a compound, variable, and often unknown matrix composition, which leads to unpredictable and uncontrolled effects on the accuracy and other analytical characteristics of the analysis technique. In this paper, methods for the determination of gold are described, using flame atomic-absorption spectrophotometry and the gravimetric analysis technique. The techniques are aimed at determining gold in a gold-etching solution (KI+I2), in the technological mixture formed after cleaning stainless-steel parts of a vacuum-deposition installation with concentrated nitric and hydrochloric acids, and in the gold-containing powder resulting from liquid waste reprocessing. Optimal conditions were chosen for the preparation and analysis of liquid and solid waste specimens of compound and variable matrix composition. The boundaries of the relative resultant error were determined for the methods over gold mass concentrations from 0.1 to 30 g/dm3 in liquid waste specimens and mass fractions from 3 to 80% in solid waste specimens.Keywords: microelectronics waste pieces, gold, sample preparation, atomic-absorption spectrophotometry, gravimetric analysis technique
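Flame AAS quantification of this kind typically rests on a linear calibration curve; the sketch below, with hypothetical absorbance data, shows the least-squares calibration and its inversion for an unknown specimen (the study's actual calibration data are not given in the abstract):

```python
import numpy as np

# Hypothetical AAS calibration: absorbance vs. gold standard concentrations.
conc = np.array([0.0, 2.0, 5.0, 10.0, 20.0])        # g/dm3 standards
absorbance = np.array([0.002, 0.091, 0.224, 0.446, 0.889])

# Linear least-squares calibration (Beer-Lambert linear region assumed).
slope, intercept = np.polyfit(conc, absorbance, 1)

def concentration(a_sample):
    """Invert the calibration line for an unknown sample absorbance."""
    return (a_sample - intercept) / slope

c_unknown = concentration(0.335)
```

In practice the matrix effects emphasized above are handled before this step, via sample preparation and matrix-matched standards, so that the linear calibration remains valid.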
Procedia PDF Downloads 2096259 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks
Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi
Abstract:
Brain-computer interfaces are a growing research field producing many implementations that find use in different fields for both research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, a radical improvement in channel bandwidth, and thus decoding accuracy, is only possible with invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, the effective analysis of which requires machine learning methods able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that learn representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out, during which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. Multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was done both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous, non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained, using 1-second segments of ECoG data from the training dataset as input.
To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After hyperparameter optimization and training, the deep learning model allowed reasonably accurate causal decoding of finger movement, with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only r = 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that the combination of a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows movement to be decoded with high accuracy. Such a setup provides a means of controlling devices with a large number of degrees of freedom, as well as for exploratory studies of the complex neural processes underlying movement execution.Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex
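The decoding quality metric used throughout, the Pearson correlation r between decoded output and accelerometer readings, can be computed as in the toy sketch below (the traces here are synthetic stand-ins for real recordings):

```python
import numpy as np

# Toy traces standing in for accelerometer readings and decoder output.
t = np.linspace(0.0, 10.0, 1000)
accel = np.sin(2 * np.pi * 0.5 * t)  # "measured" finger motion

rng = np.random.default_rng(1)
decoded = 0.9 * accel + 0.1 * rng.standard_normal(t.size)  # imperfect decode

# Decoding quality metric used in the study: Pearson correlation r.
r = np.corrcoef(decoded, accel)[0, 1]
```

A value of r = 0.8, as reported for the causal CNN decoder, means the model explains roughly two-thirds of the variance (r squared) of the measured trajectory.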
Procedia PDF Downloads 1846258 Evaluating the Success of an Intervention Course in a South African Engineering Programme
Authors: Alessandra Chiara Maraschin, Estelle Trengove
Abstract:
In South Africa, only 23% of engineering students attain their degrees in the minimum time of 4 years. This raises the question: why is the 4-year throughput rate so low? Improving the throughput rate is crucial in guiding students along the shortest possible path to completion. The Electrical Engineering programme has a fixed curriculum, and students must pass all courses in order to graduate. In South Africa, as in several other countries, many students rely on external funding such as bursaries from companies in industry. If students fail a course, they often lose their bursaries, and most might not be able to fund their 'repeating year' fees. It is thus important to improve the throughput rate, since for many students, graduating from university is a way out of poverty for an entire family. In Electrical Engineering, the Software Development I course (an introduction to C++ programming) has proved to be a significant hurdle, with a low pass rate. It is well documented that students struggle with this type of course, as it introduces a number of new threshold concepts that can be challenging to grasp in a short time frame. In an attempt to mitigate this situation, a part-time night school for Software Development I was introduced in 2015 as an intervention measure. The course covers all the material of the Software Development I module and gives students who failed the course in the first semester a second chance by repeating it as the night-school course. The purpose of this study is to determine whether the introduction of this intervention course can be considered a success. The success of the intervention is assessed in two ways. The study first looks at whether the night-school course contributed to improving the pass rate of the Software Development I course.
Secondly, the study examines whether the intervention contributed to improving the overall throughput from the 2nd year to the 3rd year of study at a South African university. Second-year academic results for a sample of 1,216 students were collected for 2010-2017. Preliminary results show that the lowest pass rate for Software Development I occurred in 2017, at 34.9%. Since the intervention course's inception, the pass rate for Software Development I increased in each year from 2015 to 2017, by 13.75%, 25.53%, and 25.81%, respectively. To conclude, the preliminary results show that the intervention course is a success in improving the pass rate of Software Development I.Keywords: academic performance, electrical engineering, engineering education, intervention course, low pass rate, software development course, throughput
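Whether such a year-on-year pass-rate gain is statistically significant can be checked with a pooled two-proportion z-test; the cohort sizes and pass counts below are hypothetical, since the abstract reports rates rather than raw counts:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic for comparing pass rates:
    x1/n1 = passes/cohort before, x2/n2 = passes/cohort after."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical cohorts: ~35% pass rate before vs. ~61% after night school.
z = two_proportion_z(52, 150, 91, 150)
significant = z > 1.96  # two-sided 5% critical value
```

With cohorts of this size, a gain of around 25 percentage points is far beyond what sampling noise would produce, supporting the "success" conclusion above.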
Procedia PDF Downloads 1666257 Undersea Communications Infrastructure: Risks, Opportunities, and Geopolitical Considerations
Authors: Lori W. Gordon, Karen A. Jones
Abstract:
Today’s high-speed data connectivity depends on a vast global network of infrastructure across space, air, land, and sea, with undersea cable infrastructure (UCI) serving as the primary means of intercontinental and ‘long-haul’ communications. The UCI landscape is changing and now includes an increasing variety of state actors, such as the growing economies of Brazil, Russia, India, China, and South Africa. Non-state commercial actors, such as hyperscale content providers including Google, Facebook, Microsoft, and Amazon, are also seeking to control their data and networks through significant investments in submarine cables. Active investments by both state and non-state actors will invariably influence the growth, geopolitics, and security of this sector. Beyond the hyperscale content providers, there are new commercial satellite communication providers: traditional geosynchronous (GEO) satellites that offer broad coverage; high-throughput GEO satellites offering high capacity with spot-beam technology; and low-earth-orbit (LEO) ‘mega constellations’ offering global broadband services. Potential new entrants include High Altitude Platforms (HAPS) offering low-latency connectivity and LEO constellations offering high-speed optical mesh networks, i.e., ‘fiber in the sky.’ This paper focuses on understanding the role of submarine cables within the larger context of the global data commons spanning space, terrestrial, air, and sea networks, including an analysis of national security policy and geopolitical implications. As network operators and commercial and government stakeholders plan for emerging technologies and architectures, hedging the risks to future connectivity will help ensure that our data backbone remains secure for years to come.Keywords: communications, global, infrastructure, technology
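The latency argument for LEO optical mesh networks versus submarine fiber can be sketched from first principles; the route length, fiber refractive index, and idealized LEO path below are illustrative assumptions:

```python
C_KM_S = 299_792.458  # speed of light in vacuum, km/s
FIBER_INDEX = 1.468   # typical silica fiber refractive index

def fiber_latency_ms(route_km):
    """One-way propagation delay in optical fiber (light slowed by the glass)."""
    return route_km / (C_KM_S / FIBER_INDEX) * 1000.0

def leo_latency_ms(ground_km, altitude_km=550.0):
    """Rough one-way delay via a LEO optical mesh: up, across, down,
    all at vacuum light speed (a hypothetical, idealized path)."""
    return (2 * altitude_km + ground_km) / C_KM_S * 1000.0

d = 10_000.0  # km, a representative long-haul route
fiber_ms, leo_ms = fiber_latency_ms(d), leo_latency_ms(d)
```

Because light in glass travels about a third slower than in vacuum, an idealized space path can beat fiber on long routes, which is the appeal of the ‘fiber in the sky’ concept noted above; real constellations add switching and routing overheads that narrow the gap.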
Procedia PDF Downloads 916256 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and anomaly detection for a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer), which estimates the most reliable frequencies based on the evaluation of statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network (ANN)), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation.
The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution outperforms the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, anomalies can be detected with an accuracy and F1 score greater than 96% with the proposed method.Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement
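A toy version of the two-step idea, a classic coarse OCC boundary refined by a trained classifier, is sketched below; the radial-percentile coarse detector and the single logistic unit (standing in for the feedforward NN) are simplifications, not the authors' OCCNN2 implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Training data: features observed in normal operating conditions only.
train = rng.standard_normal((400, 2))

# Coarse step: a classic OCC boundary, here a radial 97.5th-percentile shell.
mu = train.mean(axis=0)
radius = np.linalg.norm(train - mu, axis=1)
r_coarse = np.quantile(radius, 0.975)

# Fine step: a single-layer NN (logistic unit) refines the boundary, trained
# on normals plus synthetic points pseudo-labeled by the coarse boundary.
synth = rng.uniform(-8.0, 8.0, size=(400, 2))
X = np.linalg.norm(np.vstack([train, synth]) - mu, axis=1)  # radial feature
y = np.concatenate([np.ones(400),
                    (np.linalg.norm(synth - mu, axis=1) < r_coarse).astype(float)])
w, b = 0.0, 0.0
for _ in range(2000):  # plain gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    g = p - y
    w -= 0.1 * np.mean(g * X)
    b -= 0.1 * np.mean(g)

def is_normal(points):
    r = np.linalg.norm(points - mu, axis=1)
    return 1.0 / (1.0 + np.exp(-(w * r + b))) > 0.5

# Evaluation: unseen normal points and anomalies displaced from the class.
test_n = rng.standard_normal((100, 2))
test_a = rng.standard_normal((100, 2)) + 6.0
acc = (is_normal(test_n).mean() + (~is_normal(test_a)).mean()) / 2
```

The key design point mirrored here is that a plain supervised classifier can learn a one-class boundary once the coarse step supplies pseudo-labels for it.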
Procedia PDF Downloads 1286255 Optimization of Mechanical Cacao Shelling Parameters Using Unroasted Cocoa Beans
Authors: Jeffrey A. Lavarias, Jessie C. Elauria, Arnold R. Elepano, Engelbert K. Peralta, Delfin C. Suministrado
Abstract:
Shelling is one of the primary processes and critical steps in the processing of chocolate or any product derived from cocoa beans. It affects the quality of the cocoa nibs in terms of flavor and purity. In the Philippines, small-scale food processors cannot really compete with large-scale confectionery manufacturers because of the lack of postharvest facilities appropriate to their level of operation. The impact of this study is to provide the intervention needed to let cacao farmers take advantage of value-adding as a way to maximize the economic potential of cacao. Thus, the provision and availability of the needed postharvest machines, such as a mechanical cacao sheller, will revolutionize the current state of the cacao industry in the Philippines. A mechanical cacao sheller was developed, fabricated, and evaluated to establish the effects of shelling conditions, namely the moisture content of the cocoa beans, the clearance through which the cocoa beans pass in the breaker section, and the speed of the breaking mechanism, on shelling recovery, shelling efficiency, shelling rate, energy utilization, and large-nib recovery, and to establish the optimum levels of these shelling parameters. These factors were statistically analyzed using a Box-Behnken design of experiment and Response Surface Methodology (RSM). By maximizing shelling recovery, shelling efficiency, shelling rate, and large-nib recovery and minimizing energy utilization, the optimum shelling conditions were established at a moisture content, clearance, and breaker speed of 6.5%, 3 millimeters, and 1300 rpm, respectively. The corresponding values of shelling recovery, shelling efficiency, shelling rate, large-nib recovery, and energy utilization were 86.51%, 99.19%, 21.85 kg/hr, 89.75%, and 542.84 W, respectively.
Experimental values obtained using the optimum conditions were compared with values predicted by the models and were found to be in good agreement.Keywords: cocoa beans, optimization, RSM, shelling parameters
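The Box-Behnken/RSM workflow, fitting a full quadratic response surface and solving for its stationary point, can be sketched as follows; the three-factor design is in coded units, and the response function is a hypothetical noiseless surrogate, not the cacao shelling data:

```python
import numpy as np
from itertools import combinations

# Box-Behnken design for 3 factors (coded units -1..1), plus a center point.
pts = []
for i, j in combinations(range(3), 2):
    for a in (-1, 1):
        for b in (-1, 1):
            p = [0.0, 0.0, 0.0]
            p[i], p[j] = a, b
            pts.append(p)
X = np.array(pts + [[0.0, 0.0, 0.0]])  # 13 runs

def quad_terms(x):
    """Full quadratic model terms for 3 factors."""
    x1, x2, x3 = x
    return [1, x1, x2, x3, x1*x1, x2*x2, x3*x3, x1*x2, x1*x3, x2*x3]

# Hypothetical noiseless response with a known optimum at (0.3, -0.2, 0.5).
def response(x):
    x1, x2, x3 = x
    return 80 - 2*(x1 - 0.3)**2 - 3*(x2 + 0.2)**2 - (x3 - 0.5)**2

A = np.array([quad_terms(x) for x in X])
y = np.array([response(x) for x in X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Stationary point of the fitted surface: solve grad(b0 + b.x + x'Bx) = 0.
b = beta[1:4]
B = np.array([[beta[4],   beta[7]/2, beta[8]/2],
              [beta[7]/2, beta[5],   beta[9]/2],
              [beta[8]/2, beta[9]/2, beta[6]  ]])
x_opt = np.linalg.solve(-2 * B, b)
```

In the study, the coded stationary point is mapped back to physical units, yielding the 6.5% moisture, 3 mm clearance, and 1300 rpm optimum reported above.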
Procedia PDF Downloads 3636254 Optimized Weight Selection of Control Data Based on Quotient Space of Multi-Geometric Features
Authors: Bo Wang
Abstract:
The geometric processing of multi-source remote sensing data using control data of different scales and different accuracies is an important research direction for multi-platform earth observation systems. In existing block bundle adjustment methods, control information of a single observation scale and precision cannot screen the control data or assign reasonable and effective weights, which reduces the convergence and the reliability of the adjustment results. Drawing on the theory and techniques of quotient space, this project researches several subjects. A multi-layer quotient space of multi-geometric features is constructed to describe and filter control data. A normalized granularity merging mechanism for multi-layer control information is studied, and, based on the normalized scale factor, a strategy for optimizing the weights of control data that are less relevant to the adjustment system is realized. At the same time, geometric positioning experiments are conducted using multi-source remote sensing data, aerial images, and multiple classes of control data to verify the theoretical research results. This research is expected to break away from the cliché of single-scale, single-accuracy control data in the adjustment process and to expand the theory and technology of photogrammetry, thus solving the problem of processing multi-source remote sensing data both theoretically and practically.Keywords: multi-source image geometric process, high precision geometric positioning, quotient space of multi-geometric features, optimized weight selection
Procedia PDF Downloads 289
6253 Fast Aerodynamic Evaluation of Transport Aircraft in Early Phases
Authors: Xavier Bertrand, Alexandre Cayrel
Abstract:
The early phase of an aircraft development programme is instrumental, as it largely determines the potential of a new concept. Any weakness in the high-level design (wing planform, moveable-surfaces layout, etc.) will be extremely difficult and expensive to recover later in the development process. Aerodynamic evaluation in this very early phase is driven by two main criteria: a short lead time, to allow quick iterations of the geometrical design, and high-quality calculations, to give an accurate and reliable assessment of the current status. These two criteria are usually contradictory. A short end-to-end lead time of a couple of hours can be obtained with very simple tools (semi-empirical methods, for instance), although their accuracy is limited, whereas higher-quality calculations require heavier, more complex tools, which need correspondingly more complex inputs and a significantly longer lead time. A choice therefore has to be made between accuracy and lead time. A new approach has been developed within Airbus, aiming to obtain high-quality evaluations of an aircraft's aerodynamics quickly. The methodology is based on the joint use of surrogate modelling and a lifting-line code. Surrogate modelling is used to obtain the wing-section characteristics (e.g. lift coefficient vs. angle of attack), whatever the airfoil geometry, the status of the moveable surfaces (ailerons/spoilers), or the deployment of the high-lift devices. From these characteristics, the lifting-line code computes the 3D effects on the wing under any flow conditions (low/high Mach numbers, etc.). The methodology has been applied successfully to a medium-range aircraft concept.
Keywords: aerodynamics, lifting line, surrogate model, CFD
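The abstract does not disclose the actual tool chain, but the lifting-line half of such an approach can be illustrated with the classical Prandtl monoplane equation. The sketch below assumes an untwisted rectangular wing and substitutes a flat-plate section lift slope where the surrogate model would supply airfoil- and configuration-specific data:

```python
import numpy as np

def lifting_line_CL(alpha, AR, a0=2 * np.pi, n=20):
    """Wing lift coefficient from Prandtl's monoplane equation,
    sum_k A_k sin(k*theta_i) * (mu*k + sin(theta_i)) = mu * alpha * sin(theta_i),
    with mu = c*a0/(8*s) = a0/(4*AR) for a constant-chord (rectangular) wing.
    In the described approach, a surrogate would replace a0 per section."""
    mu = a0 / (4.0 * AR)
    theta = np.pi * np.arange(1, n + 1) / (n + 1)      # collocation angles, tips excluded
    ks = np.arange(1, n + 1)                           # Fourier mode indices
    M = np.sin(np.outer(theta, ks)) * (mu * ks + np.sin(theta)[:, None])
    rhs = mu * alpha * np.sin(theta)
    A = np.linalg.solve(M, rhs)                        # circulation Fourier coefficients
    return np.pi * AR * A[0]                           # CL = pi * AR * A_1
```

Such a solve runs in microseconds, which is what makes quick design iterations possible once the section characteristics are available.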
Procedia PDF Downloads 363
6252 Method Development for the Determination of Gamma-Aminobutyric Acid in Rice Products by LC-MS-MS
Authors: Cher Rong Matthew Kong, Edmund Tian, Seng Poon Ong, Chee Sian Gan
Abstract:
Gamma-aminobutyric acid (GABA) is a non-protein amino acid that is a functional constituent of certain rice varieties. When consumed, it decreases blood pressure and reduces the risk of hypertension-related diseases. This has motivated research into functional food products (e.g. germinated brown rice) with enhanced GABA content, which in turn has increased the demand for instrument-based methods that can determine GABA content efficiently and effectively. Current analytical methods require analyte derivatisation and carry significant disadvantages: they are labour-intensive and time-consuming, and the more complex sample preparation exposes them to analyte loss. To address this, an LC-MS-MS method for the determination of GABA in rice products has been developed and validated. The method involves a relatively simple sample preparation before analysis by HILIC LC-MS-MS. It eliminates the need for derivatisation, significantly reducing the labour and time of the analysis, and the use of LC-MS-MS allows better differentiation of GABA from potentially co-eluting compounds in the sample matrix. Results from the developed method showed high linearity, accuracy, and precision for the determination of GABA (1 ng/L to 8 ng/L) in a variety of brown rice products. The method significantly simplifies sample preparation, improves the accuracy of quantitation, and increases analytical throughput, providing a quick but effective alternative to established instrumental methods for GABA in rice.
Keywords: functional food, gamma-aminobutyric acid, germinated brown rice, method development
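As a hypothetical illustration of the quantitation step behind such a method (the calibration levels, peak areas, and function names below are invented), external calibration fits peak area against standard concentration over the validated range and back-calculates the GABA concentration of a sample extract:

```python
# Invented numbers for illustration only; real calibration would use the
# validated standards and matrix-matched QC samples.
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for a calibration curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

conc = [1.0, 2.0, 4.0, 8.0]              # ng/L calibration standards (hypothetical)
area = [1050.0, 2080.0, 4110.0, 8230.0]  # hypothetical GABA peak areas
m, b = fit_line(conc, area)
sample_conc = (5100.0 - b) / m           # back-calculate a sample with peak area 5100
```

The same fit yields the linearity statistics (slope, intercept, residuals) reported during method validation.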
Procedia PDF Downloads 274
6251 A Multi-Stage Learning Framework for Reliable and Cost-Effective Estimation of Vehicle Yaw Angle
Authors: Zhiyong Zheng, Xu Li, Liang Huang, Zhengliang Sun, Jianhua Xu
Abstract:
Yaw angle plays a significant role in many vehicle-safety applications, such as collision avoidance and lane-keeping systems. Although the estimation of yaw angle has been extensively studied, achieving a solution that is simultaneously reliable and cost-effective in complex urban environments remains the main challenge. This paper proposes a multi-stage learning framework that estimates the yaw angle with a monocular camera and addresses this challenge in a more reliable manner. In the first stage, an efficient road-detection network extracts the road region, providing a highly reliable reference for the estimation. In the second stage, a variational auto-encoder (VAE) learns the distribution patterns of road regions; it is particularly suitable for modelling how these patterns change under different driving maneuvers and inherently enhances generalization. In the last stage, a gated recurrent unit (GRU) network captures the temporal correlations of the learned patterns, which further improves estimation accuracy, since changes in deflection angle are relatively easy to recognize across consecutive frames. The yaw angle is then obtained by combining the estimated deflection angle with the road direction stored in a roadway map. Through this effective multi-stage learning, the proposed framework achieves high reliability while maintaining good accuracy. Road-test experiments with different driving maneuvers were performed in complex urban environments, and the results validate the effectiveness of the proposed framework.
Keywords: gated recurrent unit, multi-stage learning, reliable estimation, variational auto-encoder, yaw angle
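The final fusion step described in the abstract, combining the camera-estimated deflection angle with the map-stored road direction, can be sketched as follows (the function name and degree/sign conventions are assumptions, not the paper's notation):

```python
# Sketch of the map-fusion step: absolute yaw = road direction from the map
# plus the deflection of the vehicle heading relative to the road, with the
# result wrapped to [-180, 180) degrees.
def fuse_yaw(road_direction_deg, deflection_deg):
    yaw = road_direction_deg + deflection_deg
    return (yaw + 180.0) % 360.0 - 180.0
```

The wrap keeps the estimate continuous when the sum crosses the +/-180-degree boundary, e.g. a road heading of 175 degrees with a 10-degree deflection yields -175 degrees rather than 185.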
Procedia PDF Downloads 150