Search results for: machining error
322 Examining the Missing Feedback Link in Environmental Kuznets Curve Hypothesis
Authors: Apra Sinha
Abstract:
The inverted U-shaped Environmental Kuznets Curve (EKC) describes a pollution-income relationship in which pollution and environmental degradation initially rise with income per capita; at higher income levels, however, this trend reverses, as economic growth initiates environmental upgrading. Yet the effect that increased environmental degradation has on growth is the missing feedback link that has not been addressed in the EKC hypothesis. This paper examines this missing feedback link in the Indian context by testing the causal association between fossil fuel consumption, carbon dioxide emissions, and economic growth for India. Fossil fuel consumption is taken as a proxy for the driver of economic growth. The causal association between the aforementioned variables is analyzed under five interventions, namely: 1) urban development, for which urbanization is taken as the proxy; 2) industrial development, for which industrial value added is taken as the proxy; 3) trade liberalization, for which the sum of exports and imports as a share of GDP is taken as the proxy; and 4) financial development, for which a) domestic credit to the private sector and b) net foreign assets are taken as proxies. The interventions were chosen with the economic liberalization perspective of India in view. The main aim of the paper is to investigate the missing feedback link for the Environmental Kuznets Curve hypothesis before and after incorporating the intervening variables. The period of study is 1971 to 2011, as it covers the pre- and post-liberalization eras in India. All data were taken from World Bank country-level indicators. The Johansen and Juselius cointegration testing methodology and error-correction-based Granger causality were applied to all the variables. The results clearly show that the missing feedback link is addressed in only two of the five interventions. This paper can put forward significant policy implications for environmental protection and sustainable development.
Keywords: environmental Kuznets curve hypothesis, fossil fuel consumption, industrialization, trade liberalization, urbanization
Procedia PDF Downloads 252
321 Radar Track-Based Classification of Birds and UAVs
Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo
Abstract:
In recent years, the number of Unmanned Aerial Vehicles (UAVs) has significantly increased. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a huge threat to civil and military installations: detection, classification, and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by a high number of tracks related to flying birds make the drone detection task especially challenging: the operator's PPI is cluttered with a huge number of potential threats, and reaction time can be severely affected. Flying birds show velocity, radar cross-section, and, in general, characteristics similar to those of UAVs. Since no single feature is able to distinguish UAVs from birds, this paper uses a multiple-feature approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds and UAVs. Radar tracks acquired in the field from different UAVs and birds performing various trajectories were used to extract specifically designed target movement-related features based on velocity, trajectory, and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression, etc.) both in terms of the number of selected features and misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with a suitable classification accuracy (higher than 95%).
Keywords: birds, classification, machine learning, UAVs
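A minimal sketch, not the authors' code, of genetic-algorithm feature selection wrapping a binary bird/UAV classifier as described in this abstract; the feature set, data shapes, and labels below are synthetic placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))      # track features: velocity stats, curvature, signal strength...
y = rng.integers(0, 2, size=500)    # 0 = bird, 1 = UAV (synthetic labels)

def fitness(mask):
    """Cross-validated SVM accuracy on the selected feature subset, penalizing subset size."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.01 * mask.sum()

pop = rng.integers(0, 2, size=(20, X.shape[1]))      # population of random feature masks
for gen in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]           # truncation selection
    cut = rng.integers(1, X.shape[1], size=10)        # one-point crossover
    children = np.array([np.r_[parents[i, :c], parents[(i + 1) % 10, c:]]
                         for i, c in enumerate(cut)])
    flip = rng.random(children.shape) < 0.05          # bit-flip mutation
    pop = np.vstack([parents, np.where(flip, 1 - children, children)])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```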
Procedia PDF Downloads 221
320 Mathematical Modelling of Drying Kinetics of Cantaloupe in a Solar Assisted Dryer
Authors: Melike Sultan Karasu Asnaz, Ayse Ozdogan Dolcek
Abstract:
Crop drying, which aims to reduce the moisture content to a certain level, is a method used to extend shelf life and prevent spoilage. One of the oldest food preservation techniques is open sun or shade drying. Even though this technique is the most affordable of all drying methods, it has drawbacks such as contamination by insects, environmental pollution, windborne dust, and direct exposure to weather conditions such as wind, rain, and hail. Solar dryers that provide a hygienic and controllable environment to preserve food and extend its shelf life have therefore been developed and used to dry agricultural products. Thus, foods can be dried quickly without being affected by weather variables, and quality products can be obtained. This research is mainly devoted to investigating the modelling of the drying kinetics of cantaloupe in a forced-convection solar dryer. Mathematical models of the drying process must be defined to simulate the drying behavior of the foodstuff, which will greatly contribute to the development of solar dryer designs. Thus, drying experiments were conducted and replicated five times, and data such as temperature, relative humidity, solar irradiation, drying air speed, and weight were continuously monitored and recorded. The moisture content of sliced and pretreated cantaloupe was converted into moisture ratio and then fitted against drying time to construct drying curves. Then, 10 quasi-theoretical and empirical drying models were applied to find the best drying curve equation using the Levenberg-Marquardt nonlinear optimization method. The best-fitted mathematical drying model was selected according to the highest coefficient of determination (R²), the mean square of the deviations (χ²), and the root mean square error (RMSE) criteria. The best-fitted model was used to simulate thin-layer solar drying of cantaloupe, and the simulation results were compared with the experimental data for validation purposes.
Keywords: solar dryer, mathematical modelling, drying kinetics, cantaloupe drying
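A minimal sketch of the curve-fitting step described above, assuming the Page thin-layer model as one of the candidate equations; the moisture-ratio data are illustrative, not the experimental values. SciPy's curve_fit uses Levenberg-Marquardt for unbounded problems.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 30, 60, 120, 180, 240, 300], float)      # drying time, min
MR = np.array([1.0, 0.78, 0.61, 0.38, 0.24, 0.15, 0.10])  # moisture ratio (synthetic)

def page(t, k, n):
    return np.exp(-k * t**n)        # Page model: MR = exp(-k t^n)

params, _ = curve_fit(page, t, MR, p0=(0.01, 1.0))         # Levenberg-Marquardt fit
pred = page(t, *params)
resid = MR - pred
rmse = np.sqrt(np.mean(resid**2))
r2 = 1 - np.sum(resid**2) / np.sum((MR - MR.mean())**2)
chi2 = np.sum(resid**2) / (len(MR) - len(params))          # mean square of the deviations
print(f"k={params[0]:.4f}, n={params[1]:.3f}, R2={r2:.4f}, RMSE={rmse:.4f}, chi2={chi2:.2e}")
```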
Procedia PDF Downloads 126
319 Molecular Topology and TLC Retention Behaviour of s-Triazines: QSRR Study
Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević
Abstract:
Quantitative structure-retention relationship (QSRR) analysis was used to predict the chromatographic behavior of s-triazine derivatives from theoretical descriptors computed from the chemical structure. The fundamental basis of the reported investigation is to relate molecular topological descriptors to the chromatographic behavior of s-triazine derivatives obtained by reversed-phase (RP) thin layer chromatography (TLC) on silica gel impregnated with paraffin oil, with ethanol-water mobile phases (φ = 0.5-0.8; v/v). The retention parameter (RM0) of the 14 investigated s-triazine derivatives was used as the dependent variable, while simple connectivity indices of different orders were used as independent variables. The best QSRR model for predicting the RM0 value was obtained with the simple third-order connectivity index (3χ) in a second-degree polynomial equation. The numerical values of the correlation coefficient (r = 0.915), Fisher's value (F = 28.34), and root mean square error (RMSE = 0.36) indicate that the model is statistically significant. In order to test the predictive power of the QSRR model, the leave-one-out cross-validation technique was applied. The parameters of the internal cross-validation analysis (r²CV = 0.79, r²adj = 0.81, PRESS = 1.89) reflect the high predictive ability of the generated model and confirm that it can be used to predict the RM0 value. A multivariate classification technique, hierarchical cluster analysis (HCA), was applied in order to group molecules according to their molecular connectivity indices. HCA is a descriptive statistical method and is the most frequently used for classification, an important area of data processing. The HCA performed on the simple molecular connectivity indices obtained from the 2D structures of the investigated s-triazine compounds resulted in two main clusters in which the compounds were grouped according to the number of atoms in the molecule. This is in agreement with the fact that these descriptors were calculated on the basis of the number of atoms in the molecules of the investigated s-triazine derivatives.
Keywords: s-triazines, QSRR, chemometrics, chromatography, molecular descriptors
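A minimal sketch of fitting a second-degree polynomial QSRR model of the third-order connectivity index and checking it by leave-one-out cross-validation, as described above; the 3χ and RM0 values are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.preprocessing import PolynomialFeatures

chi3 = np.array([[1.2], [1.5], [1.7], [2.0], [2.2], [2.5], [2.8],
                 [3.0], [3.1], [3.4], [3.6], [3.9], [4.1], [4.4]])   # 3χ for 14 compounds
rm0 = (0.8 + 0.9 * chi3[:, 0] - 0.07 * chi3[:, 0]**2
       + np.random.default_rng(1).normal(0, 0.1, 14))                # synthetic RM0

X = PolynomialFeatures(degree=2, include_bias=False).fit_transform(chi3)
model = LinearRegression().fit(X, rm0)

press = 0.0
for train, test in LeaveOneOut().split(X):         # leave-one-out cross-validation
    m = LinearRegression().fit(X[train], rm0[train])
    press += (rm0[test][0] - m.predict(X[test])[0]) ** 2
ss_tot = np.sum((rm0 - rm0.mean()) ** 2)
print(f"r2 = {model.score(X, rm0):.3f}, PRESS = {press:.3f}, r2_cv = {1 - press / ss_tot:.3f}")
```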
Procedia PDF Downloads 393
318 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver
Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto
Abstract:
The Earth's ionosphere is located at altitudes from about 70 km to several hundred kilometers above the ground and is composed of ions and electrons called plasma. In the ionosphere, this plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. Sounding observations from the top and bottom of the ionosphere were long the standard way to investigate such ionospheric plasma. Recently, continuous monitoring of the TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which are basically built for land survey, has been conducted in several countries. However, these stations are equipped with multi-frequency receivers to estimate the plasma delay from its frequency dependence, and the cost of multi-frequency receivers is much higher than that of single-frequency GPS receivers. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variation, such as the vertical TEC distribution. In this measurement, a single-frequency u-blox GPS receiver was used to probe ionospheric TEC. The observation site was Mandalay Technological University in Myanmar. In the method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of the functions are determined by least-squares fitting on pseudorange data obtained at a known location, under a thin-layer ionosphere assumption. The validity of the method was evaluated using measurements obtained by the Japanese GNSS observation network called GEONET. The performance of the single-frequency GPS receiver measurements was compared with results from dual-frequency measurement.
Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC
Procedia PDF Downloads 137
317 Artificial Neural Network Approach for Modeling and Optimization of Conidiospore Production of Trichoderma harzianum
Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Alejandro Tellez-Jurado, Juan C. Seck-Tuoh-Mora, Eva S. Hernandez-Gress, Norberto Hernandez-Romero, Iaina P. Medina-Serna
Abstract:
Trichoderma harzianum is a fungus that has been utilized as a low-cost fungicide for the biological control of pests, and it is important to determine the optimal conditions to produce the highest amount of conidiospores of Trichoderma harzianum. In this work, the conidiospore production of Trichoderma harzianum is modeled and optimized by using Artificial Neural Networks (ANNs). In order to gather data on this process, 30 experiments were carried out, taking into account the number of hours of culture (10 values distributed from 48 to 136 hours) and the culture humidity (70, 75, and 80 percent), with the number of conidiospores per gram of dry mass obtained as the response. The experimental results were used in an iterative algorithm to create 1,110 ANNs with different configurations, from one to three hidden layers and with 1 to 10 neurons in every hidden layer. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which is used to learn the relationship between input and output values. The ANN with the best performance was chosen in order to simulate the process and maximize conidiospore production. The best-performing ANN has 2 inputs, 1 output, and three hidden layers with 3, 10, and 10 neurons, respectively. Its performance shows an R² value of 0.9900, and the root mean squared error is 1.2020. This ANN predicted that 644,175,467 conidiospores per gram of dry mass is the maximum amount, obtained at 117 hours of culture and 77% culture humidity. In summary, the ANN approach is suitable for representing the conidiospore production of Trichoderma harzianum because the R² value denotes a good fit to the experimental results, and the obtained ANN model was used to find the parameters that produce the largest amount of conidiospores per gram of dry mass.
Keywords: Trichoderma harzianum, modeling, optimization, artificial neural network
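A minimal sketch of the architecture search described above, on synthetic stand-in data; scikit-learn's MLPRegressor does not offer Levenberg-Marquardt, so the 'lbfgs' solver stands in for the training algorithm used in the paper. The nested loop enumerates the full 1,110 configurations and is therefore slow.

```python
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(48, 136, 30),        # hours of culture
                     rng.choice([70, 75, 80], 30)])   # culture humidity, %
y = 1e8 * np.exp(-((X[:, 0] - 117) / 40) ** 2) * (X[:, 1] / 77)   # toy response surface

best = (None, -np.inf)
for depth in (1, 2, 3):                               # one to three hidden layers
    for layers in itertools.product(range(1, 11), repeat=depth):  # 1-10 neurons per layer
        net = MLPRegressor(hidden_layer_sizes=layers, solver="lbfgs",
                           max_iter=2000, random_state=0).fit(X, y)
        r2 = r2_score(y, net.predict(X))
        if r2 > best[1]:
            best = (layers, r2)
print("best architecture:", best[0], "R2 =", round(best[1], 4))
```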
Procedia PDF Downloads 158
316 A Cooperative Signaling Scheme for Global Navigation Satellite Systems
Authors: Keunhong Chae, Seokho Yoon
Abstract:
Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for the location service, thus calling for a more efficient signaling scheme among the satellites used in the overall GNSS network. Since it improves the network throughput, spatial diversity can be one such efficient signaling scheme; however, it requires multiple antennas, which could cause a significant increase in the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, in which virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna in the transmit satellite of interest, with the neighboring satellites modeled as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate in an asynchronous way, and thus the overall performance of the GNSS network can degrade severely. To tackle the problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement due to signal decoding at the relay nodes. Although the implementation at the relay nodes could be made somewhat simpler by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient to implement the operations of the relay nodes at the source node, which has more resources than the relay nodes. In this paper, we therefore propose a novel cooperative signaling scheme in which the data signals are combined in a unique way at the source node, obviating the need for complex operations such as signal decoding, time-reversal, and conjugation at the relay nodes. The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while reducing the complexity at the relay nodes significantly. Acknowledgment: This work was supported by the National GNSS Research Center program of Defense Acquisition Program Administration and Agency for Defense Development.
Keywords: global navigation satellite network, cooperative signaling, data combining, nodes
Procedia PDF Downloads 280
315 Entrepreneurship Education: A Panacea for Entrepreneurial Intention of University Undergraduates in Ogun State, Nigeria
Authors: Adedayo Racheal Agbonna
Abstract:
The rising level of graduate unemployment in Nigeria has brought about the introduction of entrepreneurship education as a career option for self-reliance and self-employment. It is therefore important to understand the determining factors of entrepreneurial intention. This research empirically investigated the influence of entrepreneurship education on the entrepreneurial intention of undergraduate students of selected universities in Ogun State, Nigeria. The study is significant to researchers, university policy makers, and the government. A survey research design was adopted. The population consisted of 17,659 final-year undergraduate students of universities in Ogun State. The study adopted a stratified random sampling technique. A sample size determination table was used to determine the sample size at a 95% confidence level and 5% margin of error, giving a sample size of 1,877 respondents. The elements of the population were 400-level students of the selected universities. A structured questionnaire titled 'Entrepreneurship Education and Students' Entrepreneurial Intention' was administered. The reliability test yielded values of 0.716, 0.907, and 0.949 for infrastructure, perceived university support, and entrepreneurial intention, respectively. Likewise, the construct validity test yielded values of 0.711, 0.663, and 0.759 for infrastructure, perceived university support, and entrepreneurial intention, respectively. The findings revealed that each of the entrepreneurship education variables significantly affected intention: university infrastructure (B = -1.200, R² = 0.679, F(1, 1875) = 3958.345, p < 0.05) and perceived university support (B = -1.027, R² = 0.502, F(1, 1875) = 1924.612, p < 0.05). The perceptions of respondents in public and private universities of entrepreneurship education showed a statistically significant difference [F(1, 1875) = 134.614, p < 0.05, and F(1, 1875) = 363.439]. The study concluded that entrepreneurship education positively influenced the entrepreneurial intention of undergraduate students in Ogun State, Nigeria. Also, university infrastructure and perceived university support had negative and significant effects on entrepreneurial intention. The study recommended that, to promote the entrepreneurial intention of university undergraduate students, infrastructure and university support that can arouse students' entrepreneurial intention should be put in place.
Keywords: entrepreneurship education, entrepreneurial intention, perceived university support, university infrastructure
Procedia PDF Downloads 234
314 Fully Coupled Porous Media Model
Authors: Nia Mair Fry, Matthew Profit, Chenfeng Li
Abstract:
This work focuses on the development and implementation of a fully implicit-implicit coupled mechanical deformation and porous flow finite element software tool. The fully implicit software accurately predicts classical fundamental analytical solutions such as the Terzaghi consolidation problem. Furthermore, it can capture other analytical solutions less well known in the literature, such as Gibson's sedimentation rate problem and Coussy's problems investigating wellbore stability for poroelastic rocks. The mechanical volume strains are transferred to the porous flow governing equation in an implicit framework. This overcomes several current industrial issues arising from the use of explicit solvers for the mechanical governing equations combined with implicit solvers on the porous flow side, a combination that can lead to instability and non-convergence in the coupled system and give results with an appreciable degree of error. The specification of a fully monolithic implicit-implicit coupled porous media code means the seepage and mechanical equations are solved in one matrix system under a unified time-stepping scheme, which makes the problem definition much easier. When using an explicit solver, additional input such as the damping coefficient and mass scaling factor is required; these are circumvented with a fully implicit solution. Further, improved accuracy is achieved because the solution does not depend on predictor-corrector methods for the pore fluid pressure, though at the potential cost of reduced stability. In testing this fully monolithic porous media code, the fully implicit coupled scheme is compared against an existing staggered explicit-implicit coupled scheme across a range of geotechnical problems. These cases include 1) Biot coefficient calculation, 2) consolidation theory with the Terzaghi analytical solution, 3) sedimentation theory with the Gibson analytical solution, and 4) Coussy wellbore poroelastic analytical solutions.
Keywords: coupled, implicit, monolithic, porous media
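A minimal sketch of the Terzaghi one-dimensional consolidation benchmark used to verify such coupled codes: the analytical excess pore pressure series solution for a doubly drained layer, with illustrative soil parameters (not values from the paper).

```python
import numpy as np

def terzaghi_pore_pressure(z, t, cv, H, p0, n_terms=200):
    """Excess pore pressure u(z, t) for a doubly drained layer of thickness 2H."""
    u = np.zeros_like(z, dtype=float)
    Tv = cv * t / H**2                        # dimensionless time factor
    for m in range(n_terms):
        M = np.pi * (2 * m + 1) / 2
        u += (2 * p0 / M) * np.sin(M * z / H) * np.exp(-M**2 * Tv)
    return u

z = np.linspace(0.0, 1.0, 11)                 # depth from drained surface to mid-plane, m
for t in (1e3, 1e4, 1e5):                     # seconds
    u = terzaghi_pore_pressure(z, t, cv=1e-7, H=1.0, p0=100.0)
    print(f"t = {t:.0e} s, u at mid-plane = {u[-1]:.2f} kPa")
```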
Procedia PDF Downloads 138
313 Aerial Photogrammetry-Based Techniques to Rebuild the 30-Year Landform Changes of a Landslide-Dominated Watershed in Taiwan
Authors: Yichin Chen
Abstract:
Taiwan is an island characterized by active tectonics and high erosion rates. Monitoring the dynamic landscape of Taiwan is an important issue for disaster mitigation, geomorphological research, and watershed management. Long-term, high-spatiotemporal-resolution landform data are essential for quantifying and simulating geomorphological processes and developing warning systems. Recently, advances in unmanned aerial vehicle (UAV) and computational photogrammetry technology have provided an effective way to rebuild and monitor topography changes at high spatiotemporal resolution. This study rebuilds the 30-year landform change of the Aiyuzi watershed over 1986-2017 using aerial photogrammetry-based techniques. The Aiyuzi watershed, located in central Taiwan with an area of 3.99 km², is known for its frequent landslide and debris flow disasters. This study took aerial photos using a UAV and collected multi-temporal historical stereo photographs taken by the Aerial Survey Office of Taiwan's Forestry Bureau. To rebuild the orthoimages and digital surface models (DSMs), Pix4DMapper, a photogrammetry software package, was used. Furthermore, to control model accuracy, a set of ground control points was surveyed using eGPS. The results show that the generated DSMs have ground sampling distances (GSD) of ~10 cm and ~0.3 cm from the UAV and historical photographs, respectively, with a vertical error of ~1 m. Comparison of the DSMs shows that many deep-seated landslides (over 20 m deep) occurred on the upstream slopes of the Aiyuzi watershed. Even though a large amount of sediment is delivered from the landslides, the steep main channel has sufficient capacity to transport sediment out of the channel and to erode the river bed to ~20 m in depth. Most sediment is transported to the outlet of the watershed and deposited in the downstream channel. This case study shows that UAV and photogrammetry technology are effective tools for monitoring topography change.
Keywords: aerial photogrammetry, landslide, landform change, Taiwan
Procedia PDF Downloads 156
312 Development of an Optimised, Automated Multidimensional Model for Supply Chains
Authors: Safaa H. Sindi, Michael Roe
Abstract:
This project divides supply chain (SC) models into seven Eras, according to the evolution of the market's needs over time. The five earliest Eras describe the emergence of supply chains, while the last two Eras are to be created. Research objectives: the aim is to generate the two latest Eras with their respective models, focusing on consumable goods. Era Six contains the Optimal Multidimensional Matrix (OMM), which incorporates most characteristics of the SC and allocates them into four quarters (Agile, Lean, Leagile, and Basic SC). This will help companies, especially small and medium-sized enterprises (SMEs), plan their optimal SC route. Era Seven creates an Automated Multidimensional Model (AMM), which upgrades the matrix of Era Six by accounting for all the supply chain factors (e.g., offshoring, sourcing, risk) in an interactive system with heuristic learning that helps larger companies and industries select the best SC model for their market. Methodologies: data collection is based on a Fuzzy-Delphi study that analyses statements using fuzzy logic. The first round of the Delphi study contains statements (fuzzy rules) about the matrix of Era Six; the second round contains the feedback given from the first round, and so on. Preliminary findings: both models are applicable. The matrix of Era Six reduces the complexity of choosing the best SC model for SMEs by helping them identify the best strategy among Basic SC, Lean, Agile, and Leagile SC, tailored to their needs. The interactive heuristic learning in the AMM of Era Seven will help mitigate error and aid large companies in identifying and re-strategizing the best SC model and distribution system for their market and commodity, hence increasing efficiency. Potential contributions to the literature: the problematic issue facing many companies is deciding which SC model or strategy to adopt, given the many models and definitions developed over the years. This research simplifies the choice by putting most definitions into a template and most models into the matrix of Era Six. This research is original in that the division of SCs into Eras, the matrix of Era Six (OMM) with Fuzzy-Delphi, and the heuristic learning in the AMM of Era Seven provide a synergy of tools that have not been combined before in the area of SC. Additionally, the OMM of Era Six is unique as it combines most characteristics of the SC, which is an original concept in itself.
Keywords: Leagile, automation, heuristic learning, supply chain models
Procedia PDF Downloads 388
311 Pavement Management for a Metropolitan Area: A Case Study of Montreal
Authors: Luis Amador Jimenez, Md. Shohel Amin
Abstract:
Pavement performance models are based on projections of observed traffic loads, which makes studying funding strategies in the long run uncertain if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic development could change traffic flows. This study addresses both issues through a case study of the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A backpropagation neural network (BPN) with a generalized delta rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming for lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget needed to keep arterial and local roads in Montreal in good condition. Montreal drivers prefer public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, while ESALs are expected to double every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state seems to be reached.
Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization
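A minimal sketch of the traffic-loading projection described above: AADT simulated at 5-year steps and converted to accumulated ESALs with an assumed truck share and truck factor. All numerical values are illustrative assumptions, not the study's Montreal data.

```python
import numpy as np

aadt0 = 20_000                           # vehicles/day at year 0 (assumed)
growth = 2 ** (1 / 50)                   # annual rate so that traffic doubles in 50 years
truck_share, truck_factor = 0.08, 1.6    # assumed fleet mix and ESALs per truck

years = np.arange(0, 51, 5)
aadt = aadt0 * growth ** years                        # demand-model output every 5 years
annual_esals = aadt * truck_share * truck_factor * 365
cumulative = np.cumsum(np.interp(np.arange(51), years, annual_esals))
for y in (15, 30, 45):
    print(f"year {y}: cumulative ESALs = {cumulative[y]:.3e}")
```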
Procedia PDF Downloads 460
310 DNA Methylation Score Development for In utero Exposure to Paternal Smoking Using a Supervised Machine Learning Approach
Authors: Cristy Stagnar, Nina Hubig, Diana Ivankovic
Abstract:
The epigenome is a compelling candidate for mediating long-term responses to environmental effects that modify disease risk. The main goal of this research is to develop a machine learning-based DNA methylation score, which will be valuable in delineating the unique contribution of paternal epigenetic modifications to the germline impacting childhood health outcomes. It will also be a useful tool for validating self-reports of nonsmoking and for adjusting epigenome-wide DNA methylation association studies for this early-life exposure. Using secondary data from two population-based methylation profiling studies, our DNA methylation score is based on CpG DNA methylation measurements from cord blood gathered from children whose fathers smoked pre- and peri-conceptually. Each child's mother and father fell into one of three class labels in the accompanying questionnaires: never smoker, former smoker, or current smoker. By applying different machine learning algorithms to the Accessible Resource for Integrated Epigenomic Studies (ARIES) sub-study of the Avon Longitudinal Study of Parents and Children (ALSPAC) data set, which we used for training and testing our model, the best-performing algorithm for classifying the father-smoker, mother-never-smoker case was selected based on Cohen's κ. Error in the model was identified and optimized. The final DNA methylation score was further tested and validated in an independent data set. The result is a linear combination of the methylation values of selected probes, passed through a logistic link function, that accurately classified each group using the probes contributing most to the classification. This yields a unique, robust DNA methylation score combining information on DNA methylation and early-life exposure of offspring to paternal smoking during pregnancy, which may be used to examine the paternal contribution to offspring health outcomes.
Keywords: epigenome, health outcomes, paternal preconception environmental exposures, supervised machine learning
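A minimal sketch of turning selected CpG probes into a methylation score via a logistic link, as described above; the probe beta-values, labels, and sample sizes are synthetic placeholders, not the ARIES/ALSPAC data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(42)
X = rng.beta(2, 5, size=(300, 20))       # beta-values for 20 selected CpG probes
y = rng.integers(0, 2, size=300)         # 1 = father smoked, 0 = never smoker

clf = LogisticRegression(max_iter=1000).fit(X, y)
score = clf.decision_function(X)         # linear combination of probe methylation values
prob = 1 / (1 + np.exp(-score))          # logistic link -> class probability
print("Cohen's kappa:", round(cohen_kappa_score(y, clf.predict(X)), 3))
```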
Procedia PDF Downloads 185
309 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution
Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone
Abstract:
The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies, stemming from diverse research hypotheses, have been proposed to safeguard DNNs against such attacks. Building upon prior work, our approach utilizes autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically by minimizing a reconstruction error such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation: we considered various image sizes, constructing the models differently for 256×256 and 512×512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI architectures in mind. To mitigate this, we proposed a method that replaces image-specific dimensions with a structure independent of both the dimensions and the neural network model, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks on models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder-extracted features were used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder
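A minimal PyTorch sketch of the detection rule this abstract relies on: an autoencoder trained on benign images flags inputs whose reconstruction MSE exceeds a threshold. The architecture and threshold are illustrative assumptions, not the authors' multi-modal model.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(8, 16, 2, stride=2), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def is_adversarial(model, x, threshold=0.01):
    """Flag images whose per-image reconstruction MSE exceeds the threshold."""
    with torch.no_grad():
        err = ((model(x) - x) ** 2).mean(dim=(1, 2, 3))
    return err > threshold

model = ConvAutoencoder()                      # would be trained on benign images only
batch = torch.rand(4, 3, 256, 256)             # stand-in for 256x256 test images
print(is_adversarial(model, batch))
```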
Procedia PDF Downloads 112
308 Levels of CTX1 in Premenopausal Osteoporotic Women: Study Conducted in Khyber Pakhtunkhwa Province, Pakistan
Authors: Mehwish Durrani, Rubina Nazli, Muhammad Abubakr, Muhammad Shafiq
Abstract:
Objectives: To evaluate whether high socio-economic status, urbanization, and decreased ambulation can lead to early osteoporosis in women reporting from the Peshawar region. Study design: A descriptive cross-sectional study was done. The sample size was 100 subjects, using a 30% proportion of osteoporosis, 95% confidence level, and 9% margin of error under the WHO software for sample size determination. Place and duration of study: This study was carried out in the tertiary referral health care facilities of Peshawar, viz. PGMI Hayatabad Medical Complex, Peshawar, Khyber Pakhtunkhwa Province, Pakistan. Ethical approval for the study was taken from the Institutional Ethical Research Board (IERD) at Post Graduate Medical Institute, Hayatabad Medical Complex, Peshawar. The study was completed in a six-month period. Patients and methods: Levels of CTX1, as a marker of bone degradation, were determined in radiographically assessed perimenopausal women. These women were randomly selected and screened for osteoporosis. The measured variables were hemoglobin (g/dl), ESR by the Westergren method (mm in 1 hour), serum Ca (mg/dl), serum alkaline phosphatase (international units per liter), radiographic grade of osteoporosis according to the Singh index (1-6), and CTX1 level (pg/ml). Results: High levels of CTX1 were observed in perimenopausal osteoporotic women who were radiographically diagnosed as osteoporotic. High socio-economic class also predisposed to osteoporosis. Decreased ambulation, another risk factor, showed a significant association with increased levels of CTX1. Conclusion: The results of this study suggest that minimal ambulation and high socio-economic class both have a significant association with increased serum CTX1 levels, which in turn lead to osteoporosis and its complications.
Keywords: osteoporosis, CTX1, perimenopausal women, Hayatabad Medical Complex, Khyber Pakhtunkhwa
Procedia PDF Downloads 331
307 Fault Tolerant and Testable Designs of Reversible Sequential Building Blocks
Authors: Vishal Pareek, Shubham Gupta, Sushil Chandra Jain
Abstract:
With increasing demand for high-speed computation, power consumption, heat dissipation, and chip size are posing challenges for logic design with conventional technologies. Recovery from bit loss and bit errors is another issue, requiring reversibility and fault tolerance in the computation. Reversible computing is emerging as an alternative to conventional technologies to overcome the above problems and is helpful in diverse areas such as low-power design, nanotechnology, and quantum computing. The bit loss issue can be solved through unique input-output mapping, which requires reversibility, while the bit error issue requires the capability of fault tolerance in the design. In order to incorporate reversibility, a number of combinational reversible-logic-based circuits have been developed. However, very few sequential reversible circuits have been reported in the literature. To make circuits fault tolerant, a number of fault models and test approaches have been proposed for reversible logic. In this paper, we have attempted to incorporate fault tolerance in sequential reversible building blocks such as the D flip-flop, T flip-flop, JK flip-flop, R-S flip-flop, master-slave D flip-flop, and double edge-triggered D flip-flop by making them parity preserving. The importance of this work lies in the fact that it provides designs of reversible sequential circuits that are completely testable for any stuck-at fault and single-bit fault. In our opinion, our designs of reversible building blocks are superior to existing designs in terms of quantum cost, hardware complexity, constant inputs, garbage outputs, and number of gates, and a design of an online-testable D flip-flop is proposed for the first time. We hope our work can be extended to building complex reversible sequential circuits.
Keywords: parity preserving gate, quantum computing, fault tolerance, flip-flop, sequential reversible logic
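A minimal sketch verifying the two properties the designs above rely on, using the well-known Fredkin gate as an example (not one of the paper's flip-flop designs): reversibility, i.e., a bijective truth table, and parity preservation, i.e., input parity equals output parity.

```python
from itertools import product

def fredkin(a, b, c):
    """Controlled swap: if a == 1, swap b and c."""
    return (a, c, b) if a else (a, b, c)

inputs = list(product((0, 1), repeat=3))
outputs = [fredkin(*bits) for bits in inputs]
reversible = len(set(outputs)) == 8                 # all 8 outputs distinct -> bijective
parity_preserving = all(sum(i) % 2 == sum(o) % 2    # parity unchanged by the gate
                        for i, o in zip(inputs, outputs))
print(f"reversible: {reversible}, parity preserving: {parity_preserving}")
```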
Procedia PDF Downloads 545
306 Early Prediction of Diseases in a Cow for Cattle Industry
Authors: Ghufran Ahmed, Muhammad Osama Siddiqui, Shahbaz Siddiqui, Rauf Ahmad Shams Malick, Faisal Khan, Mubashir Khan
Abstract:
In this paper, a machine learning-based approach for the early prediction of diseases in cows is proposed, in which different ML algorithms are applied to extract useful patterns from the available dataset. Technology has changed today's world in every aspect of life; similarly, advanced technologies have been developed in livestock and dairy farming to monitor dairy cows in various respects. Dairy cattle monitoring is crucial, as it plays a significant role in milk production around the globe. Moreover, it has become necessary for farmers to adopt the latest early-prediction technologies, as food demand is increasing with population growth. This highlights the importance of state-of-the-art technologies in analyzing dairy cows' activities. It is not easy to predict the activities of a large number of cows on a farm, so the system makes this very convenient for farmers, as it provides all the solutions under one roof. The cattle industry's productivity is boosted because any disease on a cattle farm is detected, and hence treated, early on the basis of the machine learning output. The learning models are pre-trained to interpret the data collected in a centralized system. Basically, different algorithms are run on the collected dataset to analyze milk quality and track cows' health, location, and safety. The deep learning algorithm draws patterns from the data, which makes it easier for farmers to study any animal's behavioral changes. With the emergence of machine learning algorithms and the Internet of Things, accurate tracking of animals is possible, as the error rate is minimized. As a result, milk productivity is increased. IoT with ML capability has given a new phase to the cattle farming industry by increasing the yield in the most cost-effective and time-saving manner.
Keywords: IoT, machine learning, health care, dairy cows
Procedia PDF Downloads 70
305 Gradient Boosted Trees on Spark Platform for Supervised Learning in Health Care Big Data
Authors: Gayathri Nagarajan, L. D. Dhinesh Babu
Abstract:
Health care is one of the prominent industries that generate voluminous data, creating the need for machine learning techniques with big data solutions for efficient processing and prediction. Missing data, incomplete data, real-time streaming data, sensitive data, privacy, and heterogeneity are a few of the common challenges to be addressed for efficient processing and mining of health care data. In comparison with other applications, accuracy and fast processing are of higher importance for health care applications, as they relate directly to human life. Though many machine learning techniques and big data solutions are used for efficient processing and prediction in health care data, different techniques and frameworks have proved effective for different applications, depending largely on the characteristics of the datasets. In this paper, we present a framework that uses the ensemble machine learning technique of gradient boosted trees for data classification in health care big data. The framework is built on the Spark platform, which is fast in comparison with other traditional frameworks. Unlike other works that focus on a single technique, our work presents a comparison of six different machine learning techniques along with gradient boosted trees on datasets of different characteristics. Five benchmark health care datasets are considered for experimentation, and the results of the different machine learning techniques are discussed in comparison with gradient boosted trees. The metrics chosen for comparison are the misclassification error rate and the run time of the algorithms. The goals of this paper are to i) compare the performance of gradient boosted trees with other machine learning techniques on the Spark platform, specifically for health care big data, and ii) discuss the results from the experiments conducted on datasets of different characteristics, thereby drawing inferences and conclusions. The experimental results show that, for the other machine learning techniques, accuracy is largely dependent on the characteristics of the datasets, whereas gradient boosted trees yield reasonably stable accuracy without depending strongly on the dataset characteristics.
Keywords: big data analytics, ensemble machine learning, gradient boosted trees, Spark platform
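A minimal PySpark sketch of a gradient boosted trees classifier of the kind described above; the file path, column names, and parameter values are placeholders, not the paper's benchmark setup.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("healthcare-gbt").getOrCreate()
df = spark.read.csv("health_dataset.csv", header=True, inferSchema=True)  # placeholder path

features = [c for c in df.columns if c != "label"]
data = VectorAssembler(inputCols=features, outputCol="features").transform(df)
train, test = data.randomSplit([0.8, 0.2], seed=7)

gbt = GBTClassifier(labelCol="label", featuresCol="features", maxIter=50)
pred = gbt.fit(train).transform(test)
acc = MulticlassClassificationEvaluator(labelCol="label",
                                        metricName="accuracy").evaluate(pred)
print("misclassification error rate:", 1 - acc)
```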
Procedia PDF Downloads 240
304 Electron Beam Melting Process Parameter Optimization Using Multi Objective Reinforcement Learning
Authors: Michael A. Sprayberry, Vincent C. Paquit
Abstract:
Process parameter optimization in metal powder bed electron beam melting (MPBEBM) is crucial to ensure the technology's repeatability, control, and continued industry adoption. Despite continued efforts to address the challenges via traditional design of experiments and process mapping techniques, there has been little success in developing an on-the-fly optimization framework that can be adapted to MPBEBM systems. Additionally, data-intensive physics-based modeling and simulation methods are difficult to support for every metal AM alloy or system due to cost restrictions. To mitigate the challenge of resource-intensive experiments and models, this paper introduces a Multi-Objective Reinforcement Learning (MORL) methodology that casts MPBEBM parameter selection as an optimization problem. An off-policy MORL framework based on policy gradients is proposed to discover optimal sets of beam power (P) and beam velocity (v) combinations that maintain a steady-state melt pool depth and phase transformation. For this, an experimentally validated Eagar-Tsai melt pool model is used to simulate the MPBEBM environment, where the beam acts as the agent across the P-v space, maximizing returns for the uncertain powder bed environment by producing a melt pool and phase transformation closer to the optimum. The culmination of the training process yields a set of process parameters {power, speed, hatch spacing, layer depth, and preheat}, where the state (P, v) with the highest returns corresponds to a refined process parameter mapping. The resulting mapping of returns onto the P-v space shows convergence with experimental observations. The framework therefore provides a model-free, multi-objective approach to discovery without the need for trial-and-error experiments.
Keywords: additive manufacturing, metal powder bed fusion, reinforcement learning, process parameter optimization
Procedia PDF Downloads 90
303 Computer-Aided Ship Design Approach for Non-Uniform Rational Basis Spline Based Ship Hull Surface Geometry
Authors: Anu S. Nair, V. Anantha Subramanian
Abstract:
This paper presents a surface development and fairing technique that combines a modern computer-aided design tool, the Non-Uniform Rational Basis Spline (NURBS), with an algorithm to obtain a rapidly faired hull form. Some of the older series-based designs give only a sectional area distribution, as in the Wageningen-Lap series; others, such as FORMDATA, give more comprehensive offset data points. Nevertheless, this basic data still requires fairing to obtain an acceptable hull form. This method uses a sectional area distribution as example input and arrives at the faired form. Characteristic section shapes define any general ship hull form in the entrance, parallel mid-body, and run regions. The method defines a minimum number of control points at each section, and using the golden section search method or the bisection method, the section shape converges to the one with the prescribed sectional area while minimizing the error in the area fit. The section shapes are combined to evolve the faired surface by NURBS, typically in 20 iterations. The advantage of the method is that it is fast and robust and evolves the faired hull form through minimal iterations. The curvature criterion check for the hull lines shows the evolution of a smooth faired surface. The method is applicable to hull forms from any parent series, and the evolved form can be evaluated for hydrodynamic performance, as is done in more modern design practice. The method can handle complex shapes such as the bulbous bow. The developed surface patches fit together at their common boundaries with curvature continuity and pass fairness checks. The development is coded in MATLAB, and an example illustrates the method. The most important advantage is the short runtime of the rapid iterative fairing of the hull form.
Keywords: computer-aided design, methodical series, NURBS, ship design
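A minimal Python sketch (the paper's code is in MATLAB) of the per-section area-matching step: bisection on a single fullness parameter of a parametric section until its area equals the prescribed sectional area. The section shape function, beam, and draft are illustrative assumptions.

```python
import numpy as np

def section_area(fullness, beam=10.0, draft=5.0, n=200):
    """Area of a symmetric section with half-breadth y(z) = (beam/2) * (z/draft)**fullness."""
    z = np.linspace(0.0, draft, n)
    y = (beam / 2.0) * (z / draft) ** fullness
    return 2.0 * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z))   # trapezoidal integration

def match_area(target, lo=0.05, hi=5.0, tol=1e-6):
    """Bisection on the fullness exponent; area decreases monotonically with fullness."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if section_area(mid) > target:
            lo = mid                    # section too full -> increase the exponent
        else:
            hi = mid
    return 0.5 * (lo + hi)

print("fullness exponent for a 35 m^2 section:", round(match_area(35.0), 4))
```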
Procedia PDF Downloads 169
302 Prediction of Fillet Weight and Fillet Yield from Body Measurements and Genetic Parameters in a Complete Diallel Cross of Three Nile Tilapia (Oreochromis niloticus) Strains
Authors: Kassaye Balkew Workagegn, Gunnar Klemetsdal, Hans Magnus Gjøen
Abstract:
In this study, the first objective was to investigate whether non-lethal, non-invasive methods utilizing body measurements could be used to efficiently predict fillet weight and fillet yield in a complete diallel cross of three Nile tilapia (Oreochromis niloticus) strains collected from three Ethiopian Rift Valley lakes: Lakes Ziway, Koka, and Chamo. The second objective was to estimate the heritability of body weight and of actual and predicted fillet traits, as well as the genetic correlations between these traits. A third goal was to estimate additive, reciprocal, and heterosis effects for body weight and the various fillet traits. Because early sexual maturation was widespread in females, only 958 male fish from 81 full-sib families were used, both for the prediction of fillet traits and in the genetic analysis. The prediction equations from body measurements were established by forward regression analysis, choosing models with the least predicted residual error sum of squares (PRESS). The results revealed that body measurements on live Nile tilapia are well suited to predicting fillet weight but not fillet yield (R² = 0.945 and 0.209, respectively), though both models were seemingly unbiased. The genetic analyses were carried out with bivariate, multibreed models. Body weight, fillet weight, and predicted fillet weight were all estimated with heritabilities ranging from 0.23 to 0.28 and with genetic correlations close to one. In contrast, fillet yield was heritable only to a minor degree (0.05), while predicted fillet yield obtained a heritability of 0.19, being the resultant of two body weight variables known to have high heritability. The latter trait was estimated with genetic correlations to the body weight and fillet weight traits larger than 0.82. No significant differences among strains were found in their additive genetic, reciprocal, or heterosis effects, while total heterosis effects were estimated as positive and significant (P < 0.05). In conclusion, prediction of fillet weight based on body measurements is possible, but not of fillet yield.
Keywords: additive, fillet traits, genetic correlation, heritability, heterosis, prediction, reciprocal
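A minimal sketch of forward regression guided by PRESS, the model-selection criterion named above; the body measurements and the response are synthetic placeholders, not the tilapia data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 5))            # e.g., length, height, width, girth, body weight
y = X @ np.array([0.2, 0.5, 0.1, 0.0, 1.1]) + rng.normal(0, 0.3, 120)

def press(cols):
    """Predicted residual error sum of squares for a predictor subset."""
    err = 0.0
    for tr, te in LeaveOneOut().split(X):
        m = LinearRegression().fit(X[np.ix_(tr, cols)], y[tr])
        err += (y[te][0] - m.predict(X[np.ix_(te, cols)])[0]) ** 2
    return err

chosen, remaining, best_press = [], list(range(X.shape[1])), np.inf
while remaining:                          # forward selection on PRESS
    scores = {j: press(chosen + [j]) for j in remaining}
    j = min(scores, key=scores.get)
    if scores[j] >= best_press:
        break                             # stop when PRESS no longer improves
    chosen.append(j); remaining.remove(j); best_press = scores[j]
print("selected predictors:", chosen, "PRESS:", round(best_press, 3))
```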
Procedia PDF Downloads 187
301 Model Based Design and Development of Horticultural Produce Crate from Bamboo
Authors: Sisay Wondmagegn Molla, Mulugeta Admasu Delele, Tadelle Nigusu Mekonen
Abstract:
Quality deterioration and mechanical injury of horticultural products are commonly observed as a result of suboptimal design and handling of packaging systems. Society uses old and primitive ways of handling horticultural products, developed through trial and error. This approach is known to have many limitations with respect to quality, environmental pollution, labor, and cost. Ethiopia stands first in bamboo resources in Africa, holding 67% of Africa's and 7% of the world's bamboo resources. The purpose of this project was to design and develop bamboo-based ventilated horticultural produce crates using validated computational fluid dynamics (CFD). The model was used to predict the airflow and temperature distribution inside the loaded crate. The study included sizing, collection of the thermo-physical properties, and design and development of a CFD model of the bamboo-based ventilated horticultural crate. The designed crate (40 × 30 × 25 cm) had a capacity of about 18 kg, and cold air at 13°C was used for cooling the fruit. Airflow in the loaded crate is far from uniform: there is relatively high-velocity flow at the top and near the inlet and outlet sections, and relatively low airflow near the center of the loaded crate. The predicted velocity variation within the bulk of the produce was relatively large, in the range of 0.04-7 m/s. The vented produce package contributed the highest cooling airflow resistance. Like the airflow, the cooling of the product was not uniform: there was a difference in the cooling rate of the produce in the airflow direction and from the top to the bottom section of the loaded crate. Products located near the inlet side and top of the bulk showed a faster cooling rate than the rest of the bulk. The results showed that the produce volume-averaged temperature was 17.9°C after a cooling period of 3 h, a reduction of 12.05°C. The results demonstrate the potential of the CFD modeling approach for developing bamboo-based horticultural produce crate designs in terms of airflow and heat transfer characteristics.
Keywords: bamboo, modeling, cooling, horticultural, packaging
Procedia PDF Downloads 25
300 Study of Properties of Concretes Made of Local Building Materials and Containing Admixtures, and Their Further Introduction in Construction Operations and Road Building
Authors: Iuri Salukvadze
Abstract:
The development of the Georgian economy largely depends on the effective use of its potential as a transit country. The value of Georgia as part of the Europe-Asia corridor has increased, raising the interest of western and eastern countries in Georgia as a country lying on the transit axis, which implies the creation and development of transit infrastructure in Georgia. It is important to use compacted concrete with additives in the modern road construction industry. Even in the 21st century, concrete remains the main vital structural building material; therefore, innovative, economical, and environmentally protective technologies are needed. The Georgian construction market requires the use of new-generation concretes and the adaptation of nanotechnologies to local realities, which will make it possible to create multifunctional, nanotechnological, highly effective materials. It is highly important to research their physical and mechanical properties. The study of compacted concrete with additives is necessary for its future use in road construction and for increasing the durability of roads in Georgia. The aim of the research is to study the physical-mechanical properties of compacted concrete with additives based on local materials. Any experimental study needs, on the one hand, a large number of experiments in order to achieve high accuracy and, on the other hand, an optimal number of experiments with minimal expense in the shortest period of time. To solve this problem in practice, it is possible to use statistical and mathematical methods of experiment planning. For the study of the material properties, we will use the hypothesis that measurement results are distributed according to the normal law, under which the divergence of the obtained results is caused by the error of the method and the inhomogeneity of the object. As a result of the study, we will obtain durable compacted concrete with additives for motor roads, which will improve the road infrastructure and yield savings during road construction and exploitation.
Keywords: construction, seismic protection systems, soil, motor roads, concrete
Procedia PDF Downloads 244
299 Dynamic Web-Based 2D Medical Image Visualization and Processing Software
Authors: Abdelhalim. N. Mohammed, Mohammed. Y. Esmail
Abstract:
In recent decades, medical imaging has been dominated by the use of costly film media for the review and archival of medical investigations; however, due to developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web has been produced. Web technologies have been used successfully in telemedicine applications, and here the combination of web technologies with DICOM is used to design a web-based, open-source DICOM viewer. The web server allows query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic web page for medical image visualization and processing is created using JavaScript and HTML5 technologies. The XAMPP 'Apache server' is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers a few advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, is platform-independent, allows images to be displayed and manipulated efficiently, and is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is applied, in which the 2-D discrete wavelet transform decomposes the image, and the wavelet coefficients are thresholded and transmitted with entropy encoding, decreasing transmission time, storage cost, and capacity. The performance of the compression was estimated using image quality metrics such as mean square error (MSE), peak signal-to-noise ratio (PSNR), and compression ratio (CR); a compression ratio of 83.86% was achieved when the 'coif3' wavelet filter was used.
Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN
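A minimal sketch of the compression pipeline described above using PyWavelets: coif3 decomposition, hard thresholding of the coefficients, and the MSE/PSNR quality metrics. The image is random data, and the decomposition level and threshold percentile are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

img = np.random.default_rng(0).uniform(0, 255, (256, 256))   # stand-in for a DICOM slice

coeffs = pywt.wavedec2(img, "coif3", level=3)        # 2-D DWT decomposition
arr, slices = pywt.coeffs_to_array(coeffs)
thr = np.percentile(np.abs(arr), 90)                 # keep the largest 10% of coefficients
arr_t = np.where(np.abs(arr) > thr, arr, 0.0)        # hard thresholding

rec = pywt.waverec2(pywt.array_to_coeffs(arr_t, slices, output_format="wavedec2"),
                    "coif3")[:256, :256]
mse = np.mean((img - rec) ** 2)
psnr = 10 * np.log10(255**2 / mse)
zeroed = 100 * (1 - np.count_nonzero(arr_t) / arr_t.size)   # % coefficients discarded
print(f"MSE={mse:.2f}, PSNR={psnr:.2f} dB, zeroed={zeroed:.1f}%")
```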
Procedia PDF Downloads 160
298 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis
Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab
Abstract:
Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on the peak plantar pressure distribution and the center of pressure during walking. Methodology: 2,323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and participants were then asked to walk on a force plate scanner. After data preprocessing, to account for differences in walking time and foot size, we normalized the samples based on time and foot size. Some of the force plate variables were selected as input to a deep neural network (DNN), and the probability of each foot disorder was measured. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (classification of yes or no). We compared the DNN and SVM for foot disorder prediction based on plantar pressure distributions and the center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach was more accurate for prediction, enabling applications in foot disorder diagnosis. The detection accuracy was 71% with the deep learning algorithm and 78% with the SVM algorithm. Moreover, working with the peak plantar pressure distribution was more accurate than using the center of pressure dataset. Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients improve the data pool and enhance foot disorder prediction with less expense and error once some restrictions are properly removed.
Keywords: deep neural network, foot disorder, plantar pressure, support vector machine
Procedia PDF Downloads 357297 Particle Swarm Optimization Based Vibration Suppression of a Piezoelectric Actuator Using Adaptive Fuzzy Sliding Mode Controller
Authors: Jin-Siang Shaw, Patricia Moya Caceres, Sheng-Xiang Xu
Abstract:
This paper integrates the particle swarm optimization (PSO) method with an adaptive fuzzy sliding mode controller (AFSMC) to achieve vibration attenuation in a piezoelectric actuator subject to base excitation. The piezoelectric actuator is a complicated system made of ferroelectric materials, and its performance can be affected by a nonlinear hysteresis loop, unknown system parameters, and external disturbances. In this study, an adaptive fuzzy sliding mode controller is proposed for vibration control of the system: the fuzzy sliding mode controller is designed to handle the unknown parameters and external disturbances, and the adaptive algorithm fine-tunes the controller so that the error converges. The particle swarm optimization method is used to find the optimal controller parameters for the piezoelectric actuator. PSO starts with a population of random candidate solutions, called particles. The particles move through the search space with dynamically adjusted speed and direction that change according to their historical behavior, allowing the particle values to converge quickly toward the best solutions for the problem at hand. In this paper, an initial set of controller parameters is applied to the piezoelectric actuator, which is subject to resonant base excitation with large-amplitude vibration. The resulting vibration suppression is about 50%. PSO is then applied to search for an optimal controller in the neighborhood of this initial controller. The optimal fuzzy sliding mode controller found by PSO improves vibration attenuation to 97.8%. Finally, the adaptive version of the fuzzy sliding mode controller is adopted to further improve vibration suppression. Simulation results verify the performance of the adaptive controller, with 99.98% vibration reduction; that is, the vibration of the piezoelectric actuator subject to resonant base excitation can be almost completely eliminated using this PSO-based adaptive fuzzy sliding mode controller.Keywords: adaptive fuzzy sliding mode controller, particle swarm optimization, piezoelectric actuator, vibration suppression
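The PSO loop the abstract describes is the standard formulation; a minimal self-contained Python sketch follows. The plant, the AFSMC, and the cost function are not reproduced here, so the placeholder cost (distance of a gain vector from a fictitious optimum, standing in for the RMS vibration returned by a real simulation run) and all parameter values are assumptions.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=100, lo=-1.0, hi=1.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))  # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # personal bests
    pbest_f = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia plus pulls toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Placeholder cost standing in for a vibration-suppression simulation.
target_gains = np.array([0.3, -0.2, 0.5])
cost = lambda p: float(np.sum((p - target_gains) ** 2))
best, best_f = pso(cost, dim=3)
```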
Procedia PDF Downloads 146296 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure
Authors: Esra Zengin, Sinan Akkar
Abstract:
Reliable and accurate prediction of nonlinear structural response requires the specification of appropriate earthquake ground motions for nonlinear time history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in performance-based seismic design and assessment of structures. Using amplitude-scaled ground motions that match a target spectrum is a common technique for estimating nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum such as a scenario-based spectrum derived from ground motion prediction equations, the Uniform Hazard Spectrum (UHS), the Conditional Mean Spectrum (CMS), or the Conditional Spectrum (CS). Different sets of criteria exist among these methodologies for selecting and scaling ground motions with the objective of obtaining a robust estimate of structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand together with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and the corresponding variance within a specified period interval. An efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on minimizing the error between the scaled median and the target spectrum while the dispersion of the earthquake shaking is preserved along the period interval. The impact of spectral variability on the nonlinear response distribution is investigated at the level of inelastic single-degree-of-freedom systems. To see the effect of different selection and scaling methodologies on fragility curve estimates, the results are compared with those obtained by the CMS-based scaling methodology. The variability in fragility curves due to the consideration of dispersion in the ground motion selection process is also examined.Keywords: ground motion selection, scaling, uncertainty, fragility curve
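To make the scaling stage concrete, the sketch below computes, for a candidate record set, a single scale factor that minimizes the squared log-error between the set's median spectrum and the target median over a period interval; in log space this least-squares fit has a closed form. This is a simplified reading under stated assumptions, the variable names are invented, and the paper's actual algorithm additionally controls the dispersion, which is not shown here.

```python
import numpy as np

def best_scale_factor(spectra, target_median):
    """spectra: (n_records, n_periods) spectral accelerations on a common
    period grid; target_median: (n_periods,) target median spectrum."""
    # Median of the record set under a lognormal assumption = geometric mean.
    set_median = np.exp(np.mean(np.log(spectra), axis=0))
    # Minimizing sum_T (ln(s * median(T)) - ln(target(T)))^2 over s gives:
    return float(np.exp(np.mean(np.log(target_median) - np.log(set_median))))

rng = np.random.default_rng(1)
spectra = np.exp(rng.normal(0.0, 0.5, size=(11, 40)))  # 11 candidate records
target = np.exp(rng.normal(0.3, 0.1, size=40))
s = best_scale_factor(spectra, target)
```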
Procedia PDF Downloads 583295 Hidden Hot Spots: Identifying and Understanding the Spatial Distribution of Crime
Authors: Lauren C. Porter, Andrew Curtis, Eric Jefferis, Susanne Mitchell
Abstract:
A wealth of research has examined the variation in crime across neighborhoods. However, there is also a striking degree of crime concentration within neighborhoods. A number of studies show that a small percentage of street segments, intersections, or addresses accounts for a large portion of crime. Not surprisingly, a focus on these crime hot spots can be an effective strategy for reducing community-level crime and related ills, such as health problems. However, the research is also limited in an important respect. Studies tend to use official data to identify hot spots, such as 911 calls or calls for service. While call data may be more representative of the actual level and distribution of crime than some other official measures (e.g., arrest data), call data still suffer from the 'dark figure of crime.' That is, there is most certainly a degree of error between the crimes that occur and the crimes that are reported to the police. In this study, we present an alternative method of identifying crime hot spots that does not rely on official data. In doing so, we highlight the potential utility of neighborhood insiders for identifying and understanding crime dynamics within geographic spaces. Specifically, we use spatial video and geo-narratives to record the crime insights of 36 police officers, ex-offenders, and residents of a high-crime neighborhood in northeast Ohio. Spatial mentions of crime are mapped to identify participant-identified hot spots, and these are juxtaposed with calls for service (CFS) data. While there are bound to be differences between these two sources of data, we find that one location in particular, a corner store, emerges as a hot spot for all three groups of participants. Yet it does not emerge when we examine the CFS data. A closer examination of the space around this corner store and a qualitative analysis of the narrative data reveal important clues as to why this store may indeed be a hot spot yet not generate disproportionate calls to the police. In short, our results suggest that researchers who rely solely on official data to study crime hot spots may risk missing some of the most dangerous places.Keywords: crime, narrative, video, neighborhood
Procedia PDF Downloads 238294 Computational Fluid Dynamics (CFD) Simulations of Air Pollutant Dispersion: Validation of Fire Dynamics Simulator Against the CUTE Experiments of the COST ES1006 Action
Authors: Virginie Hergault, Siham Chebbah, Bertrand Frere
Abstract:
Following in-house objectives, the Central Laboratory of the Paris Police Prefecture (LCPP) conducted a general review of the models and Computational Fluid Dynamics (CFD) codes used to simulate pollutant dispersion in the atmosphere. Starting from that review and considering the main features of Large Eddy Simulation, LCPP postulated that the Fire Dynamics Simulator (FDS) model, from the National Institute of Standards and Technology (NIST), should be well suited for air pollutant dispersion modeling. This paper focuses on the implementation and evaluation of FDS within the framework of the European COST ES1006 Action, which aimed at quantifying the performance of modeling approaches. The CUTE dataset, acquired in the city of Hamburg and on its mock-up, has been used. We compared FDS results with wind tunnel measurements from the CUTE trials on the one hand and with the results of the models involved in the COST Action on the other. The most time-consuming part of creating input data for the simulations is transferring obstacle geometry information to the format required by FDS; we have therefore developed Python code to automatically convert building and topographic data to the FDS input file. To evaluate the FDS predictions against the observations, statistical performance measures have been used. These metrics include the fractional bias (FB), the normalized mean square error (NMSE), and the fraction of predictions within a factor of two of the observations (FAC2). Like the CFD models tested in the COST Action, FDS demonstrates good agreement with the measured concentrations. Furthermore, the metric assessment indicates that FB and NMSE fall within acceptable tolerances.Keywords: numerical simulations, atmospheric dispersion, COST ES1006 action, CFD model, CUTE experiments, wind tunnel data, numerical results
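The three validation metrics named above have standard definitions in the atmospheric dispersion model evaluation literature (e.g., Chang and Hanna); a direct Python transcription follows. Paired, positive arrays of observed and predicted concentrations are assumed.

```python
import numpy as np

def fractional_bias(obs, pred):
    """FB: normalized difference of means; 0 means no bias."""
    return (obs.mean() - pred.mean()) / (0.5 * (obs.mean() + pred.mean()))

def nmse(obs, pred):
    """NMSE: mean squared error normalized by the product of the means."""
    return np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())

def fac2(obs, pred):
    """FAC2: fraction of predictions within a factor of two of observations."""
    ratio = pred / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))
```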
Procedia PDF Downloads 133293 Wireless FPGA-Based Motion Controller Design by Implementing 3-Axis Linear Trajectory
Authors: Kiana Zeighami, Morteza Ozlati Moghadam
Abstract:
Designing a high-accuracy, high-precision motion controller is an important issue in today's industry. Effective solutions are available in the industry, but the real-time performance, smoothness, and accuracy of the movement can be further improved. This paper discusses a complete solution for carrying out the movement of three stepper motors in three dimensions. The objective is to provide a method to design a fully integrated System-on-Chip (SOC)-based motion controller that reduces the cost and complexity of production by incorporating a Field Programmable Gate Array (FPGA) into the design. In the proposed method, the FPGA receives its commands from a host computer via wireless internet communication and calculates the motion trajectory for three axes. A profile generator module is designed to realize the interpolation algorithm by translating position data into real-time pulses. This paper discusses an approach to implementing the linear interpolation algorithm, since it is one of the fundamentals of robot movement and is highly applicable in the motion control industry. Along with the full profile trajectory, a triangular drive is implemented to eliminate error at small distances. To integrate the parallelism and real-time performance of the FPGA with the power of a Central Processing Unit (CPU) in executing complex and sequential algorithms, the NIOS II soft-core processor was added to the design. This paper presents different operating modes, such as absolute and relative positioning, reset, and velocity modes, to fulfill user requirements. The proposed approach was evaluated by designing a custom-made FPGA board along with a mechanical structure. As a result, precise and smooth movement of the stepper motors was observed, which proved the effectiveness of this approach.Keywords: 3-axis linear interpolation, FPGA, motion controller, micro-stepping
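For illustration, the following Python sketch shows the kind of 3-axis linear interpolation a profile generator performs: a DDA/Bresenham-style loop that always pulses the dominant axis and conditionally pulses the others, so all three axes reach the target simultaneously along a straight line. This is a software rendering of the algorithm only, under the simplifying assumption of non-negative step counts; the paper implements the equivalent logic in FPGA hardware.

```python
def linear_interpolate(dx, dy, dz):
    """Yield (step_x, step_y, step_z) pulse triples for a move of
    (dx, dy, dz) steps, assumed non-negative here for brevity."""
    steps = max(dx, dy, dz)          # iteration count = dominant-axis steps
    err = [steps // 2] * 3           # per-axis error accumulators
    for _ in range(steps):
        pulse = [0, 0, 0]
        for axis, delta in enumerate((dx, dy, dz)):
            err[axis] -= delta
            if err[axis] < 0:        # accumulator underflow => emit a pulse
                err[axis] += steps
                pulse[axis] = 1
        yield tuple(pulse)

# Example: a (10, 4, 7)-step move produces 10 pulse triples.
pulses = list(linear_interpolate(10, 4, 7))
assert sum(p[0] for p in pulses) == 10
assert sum(p[1] for p in pulses) == 4
```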
Procedia PDF Downloads 208