Search results for: threshold models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7208

6008 A Comparison of YOLO Family for Apple Detection and Counting in Orchards

Authors: Yuanqing Li, Changyi Lei, Zhaopeng Xue, Zhuo Zheng, Yanbo Long

Abstract:

In agricultural production and breeding, implementing an automatic picking robot in orchard farming to reduce human labour and error is challenging. Its core function is automatic identification based on machine vision. This paper focuses on apple detection and counting in orchards and implements several deep learning methods. Extensive datasets are used, and a semi-automatic annotation method is proposed. The proposed deep learning models belong to the state-of-the-art YOLO family. Given the different backbones underlying the models, a detailed multi-dimensional comparison is made in terms of counting accuracy, mAP, and model memory, laying the foundation for realising automatic precision agriculture.
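
As an illustration of the counting-accuracy comparison described above, here is a minimal sketch; the abstract does not specify the exact metric, so the per-image formula and the counts below are assumptions:

```python
import numpy as np

def counting_accuracy(true_counts, pred_counts):
    """Per-image counting accuracy, 1 - |pred - true| / true, averaged.
    This metric definition is assumed for illustration; the paper does
    not state its exact formula."""
    t = np.asarray(true_counts, dtype=float)
    p = np.asarray(pred_counts, dtype=float)
    return float(np.mean(1.0 - np.abs(p - t) / t))

# Hypothetical per-image apple counts: ground truth vs. one YOLO variant
truth = [42, 57, 31, 88]
preds = [40, 59, 30, 83]
print(f"counting accuracy: {counting_accuracy(truth, preds):.3f}")
```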

Keywords: agricultural object detection, deep learning, machine vision, YOLO family

Procedia PDF Downloads 185
6007 Measuring Housing Quality Using Geographic Information System (GIS)

Authors: Silvija Šiljeg, Ante Šiljeg, Ivan Marić

Abstract:

Measuring housing quality is done on an objective and a subjective level using different indicators. In this research, 5 urban and housing indicators, formed from 58 variables across different housing domains, were used. The aims of the research were to measure housing quality based on a GIS approach and to detect critical points of housing in the example of the Croatian coastal town of Zadar. The purposes of GIS in the research are to generate models of housing quality indexes by standardisation and aggregation of variables and to examine the accuracy of the housing quality index model. The accuracy analysis was carried out on the example of a variable referring to the availability of educational facilities. By defining weighted coefficients and using different GIS methods, high, middle, and low housing quality zones were determined. The results obtained can be of use to town planners, spatial planners, and town authorities in the process of generating decisions, guidelines, and spatial interventions.
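
A minimal sketch of the index-generation step described above (standardisation, weighted aggregation, zoning); the variables, weights, and tertile-based zoning are illustrative assumptions, not the study's actual 58 variables or coefficients:

```python
import numpy as np

# Hypothetical matrix: rows = spatial units, columns = housing variables
# (e.g., distance to schools in m, green space share); placeholder values.
X = np.array([[300.0, 0.20],
              [1200.0, 0.05],
              [650.0, 0.35]])
weights = np.array([0.6, 0.4])     # expert-defined weighted coefficients
benefit = np.array([False, True])  # False: lower is better (a distance)

# Min-max standardisation, inverting cost-type variables
Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
Z[:, ~benefit] = 1.0 - Z[:, ~benefit]

index = Z @ weights                # aggregated housing quality index
zones = np.digitize(index, np.quantile(index, [1/3, 2/3]))  # 0=low, 1=middle, 2=high
print(index.round(3), zones)
```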

Keywords: housing quality, GIS, housing quality index, indicators, models of housing quality

Procedia PDF Downloads 283
6006 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections

Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette

Abstract:

A rotary is a traffic circle intersection where vehicles entering from branches give priority to circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exiting branch. This creates merging and diverging conflicts between any entry and its successive exit, i.e., a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in any section of the circle, and the maximum rate of flow value is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not yield the typical performance characteristics of the intersection, such as the entry average delay, which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on the limitation of the flow entering from a single entrance due to the amount of flow circulating in front of the entrance itself. Modern roundabout capacity models also generally lead to a performance evaluation. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method to obtain from the latter the single entry capacity and ultimately achieve the related performance indicators. Put simply, the main objective is to calculate the average delay of each single roundabout entrance in order to apply the most common Highway Capacity Manual, or HCM, criteria. The paper is organized as follows: firstly, the rotary and roundabout capacity models are sketched, and a brief introduction to the model combination technique is given with some practical instances. The next section summarizes the old TRRL rotary capacity model and the most recent HCM-7th modern roundabout capacity model. Then, the two models are combined through an iteration-based algorithm, specially set up and linked to the concept of roundabout total capacity, i.e., the value reached under a traffic flow pattern leading to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, by which its respective level of service is estimated. In view of further experimental applications, at this research stage, a collection of existing rotary intersections operating with the priority-to-circle rule has already been started, both in the US and in Italy. The rotaries have been selected by direct inspection of aerial photos through a map viewer, namely Google Earth. Each instance has been recorded by location, urban or rural setting, and its main geometrical patterns. Finally, concluding remarks are drawn, and a discussion on some further research developments is opened.
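
A minimal sketch of the kind of iterative coupling described above: an exponential entry-capacity model in the spirit of the HCM roundabout formulation feeding an HCM-style control-delay formula. The coefficients, turning pattern, and demands are placeholders, not the calibrated TRRL or HCM-7th values:

```python
import numpy as np

# Illustrative single-lane capacity model, c = A * exp(-B * v_c) (veh/h);
# A and B are placeholder coefficients, not the calibrated HCM-7th values.
A, B = 1380.0, 1.02e-3

def capacity(v_circ):
    return A * np.exp(-B * v_circ)

def control_delay(v, c, T=0.25):
    # HCM-style control delay (s/veh) for an analysis period of T hours
    x = v / c
    return (3600/c + 900*T*((x-1) + np.sqrt((x-1)**2 + (3600/c)*x/(450*T)))
            + 5*min(x, 1.0))

# Hypothetical 4-leg rotary: entry demands and a simple circulation pattern
demand = np.array([600.0, 450.0, 700.0, 500.0])
P = 0.5  # assumed share of each entry flow circulating past the next entry

served = demand.copy()
for _ in range(50):                   # fixed-point iteration over entries
    v_circ = P * np.roll(served, 1)   # flow circulating in front of each entry
    cap = capacity(v_circ)
    served = np.minimum(demand, cap)  # entries cannot exceed their capacity

for i, (v, c) in enumerate(zip(demand, cap)):
    print(f"entry {i}: capacity={c:.0f} veh/h, delay={control_delay(v, c):.1f} s")
```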

Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation

Procedia PDF Downloads 65
6005 Improving Cyclability and Capacity of Lithium Oxygen Batteries via Low Rate Pre-Activation

Authors: Zhihong Luo, Guangbin Zhu, Lulu Guo, Zhujun Lyu, Kun Luo

Abstract:

Cycling life has become the threshold for the prospective application of Li-O₂ batteries, and the protection of the Li anode has recently been regarded as the key factor for performance. Herein, a simple low rate pre-activation (20 cycles at 0.5 A g⁻¹ and a capacity of 200 mAh g⁻¹) was employed to effectively improve the performance and cyclability of Li-O₂ batteries. The charge/discharge cycles at 1 A g⁻¹ with a capacity of 1000 mAh g⁻¹ were maintained for up to 290 cycles, versus 55 cycles for the cell without pre-activation. The ultimate battery capacity and high rate discharge property were also largely enhanced. Morphology, XRD, and XPS analyses reveal that the performance improvement is closely associated with the smooth and compact surface layer formed on the Li anode after low rate pre-activation, which apparently alleviated the corrosion of the Li anode and the passivation of the cathode during battery cycling; the corresponding mechanism is also discussed.

Keywords: lithium oxygen battery, pre-activation, cyclability, capacity

Procedia PDF Downloads 143
6004 Application of Stochastic Models on the Portuguese Population and Distortion to Workers Compensation Pensioners Experience

Authors: Nkwenti Mbelli Njah

Abstract:

This research was motivated by a project requested by AXA on the topic of pensions payable under the workers compensation (WC) line of business. There are two types of pensions: the compulsorily recoverable and the not compulsorily recoverable. A pension is compulsorily recoverable for a victim when there is less than 30% disability and the pension amount per year is less than six times the minimal national salary. The law defines that the mathematical provisions for compulsorily recoverable pensions must be calculated by applying the following bases: mortality table TD88/90 and a rate of interest of 5.25% (possibly including a management rate). Managing pensions which are not compulsorily recoverable is a more complex task because the technical bases are not defined by law and much more complex computations are required. In particular, companies have to predict the discounted amount of payments, reflecting the mortality effect, for all pensioners (this task is monitored monthly in AXA). The purpose of this research was thus to develop a stochastic model for the future mortality of the workers compensation pensioners of both the Portuguese market and the AXA portfolio. Not only is past mortality modeled, but projections of future mortality are also made for the general population of Portugal as well as for the two portfolios mentioned earlier. The global model was split into two parts: a stochastic model for population mortality which allows for forecasts, combined with a point estimate from a portfolio mortality model obtained through three different relational models (Cox Proportional, Brass Linear and Workgroup PLT). The one-year death probabilities for ages 0-110 for the period 2013-2113 are obtained for the general population and the portfolios. These probabilities are used to compute different life table functions as well as the not compulsorily recoverable reserves for each of the models, for the pensioners, their spouses and children under 21. The results obtained are compared with the not compulsorily recoverable reserves computed using the static mortality table (TD 73/77) that is currently being used by AXA, to see the impact on this reserve if AXA adopted the dynamic tables.
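
A minimal sketch of the reserve computation implied above: given one-year death probabilities from a mortality model, discount a pension annuity at the legal 5.25% rate. The q_x values, the truncated horizon, and the pension amount are hypothetical:

```python
import numpy as np

# Hypothetical one-year death probabilities q_x for a pensioner; in the
# study these would come from the stochastic/relational mortality models,
# and the table would run to age 110 rather than being truncated here.
qx = np.array([0.008, 0.009, 0.010, 0.011, 0.013, 0.015, 0.017, 0.020])

i = 0.0525                 # legal technical interest rate of 5.25%
v = 1.0 / (1.0 + i)

# Survival probabilities k_p_x and the expected present value of a yearly
# pension of 1, paid in advance while the pensioner is alive (annuity-due)
kpx = np.concatenate(([1.0], np.cumprod(1.0 - qx)))
annuity = sum(v**k * p for k, p in enumerate(kpx))

pension_per_year = 6000.0  # hypothetical annual pension amount
reserve = pension_per_year * annuity
print(f"mathematical provision ≈ {reserve:,.0f}")
```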

Keywords: compulsorily recoverable, life table functions, relational models, worker’s compensation pensioners

Procedia PDF Downloads 155
6003 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation

Authors: Jonathan Gong

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Algorithms designed to improve the experience of medical professionals within their respective fields can increase the efficiency and accuracy of diagnosis. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. This underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has expressed the possibility that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. The models are evaluated on the external dataset, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
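
A minimal transfer-learning sketch of the DenseNet201 branch described above, using torchvision; the autoencoder branch, the exact classification head, and the preprocessing are not specified in the abstract, so the head architecture here is an assumption:

```python
import torch
import torch.nn as nn
from torchvision.models import densenet201, DenseNet201_Weights

# Pre-trained DenseNet201 backbone; replace the classifier with a 3-class
# head (COVID-19 / pneumonia / normal). The paper also fuses autoencoder
# features into the head; that branch is omitted in this sketch.
model = densenet201(weights=DenseNet201_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False            # freeze backbone for transfer learning
model.classifier = nn.Sequential(      # hypothetical head, not the paper's
    nn.Linear(1920, 256), nn.ReLU(), nn.Dropout(0.3), nn.Linear(256, 3))

x = torch.randn(4, 3, 224, 224)        # a batch of preprocessed chest X-rays
logits = model(x)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 1]))
loss.backward()                        # only the head receives gradients
```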

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 115
6002 Emerging Technologies in Distance Education

Authors: Eunice H. Li

Abstract:

This paper discusses and analyses a small portion of the literature that has been reviewed for research work in Distance Education (DE) pedagogies that I am currently undertaking. It begins by presenting a brief overview of Taylor's (2001) five-generation models of Distance Education. The focus of the discussion will be on the 5th generation, the Intelligent Flexible Learning Model. For this generation, educational and other institutions make portal access and interactive multi-media (IMM) an integral part of their operations. The paper then takes a brief look at current trends in technologies, for example, smart-watch wearable technology such as the Apple Watch. The emergent trends in technologies carry many new features. These are compared to the features of former DE generations. Also compared is the time span that has elapsed between the generations referred to in Taylor's model. This paper is a work in progress and therefore welcomes new insights, comparisons, and critiques of the issues discussed.

Keywords: distance education, e-learning technologies, pedagogy, generational models

Procedia PDF Downloads 449
6001 A Comparative Study of Optimization Techniques and Models to Forecasting Dengue Fever

Authors: Sudha T., Naveen C.

Abstract:

Dengue is a serious public health issue that causes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating for a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of sudden disease outbreak control efforts. The National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention are two of the U.S. Federal Government agencies from which this study uses environmental data. Based on environmental data that describe changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, which includes handling outliers and missing values to make sure the data is ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. During the third phase of the research, machine learning models like the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, the models' performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) as metrics. The goal is to select the optimization strategy with the smallest error, lowest cost, highest productivity, or best potential results. In a variety of industries, including engineering, science, management, mathematics, finance, and medicine, optimization is widely employed. An effective optimization method based on harmony search and an integrated genetic algorithm is introduced for input feature selection, and it shows an important improvement in the model's predictive accuracy. The predictive models built on the Huber Regressor perform best for both optimization and prediction.
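
A minimal sketch of the third and fourth stages described above, fitting a Gradient Boosting Regressor and reporting MSE, MAE, and RMSE; the features and targets are synthetic stand-ins, and the Harmony Search/Genetic Algorithm feature selection is omitted:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Hypothetical weekly environmental features (temperature, precipitation,
# vegetation index, ...) and dengue case counts.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = 50 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
gbr = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = gbr.predict(X_te)

mse = mean_squared_error(y_te, pred)
print(f"MSE={mse:.2f}  MAE={mean_absolute_error(y_te, pred):.2f}  "
      f"RMSE={np.sqrt(mse):.2f}")
```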

Keywords: deep learning model, dengue fever, prediction, optimization

Procedia PDF Downloads 43
6000 Scalable Learning of Tree-Based Models on Sparsely Representable Data

Authors: Fares Hedayatit, Arnauld Joly, Panagiotis Papadimitriou

Abstract:

Many machine learning tasks such as text annotation usually require training over very big datasets, e.g., millions of web documents, that can be represented in a sparse input space. State-of-the-art tree-based ensemble algorithms cannot scale to such datasets, since they include operations whose running time is a function of the input space size rather than a function of the non-zero input elements. In this paper, we propose an efficient splitting algorithm to leverage input sparsity within decision tree methods. Our algorithm improves training time over sparse datasets by more than two orders of magnitude and it has been incorporated in the current version of scikit-learn.org, the most popular open source Python machine learning library.
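
A minimal sketch of the usage pattern this enables in scikit-learn: training a tree-based ensemble directly on a sparse CSR matrix, so that running time scales with the non-zero entries. The data here are random stand-ins for, e.g., bag-of-words features:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.ensemble import RandomForestClassifier

# A sparse CSR matrix standing in for bag-of-words features of web
# documents: 10,000 samples, 50,000 features, 0.1% non-zeros.
rng = np.random.default_rng(0)
X = sparse_random(10_000, 50_000, density=0.001, format="csr", random_state=0)
y = rng.integers(0, 2, size=10_000)

# scikit-learn's tree-based ensembles accept CSR input directly, so the
# splitter works on non-zero entries instead of the full dense space.
clf = RandomForestClassifier(n_estimators=20, n_jobs=-1, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```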

Keywords: big data, sparsely representable data, tree-based models, scalable learning

Procedia PDF Downloads 252
5999 Numerical Simulation and Experimental Validation of the Tire-Road Separation in Quarter-car Model

Authors: Quy Dang Nguyen, Reza Nakhaie Jazar

Abstract:

The paper investigates the vibration dynamics of tire-road separation for a quarter-car model; this separation model is developed to be close to the real situation, considering that the tire is able to separate from the ground plane. A set of piecewise linear mathematical models is developed, matching the in-contact and no-contact states, to serve as mother models for further investigations. The bound dynamics are numerically simulated in the time response and phase portraits. The separation analysis may determine which values of the suspension parameters can delay and avoid the no-contact phenomenon, which results in improving ride comfort and eliminating potentially dangerous oscillations. Finally, model verification is carried out in the MSC-ADAMS environment.
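
A minimal sketch of a piecewise quarter-car model with a unilateral tire force, in the spirit of the in-contact/no-contact states described above; the parameters and road profile are placeholders, and static gravity deflection is omitted for simplicity:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative quarter-car parameters (sprung/unsprung mass, suspension
# stiffness/damping, tire stiffness); values are placeholders.
ms, mu, ks, cs, kt = 300.0, 40.0, 20_000.0, 1_500.0, 180_000.0

def road(t):                        # harmonic road profile
    return 0.02 * np.sin(8.0 * t)

def rhs(t, s):
    xs, vs, xu, vu = s
    f_susp = ks * (xu - xs) + cs * (vu - vs)
    # Unilateral tire force: the tire can only push; a negative (tensile)
    # value would mean tire-road separation, so it is clipped to zero.
    f_tire = max(kt * (road(t) - xu), 0.0)
    return [vs, f_susp / ms, vu, (f_tire - f_susp) / mu]

sol = solve_ivp(rhs, (0.0, 5.0), [0, 0, 0, 0], max_step=1e-3)
separated = kt * (road(sol.t) - sol.y[2]) < 0.0
print(f"fraction of time with tire-road separation: {separated.mean():.2%}")
```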

Keywords: quarter-car vibrations, tire-road separation, separation analysis, separation dynamics, ride comfort, ADAMS validation

Procedia PDF Downloads 77
5998 Nonlinear Optics of Dirac Fermion Systems

Authors: Vipin Kumar, Girish S. Setlur

Abstract:

Graphene has been recognized as a promising 2D material with many new properties. However, pristine graphene is gapless, which hinders its direct application towards graphene-based semiconducting devices. Graphene is a zero-gap, linearly dispersing semiconductor. Massless charge carriers (quasi-particles) in graphene obey the relativistic Dirac equation. These Dirac fermions show very unusual physical properties, such as electronic, optical, and transport properties. Graphene is analogous to two-level atomic systems and conventional semiconductors. We may therefore expect that graphene-based systems will also exhibit phenomena that are well-known in two-level atomic systems and in conventional semiconductors. Rabi oscillation is a nonlinear optical phenomenon well-known in the context of two-level atomic systems and also in conventional semiconductors. It is the periodic exchange of energy between the system of interest and the electromagnetic field. The present work describes the phenomenon of Rabi oscillations in graphene-based systems. Rabi oscillations have already been described theoretically and experimentally in the extensive literature available on this topic. These descriptions use an approximation known as the rotating wave approximation (RWA), well-known in studies of two-level systems. RWA is valid only near conventional resonance (small detuning), when the frequency of the external field is nearly equal to the particle-hole excitation frequency. The Rabi frequency goes through a minimum close to conventional resonance as a function of detuning. Far from conventional resonance, the RWA becomes rather less useful, and we need some other technique to describe the phenomenon of Rabi oscillation. In conventional systems, there is no second minimum; the only minimum is at conventional resonance. But in graphene we find anomalous Rabi oscillations far from conventional resonance, where the Rabi frequency goes through a minimum that is much smaller than the conventional Rabi frequency. This is known as the anomalous Rabi frequency and is unique to graphene systems. We have shown that this is attributable to the pseudo-spin degree of freedom in graphene systems. A new technique, an alternative to RWA called the asymptotic RWA (ARWA), has been invoked by our group to discuss the phenomenon of Rabi oscillation. The experimentally accessible current density shows different types of threshold behaviour in the frequency domain close to the anomalous Rabi frequency, depending on the system chosen. For single layer graphene, the exponent at threshold is equal to 1/2, while in the case of bilayer graphene, it is computed to be equal to 1. Bilayer graphene shows harmonic (anomalous) resonances absent in single layer graphene. The effect of asymmetry and trigonal warping (a weak direct inter-layer hopping in bilayer graphene) on these oscillations is also studied. Asymmetry has a remarkable effect only on anomalous Rabi oscillations, whereas the Rabi frequency near conventional resonance is not significantly affected by the asymmetry parameter. In the presence of asymmetry, these graphene systems show Rabi-like oscillations (offset oscillations) even for vanishingly small applied field strengths (less than the gap parameter). The frequency of offset oscillations may be identified with the asymmetry parameter.
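
For context, a minimal statement of the standard two-level (RWA) result referenced above, with the generalized Rabi frequency minimized at conventional resonance; the symbols (detuning Δ, dipole matrix element d, field amplitude E₀) follow the usual two-level conventions, not any graphene-specific derivation from the paper:

```latex
\Omega_{\mathrm{gen}} = \sqrt{\Delta^{2} + \Omega_{R}^{2}},
\qquad \Delta = \omega - \omega_{0},
\qquad \Omega_{R} = \frac{d\,E_{0}}{\hbar}
```

At Δ = 0 this reduces to the bare Rabi frequency Ω_R; the anomalous Rabi oscillations described in the abstract arise precisely where this RWA picture breaks down.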

Keywords: graphene, bilayer graphene, Rabi oscillations, Dirac fermion systems

Procedia PDF Downloads 285
5997 Impact of Integrated Signals for Doing Human Activity Recognition Using Deep Learning Models

Authors: Milagros Jaén-Vargas, Javier García Martínez, Karla Miriam Reyes Leiva, María Fernanda Trujillo-Guerrero, Francisco Fernandes, Sérgio Barroso Gonçalves, Miguel Tavares Silva, Daniel Simões Lopes, José Javier Serrano Olmedo

Abstract:

Human Activity Recognition (HAR) is having a growing impact on the creation of new applications and is responsible for emerging new technologies. Also, the use of wearable sensors is an important key to exploring the human body's behavior when performing activities, since these devices are less invasive and more comfortable for the person. In this study, a database that includes three activities is used. The activities were acquired from inertial measurement unit (IMU) sensors and motion capture (MOCAP) systems. The main objective is to differentiate the performance of four Deep Learning (DL) models: Deep Neural Network (DNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and the hybrid Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) model, when considering acceleration, velocity, and position, and to evaluate whether integrating the IMU acceleration to obtain velocity and position yields an increase in performance when used as input to the DL models, compared with the same types of data provided by the MOCAP system. Although the acceleration data are cleaned before integration, the results show only a minimal increase in accuracy for the integrated signals.
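
A minimal sketch of the signal-integration step described above: obtaining velocity and position from IMU acceleration by cumulative trapezoidal integration and stacking them as input channels for a DL model. The sampling rate and signal are hypothetical:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Hypothetical 50 Hz IMU acceleration for one axis; in the study this
# would be the cleaned accelerometer signal of each activity window.
fs = 50.0
t = np.arange(0, 4, 1 / fs)
acc = np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

acc = acc - acc.mean()                 # crude bias/drift removal
vel = cumulative_trapezoid(acc, t, initial=0.0)
pos = cumulative_trapezoid(vel, t, initial=0.0)

# Stack acceleration, velocity, and position as input channels for a DL model
window = np.stack([acc, vel, pos])     # shape (3, n_samples)
print(window.shape)
```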

Keywords: HAR, IMU, MOCAP, acceleration, velocity, position, feature maps

Procedia PDF Downloads 85
5996 Saltwater Intrusion Studies in the Cai River in the Khanh Hoa Province, Vietnam

Authors: B. Van Kessel, P. T. Kockelkorn, T. R. Speelman, T. C. Wierikx, C. Mai Van, T. A. Bogaard

Abstract:

Saltwater intrusion is a common problem in estuaries around the world, as it can hinder the freshwater supply of coastal zones. This problem is likely to grow due to climate change and sea-level rise. The influence of these factors on saltwater intrusion was investigated for the Cai River in the Khanh Hoa province in Vietnam. In addition, the Cai River has high seasonal fluctuations in discharge, leading to increased saltwater intrusion during the dry season. Sea-level rise, river discharge changes, river mouth widening, and a proposed saltwater intrusion prevention dam can all influence saltwater intrusion, but their effects have not been quantified for the Cai River estuary. This research used both an analytical and a numerical model to investigate the effects of the aforementioned factors. The analytical model was based on a model proposed by Savenije and was calibrated using limited in situ data. The numerical model was a 3D hydrodynamic model made using the Delft3D4 software. Both the analytical and the numerical model agreed with in situ data, mostly for tidally averaged data. Both models indicated a roughly similar dependence on discharge, agreeing that this parameter had the most severe influence on the modeled saltwater intrusion. Especially for discharges below 10 m³/s, the saltwater was predicted to reach further than 10 km upstream. In the models, both sea-level rise and river widening mainly resulted in salinity increments of up to 3 kg/m³ in the middle part of the river. The predicted sea-level rise in 2070 was simulated to lead to an increase of 0.5 km in saltwater intrusion length. Furthermore, the effect of the saltwater intrusion dam appeared significant in the model used, but only for the highest position of the gate.
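
For intuition, a much-simplified steady-state sketch of how intrusion length scales with discharge, using a constant-geometry 1-D advection-dispersion balance rather than Savenije's variable-geometry model; all parameter values are illustrative assumptions:

```python
import numpy as np

# Steady-state balance for a prismatic estuary (constant area A and
# dispersion D): s(x) = s0 * exp(-(Q / (A * D)) * x). This is a textbook
# simplification, not the Savenije formulation used in the study.
Q = 10.0        # river discharge, m^3/s (near the critical value cited)
A = 1_500.0     # cross-sectional area, m^2 (placeholder)
D = 20.0        # longitudinal dispersion coefficient, m^2/s (placeholder)
s0 = 30.0       # salinity at the mouth, kg/m^3

x = np.linspace(0, 30_000, 601)          # distance upstream, m
s = s0 * np.exp(-(Q / (A * D)) * x)

s_limit = 1.0                            # freshwater-intake threshold, kg/m^3
L_intrusion = (A * D / Q) * np.log(s0 / s_limit)
print(f"intrusion length ≈ {L_intrusion / 1000:.1f} km")
```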

Keywords: Cai River, hydraulic models, river discharge, saltwater intrusion, tidal barriers

Procedia PDF Downloads 98
5995 Dietary Patterns and Hearing Loss in Older People

Authors: N. E. Gallagher, C. E. Neville, N. Lyner, J. Yarnell, C. C. Patterson, J. E. Gallacher, Y. Ben-Shlomo, A. Fehily, J. V. Woodside

Abstract:

Hearing loss is highly prevalent in older people and can reduce quality of life substantially. Emerging research suggests that potentially modifiable risk factors, including risk factors previously related to cardiovascular disease risk, may be associated with a decreased or increased incidence of hearing loss. This has prompted investigation into the possibility that certain nutrients, foods, or dietary patterns may also be associated with the incidence of hearing loss. The aim of this study was to determine any associations between dietary patterns and hearing loss in men enrolled in the Caerphilly study. The Caerphilly prospective cohort study began in 1979-1983 with the recruitment of 2512 men aged 45-59 years. Dietary data were collected using a self-administered, semi-quantitative, 56-item food frequency questionnaire (FFQ) at baseline (1979-1983), and a 7-day weighed food intake (WI) in a 30% sub-sample, while pure-tone unaided audiometric threshold was assessed at 0.5, 1, 2 and 4 kHz between 1984 and 1988. Principal components analysis (PCA) was carried out to determine a posteriori dietary patterns, and multivariate linear and logistic regression models were used to examine associations with hearing level (pure tone average (PTA) of frequencies 0.5, 1, 2 and 4 kHz in decibels (dB)) for linear regression and with hearing loss (PTA>25dB) for logistic regression. Three dietary patterns were determined using PCA on the FFQ data: Traditional, Healthy, and High sugar/Alcohol avoider. After adjustment for potential confounding factors, both linear and logistic regression analyses showed a significant inverse association between the Healthy pattern and hearing loss (P<0.001), and linear regression analysis showed a significant association between the High sugar/Alcohol avoider pattern and hearing loss (P=0.04). Three similar dietary patterns were determined using PCA on the WI data: Traditional, Healthy, and High sugar/Alcohol avoider. After adjustment for potential confounding factors, logistic regression analyses showed a significant inverse association between the Healthy pattern and hearing loss (P=0.02) and a significant association between the Traditional pattern and hearing loss (P=0.04). A Healthy dietary pattern was found to be significantly inversely associated with hearing loss in middle-aged men in the Caerphilly study. Furthermore, a High sugar/Alcohol avoider pattern (FFQ) and a Traditional pattern (WI) were associated with poorer hearing levels. Consequently, the role of dietary factors in hearing loss remains to be fully established and warrants further investigation.
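
A schematic sketch of the analysis pipeline described above (PCA-derived pattern scores entering a logistic regression for hearing loss); the data are random stand-ins and the confounder set is reduced to a single illustrative variable:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Hypothetical FFQ intake matrix (subjects x 56 food items) and a hearing
# loss indicator (PTA > 25 dB); age stands in for the confounder set.
rng = np.random.default_rng(0)
ffq = rng.poisson(3, size=(500, 56)).astype(float)
age = rng.uniform(45, 60, size=500)
hearing_loss = rng.integers(0, 2, size=500)

# A posteriori dietary patterns as principal component scores
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(ffq))
X = np.column_stack([scores, age])     # pattern scores + confounder

model = LogisticRegression(max_iter=1000).fit(X, hearing_loss)
print("log-odds per unit of each pattern score:", model.coef_[0][:3].round(3))
```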

Keywords: ageing, diet, dietary patterns, hearing loss

Procedia PDF Downloads 219
5994 Review of the Model-Based Supply Chain Management Research in the Construction Industry

Authors: Aspasia Koutsokosta, Stefanos Katsavounis

Abstract:

This paper reviews the model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation, and adversarial relationships. It has been slower than other industries to employ the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. However, over the last decade there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a promising new management tool for construction operations, improving the performance of construction projects in terms of cost, time, and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions, and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable percentage of CSC modeling accommodates conceptual or process models, which discuss general management frameworks and do not relate to acknowledged soft OR methods. We particularly focus on the model-based quantitative research and categorize the CSCM models depending on their scope, mathematical formulation, structure, objectives, solution approach, software used, and decision level. Although over the last few years there has clearly been an increase in research papers on quantitative CSC models, we find that the relevant literature is very fragmented, with limited applications of simulation, mathematical programming, and simulation-based optimization. Most applications are project-specific or study only parts of the supply system. Thus, some complex interdependencies within construction are neglected, and the implementation of integrated supply chain management is hindered. We conclude this paper by giving future research directions and emphasizing the need to develop robust mathematical optimization models for the CSC. We stress that CSC modeling needs a multi-dimensional, system-wide, and long-term perspective. Finally, prior applications of SCM to other industries have to be taken into account in order to model CSCs, but not without the consequential reform of generic concepts to match the unique characteristics of the construction industry.

Keywords: construction supply chain management, modeling, operations research, optimization, simulation

Procedia PDF Downloads 498
5993 Monte Carlo Simulation of X-Ray Spectra in Diagnostic Radiology and Mammography Using MCNP4C

Authors: Sahar Heidary, Ramin Ghasemi Shayan

Abstract:

The general-purpose Monte Carlo N-Particle radiation transport computer program (MCNP4C) was used for the simulation of x-ray spectra in diagnostic radiology and mammography. The electrons were transported until they slow down and stop in the target. Both bremsstrahlung and characteristic x-ray production were considered in this study. The x-ray spectra predicted by several computational models used in the diagnostic radiology and mammography energy range have been evaluated by comparison with measured spectra, and their effect on the calculation of absorbed dose and effective dose (ED) delivered to the adult ORNL hermaphroditic phantom quantified. This comprises empirical models (TASMIP and MASMIP), semi-empirical models (X-rayb&m, X-raytbc, XCOMP, IPEM, Tucker et al., and Blough et al.), and Monte Carlo modeling (EGS4, ITS3.0, and MCNP4C). Images obtained using synchrotron radiation (SR) with both a screen-film and a CR system were compared with images of the same test objects acquired with digital mammography equipment. In view of the good quality of the results obtained, the CR system was used in two mammographic examinations with SR. For each mammography unit, bilateral mediolateral oblique (MLO) and craniocaudal (CC) mammograms were acquired in a woman with fatty breasts and a woman with dense breasts. Reviewers assessed the common findings and definite absences that led to a decision to reject the unit that formed the clinical images.

Keywords: mammography, monte carlo, effective dose, radiology

Procedia PDF Downloads 114
5992 Analytics Model in a Telehealth Center Based on Cloud Computing and Local Storage

Authors: L. Ramirez, E. Guillén, J. Sánchez

Abstract:

Some of the main goals of telecare, such as monitoring, treatment, and telediagnosis, are achieved by integrating applications with specific appliances. In order to achieve a coherent model that integrates software, hardware, and healthcare systems, different telehealth models based on the Internet of Things (IoT), cloud computing, artificial intelligence, etc., have been implemented, and their advantages are still under analysis. In this paper, we propose an integrated model based on an IoT architecture and a cloud computing telehealth center. An analytics module is presented as a solution for supporting an accurate diagnosis of some diseases. Specific features are then compared with those of recently deployed conventional models in telemedicine. The main advantage of this model is the ability to control the security and privacy of patient information and to optimize the processing and acquisition of clinical parameters according to technical characteristics.

Keywords: analytics, telemedicine, internet of things, cloud computing

Procedia PDF Downloads 311
5991 A Method to Enhance the Accuracy of Digital Forensic in the Absence of Sufficient Evidence in Saudi Arabia

Authors: Fahad Alanazi, Andrew Jones

Abstract:

Digital forensics seeks to achieve the successful investigation of digital crimes by obtaining acceptable evidence from digital devices that can be presented in a court of law. Thus, the digital forensics investigation is normally performed through a number of phases in order to achieve the required level of accuracy in the investigation processes. Since 1984, a number of models and frameworks have been developed to support the digital investigation processes. In this paper, we review a number of the investigation processes that have been produced throughout the years and introduce a proposed digital forensic model based on the scope of the Saudi Arabian investigation process. The proposed model has been integrated with existing models for the investigation processes and adds a new phase to deal with situations where there is initially insufficient evidence.

Keywords: digital forensics, process, metadata, Traceback, Saudi Arabia

Procedia PDF Downloads 345
5990 Empirical Evaluation of Gradient-Based Training Algorithms for Ordinary Differential Equation Networks

Authors: Martin K. Steiger, Lukas Heisler, Hans-Georg Brachtendorf

Abstract:

Deep neural networks and their variants form the backbone of many AI applications. Based on the so-called residual networks, a continuous formulation of such models as ordinary differential equations (ODEs) has proven advantageous, since different techniques may be applied that significantly increase the learning speed and enable controlled trade-offs with the resulting error at the same time. For the evaluation of such models, high-performance numerical differential equation solvers are used, which also provide the gradients required for training. However, whether classical gradient-based methods are even applicable, or which one yields the best results, has not been discussed yet. This paper aims to remedy this situation by providing empirical results for different applications.
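
A minimal sketch of gradient-based training of such an ODE network, assuming the torchdiffeq package for the differentiable ODE solver; the dimensions, data, and loss are placeholders:

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumes the torchdiffeq package is installed

class ODEFunc(nn.Module):
    """Parameterizes dy/dt = f(t, y), the continuous analogue of a
    residual block."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, y):
        return self.net(y)

func = ODEFunc(dim=2)
opt = torch.optim.Adam(func.parameters(), lr=1e-3)

y0 = torch.randn(32, 2)           # hypothetical batch of inputs
t = torch.linspace(0.0, 1.0, 2)   # integrate features from t=0 to t=1
target = torch.randn(32, 2)

for _ in range(10):               # plain gradient-based training loop
    opt.zero_grad()
    y1 = odeint(func, y0, t)[-1]  # solver state at the final time
    loss = ((y1 - target) ** 2).mean()
    loss.backward()               # gradients flow through the solver
    opt.step()
print(loss.item())
```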

Keywords: deep neural networks, gradient-based learning, image processing, ordinary differential equation networks

Procedia PDF Downloads 146
5989 Beyond the Effect on Children: Investigation on the Longitudinal Effect of Parental Perfectionism on Child Maltreatment

Authors: Alice Schittek, Isabelle Roskam, Moira Mikolajczak

Abstract:

Background: Perfectionistic strivings (PS) and perfectionistic concerns (PC) are associated with an increase in parental burnout (PB), and PB causally increases violence towards the offspring. Objective: To the best of our knowledge, no study has ever investigated whether perfectionism (PS and PC) predicts violence towards the offspring and whether PB could explain this link. We hypothesized that an increase in PS and PC would lead to an increase in violence via an increase in PB. Method: 228 participants responded to an online survey, with three measurement occasions spaced two months apart. Results: Contrary to expectations, cross-lagged path models revealed that violence towards the offspring prospectively predicts an increase in PS and PC. Mediation models showed that PB is not a significant mediator. The results of all models did not change when controlling for social desirability. Conclusion: The present study shows that violence towards the offspring increases the risk of PS and PC in parents, which highlights the importance of understanding the effect of child maltreatment on the whole family system and not just on children. Results are discussed in light of the feeling of guilt experienced by parents. Considering the non-significant mediation effect, PB research should gradually shift towards more (quasi-)causal designs, allowing researchers to identify which significant correlations translate into causal effects. Implications: Clinicians should focus on preventing child maltreatment as well as treating parental perfectionism. Researchers should unravel the effects of child maltreatment on the family system.

Keywords: maltreatment, parental burnout, perfectionistic strivings, perfectionistic concerns, perfectionism, violence

Procedia PDF Downloads 61
5988 Fabrication of Cylindrical Silicon Nanowire-Embedded Field Effect Transistor Using Al2O3 Transfer Layer

Authors: Sang Hoon Lee, Tae Il Lee, Su Jeong Lee, Jae Min Myoung

Abstract:

In order to manufacture a short-gap single Si nanowire (NW) field effect transistor (FET) by an imprinting and transferring method, we introduce a method using an Al2O3 sacrificial layer. The diameters of the cylindrical Si NWs addressed between Au electrodes by the dielectrophoretic (DEP) alignment method are controlled to 106, 128, and 148 nm. After the imprinting and transfer process, the cylindrical Si NW is embedded in a PVP adhesive and dielectric layer. By curing the transferred cylindrical Si NW and Au electrodes on a PVP-coated p++ Si substrate with 200 nm-thick SiO2, the fabrication of a 3 μm gap Si NW FET was completed. As the diameter of the embedded Si NW increases, the mobility of the FET increases from 80.51 to 121.24 cm²/V·s and the threshold voltage moves from −7.17 to −2.44 V because the surface-to-volume ratio is reduced.

Keywords: Al2O3 sacrificial transfer layer, cylindrical silicon nanowires, dielectrophorestic alignment, field effect transistor

Procedia PDF Downloads 445
5987 Perfectionism, Self-Compassion, and Emotion Dysregulation: An Exploratory Analysis of Mediation Models in an Eating Disorder Sample

Authors: Sarah Potter, Michele Laliberte

Abstract:

As eating disorders are associated with high levels of chronicity, impairment, and distress, it is paramount to evaluate factors that may improve treatment outcomes in this group. Individuals with eating disorders exhibit elevated levels of perfectionism and emotion dysregulation, as well as reduced self-compassion. These variables are related to eating disorder outcomes, including shape/weight concerns and psychosocial impairment. Thus, these factors may be tenable targets for treatment within eating disorder populations. However, the relative contributions of perfectionism, emotion dysregulation, and self-compassion to the severity of shape/weight concerns and psychosocial impairment remain largely unexplored. In the current study, mediation analyses were conducted to clarify how perfectionism, emotion dysregulation, and self-compassion are linked to shape/weight concerns and psychosocial impairment. The sample was comprised of 85 patients from an outpatient eating disorder clinic. The patients completed self-report measures of perfectionism, self-compassion, emotion dysregulation, eating disorder symptoms, and psychosocial impairment. Specifically, emotion dysregulation was assessed as a mediator in the relationships between (1) perfectionism and shape/weight concerns, (2) self-compassion and shape/weight concerns, (3) perfectionism and psychosocial impairment, and (4) self-compassion and psychosocial impairment. It was postulated that emotion dysregulation would significantly mediate relationships in the former two models. An a priori hypothesis was not constructed in reference to the latter models, as these analyses were preliminary and exploratory in nature. The PROCESS macro for SPSS was utilized to perform these analyses. Emotion dysregulation fully mediated the relationships between perfectionism and eating disorder outcomes. In the link between self-compassion and psychosocial impairment, emotion dysregulation partially mediated this relationship. Finally, emotion dysregulation did not significantly mediate the relationship between self-compassion and shape/weight concerns. The results suggest that emotion dysregulation and self-compassion may be suitable targets to decrease the severity of psychosocial impairment and shape/weight concerns in individuals with eating disorders. Further research is required to determine the stability of these models over time, between diagnostic groups, and in nonclinical samples.

Keywords: eating disorders, emotion dysregulation, perfectionism, self-compassion

Procedia PDF Downloads 130
5986 The Market Structure Simulation of Heterogenous Firms

Authors: Arunas Burinskas, Manuela Tvaronavičienė

Abstract:

Although the new trade theories, unlike theories of industrial organisation, view the structure of the market and competition between enterprises through their heterogeneity along various parameters, they do not pay particular attention to the analysis of the market structure and its development. In this article, although we rely mainly on models developed by scholars of new trade theory, we propose a different approach. In our simulation model, we model market demand with a normal distribution function, while on the supply side (as in the new trade theory models), productivity is modeled with a Pareto distribution function. The results of the simulation show that companies with higher productivity (lower marginal costs) do not pass all the benefits of such economies on to buyers. However, even with higher marginal costs, firms can choose to offer higher value-added goods to stay in the market. In general, the structure of the market forms quickly enough and depends on the skills available to firms.
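
A stylized sketch of the simulation logic described above: Pareto-distributed productivities on the supply side, normally distributed demand shocks, constant-markup pricing, and a survival cutoff. The functional forms and parameter values are illustrative assumptions, not the article's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, sigma, wage, fixed_cost = 500, 4.0, 1.0, 0.4

# Supply side: productivities drawn from a Pareto distribution, as in the
# new trade theory models; marginal cost falls with productivity.
phi = 1.0 + rng.pareto(a=3.0, size=n_firms)
mc = wage / phi
price = mc * sigma / (sigma - 1.0)      # constant-markup (CES) pricing

# Demand side: normally distributed demand shocks; each firm's sales
# fall with its price (placeholder isoelastic form).
demand_shock = np.clip(rng.normal(1.0, 0.2, n_firms), 0.05, None)
quantity = demand_shock * price ** (-sigma)

profit = (price - mc) * quantity - fixed_cost
survivors = profit > 0
print(f"{survivors.sum()} of {n_firms} firms stay in the market; "
      f"cutoff productivity ≈ {phi[survivors].min():.2f}")
```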

Keywords: market, structure, simulation, heterogenous firms

Procedia PDF Downloads 133
5985 Thermodynamic Modelling of Liquid-Liquid Equilibria (LLE) in the Separation of p-Cresol from the Coal Tar by Solvent Extraction

Authors: D. S. Fardhyanti, Megawati, W. B. Sediawan

Abstract:

Coal tar is a liquid by-product of coal gasification and carbonization processes. This liquid oil mixture contains various kinds of useful compounds, such as aromatic and phenolic compounds. These compounds are widely used as raw materials for insecticides, dyes, medicines, perfumes, coloring matters, and many others. This research investigates the thermodynamic modelling of liquid-liquid equilibria (LLE) in the separation of p-Cresol from coal tar by solvent extraction. The equilibria are modeled with the ternary Wohl, Van Laar, and Three-Suffix Margules models. The values of the parameters involved are obtained by curve-fitting to the experimental data. Based on the comparison between calculated and experimental data, it turns out that among the three models studied, the Three-Suffix Margules model seems to be the best for predicting the LLE of p-Cresol mixtures in this system.
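
For reference, the binary form of the Three-Suffix Margules model, shown here for illustration; the study fits ternary extensions of such expressions to the LLE data:

```latex
\frac{G^{E}}{RT} = x_{1}x_{2}\left[A + B\,(x_{1}-x_{2})\right],
\qquad
\ln\gamma_{1} = (A + 3B)\,x_{2}^{2} - 4B\,x_{2}^{3},
\qquad
\ln\gamma_{2} = (A - 3B)\,x_{1}^{2} + 4B\,x_{1}^{3}
```

A and B are the parameters obtained by curve-fitting to the tie-line data; the Wohl and Van Laar models provide analogous parameterized expressions for the activity coefficients.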

Keywords: coal tar, phenol, Wohl, Van Laar, Three-Suffix Margules

Procedia PDF Downloads 245
5984 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors like mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation. We considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
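
A minimal sketch of the core detection idea: an autoencoder trained on benign images flags inputs whose reconstruction MSE exceeds a threshold. The architecture and threshold are illustrative; the paper's multi-modal spectral autoencoder is not reproduced here:

```python
import torch
import torch.nn as nn

# Minimal convolutional autoencoder; trained on benign images only, it
# reconstructs them well and yields high MSE on perturbed inputs.
class AE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

ae = AE()                                  # in practice, trained on benign data
x = torch.rand(8, 3, 256, 256)             # batch of images in [0, 1]
recon = ae(x)
err = ((recon - x) ** 2).mean(dim=(1, 2, 3))   # per-image reconstruction MSE

# The threshold is an assumption; in practice it would be calibrated on
# held-out benign data (e.g., a high percentile of benign errors).
threshold = 0.02
is_adversarial = err > threshold
print(err.detach().numpy().round(4), is_adversarial)
```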

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 94
5983 Monitoring of Spectrum Usage and Signal Identification Using Cognitive Radio

Authors: O. S. Omorogiuwa, E. J. Omozusi

Abstract:

The monitoring of spectrum usage and signal identification, using cognitive radio, is done to identify frequencies that are vacant for reuse. It has been established that 'internet of things' devices use secondary frequencies, which are free, and thereby face the challenge of interference from other users, while some primary frequencies are not being utilised. The design was done by analysing a specific frequency spectrum, checking whether all the frequency stations in the range of 87.5-108 MHz are presently being used in Benin City, Edo State, Nigeria. From the results, it was noticed that by using Software Defined Radio/Simulink, we were able to identify vacant frequencies in the range of frequencies under consideration. Also, we were able to use the energy detection threshold to reuse this vacant frequency spectrum when the cognitive radio displays a zero output (that is, decision H0), meaning that the channel is unoccupied. Hence, the analysis was able to find the spectrum holes and identify how they can be reused.
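
A minimal sketch of the energy detection decision described above (test statistic versus threshold, with H0 meaning the channel is vacant); the sample size, noise power, and false-alarm target are assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 1000          # sensing samples
noise_var = 1.0   # assumed known noise power
p_fa = 0.01       # target false-alarm probability

# Threshold for the average-energy statistic under H0 (noise only), from
# the Gaussian approximation of the chi-square statistic.
lam = noise_var * (1.0 + norm.isf(p_fa) * np.sqrt(2.0 / N))

def decide(x):
    T = np.mean(np.abs(x) ** 2)     # energy detection statistic
    return "H1: occupied" if T > lam else "H0: vacant (reusable)"

noise = rng.normal(scale=np.sqrt(noise_var), size=N)
signal = 0.8 * np.sin(2 * np.pi * 0.1 * np.arange(N))
print(decide(noise))                # expected: vacant
print(decide(signal + noise))       # expected: occupied
```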

Keywords: spectrum, interference, telecommunication, cognitive radio, frequency

Procedia PDF Downloads 215
5982 Male Oreochromis mossambica as Indicator for Water Pollution with Trace Elements in Relation to Condition Factor from Pakistan

Authors: Muhammad Naeem, Syed M. Moeen-ud-Din Raheel, Muhammad Arshad, Muhammad Naeem Qaisar, Muhammad Khalid, Muhammad Zubair Ahmed, Muhammad Ashraf

Abstract:

Iron, copper, cadmium, zinc, manganese, and chromium levels were estimated to study the risk posed by trace elements to human consumption. Samples were collected in Dera Ghazi Khan, Pakistan, and evaluated by means of a flame atomic absorption spectrophotometer. The values found for the six heavy metals were in accordance with the threshold concentrations for fish meat required by European and other international standards. Regressions were obtained relating both size (length and weight) and condition factor to the concentrations of metals present in the fish body.
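
For context, the condition factor in fish studies is commonly computed as Fulton's K (with weight W in g and length L in cm); the abstract does not state which variant was used, so this is the standard form rather than necessarily the paper's:

```latex
K = 100 \, \frac{W}{L^{3}}
```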

Keywords: Oreochromis mossambica, toxic analysis, body size, condition factor

Procedia PDF Downloads 570
5981 Presenting a Knowledge Mapping Model According to a Comparative Study on Applied Models and Approaches to Map Organizational Knowledge

Authors: Ahmad Aslizadeh, Farid Ghaderi

Abstract:

Mapping organizational knowledge is an innovative concept and a useful instrument for the representation, capture, and visualization of implicit and explicit knowledge. There is a diversity of methods, instruments, and techniques presented by different researchers for mapping organizational knowledge to reach determined goals. To apply these methods, it is necessary to know their exigencies and the conditions in which they can be used. Integrating the identified methods of knowledge mapping and comparing them would help knowledge managers to select the appropriate method. This research was conducted to present a model and framework to map organizational knowledge. At first, knowledge maps, their applications, and their necessity are introduced in order to extract a comparative framework and detect their structure. In the next step, the techniques of researchers such as Eppler, Kim, Egbu, Tandukar, and Ebner are presented and surveyed as knowledge mapping models. Finally, they are compared and a superior model is introduced.

Keywords: knowledge mapping, knowledge management, comparative study, business and management

Procedia PDF Downloads 388
5980 MIMIC: A Multi Input Micro-Influencers Classifier

Authors: Simone Leonardi, Luca Ardito

Abstract:

Micro-influencers are effective elements in the marketing strategies of companies and institutions because of their capability to create a hyper-engaged audience around a specific topic of interest. In recent years, many scientific approaches and commercial tools have handled the task of detecting this type of social media user. These strategies adopt solutions ranging from rule-based machine learning models to deep neural networks and graph analysis on text, images, and account information. This work compares the existing solutions and proposes an ensemble method to generalize them across different input data and social media platforms. The deployed solution combines deep learning models on unstructured data with statistical machine learning models on structured data. We retrieve both social media account information and multimedia posts on Twitter and Instagram. These data are mapped into feature vectors for an eXtreme Gradient Boosting (XGBoost) classifier. Sixty different topics have been analyzed to build a rule-based gold standard dataset and to compare the performance of our approach against baseline classifiers. We prove the effectiveness of our work by comparing the accuracy, precision, recall, and F1 score of our model across different configurations and architectures. We obtained an accuracy of 0.91 with our best performing model.
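
A minimal sketch of the final classification stage described above: fused feature vectors entering an XGBoost classifier, evaluated with the four reported metrics. The feature dimensions and data are hypothetical:

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical fused feature vectors: deep-learning embeddings of posts
# concatenated with structured account statistics (followers, posts, ...).
rng = np.random.default_rng(0)
X = np.hstack([rng.normal(size=(1000, 128)), rng.normal(size=(1000, 8))])
y = rng.integers(0, 2, size=1000)   # 1 = micro-influencer

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

print(f"acc={accuracy_score(y_te, pred):.2f} "
      f"prec={precision_score(y_te, pred):.2f} "
      f"rec={recall_score(y_te, pred):.2f} f1={f1_score(y_te, pred):.2f}")
```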

Keywords: deep learning, gradient boosting, image processing, micro-influencers, NLP, social media

Procedia PDF Downloads 161
5979 Using 3D Satellite Imagery to Generate a High Precision Canopy Height Model

Authors: M. Varin, A. M. Dubois, R. Gadbois-Langevin, B. Chalghaf

Abstract:

Good knowledge of the physical environment is essential for integrated forest planning. This information enables better forecasting of operating costs, determination of cutting volumes, and preservation of ecologically sensitive areas. The use of satellite images in stereoscopic pairs gives the capacity to generate high precision 3D models, which are scale-adapted for harvesting operations. These models could represent an alternative to 3D LiDAR data, thanks to their advantageous acquisition cost. The objective of the study was to assess the quality of stereo-derived canopy height models (CHM) in comparison to a traditional LiDAR CHM and ground tree-height samples. Two study sites harboring two different forest stand types (broadleaf and conifer) were analyzed using stereo pairs and tri-stereo images from the WorldView-3 satellite to calculate CHMs. Acquisition of multispectral images from an Unmanned Aerial Vehicle (UAV) was also carried out on a smaller part of the broadleaf study site. Different algorithms using two software packages (PCI Geomatica and Correlator3D) with various spatial resolutions and band selections were tested to select the 3D modeling technique that offered the best performance when compared with LiDAR. In the conifer study site, the CHM produced with Correlator3D using only the 50-cm resolution panchromatic band was the one with the smallest root-mean-square error (RMSE: 1.31 m). In the broadleaf study site, the tri-stereo model provided slightly better performance, with an RMSE of 1.2 m. The tri-stereo model was also compared to the UAV, which resulted in an RMSE of 1.3 m. At the individual tree level, when ground samples were compared to satellite, LiDAR, and UAV CHMs, RMSEs were 2.8, 2.0, and 2.0 m, respectively. Advanced analysis was done for all of these cases, and it has been noted that RMSE is reduced when the canopy cover is higher, when shadows and slopes are lower, and when clouds are distant from the analyzed site.

Keywords: very high spatial resolution, satellite imagery, WorldView-3, canopy height models, CHM, LiDAR, unmanned aerial vehicle, UAV

Procedia PDF Downloads 109