Search results for: perceptual linear prediction (PLPs)
4007 Perceiving Casual Speech: A Gating Experiment with French Listeners of L2 English
Authors: Naouel Zoghlami
Abstract:
Spoken-word recognition involves the simultaneous activation of potential word candidates, which compete with each other for final correct recognition. In continuous speech, the activation-competition process becomes more complicated due to speech reductions at word boundaries. Lexical processing is more difficult in L2 than in L1 because L2 listeners often lack phonetic, lexico-semantic, syntactic, and prosodic knowledge in the target language. In this study, we investigate the on-line lexical segmentation hypotheses that French listeners of L2 English form and then revise as subsequent perceptual evidence is revealed. Our purpose is to shed further light on the processes of L2 spoken-word recognition in context and to better understand L2 listening difficulties through a comparison of skilled and unskilled listeners' reactions at the point where their working hypothesis is rejected. We use a variant of the gating experiment in which subjects transcribe an English sentence presented in increments of progressively greater duration. The spoken sentence was “And this amazing athlete has just broken another world record”, chosen mainly because it includes common reductions and phonetic features of English, such as elision and assimilation. Our preliminary results show that there is an important difference in the manner in which proficient and less-proficient L2 listeners handle connected speech. Less-proficient listeners delay recognition of words as they wait for lexical and syntactic evidence to appear in the gates. Further statistical analyses are currently being undertaken.
Keywords: gating paradigm, spoken word recognition, online lexical segmentation, L2 listening
Procedia PDF Downloads 464
4006 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri
Authors: Shishay Kidanu, Abdullah Alhaj
Abstract:
Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole influencing factors, ranging from slope characteristics to proximity to geological structures, were carefully analyzed. The Frequency Ratio method establishes relationships between the attribute classes of these factors and sinkhole events, deriving class weights that indicate their relative importance. Weighted integration of these factors is accomplished using the Analytic Hierarchy Process (AHP) and the Weighted Linear Combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Employing the Jenks natural breaks classification method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the Area Under the Curve (AUC) and Sinkhole Density Index (SDI) methods, demonstrates a robust correlation with sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating a 74% validation accuracy. The SDI result further supports the success of the sinkhole susceptibility model. This model offers reliable predictions for the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies. Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri
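As an illustrative sketch of the two steps named above, the snippet below computes frequency-ratio class weights and a weighted linear combination in Python; the class counts, factor scores, and AHP weights are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Frequency ratio (FR) for one factor: for each attribute class,
# FR = (% of sinkholes in the class) / (% of study area in the class).
sinkholes_per_class = np.array([12, 45, 30, 8])   # hypothetical counts per slope class
area_per_class = np.array([400, 350, 180, 70])    # hypothetical class areas (km^2)
fr = (sinkholes_per_class / sinkholes_per_class.sum()) / (
    area_per_class / area_per_class.sum()
)
print("FR class weights:", fr.round(2))

# Weighted linear combination (WLC): at a map cell, the susceptibility index
# is the sum of FR-derived factor scores times AHP-derived factor weights.
factor_scores = {"slope": 1.8, "depth_to_bedrock": 0.9, "distance_to_faults": 1.2}
ahp_weights = {"slope": 0.5, "depth_to_bedrock": 0.3, "distance_to_faults": 0.2}
ssi = sum(ahp_weights[f] * factor_scores[f] for f in factor_scores)
print("SSI at this cell:", ssi)
```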
Procedia PDF Downloads 74
4005 Analytic Solutions of Solitary Waves in Three-Level Unbalanced Dense Media
Authors: Sofiane Grira, Hichem Eleuch
Abstract:
We explore the analytical soliton-pair solutions for unbalanced coupling between the two coherent lights and the atomic transitions in a dissipative three-level system in lambda configuration. The two allowed atomic transitions interact resonantly with two laser fields. For unbalanced coupling, it is possible to derive an explicit solution for the non-linear differential equations describing the propagation of a soliton pair with a common velocity in this three-level system. We suppose that the spontaneous emission rates from the excited state to both ground states are the same. In this work, we focus on the case where the couplings between the transitions and the optical fields are unbalanced. The existence conditions for soliton-pair propagation are determined. We show that there are four possible configurations of the soliton-pair pulses: two of them can be interpreted as a couple of solitons with the same direction of polarization, and the other two as soliton pairs with opposite directions of polarization. Because solitons keep stable shapes while propagating in the considered media, they are insensitive to noise and dispersion. Our results have potential applications in data transfer with soliton-pair pulses, where a dissipative three-level medium could be a realistic model for the optical communication media.
Keywords: non-linear differential equations, solitons, wave propagations, optical fiber
Procedia PDF Downloads 136
4004 Optimization Approach to Estimate Hammerstein–Wiener Nonlinear Blocks in Presence of Noise and Disturbance
Authors: Leili Esmaeilani, Jafar Ghaisari, Mohsen Ahmadian
Abstract:
The Hammerstein–Wiener model is a block-oriented model in which a linear dynamic system is surrounded by two static nonlinearities at its input and output; it can be used to model various processes. This paper presents an optimization approach to the Hammerstein–Wiener system identification problem. The method relies on reformulating the identification problem, solving it as a constrained quadratic problem, and analysing its solutions. During the formulation of the problem, the effects on the resulting equations of adding noise to the input and output signals of the nonlinear blocks, and of adding a disturbance to the linear block, are discussed. Additionally, a possible parametric form of the matrix operations to reduce the equation size is presented. To analyse the possible solutions of the resulting system of equations, a method is presented that reduces the difference between the number of equations and the number of unknown variables by formulating and importing existing knowledge about the nonlinear functions. The obtained equations are applied to an example H–W system to validate the results and illustrate the proposed method.
Keywords: identification, Hammerstein-Wiener, optimization, quantization
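For readers unfamiliar with the block structure, here is a minimal simulation of a Hammerstein–Wiener system in Python; the particular nonlinearities, the first-order linear block, and the noise level are assumptions for illustration, not the paper's identified model.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 500)              # input signal
f_in = lambda x: x + 0.5 * x**2          # assumed static input nonlinearity
g_out = lambda x: np.tanh(x)             # assumed static output nonlinearity

v = f_in(u)                              # input block
w = np.zeros_like(v)                     # linear dynamic block (first order):
for k in range(1, len(v)):               # w[k] = 0.8*w[k-1] + 0.2*v[k-1]
    w[k] = 0.8 * w[k - 1] + 0.2 * v[k - 1]

y = g_out(w) + 0.01 * rng.standard_normal(len(w))  # output block plus noise
print(y[:5])
```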
Procedia PDF Downloads 257
4003 Structural Equation Modeling Semiparametric Truncated Spline Using Simulation Data
Authors: Adji Achmad Rinaldo Fernandes
Abstract:
SEM analysis is a complex multivariate analysis because it involves a number of exogenous and endogenous variables that are interconnected to form a model. The measurement model is divided into two types: the reflective model and the formative model. Before carrying out further tests in SEM, certain assumptions must be met, namely the linearity assumption, which determines the form of the relationship. There are three modeling approaches to path analysis: parametric, nonparametric, and semiparametric. The aim of this research is to develop semiparametric SEM and obtain the best model. The data used in the research are secondary data, which serve as the basis for generating simulation data. Simulation data were generated with sample sizes of 100, 300, and 500. In the semiparametric SEM analysis, the forms of the relationship studied were linear and quadratic, with one or two knot points and various levels of error variance (EV = 0.5, 1, 5). Three levels of closeness of relationship were used in the analysis of the measurement model: low (0.1-0.3), medium (0.4-0.6), and high (0.7-0.9). The best model is obtained when the relationship between X1 and Y1 is linear. In the measurement model, a characteristic of the reflective model is obtained, namely that the higher the closeness of the relationship, the better the model obtained. The originality of this research is the development of semiparametric SEM, which has not been widely studied by researchers.
Keywords: semiparametric SEM, measurement model, structural model, reflective model, formative model
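To make the truncated-spline idea concrete, the following Python sketch fits a one-knot truncated linear spline by least squares to simulated data; the knot location, coefficients, and error variance are illustrative, not the study's settings.

```python
import numpy as np

# Truncated linear spline with one knot k: f(x) = b0 + b1*x + b2*(x - k)_+
def truncated_basis(x, knot):
    return np.column_stack([np.ones_like(x), x, np.maximum(x - knot, 0.0)])

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 300)                        # simulated predictor scores
y = 1.0 + 0.4 * x + 1.5 * np.maximum(x - 5, 0) + rng.normal(0, 1.0, x.size)

B = truncated_basis(x, knot=5.0)
beta, *_ = np.linalg.lstsq(B, y, rcond=None)       # least-squares estimate
print(beta.round(2))                               # close to [1.0, 0.4, 1.5]
```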
Procedia PDF Downloads 40
4002 Hydrodynamics Study on Planing Hull with and without Step Using Numerical Solution
Authors: Koe Han Beng, Khoo Boo Cheong
Abstract:
Rising interest in stepped hull design has been driven by the demand for more efficient high-speed boats. At the same time, the need for an accurate prediction method for stepped planing hulls is becoming more important. Although understanding the flow at high Froude numbers is key to designing a practical stepped hull, studies of stepped hulls have been carried out mainly in the towing tank, which is time-consuming and costly for the initial design phase. Here, the feasibility of predicting the hydrodynamics of high-speed planing hulls, both with and without a step, using computational fluid dynamics (CFD) with the volume of fluid (VOF) methodology is studied. First, the flow around a prismatic body is analyzed; the force generated and its center of pressure are compared with available experimental and empirical data from the literature. The wake behind the transom on the keel line, as well as on the quarter-beam buttock line, is then compared with the available data; this is important since the afterbody flow of a stepped hull is subject to the wake of the forebody. Finally, the calm-water performance of a conventional planing hull and its stepped version is analyzed. Overset mesh methodology is employed in solving the dynamic equilibrium of the hull. The resistance, trim, and heave are then compared with the experimental data. The resistance is found to be predicted well, and the dynamic equilibrium solved by the numerical method is deemed acceptable. This means that computational fluid dynamics will be very useful in further study of the complex flow around stepped hulls and has potential usage in the design phase.
Keywords: planing hulls, stepped hulls, wake shape, numerical simulation, hydrodynamics
Procedia PDF Downloads 282
4001 Sensitivity Analysis of Movable Bed Roughness Formula in Sandy Rivers
Authors: Mehdi Fuladipanah
Abstract:
Sensitivity analysis is a technique applied to determine the influence of input factors on model output. Variance-based sensitivity analysis is more widely applicable than other methods because it covers both linear and non-linear models. In this paper, van Rijn’s movable bed roughness formula was selected for evaluation because of its reasonable results in sandy rivers. This equation contains four variables: flow depth, sediment size, bed form height, and bed form length. The importance of these variables was determined using the first-order Fourier Amplitude Sensitivity Test (FAST). A sensitivity index was used to evaluate the importance of each factor. The first-order FAST-based sensitivity indices explain 90% of the total variance, which meets the acceptance criterion for applying FAST. A higher index value indicates a more influential variable. Results show that bed form height, bed form length, sediment size, and flow depth are the most influential factors, with sensitivity indices of 32%, 24%, 19%, and 15%, respectively.
Keywords: sensitivity analysis, variance, movable bed roughness formula, sandy rivers
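The sketch below illustrates the idea with a van Rijn-type roughness predictor and a crude binned estimate of first-order sensitivity indices (a simplification standing in for FAST); the formula shown is one commonly cited form and should be checked against the original paper, the input ranges are hypothetical, and flow depth (which enters the full procedure) is omitted.

```python
import numpy as np

# One commonly cited van Rijn-type form (verify against the original paper):
# k_s = 3*d90 + 1.1*delta*(1 - exp(-25*delta/lam))
def roughness(d90, delta, lam):
    return 3.0 * d90 + 1.1 * delta * (1.0 - np.exp(-25.0 * delta / lam))

rng = np.random.default_rng(2)
n = 100_000
d90 = rng.uniform(0.2e-3, 2e-3, n)    # sediment size (m), hypothetical range
delta = rng.uniform(0.05, 0.5, n)     # bed form height (m)
lam = rng.uniform(1.0, 10.0, n)       # bed form length (m)
y = roughness(d90, delta, lam)

# Crude first-order index S_i = Var(E[Y | X_i]) / Var(Y), via quantile bins.
def first_order(x, y, bins=50):
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    idx = np.digitize(x, edges)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

for name, x in [("bed form height", delta), ("bed form length", lam), ("d90", d90)]:
    print(name, round(first_order(x, y), 3))
```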
Procedia PDF Downloads 261
4000 Flow over an Exponentially Stretching Sheet with Hall and Cross-Diffusion Effects
Authors: Srinivasacharya Darbhasayanam, Jagadeeshwar Pashikanti
Abstract:
This paper analyzes the Soret and Dufour effects on mixed convection flow, heat, and mass transfer from an exponentially stretching surface in a viscous fluid with the Hall effect. The governing partial differential equations are transformed into ordinary differential equations using similarity transformations. The nonlinear coupled ordinary differential equations are reduced to a system of linear differential equations using the successive linearization method; the resulting linear system is then solved using the Chebyshev pseudo-spectral method. The numerical results for the velocity components, temperature, and concentration are presented graphically. The obtained results are compared with previously published results and are found to be in excellent agreement. It is observed from the present analysis that the primary and secondary velocities and the concentration increase, while the temperature decreases, with increasing values of the Soret parameter. An increase in the Dufour parameter increases both the primary and secondary velocities and the temperature, and decreases the concentration.
Keywords: exponentially stretching sheet, Hall current, heat and mass transfer, Soret and Dufour effects
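The pseudo-spectral step can be illustrated with the standard Chebyshev differentiation matrix (after Trefethen's well-known cheb construction); this is a generic building block of the method, not the authors' code.

```python
import numpy as np

# Chebyshev differentiation matrix on Gauss-Lobatto points (cf. Trefethen, 2000).
def cheb(N):
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))        # diagonal entries from row sums
    return D, x

D, x = cheb(8)
print(np.max(np.abs(D @ x**2 - 2 * x)))   # derivative of x^2 is 2x; error is tiny
```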
Procedia PDF Downloads 214
3999 Application of Bayesian Model Averaging and Geostatistical Output Perturbation to Generate Calibrated Ensemble Weather Forecast
Authors: Muhammad Luthfi, Sutikno Sutikno, Purhadi Purhadi
Abstract:
Weather forecasting has necessarily been improved to provide communities with accurate and objective predictions. To this end, numerical weather forecasting has been extensively developed to reduce the subjectivity of forecasts. Yet Numerical Weather Prediction (NWP) outputs are unfortunately issued without taking dynamical weather behavior and local terrain features into account. Thus, NWP outputs are not able to accurately forecast weather quantities, particularly for medium- and long-range forecasts. The aim of this research is to aid and extend the development of ensemble forecasting for the Meteorology, Climatology, and Geophysics Agency of Indonesia. The ensemble method is an approach that combines various deterministic forecasts to produce a more reliable one. However, such a forecast is biased and uncalibrated due to its underdispersive or overdispersive nature. As one of the parametric methods, Bayesian Model Averaging (BMA) generates a calibrated ensemble forecast and constructs a predictive PDF for a specified period. This method can utilize an ensemble of any size but does not take spatial correlation into account. Spatial dependencies, however, involve the site of interest and nearby sites and are influenced by dynamic weather behavior. Meanwhile, Geostatistical Output Perturbation (GOP) accounts for spatial correlation when generating future weather quantities; although built on a single deterministic forecast, it can also generate an ensemble of any size. This research applies both BMA and GOP to generate calibrated ensemble forecasts of daily temperature at a few meteorological sites near the Indonesian international airport.
Keywords: Bayesian Model Averaging, ensemble forecast, geostatistical output perturbation, numerical weather prediction, temperature
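As a sketch of the BMA predictive distribution, the snippet below evaluates a weighted mixture of normals centered on bias-corrected member forecasts (in the style of Raftery et al.); the weights, member values, and spread are assumed here, whereas in practice they are fitted by EM on training data.

```python
import numpy as np
from scipy.stats import norm

# BMA predictive PDF: a weighted mixture of normals centered on the
# bias-corrected member forecasts (weights and spread fitted by EM in practice).
weights = np.array([0.5, 0.3, 0.2])       # assumed posterior model weights
members = np.array([26.1, 27.0, 25.4])    # bias-corrected forecasts (deg C)
sigma = 1.2                               # assumed common spread

def bma_pdf(y):
    return float(np.sum(weights * norm.pdf(y, loc=members, scale=sigma)))

print(bma_pdf(26.0))    # predictive density of a 26 deg C daily temperature
```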
Procedia PDF Downloads 280
3998 Sibling Relationship of Adults with Intellectual Disability in China
Authors: Luyin Liang
Abstract:
Although the sibling relationship is viewed as one of the most important family relationships, significantly impacting the quality of life of both adults with intellectual disability (AWID) and their brothers/sisters, very little research has been done to investigate this relationship in China. This study investigated the relational motivations of Chinese siblings of AWID in the sibling relationship and their determining factors. A quantitative research method was adopted, and 284 participants were recruited. Two types of relational motivations among siblings of AWID were examined: obligatory motivations and discretionary motivations. Their emotional closeness, senses of responsibility, experiences of ID stigma, and expectancy of self-reward in the sibling relationship were measured by validated scales. Personal and familial-social demographic characteristics were also investigated. Linear correlation tests and standard multiple regression analysis were the main statistical methods used to analyze the data. The findings showed that all the measured factors, including emotional closeness, senses of responsibility, experiences of ID stigma, and self-reward expectations, had significant relationships with both types of motivations. However, when these factors were grouped together to predict each type of motivation, the results varied. The factors that best predicted obligatory motivations were, in order: senses of responsibility, emotional closeness, experiences of ID stigma, and expectancy of self-reward, whereas the factors that best determined discretionary motivations were, in order: self-reward expectations, experiences of ID stigma, senses of responsibility, and emotional closeness. Among the demographic characteristics, the AWID's disability condition, the siblings' age, gender, marital status, and number of children, both siblings' living arrangements, and family financial status were found to have significant impacts on both types of motivations in the sibling relationship. The results of this study could enhance social work practitioners' understanding of the needs and challenges of siblings of AWID. Suggestions for policy advocacy and service improvements for these siblings are discussed.
Keywords: sibling relationship, intellectual disability, adults, China
Procedia PDF Downloads 409
3997 Evaluation of Response Modification Factor and Behavior of Seismic Base-Isolated RC Structures
Authors: Mohammad Parsaeimaram, Fang Congqi
Abstract:
In this paper, one of the significant seismic design parameters, the response modification factor, was evaluated for reinforced concrete (RC) buildings with a base isolation system. Seismic isolation is an effective approach for absorbing seismic energy at the base and transferring it to the substructure, yielding a lower response modification factor compared to non-isolated structures. A response spectrum method and static nonlinear pushover analysis, in accordance with the Uniform Building Code (UBC-97), were performed on building models of 5, 8, 12, and 15 stories with fixed and isolated bases, all with identical moment-resisting configurations. The isolation system, composed of lead rubber bearings (LRB), was designed using UBC-97 parameters. The force-deformation behavior of the isolators was modeled as bi-linear hysteretic behavior, which can be used effectively to represent such isolation systems. The obtained analytical results highlight response modification factors for the considered base isolation system that are higher than those recommended in the codes. The response modification factor is used in modern seismic codes to scale down the elastic response of structures.
Keywords: response modification factor, base isolation system, pushover analysis, lead rubber bearing, bi-linear hysteretic
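A minimal sketch of the bi-linear force-deformation backbone used for LRB isolators is given below; the stiffnesses and yield displacement are hypothetical values in consistent units, not the designed bearing's properties, and a full hysteretic model would also track loading history.

```python
import numpy as np

# Bilinear backbone of an LRB: initial stiffness k1 up to yield displacement
# dy, post-yield stiffness k2 beyond it (hypothetical, consistent units).
k1, k2, dy = 100.0, 10.0, 0.05
Fy = k1 * dy

def lrb_force(d):
    d = np.asarray(d, dtype=float)
    return np.where(np.abs(d) <= dy,
                    k1 * d,
                    np.sign(d) * (Fy + k2 * (np.abs(d) - dy)))

print(lrb_force([0.02, 0.10, -0.20]))   # elastic, yielded, yielded (negative)
```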
Procedia PDF Downloads 324
3996 Residual Analysis and Ground Motion Prediction Equation Ranking Metrics for Western Balkan Strong Motion Database
Authors: Manuela Villani, Anila Xhahysa, Christopher Brooks, Marco Pagani
Abstract:
The geological structure of the Western Balkans is strongly affected by the collision between the Adria microplate and the southwestern margin of Eurasia, resulting in a considerably active seismic region. The Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project (BSHAP) (2007-2011, 2012-2015), funded by NATO, supported the preparation of new seismic hazard maps of the Western Balkans, but when inspecting the seismic hazard models later produced by these countries on a national scale, significant differences in design PGA values are observed at the borders, for instance, North Albania-Montenegro and South Albania-Greece. Considering that the catalogues were unified and the seismic sources were defined within the BSHAP framework, the differences evidently arise from the selection of Ground Motion Prediction Equations (GMPEs), generally the component with the highest impact on seismic hazard assessment. At the time of the project, only a modest database was available, namely 672 three-component records, whereas nowadays this strong motion database has grown considerably, to 20,939 records with Mw ranging over the interval 3.7-7 and epicentral distances from 0.47 km to 490 km. Statistical analysis of the strong motion database showed a lack of recordings in the moderate-to-large magnitude and short distance ranges; therefore, there is a need to re-evaluate the GMPEs in light of the recently updated database and the new generations of GMMs. In some cases, it was observed that certain events were more extensively documented in one database than in another, like the 1979 Montenegro earthquake, which has a considerably larger number of records in the BSHAP analogue strong-motion database than in ESM23. Therefore, the strong motion flat-file provided by the BSHAP project was merged with the ESM23 database for the polygon studied in this project. After performing the preliminary residual analysis, the candidate GMPEs were identified. This process was done using the GMPE performance metrics available within the SMT in the OpenQuake platform; the Likelihood model and Euclidean Distance-Based Ranking (EDR) were used. Finally, a GMPE logic tree was selected for this study, and following the selection of candidate GMPEs, model weights were assigned using the average sample log-likelihood approach of Scherbaum.
Keywords: residual analysis, GMPE, Western Balkans, strong motion, OpenQuake
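The average sample log-likelihood approach can be sketched as follows, assuming the usual form of the Scherbaum et al. (2009) LLH metric and its associated data-driven weights; the numbers are placeholders, not this study's residuals.

```python
import numpy as np
from scipy.stats import norm

# Average sample log-likelihood (LLH): -(1/N) * sum over records of
# log2 of the GMPE's predictive normal density at the observations.
def llh(obs_ln_sa, pred_mean, pred_sigma):
    return -np.mean(np.log2(norm.pdf(obs_ln_sa, loc=pred_mean, scale=pred_sigma)))

# Data-driven model weights from LLH values: w_i proportional to 2**(-LLH_i).
def model_weights(llh_values):
    w = 2.0 ** (-np.asarray(llh_values))
    return w / w.sum()

obs = np.array([-4.1, -3.8, -4.5])               # observed ln(SA), placeholder
print(llh(obs, pred_mean=-4.0, pred_sigma=0.6))  # LLH for one candidate model
print(model_weights([1.2, 1.5, 2.0]))            # weights across three models
```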
Procedia PDF Downloads 88
3995 Growth Curves Genetic Analysis of Native South Caspian Sea Poultry Using Bayesian Statistics
Authors: Jamal Fayazi, Farhad Anoosheh, Mohammad R. Ghorbani, Ali R. Paydar
Abstract:
In this study, to determine the best non-linear regression model describing the growth curve of native poultry, 9657 chicks of generations 18, 19, and 20 raised at the Mazandaran breeding center were used. Hens and roosters from this center are distributed across the southern Caspian Sea region. To estimate the genetic variability of the non-linear regression parameters of the growth traits, Gibbs sampling within a Bayesian analysis was used. The average body weights at the first day (BW1), eighth week (BW8), and twelfth week (BW12) were estimated as 36.05, 763.03, and 1194.98 grams, respectively. Based on the coefficient of determination, mean squared error, and Akaike information criterion, the Gompertz model was selected as the best descriptive growth function. In the Gompertz model, the estimated values of maturity weight (A), integration constant (B), and maturity rate (K) were 1734.4, 3.986, and 0.282, respectively. The direct heritability estimates reported for BW1, BW8, and BW12 were 0.378, 0.3709, 0.316, 0.389, 0.43, 0.09, and 0.07. With regard to these estimated parameters, the results of this study indicate that it is possible to improve some properties of the growth curve using appropriate selection programs.
Keywords: direct heritability, Gompertz, growth traits, maturity weight, native poultry
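As an illustration of fitting the selected model, the Python sketch below fits the Gompertz function W(t) = A*exp(-B*exp(-K*t)) by non-linear least squares; the week-4 weight is an assumed value added so the toy fit has enough points, and only BW1, BW8, and BW12 come from the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

# Gompertz growth: W(t) = A * exp(-B * exp(-K * t)), with A the maturity
# weight, B the integration constant, and K the maturity rate.
def gompertz(t, A, B, K):
    return A * np.exp(-B * np.exp(-K * t))

t = np.array([0.0, 4.0, 8.0, 12.0])            # age in weeks
w = np.array([36.05, 280.0, 763.03, 1194.98])  # weights (g); week-4 value assumed
p0 = [1700.0, 4.0, 0.2]                        # starting guess near the estimates
(A, B, K), _ = curve_fit(gompertz, t, w, p0=p0)
print(A, B, K)
```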
Procedia PDF Downloads 264
3994 A Small Graphic Lie: The Photographic Quality of Pierre Bourdieu’s Correspondence Analysis
Authors: Lene Granzau Juel-Jacobsen
Abstract:
The problem of beautification is an obvious concern of photography, which claims reference to reality, but it also lies at the very heart of social theory. As we become accustomed to sophisticated visualizations of statistical data in pace with the development of software programs, we should not only be inclined to ask new types of research questions; we also need to confront social theories based on such visualization techniques with new types of questions. Correspondence analysis, GIS analysis, social network analysis, and perceptual maps are current examples of visualization techniques popular within the social sciences and neighboring disciplines. This article discusses correspondence analysis, arguing that the graphic plot of correspondence analysis is to be interpreted much like a photograph. It refers no more evidently or univocally to reality than a photograph, representing social life no more truthfully than a photograph documents it. Pierre Bourdieu’s theoretical corpus, especially his theory of fields, relies heavily on correspondence analysis. While much attention has been directed towards critiquing the somewhat vague conceptualization of habitus, limited focus has been placed on the equally problematic concepts of social space and field. Based on a re-reading of Distinction, the article argues that these concepts rely on ‘a small graphic lie’ very similar to a photograph. Like any other piece of art, as Bourdieu himself recognized, the graphic display is a politically and morally loaded representation technique. However, correspondence analysis does not necessarily serve the purpose he intended. In fact, it tends towards the pitfalls he strove to overcome.
Keywords: data visualization, correspondence analysis, Bourdieu, field, visual representation
Procedia PDF Downloads 68
3993 Single Imputation for Audiograms
Authors: Sarah Beaver, Renee Bryce
Abstract:
Audiograms detect hearing impairment, but missing values pose problems. This work explores imputation in an attempt to improve accuracy. It implements Linear Regression, Lasso, Linear Support Vector Regression, Bayesian Ridge, K-Nearest Neighbors (KNN), and Random Forest machine learning techniques to impute audiogram frequencies ranging from 125 Hz to 8000 Hz. The data contain patients who had, or were candidates for, cochlear implants. Accuracy is compared across two different nested cross-validation k values. Over 4000 audiograms from 800 unique patients were used. Additionally, training on combined left- and right-ear audiograms is compared with training on single-ear audiograms. For the best Random Forest models, Root Mean Square Error (RMSE) values range from 4.74 to 6.37 and R² values from 0.91 to 0.96. For the best KNN models, RMSE values range from 5.00 to 7.72 and R² values from 0.89 to 0.95. The best imputation models achieved R² between 0.89 and 0.96 and RMSE values of less than 8 dB. We also show that classification predictive models performed better with our best imputation models than with constant imputations, by a two percent increase in accuracy.
Keywords: machine learning, audiograms, data imputations, single imputations
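A minimal sketch of the imputation setup, using scikit-learn's Random Forest and cross-validated RMSE, is shown below; the synthetic thresholds, correlated through a latent severity variable, stand in for the patient audiograms, which are not public.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Impute one audiogram frequency from the remaining ones with Random Forest,
# scoring by cross-validated RMSE.
rng = np.random.default_rng(0)
severity = rng.normal(50, 20, 800)                       # latent hearing loss
X_full = severity[:, None] + rng.normal(0, 8, (800, 7))  # 7 frequencies (dB HL)
y = X_full[:, 3]                        # frequency to impute (e.g., 2000 Hz)
X = np.delete(X_full, 3, axis=1)        # observed frequencies

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
print("RMSE:", -scores.mean())
```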
Procedia PDF Downloads 82
3992 Quality Assurances for an On-Board Imaging System of a Linear Accelerator: Five Months Data Analysis
Authors: Liyun Chang, Cheng-Hsiang Tsai
Abstract:
To ensure that radiation is precisely delivered to the target in cancer patients, linear accelerators are equipped with pretreatment on-board imaging systems, through which the patient setup is verified before each daily treatment. New generation radiotherapy using beam-intensity modulation, usually associated with steep dose gradients, is claimed to achieve both a higher degree of dose conformation in the targets and a further reduction of toxicity in normal tissues. However, this benefit is negated if the beam is delivered imprecisely. To avoid irradiating critical organs or normal tissues rather than the target, it is very important to carry out quality assurance (QA) of this on-board imaging system. The QA of the On-Board Imager® (OBI) system of one Varian Clinac-iX linear accelerator was performed through our procedures, modified from a relevant report and AAPM TG-142. Two image modalities of the OBI system, 2D radiography and 3D cone-beam computed tomography (CBCT), were examined. Daily and monthly QA was executed for five months in the categories of safety, geometric accuracy, and image quality. A marker phantom and a blade calibration plate were used for the QA of geometric accuracy, while the Leeds phantom and the Catphan 504 phantom were used for the QA of radiographic and CBCT image quality, respectively. The reference images were generated with a GE LightSpeed CT simulator and an ADAC Pinnacle treatment planning system. Finally, the image quality was analyzed via an OsiriX medical imaging system. For the geometric accuracy test, the average deviations of the OBI isocenter in each direction are less than 0.6 mm with uncertainties less than 0.2 mm, while all other items show displacements of less than 1 mm. For radiographic image quality, the spatial resolution is 1.6 lp/cm with contrast less than 2.2%. The spatial resolution, low contrast, and HU homogeneity of CBCT are better than 6 lp/cm, less than 1%, and within 20 HU, respectively. All tests are within the criteria, except the HU value of Teflon measured with the full-fan mode, which exceeds the suggested value; this could be due to its own high HU value and needs to be rechecked. The OBI system in our facility was thus demonstrated to be reliable, with stable image quality. QA of the OBI system is truly necessary to achieve the best treatment for a patient.
Keywords: CBCT, image quality, quality assurance, OBI
Procedia PDF Downloads 298
3991 Wireless FPGA-Based Motion Controller Design by Implementing 3-Axis Linear Trajectory
Authors: Kiana Zeighami, Morteza Ozlati Moghadam
Abstract:
Designing a high-accuracy and high-precision motion controller is one of the important issues in today’s industry. There are effective solutions available in the industry, but the real-time performance, smoothness, and accuracy of the movement can be further improved. This paper discusses a complete solution for carrying out the movement of three stepper motors in three dimensions. The objective is to provide a method to design a fully integrated System-on-Chip (SoC)-based motion controller that reduces the cost and complexity of production by incorporating a Field Programmable Gate Array (FPGA) into the design. In the proposed method, the FPGA receives its commands from a host computer via wireless internet communication and calculates the motion trajectory for the three axes. A profile generator module is designed to realize the interpolation algorithm by translating the position data into real-time pulses. This paper discusses an approach to implementing the linear interpolation algorithm, since it is one of the fundamentals of robot movement and is highly applicable in the motion control industry. Along with the full-profile trajectory, a triangular drive is implemented to eliminate errors at small distances. To combine the parallelism and real-time performance of the FPGA with the power of a Central Processing Unit (CPU) in executing complex and sequential algorithms, the NIOS II soft-core processor was added to the design. This paper presents different operating modes, such as absolute and relative positioning, reset, and velocity modes, to fulfill user requirements. The proposed approach was evaluated by designing a custom-made FPGA board along with a mechanical structure. As a result, precise and smooth movement of the stepper motors was observed, which proved the effectiveness of this approach.
Keywords: 3-axis linear interpolation, FPGA, motion controller, micro-stepping
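The following Python sketch mimics, in software, the Bresenham/DDA-style walk a hardware profile generator can use for 3-axis linear interpolation, emitting one pulse frame per master tick so all axes finish together; it illustrates the algorithm class, not the authors' FPGA implementation.

```python
# DDA/Bresenham-style 3-axis linear interpolation: each axis accumulates
# error and emits a step pulse when the accumulator overflows, so all axes
# reach the target together along a straight line.
def linear_interpolate(dx, dy, dz):
    deltas = (dx, dy, dz)
    steps = max(abs(d) for d in deltas)
    err = [0, 0, 0]
    pulses = []
    for _ in range(steps):
        frame = []
        for i, d in enumerate(deltas):
            err[i] += abs(d)
            if err[i] >= steps:          # axis i emits a step pulse this tick
                err[i] -= steps
                frame.append((i, 1 if d > 0 else -1))
        pulses.append(frame)
    return pulses

print(linear_interpolate(5, 3, -2))      # (axis, direction) pairs per tick
```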
Procedia PDF Downloads 208
3990 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining
Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride
Abstract:
In this work, we use machine learning and novel data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidities, clinical procedures, and laboratory tests is analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-Stage Liver Disease (MELD) prediction of mortality is used as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement in the area under the curve (AUC). The FSM technique itself does not improve the model significantly, but FSM together with a machine learning technique called an ensemble further improves model performance. With the abundance of data available in healthcare through electronic health records (EHRs), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements. To the best of our knowledge, this is the first work to apply modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients, and it builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning
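The evaluation step, comparing a learned model's AUC against a single-score baseline standing in for MELD, can be sketched as follows on synthetic data; the feature construction and scores are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Compare a learned model's AUC with a single-score baseline (a stand-in
# for MELD); all data here is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(2322, 20))                  # EHR-like feature matrix
risk = 0.9 * X[:, 0] + 0.4 * X[:, 1]
y = (risk + rng.normal(0, 1, 2322) > 1.0).astype(int)   # one-year mortality
baseline = risk + rng.normal(0, 1.5, 2322)              # noisier single score

X_tr, X_te, y_tr, y_te, _, base_te = train_test_split(X, y, baseline, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("model AUC:   ", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("baseline AUC:", roc_auc_score(y_te, base_te))
```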
Procedia PDF Downloads 134
3989 Construction of QSAR Models to Predict Potency on a Series of Substituted Imidazole Derivatives as Anti-Fungal Agents
Authors: Sara El Mansouria Beghdadi
Abstract:
Quantitative structure–activity relationship (QSAR) modelling is one of the main computational tools used in medicinal chemistry. Over the past two decades, the incidence of fungal infections has increased due to the development of resistance. In this study, QSAR was performed on a series of esters of 2-carboxamido-3-(1H-imidazol-1-yl)propanoic acid derivatives. These compounds have shown moderate to very good antifungal activity. Multiple linear regression (MLR) was used to generate linear 2D-QSAR models. The dataset consists of 115 compounds with their antifungal activity (log MIC) against Candida albicans (ATCC SC5314). Descriptors were calculated, and different models were generated, using ChemOffice, Avogadro, and GaussView software. The selected model was validated. The study suggests that increased lipophilicity and reduced electronic character of the substituent at R1, as well as reduced steric hindrance and the aromatic character of the substituent at R2, support potentiation of the antifungal effect. The results of QSAR could help scientists propose new compounds with higher antifungal activity, intended for immunocompromised patients susceptible to multi-resistant nosocomial infections.
Keywords: quantitative structure–activity relationship, imidazole, antifungal, Candida albicans (ATCC SC5314)
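A minimal MLR sketch of the 2D-QSAR workflow is given below; the four descriptors and their coefficients are hypothetical placeholders, since the paper's selected descriptor set is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# MLR sketch: predict log(MIC) from calculated molecular descriptors.
rng = np.random.default_rng(0)
n = 115                                       # dataset size from the abstract
X = rng.normal(size=(n, 4))                   # e.g., logP, sigma, MR, steric
log_mic = X @ np.array([-0.8, 0.3, -0.2, 0.4]) + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_mic, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("R2 (test):", round(model.score(X_te, y_te), 3))
```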
Procedia PDF Downloads 84
3988 A Mathematical Model Approach Regarding the Children’s Height Development with Fractional Calculus
Authors: Nisa Özge Önal, Kamil Karaçuha, Göksu Hazar Erdinç, Banu Bahar Karaçuha, Ertuğrul Karaçuha
Abstract:
The study aims to use a mathematical approach based on fractional calculus, developed to continuously analyze the factors related to children’s height development. Tracking the development of the child is becoming more important and meaningful. Knowing and determining the factors related to the physical development of a child at any desired time would provide better, more reliable, and more accurate results for childcare. In this frame, 7 height percentile curves (3rd, 10th, 25th, 50th, 75th, 90th, and 97th) for Turkey are used. By using discrete height data of children aged 0-18 years and the least squares method, a continuous curve is developed that is valid for any time interval. By doing so, at any desired instant, it is possible to find the percentile and location of the child in the percentile chart. Here, with the help of fractional calculus theory, a mathematical model is developed. The outcomes of the proposed approach are quite promising compared to the linear and polynomial methods. The approach also predicts the expected height values of children.
Keywords: children growth percentile, children physical development, fractional calculus, linear and polynomial model
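The least-squares step of turning discrete percentile data into a continuous curve can be sketched as below with an ordinary polynomial; note that the paper's actual model is fractional-calculus based, and the age/height pairs here are illustrative, not the Turkish reference values.

```python
import numpy as np

# Least-squares fit of a continuous curve to discrete percentile data;
# a plain cubic polynomial is used here purely for illustration.
ages = np.array([0, 2, 5, 10, 15, 18])           # years
height = np.array([50, 87, 110, 138, 167, 176])  # cm, 50th percentile (made up)
coeffs = np.polyfit(ages, height, deg=3)         # cubic least-squares fit
print(np.polyval(coeffs, 7.5))                   # height estimate at age 7.5
```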
Procedia PDF Downloads 148
3987 High Order Block Implicit Multi-Step (HOBIM) Methods for the Solution of Stiff Ordinary Differential Equations
Authors: J. P. Chollom, G. M. Kumleng, S. Longwap
Abstract:
The search for higher-order A-stable linear multi-step methods has been of interest to many numerical analysts and has been realized through either higher derivatives of the solution or the insertion of additional off-step points, super-future points, and the like. These methods are suitable for the solution of stiff differential equations, which exhibit characteristics that place a severe restriction on the choice of step size; it becomes necessary that only methods with large regions of absolute stability remain suitable for such equations. In this paper, high-order block implicit multi-step methods of the hybrid form, up to order twelve, have been constructed using the multi-step collocation approach by inserting one or more off-step points in the multi-step method. The accuracy and stability properties of the new methods are investigated and are shown to yield A-stable methods, a property desirable in methods suitable for the solution of stiff ODEs. The new high-order block implicit multistep methods, used as block integrators, are tested on stiff differential systems, and the results reveal that the new methods are efficient and compete favourably with the state-of-the-art MATLAB ode23 code.
Keywords: block linear multistep methods, high order, implicit, stiff differential equations
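Why A-stability matters for stiff problems can be seen on the scalar test equation y' = λy: an implicit step remains stable for any step size, while an explicit step at the same h diverges. The sketch below uses backward Euler as the simplest implicit stand-in, not the paper's block methods.

```python
# Stiff test equation y' = lam*y with lam = -1000 and a "large" step h = 0.1:
# backward Euler (implicit, A-stable) contracts for any h > 0, while forward
# Euler's growth factor |1 + h*lam| = 99 makes it diverge at the same h.
lam, h = -1000.0, 0.1
y_imp, y_exp = 1.0, 1.0
for _ in range(5):
    y_imp = y_imp / (1.0 - h * lam)   # backward Euler update
    y_exp = (1.0 + h * lam) * y_exp   # forward Euler update
print(y_imp, y_exp)                   # decays toward 0 versus blows up (~ -9.5e9)
```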
Procedia PDF Downloads 358
3986 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction
Authors: Yan Zhang
Abstract:
Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries, such as manufacturing, utilities, and aerospace, along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning-based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated by field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. We outline the contributions of this study in four aspects. First, we compare predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. When evolving into the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production and maintenance efficiency via any maintenance-related task. It covers a variety of topics, including but not limited to failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine or device to transmit data all the way to the Cloud for storage and advanced analytics. These technologies vary drastically, mainly depending on the power source and functionality of the devices. For example, a consumer machine such as an elevator uses completely different data transmission protocols from the sensor units in an environmental sensor network: the former may transfer data to the Cloud via WiFi directly, while the latter usually uses the radio communication inherent to the network, with the data stored in a staging data node before it can be transmitted to the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults and failures. By showing a step-by-step process of data labeling, feature engineering, model construction, and evaluation, we share the following experiences: (1) the specific data quality issues that have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when the training data contains inter-dependent records. Fourth, we review the tools available to build such a data pipeline that digests the data and produces insights, including the tools we use for data injection, streaming data processing, machine learning model training, and the tool that coordinates and schedules different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study: (1) it summarizes the landscape and challenges of predictive maintenance applications; (2) it takes an aerospace example with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.
Keywords: Internet of Things, machine learning, predictive maintenance, streaming data
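The labeling step for run-to-failure data (such as the aircraft engine dataset mentioned above) can be sketched as follows; the column names are assumptions, not the dataset's exact schema.

```python
import pandas as pd

# Label a cycle 1 if the engine fails within the next `horizon` cycles,
# based on remaining useful life (RUL).
def label_failure_within(df, horizon=30):
    max_cycle = df.groupby("unit")["cycle"].transform("max")
    rul = max_cycle - df["cycle"]          # cycles left before failure
    return (rul <= horizon).astype(int)

df = pd.DataFrame({"unit": [1, 1, 1, 1], "cycle": [1, 2, 3, 4]})
df["label"] = label_failure_within(df, horizon=2)
print(df)   # cycles with RUL <= 2 are labeled 1
```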
Procedia PDF Downloads 386
3985 Deficits in Perceptual and Musical Memory in Individuals with Major Depressive Disorder
Authors: Toledo-Fernandez Aldebaran
Abstract:
Introduction: One of the least explored cognitive functions in relation to depression is the processing of musical stimuli. Music perception and memory can become impaired as well. The term amusia denotes a type of agnosia, caused by damage to basic processes, that creates a general inability to perceive music. Therefore, the main objective is to explore performance-based and self-reported deficits in music perception and memory in people with major depressive disorder (MDD). Method: Data were collected from April to October 2021 by recruiting people who met the eligibility criteria and using the Montreal Battery of Evaluation of Amusia (MBEA) to evaluate performance-based music perception and memory, along with the depression module of the Mini International Neuropsychiatric Interview and the Amusic Dysfunction Inventory (ADI), which assesses participants' self-reported abilities in music perception. Results: 64 participants were evaluated. The main analysis, of the differences between people with MDD and the control group, showed only one statistically significant difference, on the Interval subtest of the MBEA. No difference was found in the dimensions assessed by the ADI. Conclusion: Deficits in interval perception may be explained by mental fatigue, to which people with depression are more vulnerable, rather than by specific deficits in musical perception and memory associated with depressive disorder. Additionally, significant associations were found between musical deficits observed in performance-based testing and musical dysfunction according to self-report, which could suggest that some people with depression are capable of detecting these deficits in themselves.
Keywords: depression, amusia, music, perception, memory
Procedia PDF Downloads 64
3984 Two Efficient Heuristic Algorithms for the Integrated Production Planning and Warehouse Layout Problem
Authors: Mohammad Pourmohammadi Fallah, Maziar Salahi
Abstract:
In the literature, a mixed-integer linear programming model for the integrated production planning and warehouse layout problem has been proposed. To solve the model, its authors proposed a Lagrangian relax-and-fix heuristic that takes a significant amount of time and stops with gaps above 5% for large-scale instances. Here, we present two heuristic algorithms to solve the problem. In the first, we use a greedy approach: warehouse locations with lower reservation costs and lower transportation costs (from the production area to the locations and from the locations to the output point) are allocated to items with higher demands, and then a smaller model is solved. In the second heuristic, we first sort items in descending order according to the ratio of the sum of an item's demands over the time horizon plus its maximum demand over the horizon to the total demand over the time horizon. We then partition the sorted items into groups of 3, 4, or 5 and solve a small-scale optimization problem for each group, hoping to improve the solution of the first heuristic. Our preliminary numerical results show the effectiveness of the proposed heuristics.
Keywords: capacitated lot-sizing, warehouse layout, mixed-integer linear programming, heuristic algorithms
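The ordering step of the second heuristic, under one reading of the sorting fraction (item demand total plus item maximum, relative to total demand), can be sketched as below with hypothetical demands.

```python
# Ordering step of the second heuristic (one reading of the sorting key):
# rank items by (total demand over the horizon + maximum single-period
# demand) relative to the total demand, then process in groups of 3-5.
demands = {  # hypothetical item -> demand per period
    "A": [4, 9, 2], "B": [7, 7, 7], "C": [1, 2, 1], "D": [5, 0, 8],
}
grand_total = sum(sum(d) for d in demands.values())
key = lambda item: (sum(demands[item]) + max(demands[item])) / grand_total
order = sorted(demands, key=key, reverse=True)

groups = [order[i:i + 3] for i in range(0, len(order), 3)]  # groups of 3
print(order, groups)
```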
Procedia PDF Downloads 196
3983 Pushover Analysis of a Typical Bridge Built in Central Zone of Mexico
Authors: Arturo Galvan, Jatziri Y. Moreno-Martinez, Daniel Arroyo-Montoya, Jose M. Gutierrez-Villalobos
Abstract:
Bridges are among the most seismically vulnerable structures in highway transportation systems. The general process for assessing the seismic vulnerability of a bridge involves the evaluation of its overall capacity and demand. One of the most common procedures to obtain this capacity is pushover analysis of the structure. Typically, the bridge capacity is assessed using non-linear static methods or non-linear dynamic analyses; the non-linear dynamic approaches use step-by-step numerical solutions for assessing the capacity, with the inconvenience of consuming computer time. In this study, a nonlinear static analysis (‘pushover analysis’) was performed to predict the collapse mechanism of a typical bridge built in the central zone of Mexico (Celaya, Guanajuato). The bridge superstructure consists of three simply supported spans with a total length of 76 m: extreme spans of 22 m and a central span of 32 m. The deck width is 14 m, and the concrete slab depth is 18 cm. The bridge is supported by frames of five piers with hollow box-shaped sections; these piers are 7.05 m in height and 1.20 m in diameter. The numerical model was created using commercial software, considering linear and non-linear elements. In all cases, the piers were represented by frame-type elements with geometric properties obtained from the structural project and construction drawings of the bridge. The deck was modeled with a mesh of rectangular thin shell (plate bending and stretching) finite elements. A moment-curvature analysis was performed for the pier sections, considering in each pier the effect of confined concrete and its reinforcing steel. In this way, plastic hinges were defined at the bases of the piers to carry out the pushover analysis. In addition, time-history analyses were performed using 19 accelerograms of real earthquakes registered in Guanajuato; in this way, the displacements produced in the bridge were determined. Finally, pushover analysis was applied through displacement control of the piers to obtain the overall capacity of the bridge before failure occurs. It was concluded that the lateral deformations of the piers under a critical earthquake in this zone are almost imperceptible, owing to the geometry and reinforcement demanded by current design standards; compared with the displacement capacity of the piers, these design provisions are excessive. According to the analysis, the frames built with five piers increase the rigidity in the transverse direction of the bridge. Hence, it is proposed to reduce these frames from five piers to three, maintaining the same geometric characteristics and the same reinforcement in each pier; the mechanical properties of the materials (concrete and reinforcing steel) were also maintained. Once a pushover analysis was performed for this configuration, it was concluded that the bridge would continue to show “correct” seismic behavior, at least for the 19 accelerograms considered in this study. In this way, costs in material, construction, time, and labor would be reduced in this case study.
Keywords: collapse mechanism, moment-curvature analysis, overall capacity, push-over analysis
Procedia PDF Downloads 151
3982 A Multi-Dimensional Neural Network Using the Fisher Transform to Predict the Price Evolution for Algorithmic Trading in Financial Markets
Authors: Cristian Pauna
Abstract:
Trading the financial markets is a widespread activity today. A large number of investors, companies, and public or private funds buy and sell every day in order to make a profit. Algorithmic trading has been the prevalent method of making trade decisions since the advent of electronic trading: orders are sent almost instantly by computers using mathematical models. This paper presents a price prediction methodology based on a multi-dimensional neural network. Using the Fisher transform, the neural network is instructed through a low-latency, auto-adaptive process in order to predict the price evolution for the next period of time. The model is designed especially for algorithmic trading and uses real-time price series. It was found that the characteristics of the Fisher transform applied at the node scale level can generate reliable trading signals using the neural network methodology. After real-time tests, it was found that this method can be applied in any timeframe to trade the financial markets. The paper also includes the steps to implement the presented methodology in an automated trading system. Real trading results are displayed and analyzed in order to qualify the model. In conclusion, the compared results reveal that the neural network methodology, applied together with the Fisher transform at the node level, can generate good price predictions and build reliable trading signals for algorithmic trading.
Keywords: algorithmic trading, automated trading systems, financial markets, high-frequency trading, neural network
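The Fisher transform itself is standard: prices normalized into (-1, 1) are mapped through F(x) = 0.5*ln((1+x)/(1-x)), which makes the distribution closer to Gaussian before the values reach the network inputs. A minimal sketch, with window handling simplified:

```python
import numpy as np

# Fisher transform of prices normalized into (-1, 1).
def fisher(x, eps=1e-6):
    x = np.clip(x, -1 + eps, 1 - eps)      # keep the log argument finite
    return 0.5 * np.log((1 + x) / (1 - x))

prices = np.array([101.0, 102.5, 101.8, 103.2, 104.0])
lo, hi = prices.min(), prices.max()
normalized = 2 * (prices - lo) / (hi - lo) - 1   # map the window to [-1, 1]
print(fisher(normalized))
```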
Procedia PDF Downloads 160
3981 Increasing the Frequency of Laser Impulses with Optical Choppers with Rotational Shafts
Authors: Virgil-Florin Duma, Dorin Demian
Abstract:
Optical choppers are among the most common optomechatronic devices, utilized in numerous applications from radiometry to telescopes and biomedical imaging. The classical configuration has a rotating disk with windows with linear margins. This research points out the laser signals that can be obtained with these classical choppers, as well as with another, novel, patented configuration: eclipse choppers (i.e., rotating disks with windows with non-linear margins, oriented outwards or inwards). Approximately triangular laser signals can be obtained with eclipse choppers, in contrast to the approximately sinusoidal signals obtained with classical devices. The main topic of this work is another novel device: choppers with shafts of different shapes and with slits of various profiles (patent pending). A significant improvement over disk choppers concerns the chop frequency of the laser signals: while 1 kHz is a typical limit for disk choppers, a more than 20-fold increase in chop frequency can be obtained with choppers with shafts. Their transmission functions are also discussed, for different types of laser beams. Acknowledgments: This research is supported by the Romanian National Authority for Scientific Research through the project PN-III-P2-2.1-BG-2016-0297.
Keywords: laser signals, laser systems, optical choppers, optomechatronics, transfer functions, eclipse choppers, choppers with shafts
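For a disk chopper, the chop frequency is simply the number of windows times the rotation rate, which makes the quoted 1 kHz limit easy to see; the window count and rpm below are illustrative values.

```python
# Chop frequency of a disk chopper: f = N_windows * (rpm / 60).
def chop_frequency(n_windows, rpm):
    return n_windows * rpm / 60.0

print(chop_frequency(100, 600))   # 100 windows at 600 rpm -> 1000 Hz (1 kHz)
```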
Procedia PDF Downloads 191
3980 Results of Longitudinal Assessments of Very Low Birth Weight and Extremely Low Birth Weight Infants
Authors: Anett Nagy, Anna Maria Beke, Rozsa Graf, Magda Kalmar
Abstract:
Premature birth involves developmental risks: the earlier the baby is born and the lower its birth weight, the higher the risks. The developmental outcomes of immature, low birth weight infants are hard to predict. Our aim is to identify the factors influencing development in infancy and at preschool age in very low birth weight (VLBW) and extremely low birth weight (ELBW) preterms. Sixty-one subjects participated in our longitudinal study: thirty VLBW and thirty-one ELBW children. The psychomotor development of the infants was assessed using the Brunet-Lezine Developmental Scale at the corrected ages of one and two years; at three years of age, they were tested with the WPPSI-IV IQ test. Birth weight, gestational age, perinatal complications, gender, and maternal education were added to the data analysis as independent variables. According to our assessments, our subjects as a group scored in the average range on each subscale of the Brunet-Lezine Developmental Scale. The scores were lowest in language at both measurement points. The children's performance improved between one and two years of age, particularly in the domain of coordination. At three years of age, the mean IQ results, although still in the average range, were near its low end in each index. The ELBW preterms performed significantly more poorly on the Perceptual Reasoning Index. The developmental level at two years predicted the IQ better than that at one year. None of the measures distinguished the genders.
Keywords: preterm, extremely low birth weight, perinatal complication, psychomotor development, intelligence, follow-up
Procedia PDF Downloads 244
3979 Analysis of Vocal Fold Vibrations from High-Speed Digital Images Based on Dynamic Time Warping
Authors: A. I. A. Rahman, Sh-Hussain Salleh, K. Ahmad, K. Anuar
Abstract:
Analysis of vocal fold vibration is essential for understanding the mechanism of voice production and for improving the clinical assessment of voice disorders. This paper presents a Dynamic Time Warping (DTW)-based approach to analyze and objectively classify vocal fold vibration patterns. The proposed technique was designed and implemented on a glottal area waveform (GAW) extracted from high-speed laryngeal images by delineating the glottal edges in each image frame. Feature extraction from the GAW was performed using Linear Predictive Coding (LPC). Several types of voice reference templates, from simulations of clear, breathy, fry, pressed, and hyperfunctional voice production, were used. The patterns of the reference templates were first verified using the analytic signal generated through the Hilbert transform of the GAW. Samples from normal speakers' voice recordings were then used to evaluate and test the effectiveness of this approach. The classification of the voice patterns using LPC and DTW gave an accuracy of 81%.
Keywords: dynamic time warping, glottal area waveform, linear predictive coding, high-speed laryngeal images, Hilbert transform
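A minimal DTW implementation over per-frame LPC feature vectors, with nearest-template classification, is sketched below; the random arrays stand in for the real GAW-derived features and reference templates.

```python
import numpy as np

# Minimal DTW distance between two sequences of per-frame feature vectors.
def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Classify a GAW feature sequence by its nearest reference template.
rng = np.random.default_rng(0)
templates = {"clear": rng.random((40, 12)), "breathy": rng.random((40, 12))}
sample = rng.random((35, 12))          # e.g., 12 LPC coefficients per frame
print(min(templates, key=lambda k: dtw(sample, templates[k])))
```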
Procedia PDF Downloads 239
3978 Load Forecasting Using Neural Network Integrated with Economic Dispatch Problem
Authors: Mariyam Arif, Ye Liu, Israr Ul Haq, Ahsan Ashfaq
Abstract:
The high cost of fossil fuels and the intensifying installation of alternative energy generation sources are major challenges in power systems, making accurate load forecasting an important and challenging task for optimal energy planning and management on both the distribution and generation sides. There are many techniques to forecast load, but each comes with its own limitations and requires sufficient data to predict the load accurately. The Artificial Neural Network (ANN) is one such technique for efficient load forecasting. A comparison between two different ranges of input datasets was applied to a dynamic ANN technique using the MATLAB Neural Network Toolbox. It was observed that the selection of input data for training a network has significant effects on the forecast results: day-wise input data forecast the load more accurately than year-wise input data. The forecasted load is then distributed among six generators by using linear programming to obtain the optimal generation point. The algorithm is verified by comparing the results for each generator with their respective generation limits.
Keywords: artificial neural networks, demand-side management, economic dispatch, linear programming, power generation dispatch
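The dispatch step can be sketched with scipy's linear programming solver; the cost coefficients and generator limits below are hypothetical, and a real economic dispatch often uses quadratic costs instead of the linear ones assumed here.

```python
import numpy as np
from scipy.optimize import linprog

# Distribute a forecasted load among six generators by linear programming.
cost = np.array([10, 12, 15, 9, 11, 14])          # $/MWh per generator
p_min = np.array([50, 40, 30, 60, 45, 35])        # MW lower limits
p_max = np.array([200, 180, 150, 220, 190, 160])  # MW upper limits
forecast_load = 700.0                             # MW, from the ANN forecaster

res = linprog(
    c=cost,
    A_eq=np.ones((1, 6)), b_eq=[forecast_load],   # total generation = load
    bounds=list(zip(p_min, p_max)),
    method="highs",
)
print(res.x, res.fun)                             # dispatch and total cost
```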
Procedia PDF Downloads 189