Search results for: random deviation
2647 Evaluation of Spatial Correlation Length and Karhunen-Loeve Expansion Terms for Predicting Reliability Level of Long-Term Settlement in Soft Soils
Authors: Mehrnaz Alibeikloo, Hadi Khabbaz, Behzad Fatahi
Abstract:
The spectral random field method is one of the most widely used methods to obtain more reliable and accurate results in geotechnical problems involving material variability. The Karhunen-Loeve (K-L) expansion method was applied to perform random field discretization of cross-correlated creep parameters. The Karhunen-Loeve expansion is based on the eigenfunctions and eigenvalues of the covariance function, obtained by solving a kernel integral equation. In this paper, the accuracy of the Karhunen-Loeve expansion was investigated for predicting the long-term settlement of soft soils, adopting an elastic visco-plastic creep model. For this purpose, a parametric study was carried out to evaluate the effect of the number of K-L expansion terms and the spatial correlation length on the reliability of the results. The results indicate that small values of spatial correlation length require more K-L expansion terms. Moreover, by increasing the spatial correlation length, the coefficient of variation (COV) of creep settlement increases, yielding a more conservative and therefore safer prediction.
Keywords: Karhunen-Loeve expansion, long-term settlement, reliability analysis, spatial correlation length
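As background to the discretization step described in this abstract, the following is a minimal illustrative sketch (not taken from the paper) of a discrete Karhunen-Loeve expansion of a one-dimensional Gaussian random field; the exponential covariance, grid, variance and correlation length values are assumptions chosen only for demonstration.

```python
import numpy as np

# Illustrative sketch (not from the paper): discrete K-L expansion of a 1-D Gaussian
# random field with an assumed exponential covariance C(z1, z2) = s^2 * exp(-|z1-z2|/lc).
def kl_realization(n_points=100, depth=10.0, sigma=0.3, corr_len=2.0, n_terms=20, seed=0):
    z = np.linspace(0.0, depth, n_points)
    cov = sigma**2 * np.exp(-np.abs(z[:, None] - z[None, :]) / corr_len)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigen-decomposition of the covariance matrix
    idx = np.argsort(eigvals)[::-1][:n_terms]     # keep the n_terms largest modes
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_terms)             # independent standard normal variables
    field = eigvecs[:, idx] @ (np.sqrt(np.maximum(eigvals[idx], 0.0)) * xi)
    return z, field                               # one zero-mean realization of the field

z, field = kl_realization()
print(field[:5])
```

Smaller correlation lengths spread the eigenvalue spectrum, which is why more expansion terms are needed, consistent with the abstract's finding.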
Procedia PDF Downloads 159
2646 Random Subspace Neural Classifier for Meteor Recognition in the Night Sky
Authors: Carlos Vera, Tetyana Baydyk, Ernst Kussul, Graciela Velasco, Miguel Aparicio
Abstract:
This article describes the Random Subspace Neural Classifier (RSC) for the recognition of meteors in the night sky. We used images of meteors entering the atmosphere at night between 8:00 p.m. and 5:00 a.m. The objective of this project is to classify meteor and star images (with stars as the image background). The monitoring of the sky and the classification of meteors are made for future applications by scientists. The image database was collected from different websites. We worked with RGB images with dimensions of 220x220 pixels stored in bitmap (BMP) format. Subsequent window scanning and processing were carried out for each image. The scanning window from which the features were extracted had a size of 20x20 pixels and a scanning step of 10 pixels. Brightness, contrast and contour orientation histograms were used as inputs for the RSC. The RSC worked with two classes, classifying images into: 1) with meteors and 2) without meteors. Different tests were carried out by varying the number of training cycles and the number of images for training and recognition. The percentage error for the neural classifier was calculated. The results show a good RSC classifier response with 89% correct recognition. The results of these experiments are presented and discussed.
Keywords: contour orientation histogram, meteors, night sky, RSC neural classifier, stars
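The window-scanning step described above can be illustrated with a minimal sketch (not the authors' code); the 20x20 window and 10-pixel step follow the abstract, while the simple brightness/contrast/histogram features below are a simplified stand-in for the full feature set.

```python
import numpy as np

# Illustrative sketch (not from the paper): scan a grayscale image with a 20x20 window
# and a 10-pixel step, extracting simple per-window features.
def window_features(image, win=20, step=10):
    h, w = image.shape
    feats = []
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            patch = image[y:y + win, x:x + win]
            brightness = patch.mean()                 # mean intensity of the window
            contrast = patch.std()                    # intensity spread as a contrast proxy
            hist, _ = np.histogram(patch, bins=8, range=(0, 255))
            feats.append(np.concatenate(([brightness, contrast], hist)))
    return np.array(feats)

image = np.random.randint(0, 256, (220, 220)).astype(float)  # stand-in for a 220x220 frame
print(window_features(image).shape)                           # one feature row per window
```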
Procedia PDF Downloads 138
2645 Motion Detection Method for Clutter Rejection in the Bio-Radar Signal Processing
Authors: Carolina Gouveia, José Vieira, Pedro Pinho
Abstract:
Cardiopulmonary signal monitoring, without the use of contact electrodes or any type of in-body sensors, has several applications, such as sleep monitoring and continuous monitoring of vital signs in bedridden patients. This system also has applications in the vehicular environment to monitor the driver, in order to avoid possible accidents in case of cardiac failure. Thus, the bio-radar system proposed in this paper can measure vital signs accurately by using the Doppler effect principle, which relates the received signal properties to the change in distance between the radar antennas and the person’s chest wall. Since the bio-radar aims to monitor subjects in real time and over long periods, it is impossible to guarantee patient immobilization; hence, their random motion will interfere with the acquired signals. In this paper, a mathematical model of the bio-radar is presented, as well as its simulation in MATLAB. The algorithm used for breath rate extraction is explained, and a method for DC offset removal based on a motion detection system is proposed. Furthermore, experimental tests were conducted with a view to proving that the unavoidable random motion can be used to estimate the DC offsets accurately and thus remove them successfully.
Keywords: bio-signals, DC component, Doppler effect, ellipse fitting, radar, SDR
Procedia PDF Downloads 140
2644 Spatial Heterogeneity of Urban Land Use in the Yangtze River Economic Belt Based on DMSP/OLS Data
Authors: Liang Zhou, Qinke Sun
Abstract:
Taking the Yangtze River Economic Belt as an example, and using long-term nighttime light data from DMSP/OLS from 1992 to 2012, support vector machine (SVM) classification was used to quantitatively extract the urban built-up areas of the economic belt, and spatial analysis tools such as the expansion intensity index and the standard deviation ellipse were introduced. The model provides a detailed and in-depth discussion of the strength, direction, and type of expansion in the upper, middle and lower reaches of the economic belt and its key node cities. The results show that: (1) From 1992 to 2012, the built-up areas of the major cities in the Yangtze River Valley showed a rapid expansion trend. The built-up area expanded by 60,392 km², with an average annual expansion rate of 31%, growing from 9,615 km² in 1992 to 70,007 km² in 2012. The spatial gradient analysis of the watershed shows that the expansion of urban built-up areas in the basin is led by Shanghai downstream and declines along an 'upstream-downstream-midstream' pattern: the average annual expansion rates are 36% and 35% upstream and downstream, respectively, while the midstream rate of 17% is about 50% of that of the upstream and downstream reaches. (2) The analysis of expansion intensity shows that urban expansion intensity in the Yangtze River Basin has generally shown an upward trend; the downstream region has continued to rise, while the upper and middle reaches have experienced fluctuations of different amplitudes. A further analysis of the expansion intensity of key node cities shows that Chengdu, Chongqing, and Wuhan in the upper and middle reaches maintain a high degree of consistency with the intensity of regional expansion, while the node cities downstream, with Shanghai as the core, continue to maintain a high level of expansion. (3) The standard deviation ellipse analysis shows that the overall centre of gravity of the Yangtze River basin cities is located in Anqing City, Anhui Province, and moved back and forth between 1992 and 2012. The distribution range of the nighttime-light standard deviation ellipse increased from 61.96 km² to 76.52 km². The growth of the major axis of the ellipse was significantly larger than that of the minor axis, giving an obvious east-west axiality, with the nighttime lights of the downstream area occupying the leading position in the luminosity-scale urban system.
Keywords: urban space, support vector machine, spatial characteristics, night lights, Yangtze River Economic Belt
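For readers unfamiliar with the standard deviation ellipse used above, the following is a hedged sketch of one common formulation of a weighted SDE (conventions for the rotation angle and axis scaling vary between GIS packages); the point coordinates and weights are made up and do not come from the study.

```python
import numpy as np

# Illustrative sketch (one common SDE formulation, not the study's exact procedure):
# a weighted standard deviation ellipse, e.g. for lit pixels weighted by light intensity.
def std_dev_ellipse(x, y, w):
    xm, ym = np.average(x, weights=w), np.average(y, weights=w)   # weighted mean centre
    dx, dy = x - xm, y - ym
    a = np.sum(w * dx**2) - np.sum(w * dy**2)
    b = np.sqrt(a**2 + 4.0 * np.sum(w * dx * dy)**2)
    theta = np.arctan2(a + b, 2.0 * np.sum(w * dx * dy))          # rotation of the major axis
    sx = np.sqrt(2.0 * np.sum(w * (dx*np.cos(theta) - dy*np.sin(theta))**2) / np.sum(w))
    sy = np.sqrt(2.0 * np.sum(w * (dx*np.sin(theta) + dy*np.cos(theta))**2) / np.sum(w))
    return (xm, ym), theta, sx, sy                                # centre, orientation, semi-axes

rng = np.random.default_rng(1)
x, y, w = rng.normal(0, 3, 500), rng.normal(0, 1, 500), rng.random(500)
print(std_dev_ellipse(x, y, w))
```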
Procedia PDF Downloads 114
2643 Stock Prediction and Portfolio Optimization Thesis
Authors: Deniz Peksen
Abstract:
This thesis aims to predict the trend movement of the closing price of stocks and to maximize a portfolio by utilizing the predictions. In this context, the study aims to define a stock portfolio strategy from models created using Logistic Regression, Gradient Boosting and Random Forest. Recently, predicting the trend of the stock price has gained a significant role in making buy and sell decisions and generating returns with investment strategies formed by machine-learning-based decisions. There are plenty of studies in the literature on the prediction of stock prices in capital markets using machine learning methods, but most of them focus on closing prices instead of the direction of the price trend. Our study differs from the literature in terms of target definition: ours is a classification problem focused on the market trend in the next 20 trading days. To predict the trend direction, fourteen years of data were used for training, the following three years were used for validation, and the last three years were used for testing. Training data span 2002-06-18 to 2016-12-30, validation data 2017-01-02 to 2019-12-31, and testing data 2020-01-02 to 2022-03-17. We define the Hold Stock Portfolio, the Best Stock Portfolio and the USD-TRY exchange rate as benchmarks which we should outperform. We compared our machine-learning-based portfolio return on test data with the returns of the Hold Stock Portfolio, the Best Stock Portfolio and the USD-TRY exchange rate. We assessed model performance with the help of the ROC-AUC score and lift charts. We used Logistic Regression, Gradient Boosting and Random Forest with a grid search approach to fine-tune hyper-parameters. As a result of the empirical study, the existence of uptrends and downtrends of five stocks could not be predicted by the models. When we used these predictions to define buy and sell decisions in order to generate a model-based portfolio, the model-based portfolio failed on the test dataset. It was found that model-based buy and sell decisions generated a stock portfolio strategy whose returns could not outperform non-model portfolio strategies on the test dataset. We found that any effort at predicting the trend formulated on the stock price is a challenge. Our findings agree with the Random Walk Theory, which says that stock prices or price changes are unpredictable. Our model iterations failed on the test dataset: although we built several good models on the validation dataset, we failed on the test dataset. We implemented Random Forest, Gradient Boosting and Logistic Regression and discovered that the complex models did not provide an advantage or additional performance when compared with Logistic Regression; more complexity did not lead to better performance. Using a complex model is not the answer to the stock-related prediction problem. Our approach was to predict the trend instead of the price, which converted our problem into classification. However, this label approach did not solve the stock prediction problem, nor does it deny or refute the Random Walk Theory for stock prices.
Keywords: stock prediction, portfolio optimization, data science, machine learning
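The grid-search workflow described in this abstract can be sketched as follows (not the thesis' actual code); the feature matrix, labels and parameter grid are assumptions, and time-ordered splits are used as one reasonable way to avoid look-ahead leakage.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Illustrative sketch (not from the thesis): grid search over a logistic-regression
# classifier predicting the 20-day-ahead trend (up = 1, down = 0) from hypothetical
# price-derived features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))              # stand-in features (returns, momentum, ...)
y = (rng.random(1000) > 0.5).astype(int)    # stand-in trend labels

grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    scoring="roc_auc",                       # the study assesses models with ROC-AUC
    cv=TimeSeriesSplit(n_splits=5),          # time-ordered splits for temporal data
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```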
Procedia PDF Downloads 80
2642 Erectile Function and Heart Rate Variability in Men under 40 Years Old
Authors: Rui Miguel Costa, Jose Pestana, David Costa, Paula Mangia, Catarina Correia, Mafalda Pinto Coelho
Abstract:
There is a lack of studies examining the relation of different heart rate variability (HRV) parameters to the risk of erectile dysfunction (ED) in younger men. Thus, the present study aimed at examining, in a nonclinical sample of men aged 19-39 years old (mean age = 23.98 years, SD = 4.90), the relations of risk of ED with the standard deviation of the heart rate (SD of HR), high- and low-frequency power of HRV, and the low-to-high frequency HRV ratio. Eighty-three heterosexual Portuguese men completed the 5-item version of the International Index of Erectile Function (IIEF-5), and HRV parameters were calculated from a 5-minute resting period. Risk of ED was determined by IIEF-5 scores of 21 or less. Fifteen men (18.1%) reported symptoms of ED (14 with mild and one with mild to moderate symptoms). Univariate analyses of variance revealed that risk of ED was related to lesser SD of HR and lesser low-frequency power, the two HRV parameters that express a coupling of higher vagal and sympathetic tone. Risk of ED was unrelated to high-frequency power and the low-to-high frequency HRV ratio. Further, in a logistic regression, the risk of ED was independently predicted by older age and lower SD of HR, but not by low-frequency power, having a regular sexual partner, or cohabiting. The results provide preliminary evidence that, in younger men, a coupling of higher vagal and sympathetic tone, as indexed by the SD of HR, is important for erections. Greater resting SD of HR might reflect better vascular and interpersonal function via vagal tone coupled with greater motor mobilization to pursue sexual intercourse via sympathetic tone. Many interventions can elevate HRV; future research is warranted on how they can be tailored to treat ED in younger men.
Keywords: erectile dysfunction, heart rate variability, standard deviation of the heart rate, younger men
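The HRV parameters named in this abstract can be computed from a resting RR-interval series along the following lines (a hedged sketch, not the study's analysis pipeline); the simulated RR series and the standard LF/HF band limits are assumptions.

```python
import numpy as np
from scipy.signal import welch

# Illustrative sketch (not from the study): SD of heart rate and LF/HF spectral power
# from a ~5-minute series of RR intervals (in seconds).
def hrv_metrics(rr, fs_interp=4.0):
    t = np.cumsum(rr)
    grid = np.arange(t[0], t[-1], 1.0 / fs_interp)
    rr_i = np.interp(grid, t, rr)                    # evenly resampled RR series
    hr = 60.0 / rr_i                                 # instantaneous heart rate (bpm)
    sd_hr = hr.std(ddof=1)                           # standard deviation of heart rate
    f, pxx = welch(rr_i - rr_i.mean(), fs=fs_interp, nperseg=256)
    lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
    hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
    return sd_hr, lf, hf, lf / hf

rr = 0.8 + 0.05 * np.random.default_rng(2).standard_normal(375)   # simulated beats
print(hrv_metrics(rr))
```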
Procedia PDF Downloads 276
2641 An Authentic Algorithm for Ciphering and Deciphering Called Latin Djokovic
Authors: Diogen Babuc
Abstract:
The motivating question of this work is how many devote themselves to discovering something in the world of science, where much is discerned and revealed, but at the same time much remains unknown. Methods: The insightful elements of this algorithm are the ciphering and deciphering algorithms of Playfair, Caesar, and Vigenère. Only a few of their main properties are taken and modified, with the aim of forming the specific functionality of the algorithm called Latin Djokovic. Specifically, a string is entered as input data. A key k is given, with a random value between a and b = a+3. The obtained value is stored in a variable so that it remains constant during the run of the algorithm. According to the given key, the string is divided into several substrings, each of length k characters. The next step involves encoding each substring from the list of existing substrings. Encoding is based on the Caesar algorithm, i.e., shifting by k characters. However, the shift is incremented by 1 when moving to the next substring in the list, and when its value becomes greater than b+1, it returns to its initial value. The algorithm is executed, following the same procedure, until the last substring in the list is traversed. Results: Using this polyalphabetic method, ciphering and deciphering of strings are achieved. The algorithm also works for a 100-character string. The character 'x' is not used as padding when the number of characters in a substring does not match the expected length. The algorithm is simple to implement, but it is questionable whether it works better than the other methods in terms of execution time and storage space.
Keywords: ciphering, deciphering, authentic, algorithm, polyalphabetic cipher, random key, methods comparison
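A minimal sketch of the ciphering procedure, as read from the abstract (this is my interpretation, not the authors' reference implementation; the alphabet handling and wrap condition are assumptions):

```python
import random
import string

# Sketch of the Latin Djokovic encryption as described in the abstract (assumptions noted):
# split the text into substrings of length k (k drawn at random from [a, a+3]), Caesar-shift
# each substring by a shift that starts at k, grows by 1 per substring, and wraps back to k
# once it would exceed b + 1.
def latin_djokovic_encrypt(text, a=3, seed=None):
    rng = random.Random(seed)
    b = a + 3
    k = rng.randint(a, b)                          # key, held constant for the whole run
    alphabet = string.ascii_lowercase
    chunks = [text[i:i + k] for i in range(0, len(text), k)]
    shift, out = k, []
    for chunk in chunks:
        enc = "".join(
            alphabet[(alphabet.index(c) + shift) % 26] if c in alphabet else c
            for c in chunk
        )
        out.append(enc)
        shift = k if shift + 1 > b + 1 else shift + 1   # increment, wrap past b + 1
    return k, "".join(out)

print(latin_djokovic_encrypt("attackatdawn", a=3, seed=7))   # returns (key, ciphertext)
```

Deciphering would mirror this by subtracting the same shift sequence, given the shared key k.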
Procedia PDF Downloads 103
2640 Housing Price Dynamics: Comparative Study of 1980-1999 and the New Millenium
Authors: Janne Engblom, Elias Oikarinen
Abstract:
The understanding of housing price dynamics is of importance to a great number of agents: to portfolio investors, banks, real estate brokers and construction companies, as well as to policy makers and households. A panel dataset is one that follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models, which form a wide range of linear models. A special case of panel data models is dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of estimates, and several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Common Correlated Effects (CCE) estimator for dynamic panel data, which also accounts for the cross-sectional dependence caused by common structures of the economy. In the presence of cross-sectional dependence, standard OLS gives biased estimates. In this study, U.S. housing price dynamics were examined empirically using the dynamic CCE estimator, with the first difference of the housing price as the dependent variable and the first differences of per capita income, the interest rate, the housing stock and the lagged price, together with the deviation of housing prices from their long-run equilibrium level, as independent variables. These deviations were also estimated from the data. The aim of the analysis was to provide estimates and to compare estimates between 1980-1999 and 2000-2012. Based on data for 50 U.S. cities over 1980-2012, differences in the short-run housing price dynamics estimates were mostly significant when the two time periods were compared. Significance tests of the differences were provided by the model containing interaction terms between the independent variables and a time dummy variable. Residual analysis showed very low cross-sectional correlation of the model residuals compared with the standard OLS approach, indicating a good fit of the CCE estimator model. Estimates of the dynamic panel data model were in line with the theory of housing price dynamics. The results also suggest that the dynamics of a housing market evolve over time.
Keywords: dynamic model, panel data, cross-sectional dependence, interaction model
Procedia PDF Downloads 251
2639 Linguistic Competencies of Students with Hearing Impairment
Authors: Munawar Malik, Muntaha Ahmad, Khalil Ullah Khan
Abstract:
Linguistic abilities in students with hearing impairment remain a concern for educationists. Emerging technological support and provisions in the recent era claim to have addressed the situation and to have contributed significantly to linguistic repertoire. As a descriptive and quantitative study, the purpose of this research was to assess the linguistic competencies of students with hearing impairment in the English language. The goals were further broken down to identify the level of reading abilities in the subject population. The population involved students with HI studying at the higher secondary level in Lahore. A simple random sampling technique was used to choose a sample of fifty students. A purposive curriculum-based assessment was designed, in line with the accelerated learning program of the Punjab Government, to assess linguistic competence in the sample. In addition, an Informal Reading Inventory (IRI) corresponding to reading levels was developed by the researchers and duly validated and piloted before final use. Descriptive and inferential statistics were utilized to reach the findings. Spearman’s correlation was used to find the relationship between degree of hearing loss, grade level, gender and type of amplification device. An independent-sample t-test was used to compare means among groups. Major findings of the study revealed that students with hearing impairment exhibit significant deviation from the mean scores when compared in terms of grades, severity and amplification device. The study revealed that the students with HI have not yet attained an independent reading level appropriate to their grades, as the majority fall at the frustration level of word recognition and passage comprehension. The poorer performance can be attributed to lower linguistic competence, as reflected in the frustration levels of reading, writing and comprehension. The correlation analysis did reflect improved performance grade-wise; however, scores only corresponded to the frustration level, and the independent level was never achieved. Reported achievements at the instructional level in the subject population may advance linguistic skills if practiced purposively.
Keywords: linguistic competence, hearing impairment, reading levels, educationist
Procedia PDF Downloads 67
2638 Parameter Estimation for Contact Tracing in Graph-Based Models
Authors: Augustine Okolie, Johannes Müller, Mirjam Kretzchmar
Abstract:
We adopt a maximum-likelihood framework to estimate parameters of a stochastic susceptible-infected-recovered (SIR) model with contact tracing on a rooted random tree. Given the number of detectees per index case, our estimator allows us to determine the degree distribution of the random tree as well as the tracing probability. Since we do not discover all infectees via contact tracing, this estimation is non-trivial. To keep things simple and stable, we develop an approximation suited to realistic situations (the contact tracing probability is small, or the probability for the detection of index cases is small). In this approximation, the only epidemiological parameter entering the estimator is the basic reproduction number R0. The estimator is tested in a simulation study and applied to COVID-19 contact tracing data from India. The simulation study underlines the efficiency of the method. For the empirical COVID-19 data, we are able to compare different degree distributions and perform a sensitivity analysis. We find that a power-law and a negative binomial degree distribution in particular meet the data well and that the tracing probability is rather large. The sensitivity analysis shows no strong dependency on the reproduction number.
Keywords: stochastic SIR model on graph, contact tracing, branching process, parameter inference
Procedia PDF Downloads 77
2637 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site conditions. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially in the subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitudes of 3 to 5.8, recorded over a hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The main reason for choosing this database is the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. The accuracy of the models in predicting intensity measures, the generalization capability of the models for future data, and the usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest in particular outperforming the other algorithms. However, the conventional method is a better tool when limited data are available.
Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
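A hedged sketch of the random forest variant of such a ground-motion model is shown below (not the study's model); the synthetic data only mimic magnitude and distance scaling, and the feature set (magnitude, log distance, a Vs30-like site proxy) is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Illustrative sketch (not the study's model): a random forest mapping magnitude,
# hypocentral distance and a site-condition proxy to log peak ground acceleration.
rng = np.random.default_rng(4)
n = 4000
mag = rng.uniform(3.0, 5.8, n)
dist = rng.uniform(4.0, 500.0, n)
vs30 = rng.uniform(200.0, 800.0, n)
log_pga = 0.9 * mag - 1.3 * np.log10(dist) - 0.002 * vs30 + rng.normal(0, 0.3, n)  # synthetic

X = np.column_stack([mag, np.log10(dist), vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_pga, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(round(model.score(X_te, y_te), 3))    # coefficient of determination on held-out data
```

Note that this sketch omits the event-to-event and site-to-site random effects that the study adds to reduce aleatory uncertainty.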
Procedia PDF Downloads 122
2636 An Exponential Field Path Planning Method for Mobile Robots Integrated with Visual Perception
Authors: Magdy Roman, Mostafa Shoeib, Mostafa Rostom
Abstract:
Global vision, whether provided by overhead fixed cameras, on-board aerial vehicle cameras, or satellite images, can always provide detailed information on the environment around mobile robots. In this paper, an intelligent vision-based method for path planning and obstacle avoidance for mobile robots is presented. The method integrates visual perception with a newly proposed field-based path-planning method to overcome common path-planning problems such as local minima, unreachable destinations and unnecessarily lengthy paths around obstacles. The method proposes an exponential angle deviation field around each obstacle that affects the orientation of a nearby robot. As the robot heads toward the goal point, obstacles are classified into right and left groups, and a deviation angle is exponentially added to or subtracted from the orientation of the robot. The exponential field parameters are chosen based on the Lyapunov stability criterion to guarantee robot convergence to the destination. The proposed method uses obstacles' shapes and locations, extracted from the global vision system, through a collision prediction mechanism to decide whether to activate or deactivate the obstacle fields. In addition, a search mechanism is developed to find a suitable exit or entrance in case the robot or the goal point is trapped among obstacles. The proposed algorithm is validated both in simulation and through experiments. The algorithm shows effectiveness in obstacle avoidance and destination convergence, overcoming common path-planning problems found in classical methods.
Keywords: path planning, collision avoidance, convergence, computer vision, mobile robots
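A simplified reading of the deviation-angle idea is sketched below (not the authors' formulation, which additionally uses obstacle shape, collision prediction and Lyapunov-based parameter selection); the gains and the left/right sign rule are assumptions.

```python
import numpy as np

# Illustrative sketch (simplified, not the paper's method): steer toward the goal, then
# add an exponentially decaying deviation angle per obstacle, signed by whether the
# obstacle lies to the robot's left or right.
def heading(robot, goal, obstacles, k_ang=1.2, k_dist=0.5):
    theta = np.arctan2(goal[1] - robot[1], goal[0] - robot[0])   # nominal goal heading
    for obs in obstacles:
        d = np.hypot(obs[0] - robot[0], obs[1] - robot[1])
        bearing = np.arctan2(obs[1] - robot[1], obs[0] - robot[0])
        side = np.sign(np.sin(bearing - theta))                  # +1: obstacle to the left
        theta -= side * k_ang * np.exp(-k_dist * d)              # deviate away, decaying with distance
    return theta

print(heading(robot=(0.0, 0.0), goal=(10.0, 0.0), obstacles=[(4.0, 0.5), (7.0, -1.0)]))
```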
Procedia PDF Downloads 194
2635 Application of Multilayer Perceptron and Markov Chain Analysis Based Hybrid-Approach for Predicting and Monitoring the Pattern of LULC Using Random Forest Classification in Jhelum District, Punjab, Pakistan
Authors: Basit Aftab, Zhichao Wang, Feng Zhongke
Abstract:
Land Use and Land Cover Change (LULCC) is a critical environmental issue that has significant effects on biodiversity, ecosystem services, and climate change. This study examines the spatiotemporal dynamics of land use and land cover (LULC) across a three-decade period (1992–2022) in the district area. The goal is to support sustainable land management and urban planning by utilizing the combination of remote sensing, GIS data, and observations from Landsat satellites 5 and 8 to provide precise predictions of the trajectory of urban sprawl. In order to forecast LULCC patterns, this study proposes a hybrid strategy that combines the Random Forest method with Multilayer Perceptron (MLP) and Markov Chain analysis. To predict the dynamics of LULC change for the year 2035, a hybrid technique based on Multilayer Perceptron and Markov Chain Model Analysis (MLP-MCA) was employed. The area of developed land has increased significantly, while the amounts of bare land, vegetation, and forest cover have all decreased, because the principal land types have changed due to population growth and economic expansion. The study also found that between 1998 and 2023, the built-up area increased by 468 km² as a result of the replacement of natural resources. It is estimated that the urbanized share of the study area will increase to 25.04% by 2035. The performance of the model was confirmed with an overall accuracy of 90% and a kappa coefficient of around 0.89. It is important to use advanced predictive models to guide sustainable urban development strategies, and the study provides valuable insights for policymakers, land managers, and researchers to support sustainable land use planning, conservation efforts, and climate change mitigation strategies.
Keywords: land use land cover, Markov chain model, multi-layer perceptron, random forest, sustainable land, remote sensing
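The Markov chain part of such a hybrid can be illustrated with a minimal sketch (not the study's MLP-MCA model): estimate a class-to-class transition matrix from two classified maps, then project the class shares forward one step. The four classes and the toy rasters are assumptions.

```python
import numpy as np

# Illustrative sketch (not the study's model): first-order Markov chain projection of
# land-cover class shares from two classified rasters.
def transition_matrix(map_t0, map_t1, n_classes=4):
    P = np.zeros((n_classes, n_classes))
    for a, b in zip(map_t0.ravel(), map_t1.ravel()):
        P[a, b] += 1
    return P / P.sum(axis=1, keepdims=True)          # row-normalised transition probabilities

rng = np.random.default_rng(5)
map_1992 = rng.integers(0, 4, (200, 200))            # stand-in for the 1992 classified map
map_2022 = rng.integers(0, 4, (200, 200))            # stand-in for the 2022 classified map
P = transition_matrix(map_1992, map_2022)
shares_2022 = np.bincount(map_2022.ravel(), minlength=4) / map_2022.size
print(shares_2022 @ P)                                # projected class shares one interval ahead
```

In the hybrid described above, an MLP would additionally model where the transitions are likely to occur, not only how much area changes class.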
Procedia PDF Downloads 33
2634 Dynamics of Follicle Vascular Perfusion, Dimensions, Antrum Growth, Circulating Angiogenic Mediators from Deviation to Ovulation
Authors: Elshymaa A. Abdelnaby, Amal M. Abo El-Maaty
Abstract:
This study aimed to investigate the dynamics of changes in the dimensions, vascularity and angiogenic hormones of dominant and subordinate follicles from the completion of deviation until ovulation. Five cyclic mares were subjected to daily blood sampling and rectal Doppler ultrasonographic examination over two estrous cycles. Using electronic calipers, three diameters were recorded for each follicle to estimate its area and volume. Leptin, insulin-like growth factor-I (IGF-1), nitric oxide (NO) and estradiol (E2) were measured. The areas of the colour- and power-Doppler modes, together with the area and circumference of the first (preovulatory) and subordinate follicles, were measured in pixels. Follicles were classified into F1O (preovulatory), F2O (subordinate) and F3O (third ovulatory) on the dominant ovary, and F1C (first contra) and F2C (second contra) on the contralateral ovary. Days before ovulation significantly (P < 0.0001) affected the diameter, circumference, area, volume, area/pixel and antrum area of the preovulatory follicle. As the diameter, area, volume, area/pixel, antrum area/pixel and circumference of F1O increased, those of all subordinate follicles decreased. The blue blood flow area, power, and power minus red blood flow area of F1O increased from day -6 until the day of ovulation (day 0), but the red blood flow area significantly decreased. F1O had the lowest percentage of coloured pixels and percentage of coloured pixels without the antrum. Estradiol and leptin increased from day -6 until day 0, while IGF-1 decreased until day -1, and NO reached a peak on day -3 and then decreased until day 0. In conclusion, antrum growth, blood flow and angiogenic hormones play a role in the maturation and ovulation of the dominant follicle in mares.
Keywords: angiogenic hormones, blood flow, mare, preovulatory follicle
Procedia PDF Downloads 313
2633 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2
Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk
Abstract:
Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluate the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands’ characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models might be case-specific. In this study, biomass production and biodiversity indices (species richness and Fisher’s α) are modeled in 150 grassland plots at three sites across Germany. These sites represent a north-south gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. The predictors used are derived from Sentinel-1 and -2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² of 0.5), predictions of biodiversity indices were poor (r² of 0.14). DNN showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and ensuring spatial independence in order to obtain realistic and transferable models in which plot-level information can be upscaled to the landscape scale.
Keywords: ecosystem services, grassland management, machine learning, remote sensing
Procedia PDF Downloads 218
2632 Predicting the Diagnosis of Alzheimer’s Disease: Development and Validation of Machine Learning Models
Authors: Jay L. Fu
Abstract:
Patients with Alzheimer's disease progressively lose their memory and thinking skills and, eventually, the ability to carry out simple daily tasks. The disease is irreversible, but early detection and treatment can slow down its progression. In this research, publicly available MRI data and demographic data from 373 MRI imaging sessions were utilized to build models to predict dementia. Various machine learning models, including logistic regression, k-nearest neighbors, support vector machine, random forest, and a neural network, were developed. The data were divided into training and testing sets, where the training sets were used to build the predictive models and the testing sets were used to assess the accuracy of prediction. Key risk factors were identified, and the various models were compared to arrive at the best prediction model. Among these models, the random forest model appeared to be the best, with an accuracy of 90.34%. MMSE, nWBV, and gender were the three most important contributing factors to the detection of Alzheimer’s. Across all the models used, at least 4 of the 5 models shared the same diagnosis for 90.42% of the testing inputs. These machine learning models allow early detection of Alzheimer’s with good accuracy, which ultimately leads to early treatment of these patients.
Keywords: Alzheimer's disease, clinical diagnosis, magnetic resonance imaging, machine learning prediction
Procedia PDF Downloads 143
2631 The Effects of Three Levels of Contextual Inference among adult Athletes
Authors: Abdulaziz Almustafa
Abstract:
Considering the critical role permanence has in predictions related to the contextual interference effect in laboratory and field research, this study sought to determine whether the paradigm of the effect depends on the complexity of the skill during the acquisition and transfer phases. The purpose of the present study was to investigate the effects of contextual interference (CI) by extending previous laboratory and field research with adult athletes through the acquisition and transfer phases. Male athletes (n=60), aged 18-22 years, were chosen randomly from Eastern Province clubs. They were assigned to complete blocked, random, or serial practice. A repeated-measures multivariate analysis of variance (MANOVA) indicated that the results did not support the notion of CI: there were no significant differences between the blocked, serial and random practice groups in the acquisition phase, and there were no major differences between the practice groups during the transfer phase. Apparently, due to the task complexity, participants were probably confused and not able to use the advantages of contextual interference. This is another result contradicting contextual interference effects in the acquisition and transfer phases in sport settings. One major factor that can influence the effect of contextual interference is task characteristics, such as the level of difficulty of the sport-related skill.
Keywords: contextual interference, acquisition, transfer, task difficulty
Procedia PDF Downloads 466
2630 Enhanced Test Scheme based on Programmable Write Time for Future Computer Memories
Authors: Nor Zaidi Haron, Fauziyah Salehuddin, Norsuhaidah Arshad, Sani Irwan Salim
Abstract:
Resistive random access memories (RRAMs) are one of the main candidates for future computer memories. However, due to their tiny size and immature device technology, the quality of outgoing RRAM chips is seen as a serious issue. Defective RRAM cells might behave differently from those of existing semiconductor memories (Dynamic RAM, Static RAM, and Flash), meaning that they are difficult to detect using existing test schemes. This paper presents an enhanced test scheme, referred to as Programmable Short Write Time (PSWT), that is able to improve the detection of faulty RRAM cells. It is developed by applying multiple weak write operations, each with a different time duration. The test circuit embedded in the RRAM chip is made programmable in order to supply different weak write times during testing. The RRAM electrical model is described using the Verilog-AMS language and is simulated using HSPICE simulation tools. Simulation results show that the proposed test scheme offers better open-resistive fault detection compared to existing test schemes.
Keywords: memory fault, memory test, design-for-testability, resistive random access memory
Procedia PDF Downloads 387
2629 Impact of Sovereign Debt Risk and Corrective Austerity Measures on Private Sector Borrowing Cost in Euro Zone
Authors: Syed Noaman Shah
Abstract:
The current paper evaluates the effect of external public debt risk on the borrowing cost of private non-financial firms in the euro zone. Further, the study also treats the impact of the austerity measures followed by euro area member states to revive economic growth in the region on the syndicated-loan spreads of private firms. To test these hypotheses, we follow a multivariate ordinary least squares estimation method to assess the effect of external public debt on the borrowing cost of private firms. Using foreign syndicated-loan issuance data of non-financial private firms from 2005 to 2011, we attempt to gauge how private financing cost varies with the high levels of sovereign external debt prevalent in the euro zone. Our results suggest a significant effect of external public debt on the borrowing cost of private firms. In particular, an increase in external public debt by one standard deviation from its sample mean raises the syndicated-loan spread by 89 bps. Furthermore, the weak creditor rights protection prevalent in member states deepens this effect. However, we do not find any significant effect of domestic public debt on private sector borrowing cost. In addition, the results show a significant effect of austerity measures on private financing cost, both in normal and crisis periods in the euro zone. In particular, a one-standard-deviation change in the conditional mean of fiscal consolidation reduces the syndicated-loan spread by 22 bps. In turn, this indicates a strong credibility channel of austerity measures in the euro area region.
Keywords: corporate debt, fiscal consolidation, sovereign debt, syndicated-loan spread
Procedia PDF Downloads 412
2628 Conscious Intention-based Processes Impact the Neural Activities Prior to Voluntary Action on Reinforcement Learning Schedules
Authors: Xiaosheng Chen, Jingjing Chen, Phil Reed, Dan Zhang
Abstract:
Conscious intention can be a promising entry point for grasping consciousness and orienting voluntary action. The current study adopted a random ratio (RR) and yoked random interval (RI) reinforcement learning schedule, instead of the previous highly repeatable, single-decision-point paradigms, aiming to induce voluntary action with a conscious intention that evolves from the interaction between short-range intention and long-range intention. Readiness-potential (RP)-like EEG amplitude and inter-trial EEG variability decreased significantly prior to voluntary action compared to cued action, with the decrease in inter-trial EEG variability mainly featured during the earlier stage of neural activity. Notably, RP-like EEG amplitudes decreased significantly prior to responses at higher RI reward rates, for which participants formed a higher plane of conscious intention. The present study suggests a possible contribution of conscious intention-based processes to the neural activities from the earlier stage prior to voluntary action on reinforcement learning schedules.
Keywords: reinforcement learning schedule, voluntary action, EEG, conscious intention, readiness potential
Procedia PDF Downloads 78
2627 Stochastic Modeling and Productivity Analysis of a Flexible Manufacturing System
Authors: Mehmet Savsar, Majid Aldaihani
Abstract:
Flexible Manufacturing Systems (FMS) are used to produce a variety of parts on the same equipment. Therefore, their utilization is higher than that of traditional machining systems. Higher utilization, on the other hand, results in more frequent equipment failures and an additional need for maintenance. Therefore, it is necessary to carefully analyze the operational characteristics and productivity of FMS, or of Flexible Manufacturing Cells (FMC), which are smaller configurations of FMS, before installation or during their operation. Appropriate models should be developed to determine production rates based on operational conditions, including equipment reliability, availability, and repair capacity. In this paper, a stochastic model is developed for an automated FMC system, which consists of two machines served by two robots and a single repairman. The model is used to determine system productivity and equipment utilization under different operational conditions, including random machine failures, random repairs, and limited repair capacity. The results are compared to previous study results for an FMC system with sufficient repair capacity assigned to each machine. The results show that the model will be useful for design engineers and operational managers in analyzing the performance of manufacturing systems at the design or operational stages.
Keywords: flexible manufacturing, FMS, FMC, stochastic modeling, production rate, reliability, availability
Procedia PDF Downloads 516
2626 Analysis and Design of Offshore Triceratops under Ultra-Deep Waters
Authors: Srinivasan Chandrasekaran, R. Nagavinothini
Abstract:
Offshore platforms for ultra-deep waters are form-dominant by design; hybrid systems with large flexibility in the horizontal plane and high rigidity in the vertical plane are preferred due to functional complexities. The offshore triceratops is a relatively new-generation offshore platform, whose deck is partially isolated from the supporting buoyant legs by ball joints. They allow the transfer of partial displacements of the buoyant legs to the deck but restrain the transfer of rotational response. The buoyant legs are in turn taut-moored to the sea bed using pre-tensioned tethers. The present study discusses a detailed dynamic analysis and preliminary design of the chosen geometry, which is necessary as a proof of validation for such design applications. A detailed numerical analysis of the triceratops at 2400 m water depth under random waves is presented. The preliminary design confirms member-level design requirements under various modes of failure. The tether configuration proposed in the study confirms no pull-out of tethers, as the stress variation is comparatively lower than the yield value. The presented study shall aid offshore engineers and contractors in understanding the suitability of the triceratops, in terms of design and dynamic response behaviour.
Keywords: offshore structures, triceratops, random waves, buoyant legs, preliminary design, dynamic analysis
Procedia PDF Downloads 204
2625 Code Mixing and Code-Switching Patterns in Kannada-English Bilingual Children and Adults Who Stutter
Authors: Vasupradaa Manivannan, Santosh Maruthy
Abstract:
Background/Aims: Preliminary evidence suggests that code-switching and code-mixing may act as voluntary coping behaviors to avoid stuttering characteristics in children and adults; however, less is known about the types and patterns of code-mixing (CM) and code-switching (CS). Further, it is not known how these differ between children and adults who stutter. This study aimed to identify and compare the CM and CS patterns of Kannada-English bilingual children and adults who stutter. Method: A standard group comparison was made between five children who stutter (CWS) in the age range of 9-13 years and five adults who stutter (AWS) in the age range of 20-25 years. Participants proficient in Kannada (first language, L1) and English (second language, L2) were considered for the study. Both groups were given two tasks: a) a general conversation (GC) with 10 random questions, and b) a narration task (NAR) (a story or a general topic, for example, a memorable life event), in three different conditions: Mono Kannada (MK), Mono English (ME), and Bilingual (BIL). The children and adults were assessed online (via Zoom sessions) with a high-quality internet connection. Audio and video samples of the full assessment sessions were auto-recorded and manually transcribed. The recorded samples were analyzed for the percentage of dysfluencies (using SSI-4) and for the CM and CS exhibited by each participant (using Matrix Language Frame (MLF) model parameters). The obtained data were analyzed using the Statistical Package for the Social Sciences (SPSS) software package (Version 20.0). Results: The mean, median, and standard deviation values were obtained for the percentage of dysfluencies (%SS) and the frequency of CM and CS in Kannada-English bilingual children and adults who stutter, for the various parameters obtained through the MLF model. The inferential results indicated that %SS varied significantly between populations (AWS vs. CWS), languages (L1 vs. L2), and tasks (GC vs. NAR), but not across the free (BIL) and bound (MK, ME) conditions. It was also found that the frequency of CM and CS patterns varies between CWS and AWS. The AWS had a lower %SS but greater use of CS patterns than the CWS, which is due to their greater coping skills. Language mixing patterns were observed more in L1 than in L2, significantly so for most of the MLF parameters. However, %SS was significantly higher (P<0.05) in L2 than in L1. The CM and CS patterns were more frequent in conditions 1 and 3 than in condition 2, which may be due to the higher proficiency in L2 than in L1. Conclusion: The findings highlight the importance of assessing CM and CS behaviors, their patterns, and the frequency of CM and CS between CWS and AWS on MLF parameters in two different tasks across three conditions. The results help us to understand CM and CS strategies in bilingual persons who stutter.
Keywords: bilinguals, code mixing, code switching, stuttering
Procedia PDF Downloads 78
2624 Batch and Fixed-Bed Studies of Ammonia Treated Coconut Shell Activated Carbon for Adsorption of Benzene and Toluene
Authors: Jibril Mohammed, Usman Dadum Hamza, Muhammad Idris Misau, Baba Yahya Danjuma, Yusuf Bode Raji, Abdulsalam Surajudeen
Abstract:
Volatile organic compounds (VOCs) have been reported to be responsible for many acute and chronic health effects and for environmental degradation such as global warming. In this study, a renewable and low-cost coconut shell activated carbon (PHAC) was synthesized and treated with ammonia (PHAC-AM) to improve its hydrophobicity and affinity towards VOCs. Removal efficiencies and adsorption capacities of the ammonia-treated activated carbon (PHAC-AM) for benzene and toluene were determined through batch and fixed-bed studies, respectively. Langmuir, Freundlich and Tempkin adsorption isotherms were tested for the adsorption process; the experimental data were best fitted by the Langmuir model and least fitted by the Tempkin model, and the favourability and suitability of fit were validated by the equilibrium parameter (RL) and the root mean square deviation (RMSD). Judging by the deviation of the predicted values from the experimental values, the pseudo-second-order kinetic model described the adsorption kinetics better than the pseudo-first-order kinetic model for the two VOCs on PHAC and PHAC-AM. In the fixed-bed study, the effects of initial VOC concentration, bed height and flow rate on benzene and toluene adsorption were studied. The highest bed capacities of 77.30 and 69.40 mg/g were recorded for benzene and toluene, respectively, at 250 mg/l initial VOC concentration, 2.5 cm bed height and 4.5 ml/min flow rate. The results of this study revealed that ammonia-treated activated carbon (PHAC-AM) is a sustainable adsorbent for the treatment of VOCs in polluted waters.
Keywords: volatile organic compounds, equilibrium and kinetics studies, batch and fixed bed study, bio-based activated carbon
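As a worked illustration of the isotherm fitting mentioned above, the following sketch fits the Langmuir isotherm qe = qmax·KL·Ce / (1 + KL·Ce) and computes the equilibrium parameter RL = 1 / (1 + KL·C0); the equilibrium data points are synthetic, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch (synthetic data, not from the study): Langmuir isotherm fit,
# root mean square deviation of the fit, and RL at an initial concentration C0.
def langmuir(ce, qmax, kl):
    return qmax * kl * ce / (1.0 + kl * ce)

ce = np.array([5.0, 20.0, 50.0, 100.0, 180.0, 250.0])   # equilibrium concentration (mg/l)
qe = np.array([14.0, 38.0, 55.0, 66.0, 72.0, 75.0])     # uptake (mg/g), synthetic
(qmax, kl), _ = curve_fit(langmuir, ce, qe, p0=(80.0, 0.05))
rmsd = np.sqrt(np.mean((qe - langmuir(ce, qmax, kl))**2))
rl = 1.0 / (1.0 + kl * 250.0)                            # favourable adsorption if 0 < RL < 1
print(round(qmax, 1), round(kl, 3), round(rmsd, 2), round(rl, 3))
```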
Procedia PDF Downloads 225
2623 Robust Recognition of Locomotion Patterns via Data-Driven Machine Learning in the Cloud Environment
Authors: Shinoy Vengaramkode Bhaskaran, Kaushik Sathupadi, Sandesh Achar
Abstract:
Human locomotion recognition is important in a variety of sectors, such as robotics, security, healthcare, fitness tracking and cloud computing. With the increasing pervasiveness of peripheral devices, particularly Inertial Measurement Unit (IMU) sensors, researchers have attempted to exploit these advancements in order to precisely and efficiently identify and categorize human activities. This research paper introduces a state-of-the-art methodology for the recognition of human locomotion patterns in a cloud environment. The methodology is based on a publicly available benchmark dataset. The investigation implements a denoising and windowing strategy to deal with the unprocessed data. Next, feature extraction is adopted to extract the main cues from the data, and the SelectKBest strategy is used to select the optimal features. Furthermore, state-of-the-art ML classifiers, including logistic regression, random forest, gradient boosting and SVM, are investigated to accomplish precise locomotion classification and to evaluate the performance of the system. Finally, a detailed comparative analysis of the results is presented to reveal the performance of the recognition models.
Keywords: artificial intelligence, cloud computing, IoT, human locomotion, gradient boosting, random forest, neural networks, body-worn sensors
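The SelectKBest-plus-classifier stage described above can be sketched as a scikit-learn pipeline (not the paper's exact configuration); the synthetic feature matrix, the value k=20 and the choice of gradient boosting here are assumptions, and any of the listed classifiers could be swapped in.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Illustrative sketch (not from the paper): windowed IMU features reduced with
# SelectKBest and classified with gradient boosting.
rng = np.random.default_rng(6)
X = rng.normal(size=(600, 60))                 # e.g. 60 statistical features per window
y = rng.integers(0, 4, 600)                    # e.g. 4 locomotion classes

pipe = make_pipeline(
    SelectKBest(score_func=f_classif, k=20),   # keep the 20 most discriminative features
    GradientBoostingClassifier(random_state=0),
)
print(cross_val_score(pipe, X, y, cv=5).mean())
```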
Procedia PDF Downloads 11
2622 Modeling Binomial Dependent Distribution of the Values: Synthesis Tables of Probabilities of Errors of the First and Second Kind of Biometrics-Neural Network Authentication System
Authors: B. S.Akhmetov, S. T. Akhmetova, D. N. Nadeyev, V. Yu. Yegorov, V. V. Smogoonov
Abstract:
Probabilities of errors of the first and second kind are estimated for non-ideal biometrics-neural converters with 256 outputs, and nomograms of the error probabilities for 'own' and 'alien' inputs are constructed from the mathematical expectation and standard deviation of the normalized Hamming distance measures.
Keywords: modeling, errors, probability, biometrics, neural network, authentication
Procedia PDF Downloads 482
2621 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms
Authors: Selim M. Khan
Abstract:
Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Indeed, exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI-machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial artificial neural network with random weights, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose an artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America
Procedia PDF Downloads 96
2620 Frailty Models for Modeling Heterogeneity: Simulation Study and Application to Quebec Pension Plan
Authors: Souad Romdhane, Lotfi Belkacem
Abstract:
When referring to actuarial analyses of lifetime, only models accounting for observable risk factors have been developed. Within this context, the Cox proportional hazards (CPH) model is commonly used to assess the effects of observable covariates, such as gender, age and smoking habits, on the hazard rates. These covariates may fail to fully account for the true lifetime interval. This may be due to the existence of another random variable (frailty) that is still being ignored. The aim of this paper is to examine the shared frailty issue in the Cox proportional hazards model by including two different parametric forms of frailty in the hazard function. Four estimation methods are used to fit them. The performance of the parameter estimates is assessed and compared between the classical Cox model and these frailty models, first through a real-life data set from the Quebec Pension Plan and then using a more general simulation study. This performance is investigated in terms of the bias of the point estimates and their empirical standard errors, in both the fixed and random effect parts. Both the simulation and the real dataset studies showed differences between the classical Cox model and the shared frailty model.
Keywords: life insurance-pension plan, survival analysis, risk factors, Cox proportional hazards model, multivariate failure-time data, shared frailty, simulation study
Procedia PDF Downloads 359
2619 Consequences of Youth Bulge in Pakistan
Authors: Muhammad Farooq, Muhammad Idrees
Abstract:
The present study has been designed to explore the causes and effects of the youth bulge in Pakistan. A youth bulge is a segment of the population that can create problems for the whole society. The youth bulge is a common phenomenon in many developing countries and, in particular, in the least developed countries. It is often due to a stage of development in which a country achieves success in reducing infant mortality while mothers still have a high fertility rate. The result is that a large share of the population is comprised of children and young adults, and today’s children are tomorrow’s young adults. Youth often play a prominent role in political violence, and the existence of a “youth bulge” has been associated with times of political crisis. The population pyramid of Pakistan shows a large youth proportion, and the government has not used that youth in a positive way or provided them with opportunities for development; this situation creates frustration among youth, which leads them towards conflict, unrest and violence. This study focuses on the opportunities and motives of the youth bulge situation in Pakistan through the lens of youth bulge theory. Moreover, it gives some suggestions for utilizing youth in development activities and avoiding a youth bulge situation in Pakistan. The present research was conducted in the metropolitan entities of Punjab, Pakistan. A sample of 300 respondents was taken from three randomly selected metropolitan entities (Faisalabad, Lahore and Rawalpindi) of the Punjab Province of Pakistan. Information regarding demography, household, locality and other socio-cultural variables related to the causes and effects of the youth bulge in the state was collected through a well-structured interview schedule. Mean, standard deviation and frequency distributions were used to examine central tendency. Multiple linear regression was also applied to measure the influence of various independent variables on the response variable.
Keywords: youth bulge, violence, conflict, social unrest, crime, metropolitan entities, mean, standard deviation, multiple linear regression
Procedia PDF Downloads 458
2618 Building an Arithmetic Model to Assess Visual Consistency in Townscape
Authors: Dheyaa Hussein, Peter Armstrong
Abstract:
The phenomenon of visual disorder is prominent in contemporary townscapes. This paper provides a theoretical framework for the assessment of visual consistency in townscape in order to achieve more favourable outcomes for users. In this paper, visual consistency refers to the amount of similarity between adjacent components of the townscape. The paper investigates parameters which relate to visual consistency in townscape, explores the relationships between them and highlights their significance. The paper uses arithmetic methods from outside the domain of urban design to enable the establishment of an objective approach to assessment which considers subjective indicators, including users’ preferences. These methods involve the standard deviation, colour distance and the distance between points. The paper identifies urban space as a key representative of the visual parameters of townscape. It focuses on its two components, geometry and colour, in the evaluation of the visual consistency of townscape. Accordingly, this article proposes four measurements. The first quantifies the number of vertices, which are points in three-dimensional space that are connected by lines to represent the appearance of elements. The second evaluates the visual surroundings of urban space by assessing the location of their vertices. The last two measurements calculate the visual similarity in both vertices and colour in townscape by calculating their variation using methods including the standard deviation and colour difference. The proposed quantitative assessment is based on users’ preferences towards these measurements. The paper offers a theoretical basis for a practical tool which can alter the current understanding of architectural form and its application in urban space. This tool is currently under development. The proposed method underpins expert subjective assessment and permits the establishment of a unified framework which adds to creativity by achieving a higher level of consistency and satisfaction among the citizens of evolving townscapes.
Keywords: townscape, urban design, visual assessment, visual consistency
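A hedged sketch of the kind of variation measures named above (standard deviation of geometry, colour distance between neighbours) is given below; this is not the authors' tool, which the abstract notes is still under development, and the facade data, RGB colour proxy and aggregation are assumptions made only for illustration.

```python
import numpy as np

# Illustrative sketch (not the authors' tool): quantify consistency between adjacent
# facades as the spread of vertex counts and the mean colour distance between neighbours.
def consistency_scores(vertex_counts, facade_colours):
    vertex_spread = np.std(vertex_counts, ddof=1)                # geometric variation
    colours = np.asarray(facade_colours, dtype=float)
    # Euclidean colour distance between each pair of adjacent facades (RGB as a simple proxy)
    colour_dist = np.linalg.norm(np.diff(colours, axis=0), axis=1).mean()
    return vertex_spread, colour_dist                            # lower values = more consistent

vertex_counts = [24, 26, 25, 80, 27]                             # one geometric outlier facade
facade_colours = [(200, 180, 150), (205, 178, 148), (60, 60, 70),
                  (198, 182, 151), (202, 179, 149)]              # one colour outlier facade
print(consistency_scores(vertex_counts, facade_colours))
```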
Procedia PDF Downloads 312