Search results for: data sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24724

24424 Adult Language Learning in the Institute of Technology Sector in the Republic of Ireland

Authors: Una Carthy

Abstract:

A recent study of third-level institutions in Ireland reveals that perceived barriers of age and aptitude can be overcome by teaching methodologies that motivate second language learners. This PhD investigation gathered quantitative and qualitative data from 14 Institutes of Technology over a three-year period from 2011 to 2014. The fundamental research question was to establish the impact of institutional language policy on attitudes towards language learning. However, other related issues around second language acquisition arose in the course of the investigation. Data were collected from both lecturers and students, allowing interesting points of comparison to emerge from both datasets. Negative perceptions among lecturers regarding language provision were often associated with the view that language learning belongs to primary and secondary level and has no place in third level education. This perception was offset by substantial data showing positive attitudes towards adult language learning. Lenneberg’s Critical Age Theory postulated that the optimum age for learning a second language is before puberty. More recently, scholars have challenged this theory in their studies, revealing that mature learners can and do succeed at learning languages. With regard to aptitude, a preoccupation among lecturers with poor literacy skills among students emerged and was often associated with resistance to second language acquisition. This was offset by a preponderance of qualitative data from students highlighting the crucial role which teaching approaches play in the learning process. Interestingly, the data collected regarding learning disabilities reveal that, given appropriate learning environments, individuals can be motivated to acquire second languages, and indeed succeed at learning them. These findings are in keeping with other recent studies regarding attitudes towards second language learning among students with learning disabilities. Both sets of findings reinforce the case for language policies in the Institutes of Technology (IoTs). Supportive and positive learning environments can be created in third level institutions to motivate adult learners, thereby overcoming perceived obstacles relating to age and aptitude.

Keywords: age, aptitude, second language acquisition, teaching methodologies

Procedia PDF Downloads 97
24423 A Qualitative South African Study on Exploration of the Moral Identity of Nurses

Authors: Yolanda Havenga

Abstract:

Being a competent nurse requires clinical, general, and moral competencies. Moral competence is a culmination of moral perceptions, moral judgment, moral behaviour, and moral identity. Moral identity is the values, images, and fundamental principles held in the collective minds and memories of nurses about what it means to be a ‘good nurse’. It is important to explore and describe South African nurses’ moral identities and excavate the post-colonial counter-narrative to nurses’ moral identities, as a better understanding of these identities will enable means to positively address nurses’ moral behaviours. This study explored the moral identity of nurses within the South African context. A qualitative approach was followed, triangulating phenomenological and narrative designs with the same purposively sampled group of professional nurses. In-depth interviews were conducted until saturation of data occurred about the sampled nurses’ lived experiences of being a nurse in South Africa. They were probed about their core personal, social, and professional values. Data were analysed based on the steps used by Colaizzi. These nurses were then asked to write a narrative telling a personal story that portrayed a significant time in their professional career that defines their identity as a nurse. These data were analysed using a critical narrative approach, and the findings of the two sets of data were merged. Ethical approval was obtained, as well as approval from all relevant gatekeepers. In the findings, themes emerged related to the personal, social, and professional values, images, and fundamental principles of being a nurse within the South African context. The findings of this study will inform a future national study including a representative sample of South African nurses.

Keywords: moral behaviour, moral identity, nurses, qualitative research

Procedia PDF Downloads 255
24422 Optimal Classifying and Extracting Fuzzy Relationship from Query Using Text Mining Techniques

Authors: Faisal Alshuwaier, Ali Areshey

Abstract:

Text mining techniques are generally applied for classifying text and finding fuzzy relations and structures in data sets. This research draws on a range of text mining capabilities. One common application is text classification and event extraction, which encompasses deducing specific knowledge concerning incidents referred to in texts. The main contribution of this paper is the clarification of a concept graph generation mechanism, which is based on text classification and optimal fuzzy relationship extraction. Furthermore, the work presented in this paper explains the application of fuzzy relationship extraction and the branch-and-bound method to simplify texts.
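
As a concrete illustration of the fuzzy-relation machinery involved (the paper itself gives no code), the sketch below implements max-product composition of two fuzzy relation membership matrices in Python; the matrices `R` and `S` are invented toy data, not values from the study.

```python
import numpy as np

def max_prod_composition(R, S):
    """Max-product composition of two fuzzy relations.

    R: membership matrix on X x Y, S: membership matrix on Y x Z.
    Returns T on X x Z with T[i, k] = max_j R[i, j] * S[j, k].
    """
    R, S = np.asarray(R), np.asarray(S)
    # Broadcast pairwise products, then take the maximum over the middle index.
    return (R[:, :, None] * S[None, :, :]).max(axis=1)

# Toy example: terms-to-concepts and concepts-to-classes memberships.
R = np.array([[0.8, 0.3],
              [0.2, 0.9]])
S = np.array([[0.7, 0.4],
              [0.5, 1.0]])
print(max_prod_composition(R, S))  # [[0.56, 0.32], [0.45, 0.9]]
```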

Keywords: extraction, max-prod, fuzzy relations, text mining, memberships, classification

Procedia PDF Downloads 548
24421 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection

Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine

Abstract:

Atrial fibrillation (AF) is considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients situate AF among the most challenging medical issues. Automatic, early, and fast AF detection is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation. However, the published results do not show satisfactory classification accuracy. This work aimed at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly-accessible sets of clinical data (AF Termination Challenge Database, MIT-BIH AF, Normal Sinus Rhythm RR Interval Database, and MIT-BIH Normal Sinus Rhythm Databases) were used for assessment. All time series were segmented into 1-min RR interval windows, and four specific features were then calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find important features to discriminate between AF and Normal Sinus Rhythm. Despite its very simple structure, the LVQ model performs better on the analyzed databases than existing algorithms do, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection method holds several interesting properties and can be implemented with just a few arithmetic operations, which makes it a suitable choice for telecare applications.
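
For readers who want a feel for the pipeline, the following Python sketch combines PCA feature reduction with a minimal LVQ1 classifier (scikit-learn has no built-in LVQ, so a small hand-rolled version is used). The synthetic matrix `X` merely stands in for the four RR-interval features per 1-min window; nothing here reproduces the paper's databases or results.

```python
import numpy as np
from sklearn.decomposition import PCA

def lvq1_train(X, y, n_proto_per_class=2, lr=0.05, epochs=30, seed=0):
    """Minimal LVQ1: move prototypes toward same-class samples, away otherwise."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.where(y == c)[0], n_proto_per_class, replace=False)
        protos.append(X[idx]); labels.append(np.full(n_proto_per_class, c))
    P, L = np.vstack(protos), np.concatenate(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(((P - X[i]) ** 2).sum(axis=1))  # nearest prototype
            sign = 1.0 if L[j] == y[i] else -1.0
            P[j] += sign * lr * (X[i] - P[j])
    return P, L

def lvq_predict(P, L, X):
    d = ((X[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
    return L[np.argmin(d, axis=1)]

# Hypothetical stand-ins for the four RR-interval features per 1-min window.
X = np.random.rand(200, 4); y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
Xr = PCA(n_components=2).fit_transform(X)   # feature reduction, as in the paper
P, L = lvq1_train(Xr, y)
print((lvq_predict(P, L, Xr) == y).mean())  # training accuracy of the sketch
```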

Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine

Procedia PDF Downloads 238
24420 Parameter Estimation of Gumbel Distribution with Maximum Likelihood Based on Broyden-Fletcher-Goldfarb-Shanno Quasi-Newton

Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani

Abstract:

Extreme data in an observation can occur due to unusual circumstances. Such data can provide important information that cannot be provided by other data, so their existence needs to be further investigated. One method for obtaining extreme data is the block maxima method, and the distribution of extreme data sets taken with the block maxima method is called the extreme value distribution: the Gumbel distribution with two parameters. The maximum likelihood (ML) estimates of the Gumbel distribution parameters cannot be determined exactly in closed form, so a numerical approach is necessary. The purpose of this study was to estimate the parameters of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method for unconstrained nonlinear optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has the form of a double exponential function. The quasi-Newton BFGS method is a development of Newton's method. Newton's method uses the second derivative to calculate the parameter value changes in each iteration; it is then modified with the addition of a step length to guarantee convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is modified by updating an approximation of the second derivative at each iteration. The parameter estimation of the Gumbel distribution by this numerical approach is done by finding the parameter values that maximize the likelihood; this requires the gradient vector and the Hessian matrix. This research combines theory and application, drawing on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and the estimates of the Gumbel distribution parameters. The estimation method was then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The results indicate that the intensity and range of high rainfall in Purworejo District have decreased.
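
The estimation step described above can be sketched in a few lines with SciPy's BFGS quasi-Newton routine applied to the negative Gumbel log-likelihood. The sample below is synthetic (the Purworejo rainfall data are not public here), and parameterizing `beta` on the log scale is an implementation convenience, not part of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def gumbel_neg_loglik(theta, x):
    """Negative log-likelihood of the Gumbel (extreme value) distribution.

    pdf: f(x) = (1/beta) * exp(-z - exp(-z)),  z = (x - mu) / beta.
    theta = (mu, log_beta); beta is optimized on the log scale to stay positive.
    """
    mu, log_beta = theta
    beta = np.exp(log_beta)
    z = (x - mu) / beta
    return np.sum(np.log(beta) + z + np.exp(-z))

# Hypothetical block-maxima sample standing in for the daily rainfall data.
rng = np.random.default_rng(1)
x = rng.gumbel(loc=50.0, scale=12.0, size=200)

res = minimize(gumbel_neg_loglik, x0=np.array([x.mean(), np.log(x.std())]),
               args=(x,), method="BFGS")   # quasi-Newton BFGS, as in the paper
mu_hat, beta_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, beta_hat)
```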

Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton

Procedia PDF Downloads 297
24419 The Impact of the Great Irish Famine on Irish Mass Migration to the United States at the Turn of the Twentieth Century

Authors: Gayane Vardanyan, Gaia Narciso, Battista Severgnini

Abstract:

This paper investigates the long-run impact of the Great Irish Famine on emigration from Ireland at the turn of the twentieth century. To do so, we combine the 1901 and 1911 Irish Census data sets with the Ellis Island Administrative Records on Irish migrants to the United States. We find that the migrants were more likely to be Catholic, literate, unmarried, young, and Gaelic-speaking compared to those who stayed. Running individual-level specifications, our preliminary findings suggest that being born in a place where the Famine was more severe increases the probability of becoming a migrant in the long run. We also intend to explore the mechanisms through which this impact occurs.

Keywords: Great Famine, mass migration, long-run impact, mechanisms

Procedia PDF Downloads 216
24418 Use of the Gas Chromatography Method for Hydrocarbons' Quality Evaluation in the Offshore Fields of the Baltic Sea

Authors: Pavel Shcherban, Vlad Golovanov

Abstract:

Currently, there is active geological exploration and development of the subsoil shelf of the Kaliningrad region. To carry out a comprehensive and accurate assessment of the volumes and degree of extraction of hydrocarbons from open deposits, it is necessary not only to establish a number of geological and lithological characteristics of the structures under study, but also to determine the oil quality (its viscosity, density, and fractional composition) as accurately as possible. Among the methods considered, gas chromatography is one of the most productive, allowing a significant amount of initial data to be generated rapidly. This article examines aspects of applying the gas chromatography method to determine the chemical characteristics of hydrocarbons from the Kaliningrad shelf fields, together with a correlation-regression analysis of these parameters in comparison with the previously obtained chemical characteristics of hydrocarbon deposits located on land in the region. In the course of the research, a number of methods of mathematical statistics and computer processing of large data sets have been applied, making it possible to evaluate the identity of the deposits, refine the amount of reserves, and make a number of assumptions about the genesis of the hydrocarbons under analysis.

Keywords: computer processing of large databases, correlation-regression analysis, hydrocarbon deposits, method of gas chromatography

Procedia PDF Downloads 133
24417 Input and Interaction as Training for Cognitive Learning: Variation Sets Influence the Sudden Acquisition of Periphrastic estar 'to be' + verb + -ndo

Authors: Mary Rosa Espinosa-Ochoa

Abstract:

Some constructions appear suddenly in children’s speech and are productive from the beginning. These constructions are supported by others, previously acquired, with which they share semantic and pragmatic features. Thus, for example, the acquisition of the passive voice in German is supported by other constructions with which it shares the lexical verb sein (“to be”). This also occurs in Spanish, in the acquisition of the progressive aspectual periphrasis estar (“to be”) + verb root + -ndo (present participle), supported by locative constructions acquired earlier with the same verb. The periphrasis shares with the locative constructions not only the lexical verb estar, but also pragmatic relations. Both constructions can be used to answer the question ¿Dónde está? (“Where is he/she/it?”), whose answer could be either Está aquí (“He/she/it is here”) or Se está bañando (“He/she/it is taking a bath”). This study is a corpus-based analysis of two children (1;08-2;08) and the input directed to them: it proposes that the pragmatic and semantic support from previously-acquired constructions comes from the input, during interaction with others. This hypothesis is based on analysis of constructions with estar, whose use to express temporal change (which differentiates it from its counterpart ser [“to be”]) is given in variation sets, similar to those described by Küntay and Slobin (2002), which allow the child to perceive the change of place experienced by the nouns that function as its grammatical subject. For example, at different points during a bath, the mother says: El jabón está aquí “The soap is here” (beginning of bath); five minutes later, the soap has moved, and the mother says el jabón está ahí “the soap is there”; the soap moves again later on and she says: el jabón está abajo de ti “the soap is under you”. “The soap” is the grammatical subject of all of these utterances. The Spanish verb + -ndo is a progressive phase aspect encoder of a dynamic state that generates a token reading; it also combines with the verb estar to encode progressive aspect. It is proposed here that the phases experienced in interaction with the adult, in events related to the verb estar, allow a child to generate this dynamicity and token reading of the verb + -ndo. In this way, children begin to produce the periphrasis suddenly and productively, even though neither the periphrasis nor the verb + -ndo itself are frequent in adult speech.

Keywords: child language acquisition, input, variation sets, Spanish language

Procedia PDF Downloads 119
24416 Co-Integration and Error Correction Mechanism of Supply Response of Sugarcane in Pakistan (1980-2012)

Authors: Himayatullah Khan

Abstract:

This study estimates the supply response function of sugarcane in Pakistan from 1980-81 to 2012-13, using a co-integration approach and an error correction mechanism. The sugarcane production, area, and price series were tested for unit roots using the Augmented Dickey-Fuller (ADF) test; the study found that these series were stationary at their first differences. Using the Augmented Engle-Granger test and the Cointegrating Regression Durbin-Watson (CRDW) test, the study found that “production and price” and “area and price” were co-integrated, suggesting that the two sets of time series have a long-run or equilibrium relationship. The results of the error correction models for the two sets of series showed that there may be disequilibrium in the short run. The Engle-Granger residual may be thought of as the equilibrium error, which can be used to tie the short-run behavior of the dependent variable to its long-run value. The Granger-causality test results showed that the log of price Granger-caused both the log of production and the log of area, whereas the log of production and the log of area Granger-caused each other.
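
A minimal sketch of this workflow with statsmodels follows: ADF unit-root tests, the Engle-Granger co-integration test, a hand-built error correction regression, and Granger-causality tests. The series are simulated stand-ins for the sugarcane production and price data, and variable names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, coint, grangercausalitytests

# Hypothetical yearly series standing in for log(production) and log(price).
rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(0.02, 0.05, 33))        # I(1) driver
prod = 0.8 * price + rng.normal(0, 0.02, 33)         # co-integrated with price

print(adfuller(prod)[1], adfuller(np.diff(prod))[1]) # unit root: level vs diff
print(coint(prod, price)[1])                         # Engle-Granger p-value

# Error correction model: short-run dynamics tied to the lagged equilibrium error.
eg = sm.OLS(prod, sm.add_constant(price)).fit()
dy, dx, ec = np.diff(prod), np.diff(price), eg.resid[:-1]
ecm = sm.OLS(dy, sm.add_constant(np.column_stack([dx, ec]))).fit()
print(ecm.params)                                    # ec coefficient < 0 expected

grangercausalitytests(pd.DataFrame({"prod": prod, "price": price}), maxlag=2)
```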

Keywords: co-integration, error correction mechanism, Granger-causality, sugarcane, supply response

Procedia PDF Downloads 410
24415 Determining the Number of Single Models in a Combined Forecast

Authors: Serkan Aras, Emrah Gulay

Abstract:

Combining various forecasting models is an important tool for researchers seeking more accurate forecasts. A great number of papers have shown that selecting single models that are as dissimilar as possible, or methods based on different information, leads to better forecasting performance. However, there is no established rule regarding the number of single models to be used in any combining method. This study focuses on determining the optimal or near-optimal number of single models with the help of statistical tests. An extensive experiment is carried out utilizing some well-known time series data sets from diverse fields. Furthermore, many rival forecasting methods and some of the commonly used combining methods are employed. The obtained results indicate that statistically significant performance differences can be found with respect to the number of single models in the combining methods under investigation.
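
As a rough illustration of the question being asked (not the paper's actual experiment or test battery), the sketch below averages every subset of single models and reports mean RMSE by combination size; the three "models" are synthetic forecasts, and statistical tests such as those in the paper would then compare accuracy across the sizes.

```python
import numpy as np
from itertools import combinations

def rmse_by_combination_size(forecasts, actual):
    """Average every subset of single models; report mean RMSE per subset size.

    forecasts: dict name -> array of out-of-sample forecasts.
    """
    names = list(forecasts)
    out = {}
    for k in range(1, len(names) + 1):
        errs = [np.sqrt(np.mean((np.mean([forecasts[n] for n in sub], axis=0)
                                 - actual) ** 2))
                for sub in combinations(names, k)]
        out[k] = float(np.mean(errs))
    return out

# Hypothetical forecasts from three dissimilar single models.
rng = np.random.default_rng(3)
actual = np.sin(np.linspace(0, 6, 50))
fc = {m: actual + rng.normal(0, s, 50) for m, s in [("a", .3), ("b", .3), ("c", .4)]}
print(rmse_by_combination_size(fc, actual))  # combining tends to lower RMSE
```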

Keywords: combined forecast, forecasting, M-competition, time series

Procedia PDF Downloads 328
24414 Processing of Input Material as a Way to Improve the Efficiency of the Glass Production Process

Authors: Joanna Rybicka-Łada, Magda Kosmal, Anna Kuśnierz

Abstract:

One of the main problems of the glass industry is the still high consumption of energy needed to produce glass mass, as well as the increase in the prices of fuels and raw materials. Therefore, comprehensive actions are taken to improve the entire production process. The key element of these activities, starting from filling the set (the batch of raw materials) to receiving the finished product, is the melting process, whose task is, among others, to dissolve the components of the set, remove bubbles from the resulting melt, and obtain a chemically homogeneous glass melt. This process consumes over 90% of the total energy needed in the production process. The processes occurring in the set during its conversion have a significant impact on the further stages and speed of the melting process and, thus, on its overall effectiveness. The speed and course of the reactions depend on the chemical nature of the raw materials, the degree of their fragmentation, thermal treatment, as well as the form of the introduced set. An opportunity to minimize segregation and accelerate the conversion of glass sets may be the development of new technologies for preparing and dosing sets. The previously preferred traditional method of melting the set, based on mixing all glass raw materials together in loose form, can be replaced with a set in a thickened form; this solution avoids dust formation during filling and is already available on the market. The aim of the project was to develop a glass set in a selectively or completely densified form and to examine the influence of set processing on the melting process and the properties of the glass.

Keywords: glass, melting process, glass set, raw materials

Procedia PDF Downloads 33
24413 Presenting a Model Based on Artificial Neural Networks to Predict the Execution Time of Design Projects

Authors: Hamed Zolfaghari, Mojtaba Kord

Abstract:

After the feasibility study, the design phase starts, and all subsequent phases depend heavily on it. Forecasting the duration of the design phase would therefore save a great deal of time. This study provides a fast and accurate machine learning (ML) and optimization framework which allows a quick duration estimate for the project design phase, improving the operational efficiency and competitiveness of a design and construction company. Three data sets spanning three years, composed of the daily time spent on different design projects, were used to train and validate the ML models across multiple projects. Our study concluded that an Artificial Neural Network (ANN) achieved an accuracy of 0.94.
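
A minimal sketch of such a duration model with scikit-learn's MLPRegressor is shown below; the features and the three-way train/validation/test split are invented stand-ins for the company's daily time-log data, and the 0.94 figure from the paper is not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Hypothetical project features: size, revisions, team size, client complexity.
rng = np.random.default_rng(7)
X = rng.random((300, 4))
y = 40 * X[:, 0] + 15 * X[:, 1] + rng.normal(0, 2, 300)  # duration in days

# Split into train / validation / test, mirroring the paper's three-set setup.
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5,
                                            random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print(r2_score(y_val, ann.predict(X_val)))  # validation R^2, cf. the reported 0.94
print(r2_score(y_te, ann.predict(X_te)))
```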

Keywords: time estimation, machine learning, Artificial neural network, project design phase

Procedia PDF Downloads 49
24412 Energy Efficient Clustering with Adaptive Particle Swarm Optimization

Authors: Kumar Shashvat, Arshpreet Kaur, Rajesh Kumar, Raman Chadha

Abstract:

Wireless sensor networks (WSNs) have the principal characteristic of restricted energy, with the limitation that node energy cannot be replenished. To increase network lifetime in this scenario, routes for data transmission are chosen such that energy utilization along the selected route is minimal. Such an energy-efficient network needs a sound infrastructure, because the infrastructure affects the network lifespan. Clustering is a technique in which nodes are grouped into disjoint, non-overlapping sets, with data collected at the cluster head. In this paper, an Adaptive-PSO algorithm is proposed which forms energy-aware clusters by minimizing the cost of locating the cluster head. The main concern is the suitability of the swarms, addressed by adjusting the learning parameters of PSO. Particle Swarm Optimization converges quickly at the beginning of the search, but over the course of time it becomes stable and may be trapped in local optima. In the suggested network model, swarms are given the intelligence of spiders, which makes them capable of avoiding early convergence and also helps them escape from local optima. Comparison analysis with traditional PSO shows that the new algorithm considerably enhances performance when multi-dimensional functions are taken into consideration.
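
To make the optimization concrete, here is a plain (non-adaptive) PSO sketch that places k cluster heads so as to minimize a nearest-head energy proxy; the spider-inspired adaptation of the learning parameters described in the paper is noted in a comment but not implemented, and the sensor field is random toy data.

```python
import numpy as np

def ch_cost(heads, nodes):
    """Energy proxy: each node transmits to its nearest cluster head."""
    d2 = ((nodes[:, None, :] - heads[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

def pso_cluster_heads(nodes, k=3, swarm=20, iters=100, seed=0):
    """Plain PSO over cluster-head coordinates; the paper's adaptive variant
    additionally adjusts the learning parameters to escape local optima."""
    rng = np.random.default_rng(seed)
    dim = k * 2
    pos = rng.random((swarm, dim))
    vel = np.zeros((swarm, dim))
    pbest = pos.copy()
    pbest_f = np.array([ch_cost(p.reshape(k, 2), nodes) for p in pos])
    g = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and learning parameters
    for _ in range(iters):
        r1, r2 = rng.random((2, swarm, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, 0, 1)
        f = np.array([ch_cost(p.reshape(k, 2), nodes) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g.reshape(k, 2), pbest_f.min()

nodes = np.random.rand(60, 2)           # hypothetical sensor field in a unit square
heads, cost = pso_cluster_heads(nodes)
print(heads, cost)
```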

Keywords: particle swarm optimization, adaptive PSO, comparison between PSO and A-PSO, energy-efficient clustering

Procedia PDF Downloads 220
24411 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring

Authors: Daniel Fundi Murithi

Abstract:

Data from economic, social, clinical, and industrial studies are in some way incomplete or incorrect due to censoring. Such data may have adverse effects if used in the estimation problem. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and Newton-Raphson (NR) algorithms. These algorithms are compared because both iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that in most simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller RMSE than those generated via the Newton-Raphson (NR) algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization (EM) algorithm performs better than the Newton-Raphson (NR) algorithm in all simulation cases under the progressive type-II censoring scheme.
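
The likelihood being maximized can be written down compactly: under progressive type-II censoring, L(θ) ∝ ∏ f(x_i; θ) [1 − F(x_i; θ)]^{R_i}, where R_i units are removed at the i-th observed failure. The sketch below codes this for the two-parameter Rayleigh and maximizes it numerically; a general-purpose optimizer stands in for the paper's EM and Newton-Raphson iterations, and the data and removal scheme are invented.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, x, R):
    """Progressive type-II censored log-likelihood for a 2-parameter Rayleigh.

    f(x) = ((x - mu) / lam^2) * exp(-(x - mu)^2 / (2 lam^2)),  x > mu.
    R[i] is the number of units removed when the i-th failure is observed.
    """
    mu, lam = theta
    if lam <= 0 or np.any(x <= mu):
        return np.inf
    z2 = (x - mu) ** 2 / (2 * lam ** 2)
    logf = np.log(x - mu) - 2 * np.log(lam) - z2
    logS = -z2                       # survival: exp(-(x - mu)^2 / (2 lam^2))
    return -np.sum(logf + R * logS)

# Hypothetical progressively censored sample (m = 8 observed failures).
x = np.array([1.2, 1.5, 1.9, 2.2, 2.6, 3.0, 3.5, 4.1])
R = np.array([2, 0, 1, 0, 2, 0, 1, 3])   # removals at each failure time

res = minimize(neg_loglik, x0=np.array([0.5, 1.0]), args=(x, R),
               method="Nelder-Mead")     # stand-in for the paper's EM/NR iterations
print(res.x)  # (mu_hat, lambda_hat)
```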

Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring

Procedia PDF Downloads 129
24410 The Effect of Strength Training and Consumption of Glutamine Supplement on GH/IGF1 Axis

Authors: Alireza Barari

Abstract:

Physical activity and diet are factors that influence the body's structure. The purpose of this study was to compare the effects of four weeks of resistance training and glutamine supplement consumption on the growth hormone (GH) and insulin-like growth factor 1 (IGF-1) axis. 40 amateur male bodybuilders participated in this study. They were randomly divided into four equal groups: Resistance (R), Glutamine (G), Resistance with Glutamine (RG), and Control (C). The R group was assigned a four-week resistance training program (three times/week, three sets of 10 exercises with 6-10 repetitions at 80-95% 1RM (One Repetition Maximum), with 120 seconds of rest between sets), the G group consumed L-glutamine (0.1 g/kg-1/day-1), the RG group performed resistance training while consuming L-glutamine, and the C group continued their normal lifestyle without exercise training. GH, IGF-1, and IGFBP-III plasma levels were measured before and after the protocol. One-way ANOVA indicated significant changes in GH, IGF-1, and IGFBP-III between the four groups, and the Tukey test demonstrated significant increases in GH, IGF-1, and IGFBP-III plasma levels in the R and RG groups. Based upon these findings, we concluded that resistance training at 80-95% 1RM intensity, with or without oral glutamine, significantly increases the secretion of GH, IGF-1, and IGFBP-III in amateur males, but the addition of oral glutamine to the exercise program did not make a significant difference in GH, IGF-1, and IGFBP-III.

Keywords: strength, glutamine, growth hormone, insulin-like growth factor 1

Procedia PDF Downloads 279
24409 Thermoelectric Cooler as a Heat Transfer Device for Thermal Conductivity Test

Authors: Abdul Murad Zainal Abidin, Azahar Mohd, Nor Idayu Arifin, Siti Nor Azila Khalid, Mohd Julzaha Zahari Mohamad Yusof

Abstract:

A thermoelectric cooler (TEC) is an electronic component that uses the Peltier effect to create a temperature difference by transferring heat between two electrical junctions of two different types of materials. A TEC can also be used for heating by reversing the electric current flow, and even for power generation. A heat flow meter (HFM) is an instrument for measuring the thermal conductivity of building materials. During the test, water is used as the heat transfer medium to cool the HFM. Existing re-circulating coolers on the market are very costly, and the alternative is to use piped tap water to extract heat from the HFM. However, the tap water temperature is insufficiently low to enable heat transfer to take place. The operating temperature for the isothermal plates in the HFM is 40°C with a range of ±0.02°C; when the temperature exceeds the operating range, the HFM stops working, and the test cannot be conducted. The aim of the research is to develop a low-cost but energy-efficient TEC prototype that enables heat transfer without compromising the function of the HFM. The objectives of the research are a) to identify the potential of the TEC as a cooling device by evaluating its cooling rate, and b) to determine the amount of water saved using the TEC compared to normal tap water. Four (4) Peltier sets were used, with two (2) sets used as a pre-cooler. The cooling water is re-circulated from the reservoir into the HFM using a water pump. The thermal conductivity readings, water flow rate, and power consumption were measured while the HFM was operating. The measured data showed a decrease in average cooling temperature difference (ΔTave) of 2.42°C and an average cooling rate of 0.031°C/min. The water savings accrued from using the TEC are projected to be 8,332.8 litres/year with the application of water re-circulation. The results suggest the prototype has achieved the required objectives. Further research will include comparing the cooling rate of the TEC prototype against conventional tap water and optimizing its design and performance in terms of size and portability. The possible application of the prototype could also be expanded to portable storage for medicine and beverages.

Keywords: energy efficiency, thermoelectric cooling, pre-cooling device, heat flow meter, sustainable technology, thermal conductivity

Procedia PDF Downloads 135
24408 Damage Identification Using Experimental Modal Analysis

Authors: Niladri Sekhar Barma, Satish Dhandole

Abstract:

Damage identification, in the context of safety, has become a fundamental research interest in the fields of mechanical, civil, and aerospace engineering structures. This research aims to identify damage in a mechanical beam structure, quantify the severity or extent of damage in terms of loss of stiffness, and obtain an updated analytical Finite Element (FE) model. An FE model is used for analysis, and the location of damage for single and multiple damage cases is identified numerically using the modal strain energy method and the mode shape curvature method. Experimental data were acquired with the help of an accelerometer. A Fast Fourier Transform (FFT) algorithm is applied to the measured signal, and subsequently, post-processing is done in MEscopeVES software. The two sets of data, from the numerical FE model and the experimental results, are compared to locate the damage accurately. The extent of the damage is identified via modal frequencies using a mixed numerical-experimental technique. Mode shape comparison is performed with the Modal Assurance Criterion (MAC). The analytical FE model is adjusted by the direct method of model updating. The same study has been extended to real-life structures such as plate and GARTEUR structures.
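
The MAC used for mode shape comparison is a simple normalized correlation between shape vectors; a NumPy sketch follows, with randomly generated "FE" and "experimental" shapes standing in for the beam's actual modes.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two sets of mode shapes.

    Columns are mode shapes; MAC[i, j] = |phi_a_i^T phi_b_j|^2 /
    ((phi_a_i^T phi_a_i) (phi_b_j^T phi_b_j)). Values near 1 mean the
    analytical and experimental shapes are consistent.
    """
    num = np.abs(phi_a.T @ phi_b) ** 2
    den = np.outer(np.sum(phi_a * phi_a, axis=0), np.sum(phi_b * phi_b, axis=0))
    return num / den

# Hypothetical FE vs measured mode shapes (10 DOFs, 3 modes each).
rng = np.random.default_rng(2)
phi_fe = rng.normal(size=(10, 3))
phi_exp = phi_fe + 0.05 * rng.normal(size=(10, 3))  # small experimental deviation
print(np.round(mac(phi_fe, phi_exp), 3))            # diagonal should be near 1
```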

Keywords: damage identification, damage quantification, damage detection using modal analysis, structural damage identification

Procedia PDF Downloads 84
24407 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site for improved subsurface imaging. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. Currently available software applications are generally based on a one-dimensional assumption and designed for a desktop personal computer. Hence, they are usually incapable of imaging three-dimensional (3D) processes/variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitations of personal computers. High-performance, integrated software that enables real-time integration of multi-process geophysical methods is therefore needed. E4D-MP enables the integration and inversion of time-lapse, large-scale data surveys from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 125
24406 Experiences of Online Opportunities and Risks: Examining Internet Use and Digital Literacy of Young People in Nigeria

Authors: Isah Yahaya Aliyu

Abstract:

Research on Internet use has often approached beneficial uses of the Internet (online opportunities) as separate from young people's risky encounters online (online risks). However, empirical evidence from diverse contexts appears to increasingly support the fusion of the two sets of online activities. Hence, the current research investigates the correlation of Internet use (IU) and digital literacy (DL) with online opportunities (OP) and risks (OR), using data from a Nigerian context, where there is a paucity of research integrating opportunities and risks in the same study. A web-based survey was administered to 335 undergraduate students in Northeastern Nigeria. Underpinned by Livingstone and Helsper's model, the findings are largely consistent with existing literature: IU and DL influence OP (R2 = 0.791, SE = 0.265, F-stat = 626.566, p < .001), and IU and DL equally influence OR (R2 = 0.343, SE = 0.465, F-stat = 86.671, p < .001). OP and OR were found to correlate strongly and positively (r = .667, n = 335, p < 0.01). This study provides buttressing evidence from a Nigerian context of the fusion of the benefits and risks of the Internet among young people. It also upholds the argument for improved literacy as a strategy for minimizing risks and harm, rather than restricting use. Other theoretical and policy implications of the findings are discussed in line with local and global debates about the Internet and its attendant effects.

Keywords: digital, internet, literacy, opportunities, risks

Procedia PDF Downloads 50
24405 Distributional and Developmental Analysis of PM2.5 in Beijing, China

Authors: Alexander K. Guo

Abstract:

PM2.5 poses a large threat to people’s health and the environment and is an issue of large concern in Beijing, brought to the attention of the government by the media. In addition, both the United States Embassy in Beijing and the government of China have increased monitoring of PM2.5 in recent years and have made real-time data available to the public. This report utilizes hourly historical data (2008-2016) from the U.S. Embassy in Beijing for the first time. The first objective was to fit probability distributions to the data to better predict the number of days exceeding the standard, and the second was to uncover any yearly, seasonal, monthly, daily, and hourly patterns and trends, for a better understanding of air control policy. Of these data, 66,650 hours across 2,687 days were valid. Lognormal, gamma, and Weibull distributions were fit to the data through parameter estimation, and the Chi-squared test was employed to compare the actual data with the fitted distributions. The data were used to uncover trends, patterns, and improvements in PM2.5 concentration over the period with valid data, as well as over specific periods that received large amounts of media attention, to gain a better understanding of the causes of air pollution. The data clearly indicate that Beijing’s air quality is unhealthy, with an average of 94.07 µg/m3 across all 66,650 hours with valid data. It was found that no distribution fit the entire dataset of all 2,687 days well, but each of the three distribution types was optimal in at least one of the yearly data sets, with the lognormal distribution fitting recent years better. An improvement in air quality beginning in 2014 was discovered, with the first five months of 2016 reporting an average PM2.5 concentration 23.8% lower than the average of the same period across all years, perhaps the result of various new pollution-control policies. It was also found that the winter and fall months contained more days in both the good and extremely polluted categories, leading to a higher average but a comparable median in these months. Additionally, the evening hours, especially in winter, reported much higher PM2.5 concentrations than the afternoon hours, possibly due to the prohibition of trucks in the city in the daytime and the increased use of coal for heating in the colder months when residents are home in the evening. Lastly, analysis of special intervals that attracted media attention for either unnaturally good or bad air quality shows that the government’s temporary pollution control measures, such as more intensive road-space rationing and factory closures, are effective. In summary, air quality in Beijing is improving steadily and does follow standard probability distributions to an extent, but still needs improvement. The analysis will be updated when new data become available.
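
The distribution-fitting-plus-chi-square procedure can be sketched as follows with SciPy; the lognormal sample is synthetic and merely mimics the character of hourly PM2.5 readings, so none of the reported statistics are reproduced.

```python
import numpy as np
from scipy import stats

def fit_and_test(data, dist, bins=30):
    """Fit a distribution by ML and chi-square test observed vs expected counts."""
    params = dist.fit(data)
    obs, edges = np.histogram(data, bins=bins)
    exp = len(data) * np.diff(dist.cdf(edges, *params))
    exp *= obs.sum() / exp.sum()          # normalize so the test sums match
    chi2, p = stats.chisquare(obs, exp, ddof=len(params))
    return params, chi2, p

# Hypothetical hourly PM2.5 concentrations (µg/m^3) standing in for one year.
rng = np.random.default_rng(5)
pm25 = rng.lognormal(mean=4.2, sigma=0.8, size=8000)

for dist in (stats.lognorm, stats.gamma, stats.weibull_min):
    params, chi2, p = fit_and_test(pm25, dist)
    print(dist.name, round(chi2, 1), round(p, 4))
```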

Keywords: Beijing, distribution, patterns, pm2.5, trends

Procedia PDF Downloads 219
24404 Fractal-Wavelet Based Techniques for Improving the Artificial Neural Network Models

Authors: Reza Bazargan lari, Mohammad H. Fattahi

Abstract:

Natural resources management, including water resources, requires reliable estimation of time-variant environmental parameters. Small improvements in the estimation of environmental parameters would have great effects on management decisions. Noise reduction using wavelet techniques is an effective approach for pre-processing practical data sets. The predictability enhancement of river flow time series is assessed using fractal approaches before and after applying wavelet-based pre-processing. Time series correlation and persistency, the minimum sufficient length for training the predicting model, and the maximum valid length of predictions were also investigated through a fractal assessment.
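
A sketch of the two ingredients, wavelet soft-threshold de-noising (using PyWavelets, assumed available) and a rescaled-range Hurst estimate as the fractal persistency measure, is given below on a simulated series; the paper's specific wavelet choices and fractal procedure may differ.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_denoise(x, wavelet="db4", level=3):
    """Soft-threshold detail coefficients (universal threshold) and reconstruct."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def hurst_rs(x):
    """Rescaled-range estimate of the Hurst exponent (persistency indicator)."""
    ns = np.unique(np.logspace(3, np.log2(len(x) // 2), 10, base=2).astype(int))
    rs = []
    for n in ns:
        chunks = x[: len(x) // n * n].reshape(-1, n)
        z = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        r = z.max(axis=1) - z.min(axis=1)
        s = chunks.std(axis=1)
        rs.append(np.mean(r[s > 0] / s[s > 0]))
    return np.polyfit(np.log(ns), np.log(rs), 1)[0]

flow = np.cumsum(np.random.randn(4096))                   # hypothetical river flow
print(hurst_rs(flow), hurst_rs(wavelet_denoise(flow)))    # persistency before/after
```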

Keywords: wavelet, de-noising, predictability, time series fractal analysis, valid length, ANN

Procedia PDF Downloads 343
24403 Constructing a Physics Guided Machine Learning Neural Network to Predict Tonal Noise Emitted by a Propeller

Authors: Arthur D. Wiedemann, Christopher Fuller, Kyle A. Pascioni

Abstract:

With the introduction of electric motors, small unmanned aerial vehicle designers have to consider trade-offs between acoustic noise and generated thrust. Currently, there are few low-computational-cost tools available for predicting the acoustic noise emitted by a propeller into the far field. Artificial neural networks offer a highly non-linear and adaptive model for predicting isolated and interactive tonal noise, but they require large data sets, often more than is practical to obtain from experiments. A methodology known as physics-guided machine learning has been applied in this study to reduce the data set required to train the network. After building and evaluating several neural networks, the best model is investigated to determine how the network successfully predicts the acoustic waveform. Lastly, a post-network transfer function is developed to remove discontinuity from the predicted waveform. Overall, methodologies from physics-guided machine learning show a notable improvement in prediction performance, but additional loss functions are necessary for constructing predictive networks on small datasets.

Keywords: aeroacoustics, machine learning, propeller, rotor, neural network, physics guided machine learning

Procedia PDF Downloads 180
24402 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data

Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali

Abstract:

This research paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve the spatial and temporal resolution of ET predictions. In the model development, key input features are measured and computed using mathematical equations such as Penman-Monteith (FAO56) and the soil water balance (SWB), covering soil-environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split into train, test, and validation sets in several different proportions, and kernel functions with tuned hyperparameters are used to train and improve the accuracy of the prediction model over multiple iterations. This paper also outlines existing methods and machine learning techniques for determining evapotranspiration, data collection and preprocessing, model construction, and evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictability of the developed model on the basis of performance evaluation metrics (R2, RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within soil and environmental parameters provides insights into its potential applications for water resource management and hydrological ecosystems.
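
A compact sketch of the SVM workflow with scikit-learn follows: standardized features, an SVR with kernel and hyperparameters tuned by cross-validated grid search, and held-out scoring. The four features and the target are simulated stand-ins for the sensor-node measurements and the FAO56-derived ET values.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV, train_test_split

# Hypothetical daily records: Rs, T, RH, u2 (a subset of the paper's inputs).
rng = np.random.default_rng(11)
n = 365
X = np.column_stack([rng.uniform(5, 30, n),      # solar radiation Rs
                     rng.uniform(5, 35, n),      # air temperature T
                     rng.uniform(20, 95, n),     # relative humidity RH
                     rng.uniform(0.5, 5, n)])    # wind speed u2
et = 0.08 * X[:, 0] + 0.05 * X[:, 1] - 0.01 * X[:, 2] + 0.3 * X[:, 3] \
     + rng.normal(0, 0.3, n)                     # stand-in for FAO56 ET target

X_tr, X_te, y_tr, y_te = train_test_split(X, et, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR())
grid = GridSearchCV(model, {"svr__kernel": ["rbf", "linear"],
                            "svr__C": [1, 10, 100], "svr__epsilon": [0.05, 0.1]},
                    cv=5, scoring="neg_root_mean_squared_error")
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))  # tuned kernel, held-out score
```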

Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors

Procedia PDF Downloads 33
24401 Implementation of an IoT Sensor Data Collection and Analysis Library

Authors: Jihyun Song, Kyeongjoo Kim, Minsoo Lee

Abstract:

Due to the development of information technology and wireless Internet technology, various data are being generated in various fields. These data are advantageous in that they provide real-time information to users. However, when the data are accumulated and analyzed, much more varied information can be extracted. In addition, the development and dissemination of boards such as the Arduino and Raspberry Pi have made it possible to easily test various sensors, and sensor data can be collected directly using database application tools such as MySQL. These directly collected data can be used for various research purposes and are useful for data mining. However, collecting data with such boards presents many difficulties, especially for users who are not computer programmers or who are doing it for the first time. Even when data are collected, a lack of expert knowledge or experience may cause difficulties in data analysis and visualization. In this paper, we aim to construct a library for sensor data collection and analysis to overcome these problems.
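
A toy version of what such a library might look like is sketched below, with SQLite standing in for MySQL as the collection store and a single clustering entry point wrapping algorithms named in the keywords (k-medoids is omitted since scikit-learn lacks it); all table and column names are illustrative, not from the paper.

```python
import sqlite3
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

# Simulate collection: store sensor readings in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (ts REAL, temperature REAL, humidity REAL)")
rng = np.random.default_rng(4)
rows = [(t, 20 + 5 * rng.random(), 40 + 20 * rng.random()) for t in range(500)]
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)

def load_sensor_data(conn):
    """Fetch accumulated sensor rows as a feature matrix."""
    return np.array(conn.execute(
        "SELECT temperature, humidity FROM readings").fetchall())

def cluster(X, method="kmeans", **kw):
    """Uniform entry point hiding the algorithm choice from non-programmers."""
    algo = KMeans(n_clusters=kw.get("k", 3), n_init=10) if method == "kmeans" \
        else DBSCAN(eps=kw.get("eps", 2.0), min_samples=kw.get("min_samples", 5))
    return algo.fit_predict(X)

X = load_sensor_data(conn)
print(np.bincount(cluster(X, "kmeans", k=3)))  # cluster sizes
```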

Keywords: clustering, data mining, DBSCAN, k-means, k-medoids, sensor data

Procedia PDF Downloads 344
24400 Government (Big) Data Ecosystem: Definition, Classification of Actors, and Their Roles

Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis

Abstract:

Organizations, including governments, generate (big) data that are high in volume, velocity, and veracity, and come from a variety of sources. Public administrations are using (big) data, implementing base registries, and enforcing data sharing within the entire government to deliver (big) data related integrated services, provide insights to users, and support good governance. Government (big) data ecosystem actors represent distinct entities that provide data, consume data, manipulate data to offer paid services, and extend data services such as data storage and hosting to other actors. In this research work, we perform a systematic literature review. The key objectives of this paper are to propose a robust definition of the government (big) data ecosystem and a classification of government (big) data ecosystem actors and their roles. We showcase a graphical view of the actors, roles, and their relationships in the government (big) data ecosystem, and we discuss our research findings. We did not find many published research articles about the government (big) data ecosystem, including its definition and the classification of actors and their roles. Therefore, we drew ideas for the government (big) data ecosystem from numerous areas in the literature, including scientific research data, humanitarian data, open government data, and industry data.

Keywords: big data, big data ecosystem, classification of big data actors, big data actors roles, definition of government (big) data ecosystem, data-driven government, eGovernment, gaps in data ecosystems, government (big) data, public administration, systematic literature review

Procedia PDF Downloads 128
24399 Measuring Energy Efficiency Performance of MENA Countries

Authors: Azam Mohammadbagheri, Bahram Fathi

Abstract:

DEA has become a very popular method of performance measurement, but it still suffers from some shortcomings. One of these is the issue of multiple optimal solutions for the weights of efficient DMUs. Cross-efficiency evaluation, an extension of DEA, has been proposed to avoid this problem. Lam (2010) also proposed a mixed-integer linear programming formulation based on linear discriminant analysis and the super-efficiency method (MILP model) to avoid multiple optimal weight solutions. In this study, we modified the MILP model to determine more suitable weight sets and evaluated the energy efficiency of MENA countries as an application of the proposed model.
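
For context, the base DEA scores that cross-efficiency and the MILP refinement build on come from one linear program per DMU; a sketch of the input-oriented CCR multiplier model with scipy.optimize.linprog is below, on invented data. The paper's modified MILP model itself is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y):
    """Input-oriented CCR multiplier model, solved as an LP per DMU.

    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). For DMU o:
    max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    eff = []
    for o in range(n):
        c = np.concatenate([-Y[o], np.zeros(m)])            # maximize u.y_o
        A_ub = np.hstack([Y, -X])                           # u.y_j - v.x_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[o]])[None, :] # v.x_o = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (s + m))
        eff.append(-res.fun)
    return np.array(eff)

# Hypothetical MENA-style data: 2 inputs (energy, labor), 1 output (GDP index).
X = np.array([[4.0, 3.0], [2.5, 4.0], [5.0, 5.0], [3.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.2], [0.9]])
print(np.round(dea_ccr_efficiency(X, Y), 3))  # 1.0 marks efficient DMUs
```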

Keywords: data envelopment analysis, discriminant analysis, cross efficiency, MILP model

Procedia PDF Downloads 660
24398 Massachusetts Homeschool Policy: An Interpretive Analysis of Homeschool Regulation and Oversight

Authors: Lauren Freed

Abstract:

This research proposal outlines an examination of homeschool oversight in the Massachusetts educational system amid the backdrop of ideological differences between various parties with contributing interests. This mixed-methodology study will follow an interpretive policy research approach, involving the use of existing data, surveys, and focus groups. The aim is to capture the distinct sets of meanings, values, feelings, and beliefs held by principal stakeholders, while exploring the ways in which each interacts with, interprets, and implements the homeschool guidelines set forth by the Massachusetts Supreme Judicial Court decision Care and Protection of Charles (1987). This analysis will identify and contextualize the attitudes, administrative choices, financial implications, and educational impacts that result from the process and practice of enacting current homeschool oversight policy in Massachusetts. The following question will guide this study: how do districts, homeschooling parents, and the Massachusetts Department of Elementary and Secondary Education (DESE) regulate, fund, collect, interpret, implement, and report Massachusetts homeschool oversight policy? The resulting analysis will produce a unique and original baseline snapshot of qualitative and quantifiable point-in-time data based on the registered homeschool population in the state of Massachusetts.

Keywords: alternative education, homeschooling, home education, home schooling policy

Procedia PDF Downloads 157
24397 Chronic and Sub-Acute Lumbosacral Radiculopathies Behave Differently to Repeated Back Extension Exercises

Authors: Sami Alabdulwahab

Abstract:

Background: Repeated back extension exercises (RBEEs) are among the management options for symptoms associated with lumbosacral radiculopathy (LSR). RBEEs have been reported to cause changes in the distribution and intensity of radicular symptoms through possible compression/decompression of the compromised nerve root. Purpose: The purpose of this study was to investigate the effects of RBEEs on the neurophysiology of the compromised nerve root, and on standing mobility and pain intensity, in patients with sub-acute and chronic LSR. Methods: A total of 40 patients with unilateral sub-acute/chronic lumbosacral radiculopathy voluntarily participated in the study; the patients performed 3 sets of 10 RBEEs in the prone position with 1 min of rest between sets. The soleus H-reflex, standing mobility, and pain intensity were recorded before and after the RBEEs. Results: RBEEs significantly improved the H-reflex, standing mobility, and pain intensity in patients with sub-acute LSR (p<0.01); there was no significant improvement in the patients with chronic LSR (p<0.61). Conclusion: RBEEs in the prone position are recommended for improving the neurophysiological function of the compromised nerve root and standing mobility in patients with sub-acute LSR. Implication: Sub-acute and chronic LSR responded differently to RBEEs. Patients with sub-acute LSR appear to have flexible and movable disc structures, which could be managed with RBEEs.

Keywords: h-reflex, back extension, lumbosacral radiculopathy, pain

Procedia PDF Downloads 451
24396 Group Consensus of Hesitant Fuzzy Linguistic Variables for Decision-Making Problem

Authors: Chen T. Chen, Hui L. Cheng

Abstract:

Due to differences in knowledge, experience, and expertise, experts usually provide different opinions in the group decision-making process. Therefore, reaching a group consensus among the opinions of experts is an important issue in group multiple-criteria decision-making (GMCDM). Because the subjective opinions of experts are inherently fuzzy and uncertain, it is difficult to describe the real opinions of experts or decision-makers with crisp values, and it is more reasonable for experts to express their opinions using linguistic variables. Hesitant fuzzy sets extend the concept of fuzzy sets; with hesitant fuzzy sets, experts can flexibly describe their subjective opinions. In order to aggregate the hesitant fuzzy linguistic variables of all experts effectively, an adjustment method based on a distance function is presented in this paper. Based on this opinion-adjustment method, the paper presents an effective approach for adjusting the hesitant fuzzy linguistic variables of all experts to reach group consensus. A new hesitant linguistic GMCDM method is then presented based on the group consensus of hesitant fuzzy linguistic variables. Finally, an example is implemented to illustrate the computational process and enhance the practical value of the proposed model.
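
One common ingredient of such distance-based consensus schemes is a distance between hesitant fuzzy elements; the sketch below implements a normalized Hamming-type distance with length extension, a standard construction in the hesitant fuzzy literature, not necessarily the paper's exact function.

```python
import numpy as np

def hfe_distance(h1, h2, optimistic=True):
    """Normalized Hamming distance between two hesitant fuzzy elements.

    Elements are sets of possible membership degrees in [0, 1]. The shorter
    one is extended with its max (optimistic) or min (pessimistic) value so
    both have equal length, then sorted values are compared pairwise.
    """
    a, b = sorted(h1), sorted(h2)
    fill = max if optimistic else min
    while len(a) < len(b): a.append(fill(h1))
    while len(b) < len(a): b.append(fill(h2))
    a, b = np.sort(a), np.sort(b)
    return float(np.mean(np.abs(a - b)))

# Two experts' hesitant assessments of one alternative on one criterion.
expert1 = [0.3, 0.5, 0.6]
expert2 = [0.4, 0.7]
print(hfe_distance(expert1, expert2))  # small distance means closer to consensus
```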

Keywords: group multi-criteria decision-making, linguistic variables, hesitant fuzzy linguistic variables, distance function, group consensus

Procedia PDF Downloads 125
24395 Simulating the Hot Hand Phenomenon in Basketball with Bayesian Hidden Markov Models

Authors: Gabriel Calvo, Carmen Armero, Luigi Spezia

Abstract:

A basketball player is said to have a hot hand if his/her performance is better than expected during different periods of time. One way to deal with this phenomenon is to make use of latent variables, which can indicate whether the player is ‘on fire’ or not. This work aims to model the hot hand phenomenon through a Bayesian hidden Markov model (HMM) with two states (cold and hot) and two different probabilities of success depending on the corresponding hidden state. The task is illustrated through a comprehensive simulation study. The simulated data sets emulate field goal attempts in an NBA season from players with different profiles. This model can be a powerful tool to assess the ‘streakiness’ of each player, and it provides information about the general performance of the players during a match. Finally, the Bayesian HMM allows computing the posterior probability of any type of streak.
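
Generating data from such a model is straightforward; the sketch below simulates makes/misses from a two-state HMM with Bernoulli emissions, with all transition and success probabilities invented for illustration (the Bayesian inference side is not shown).

```python
import numpy as np

def simulate_hot_hand(n_shots=500, p_cold=0.40, p_hot=0.60,
                      stay_cold=0.95, stay_hot=0.85, seed=8):
    """Simulate field-goal attempts from a two-state (cold/hot) hidden Markov
    model with Bernoulli emissions, as in the simulation study described."""
    rng = np.random.default_rng(seed)
    trans = np.array([[stay_cold, 1 - stay_cold],   # rows: from cold, from hot
                      [1 - stay_hot, stay_hot]])
    p_success = np.array([p_cold, p_hot])
    states = np.zeros(n_shots, dtype=int)
    for t in range(1, n_shots):
        states[t] = rng.choice(2, p=trans[states[t - 1]])
    makes = rng.random(n_shots) < p_success[states]
    return states, makes.astype(int)

states, makes = simulate_hot_hand()
print("overall FG%:", makes.mean())
print("FG% by hidden state:", [makes[states == s].mean() for s in (0, 1)])
```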

Keywords: Bernoulli trials, field goals, latent variables, posterior distribution

Procedia PDF Downloads 138