Search results for: time series data mining

29242 Human Resource Management Practices, Person-Environment Fit and Financial Performance in Brazilian Publicly Traded Companies

Authors: Bruno Henrique Rocha Fernandes, Amir Rezaee, Jucelia Appio

Abstract:

The relation between Human Resource Management (HRM) practices and organizational performance remains the subject of a substantial literature. Although many studies have demonstrated a positive relationship, the major influencing variables are still not clear. This study considers Person-Environment Fit (PE Fit) and its components, Person-Supervisor (PS), Person-Group (PG), Person-Organization (PO), and Person-Job (PJ) Fit, as possible explanatory variables. We analyzed PE Fit as a moderator between HRM practices and financial performance in the "best companies to work for" in Brazil. Data on HRM practices were classified through the High Performance Working Systems (HPWS) construct, and data on PE Fit were obtained through surveys among employees. Financial data, consisting of return on invested capital (ROIC) and the price-earnings ratio (PER), were collected for the publicly traded best companies to work for. Findings show that PO Fit and PJ Fit play a significant moderator role for PER but not for ROIC.
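
A moderation analysis of this kind is typically tested with an interaction-term regression. The sketch below illustrates the idea only; the variable names (hpws, po_fit, per) and the synthetic data are assumptions, not the study's dataset or code.

```python
# Illustrative sketch of a moderation (interaction) regression of the kind
# described in the abstract. Variable names and the synthetic data are
# hypothetical; the study's real HPWS, PE-Fit and PER measures are not shown.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 45                                   # hypothetical number of firms
hpws = rng.normal(0, 1, n)               # HPWS score (standardized)
po_fit = rng.normal(0, 1, n)             # Person-Organization Fit (standardized)
per = 0.3 * hpws + 0.2 * po_fit + 0.4 * hpws * po_fit + rng.normal(0, 1, n)

df = pd.DataFrame({"per": per, "hpws": hpws, "po_fit": po_fit})

# 'hpws * po_fit' expands to both main effects plus the interaction term;
# a significant interaction coefficient indicates a moderator effect.
model = smf.ols("per ~ hpws * po_fit", data=df).fit()
print(model.summary())
```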

Keywords: financial performance, human resource management, high performance working systems, person-environment fit

Procedia PDF Downloads 155
29241 Flow Duration Curves and Recession Curves Connection through a Mathematical Link

Authors: Elena Carcano, Mirzi Betasolo

Abstract:

This study helps Public Water Bureaus give reliable answers to water concession requests. Rapidly increasing water requests can be supported provided that further uses of a river course are not totally compromised and environmental features are protected. Strictly speaking, a water concession can be considered a continuous withdrawal from the source and causes a reduction in mean annual streamflow. Therefore, deciding whether a water concession is appropriate might seem easily solved by comparing the requested demand to the mean annual streamflow available. Still, the immediate shortcoming of such a comparison is that streamflow data are available only for a few catchments and, most often, limited to specific sites. Moreover, comparing the requested water demand to the mean daily discharge is far from satisfactory, since the mean daily streamflow exceeds the water withdrawal for a long period of the year. Consequently, such a comparison is of little use for preserving the quality and the quantity of the river. To overcome this limit, this study completes the information provided by flow duration curves by introducing a link between Flow Duration Curves (FDCs) and recession curves, with the aim of showing the chronological sequence of flows, with a particular focus on low flow data. The analysis is carried out on 25 catchments located in North-Eastern Italy for which daily data are available. The results identify groups of catchments as hydrologically homogeneous, having the lower part of the FDCs (the streamflow interval between Q(300) and Q(335), i.e., the discharges equalled or exceeded on 300 and 335 days of the year) smoothly reproduced by a common recession curve. In conclusion, the results are useful for providing more reliable answers to water requests, especially for those catchments which show a similar hydrological response, and can be used for a focused regionalization approach on low flow data. A mathematical link between flow duration curves and recession curves is herein provided, thus furnishing flow duration curve information with a temporal sequence of data. In this way, by introducing assumptions on recession curves, a chronological sequence can also be attributed to the low-flow part of the FDCs, which are known to lack this information by nature.
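
For readers unfamiliar with flow duration curves, the low-flow values mentioned above can be read off a daily streamflow series as in the minimal sketch below; the synthetic series stands in for the 25 catchments' daily records, which are not reproduced here.

```python
# Minimal sketch: building a flow duration curve from daily streamflow and
# reading off the low-flow values Q(300) and Q(335), i.e. the discharges
# equalled or exceeded on 300 and 335 days of the year. The synthetic series
# below is a placeholder for the real daily records of the 25 catchments.
import numpy as np

rng = np.random.default_rng(1)
daily_q = rng.lognormal(mean=1.0, sigma=0.8, size=365)   # hypothetical daily discharge (m3/s)

def fdc_value(q, exceedance_days, days_per_year=365):
    """Discharge equalled or exceeded on `exceedance_days` days per year."""
    sorted_q = np.sort(q)[::-1]                          # descending
    # exceedance probability of the d-th ranked day (Weibull plotting position)
    probs = np.arange(1, len(sorted_q) + 1) / (len(sorted_q) + 1)
    target = exceedance_days / days_per_year
    return np.interp(target, probs, sorted_q)

q300 = fdc_value(daily_q, 300)
q335 = fdc_value(daily_q, 335)
print(f"Q(300) = {q300:.3f} m3/s, Q(335) = {q335:.3f} m3/s")
```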

Keywords: chronological sequence of discharges, recession curves, streamflow duration curves, water concession

Procedia PDF Downloads 168
29240 Closed Mitral Valvotomy: A Safe and Promising Procedure

Authors: Sushil Kumar Singh, Kumar Rahul, Vivek Tewarson, Sarvesh Kumar, Shobhit Kumar

Abstract:

Objective: Rheumatic mitral stenosis continues to be a major public health problem in developing countries. When the left atrium (LA) is unable to fill the left ventricle (LV) at normal LA pressures due to impaired relaxation and impaired compliance, diastolic dysfunction occurs. The assessment of left ventricular (LV) diastolic function and filling pressures is of clinical importance to identify underlying cardiac disease, guide its treatment, and assess prognosis. 2D echocardiography can detect diastolic dysfunction with excellent sensitivity and minimal risk when compared to the gold standard of invasive pressure-volume measurements. Material and Method: This was a one-year study of twenty-nine patients with isolated severe rheumatic mitral stenosis. Data were analyzed preoperatively and postoperatively (at one-month follow-up). The transthoracic 2D echocardiographic parameters of diastolic function are transmitral flow, pulmonary venous flow, mitral annular tissue Doppler, and color M-mode Doppler. In our study, mitral valve orifice area, ejection fraction, deceleration time, E/A ratio, E/E' ratio, the myocardial performance index of the left ventricle (Tei index), and mitral inflow propagation velocity were included for echocardiographic evaluation. The statistical analysis was performed in SPSS Version 15.0. Results: Twenty-nine patients underwent successful closed mitral commissurotomy for isolated mitral stenosis. The outcome measures were observed preoperatively and at one-month follow-up. The majority of patients were in NYHA grade III (69.0%) in the preoperative period, which improved to NYHA grade I (48.3%) after closed mitral commissurotomy. Post-surgery, the mitral valve area increased from 0.77 ± 0.13 to 2.32 ± 0.26 cm², and the ejection fraction increased from 61.38 ± 4.61 to 64.79 ± 3.22. There was a decrease in deceleration time from 231.55 ± 49.31 to 168.28 ± 14.30 ms, in the E/A ratio from 1.70 ± 0.54 to 0.89 ± 0.39, and in the E/E' ratio from 14.59 ± 3.34 to 8.86 ± 3.03. In addition, there was improvement in the Tei index from 0.50 ± 0.03 to 0.39 ± 0.06 and in mitral inflow propagation velocity from 47.28 ± 3.71 to 57.86 ± 3.19 cm/sec. Peri-operatively and at follow-up, there was no incidence of severe mitral regurgitation (MR). There was no thromboembolic incident and no mortality.

Keywords: closed mitral valvotomy, mitral stenosis, open mitral commissurotomy, balloon mitral valvotomy

Procedia PDF Downloads 71
29239 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data

Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu

Abstract:

Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide a bias correction step, which is based on biological considerations, such as GC content, and is applied to single samples separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce new sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived based on the simplifying assumptions. In contrast, XAEM considers Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm, alternating between estimation of X and β. For speed, XAEM utilizes quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
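
To make the bilinear idea concrete, the toy below alternates between updating X with β fixed and β with X fixed under a simple least-squares criterion. It is a sketch of the alternating principle only, under assumed matrix sizes; it is not XAEM's actual EM updates or implementation.

```python
# Simplified illustration of the alternating idea behind a bilinear model:
# with Y ≈ X @ beta and both X and beta unknown, estimation alternates between
# updating beta with X fixed and updating X with beta fixed. This least-squares
# toy sketches the principle only, not the actual XAEM algorithm.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_bins, n_isoforms = 30, 50, 3       # hypothetical sizes

X_true = np.abs(rng.normal(size=(n_bins, n_isoforms)))
beta_true = np.abs(rng.normal(size=(n_isoforms, n_samples)))
Y = X_true @ beta_true + 0.01 * rng.normal(size=(n_bins, n_samples))

X = np.abs(rng.normal(size=(n_bins, n_isoforms)))    # random initialization
for _ in range(200):
    # update beta with X fixed (non-negative, clipped least squares)
    beta = np.clip(np.linalg.lstsq(X, Y, rcond=None)[0], 0, None)
    # update X with beta fixed
    X = np.clip(np.linalg.lstsq(beta.T, Y.T, rcond=None)[0].T, 0, None)

residual = np.linalg.norm(Y - X @ beta) / np.linalg.norm(Y)
print(f"relative reconstruction error after alternating updates: {residual:.4f}")
```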

Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq

Procedia PDF Downloads 132
29238 Investigation of the Drying Times of Blood under Different Environmental Conditions and on Different Fabrics and the Transfer of Blood at Different Times of the Drying Process

Authors: Peter Parkinson

Abstract:

The research investigates the effects of temperature, humidity, wind speed, and fabric composition on the drying times of blood and assesses the degree of blood transfer that can occur during the drying process. An assortment of fabrics of different compositions and thicknesses was collected, stained using two blood volumes, and exposed to varying environmental conditions. The conclusion reached was that temperature, humidity, wind speed, and fabric thickness do have an effect on drying times. An increase in temperature and wind speed resulted in a decrease in drying times, while an increase in fabric thickness and humidity extended the drying times of blood under similar conditions. Transfer experimentation utilized three donor fabrics, 100% white cotton, 100% acrylic, and 100% cotton denim, which were bloodstained using two blood volumes. The fabrics were subjected to both full and low/light force contact from the donor fabrics onto the recipient fabric under different environmental conditions. Transfer times onto the 100% white cotton (recipient fabric) from all donor fabrics were shorter than the drying times observed. The intensities of the bloodstains decreased from high to low with time during the drying process. The degree of transfer at high, medium, and low intensities varied significantly between materials and is dependent on the environmental conditions, fabric compositions, blood volumes, the type of contact (full or light force), and the drying times observed for the respective donor fabrics. These factors should be considered collectively and conservatively when assessing the time frame of secondary transfer in casework.

Keywords: blood, drying time, blood stain transfer, different environmental conditions, fabrics

Procedia PDF Downloads 136
29237 A New Distribution and Application on the Lifetime Data

Authors: Gamze Ozel, Selen Cakmakyapan

Abstract:

We introduce a new model, called the Marshall-Olkin Rayleigh distribution, which extends the Rayleigh distribution using the Marshall-Olkin transformation and has increasing and decreasing shapes for the hazard rate function. Various structural properties of the new distribution are derived, including explicit expressions for the moments, the generating and quantile functions, some entropy measures, and order statistics. The model parameters are estimated by the method of maximum likelihood, and the observed information matrix is determined. The potential of the new model is illustrated by means of a real-life data set.
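
A hedged sketch of the maximum-likelihood step is given below. It assumes the baseline Rayleigh survival S(x) = exp(-x²/(2σ²)), for which the Marshall-Olkin transform gives survival αS(x)/(1 - (1-α)S(x)) and density α(x/σ²)S(x)/(1 - (1-α)S(x))²; the parameterization and the simulated lifetimes are assumptions for illustration, not the paper's data.

```python
# Hedged sketch of maximum-likelihood fitting for a Marshall-Olkin Rayleigh
# distribution, using the density stated in the lead-in. The simulated data
# below are placeholders, not the real-life data set analyzed in the paper.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rayleigh

def neg_log_lik(params, x):
    log_alpha, log_sigma = params            # log scale keeps both parameters positive
    alpha, sigma = np.exp(log_alpha), np.exp(log_sigma)
    s = np.exp(-x**2 / (2.0 * sigma**2))     # baseline Rayleigh survival
    dens = alpha * (x / sigma**2) * s / (1.0 - (1.0 - alpha) * s) ** 2
    return -np.sum(np.log(dens))

data = rayleigh.rvs(scale=2.0, size=200, random_state=42)   # placeholder lifetimes

res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), args=(data,), method="Nelder-Mead")
alpha_hat, sigma_hat = np.exp(res.x)
print(f"alpha_hat = {alpha_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
```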

Keywords: Marshall-Olkin distribution, Rayleigh distribution, estimation, maximum likelihood

Procedia PDF Downloads 488
29236 Comparative Study between the Absorbed Dose of 67Ga-ECC and 68Ga-ECC

Authors: H. Yousefnia, S. Zolghadri, S. Shanesazzadeh, A.Lahooti, A. R. Jalilian

Abstract:

In this study, 68Ga-ECC and 67Ga-ECC were both prepared with a radiochemical purity of higher than 97% in less than 30 min. The biodistribution data for 68Ga-ECC showed extraction of most of the activity through the urinary tract. The absorbed dose was estimated based on biodistribution data in mice by the medical internal radiation dose (MIRD) method. Comparison of the human absorbed dose estimates for these two agents indicated values approximately ten-fold higher after injection of 67Ga-ECC than of 68Ga-ECC in most organs. The results showed that 68Ga-ECC can be considered a more promising agent for renal imaging compared to 67Ga-ECC.

Keywords: effective absorbed dose, ethylenecysteamine cysteine, Ga-67, Ga-68

Procedia PDF Downloads 460
29235 Bridging Minds and Nature: Revolutionizing Elementary Environmental Education Through Artificial Intelligence

Authors: Hoora Beheshti Haradasht, Abooali Golzary

Abstract:

Environmental education plays a pivotal role in shaping the future stewards of our planet. Leveraging the power of artificial intelligence (AI) in this endeavor presents an innovative approach to captivate and educate elementary school children about environmental sustainability. This paper explores the application of AI technologies in designing interactive and personalized learning experiences that foster curiosity, critical thinking, and a deep connection to nature. By harnessing AI-driven tools, virtual simulations, and personalized content delivery, educators can create engaging platforms that empower children to comprehend complex environmental concepts while nurturing a lifelong commitment to protecting the Earth. With the pressing challenges of climate change and biodiversity loss, cultivating an environmentally conscious generation is imperative. Integrating AI in environmental education revolutionizes traditional teaching methods by tailoring content, adapting to individual learning styles, and immersing students in interactive scenarios. This paper delves into the potential of AI technologies to enhance engagement, comprehension, and pro-environmental behaviors among elementary school children. Modern AI technologies, including natural language processing, machine learning, and virtual reality, offer unique tools to craft immersive learning experiences. Adaptive platforms can analyze individual learning patterns and preferences, enabling real-time adjustments in content delivery. Virtual simulations, powered by AI, transport students into dynamic ecosystems, fostering experiential learning that goes beyond textbooks. AI-driven educational platforms provide tailored content, ensuring that environmental lessons resonate with each child's interests and cognitive level. By recognizing patterns in students' interactions, AI algorithms curate customized learning pathways, enhancing comprehension and knowledge retention. Utilizing AI, educators can develop virtual field trips and interactive nature explorations. Children can navigate virtual ecosystems, analyze real-time data, and make informed decisions, cultivating an understanding of the delicate balance between human actions and the environment. While AI offers promising educational opportunities, ethical concerns must be addressed. Safeguarding children's data privacy, ensuring content accuracy, and avoiding biases in AI algorithms are paramount to building a trustworthy learning environment. By merging AI with environmental education, educators can empower children not only with knowledge but also with the tools to become advocates for sustainable practices. As children engage in AI-enhanced learning, they develop a sense of agency and responsibility to address environmental challenges. The application of artificial intelligence in elementary environmental education presents a groundbreaking avenue to cultivate environmentally conscious citizens. By embracing AI-driven tools, educators can create transformative learning experiences that empower children to grasp intricate ecological concepts, forge an intimate connection with nature, and develop a strong commitment to safeguarding our planet for generations to come.

Keywords: artificial intelligence, environmental education, elementary children, personalized learning, sustainability

Procedia PDF Downloads 64
29234 Privacy Label: An Alternative Approach to Present Privacy Policies from Online Services to the User

Authors: Diego Roberto Goncalves De Pontes, Sergio Donizetti Zorzo

Abstract:

Studies show that most users do not read the privacy policies of the online services they use. Some authors claim that one of the main causes is that policies are long and usually hard to understand, which makes users lose interest in reading them. In this scenario, users may agree to terms without knowing what kind of data is being collected and why. Given that, we aimed to develop a model that presents the contents of privacy policies in an easy, graphical way for the user to understand. We call it the Privacy Label. Using information retrieval techniques, we propose an architecture that is able to extract from the policies what kind of data is being collected and for what purpose, and to show it to the user in an automated way. To assess our model, we calculated the precision, recall, and F-measure metrics on the information extracted by our technique. The results for each metric were 68.53%, 85.61%, and 76.13%, respectively, making it possible for the final user to understand which data are being collected without reading the whole policy. Our proposal can also facilitate notice-and-choice by presenting privacy policy information in an alternative way to online users.
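
For reference, the sketch below shows how these three metrics relate. The confusion counts are hypothetical, but the final line checks that the reported F-measure is consistent with the reported precision and recall.

```python
# Quick sketch relating the evaluation metrics quoted above. Precision, recall
# and F-measure are computed from true/false positives and false negatives of
# the extracted policy statements; the counts below are hypothetical.
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# hypothetical confusion counts for extracted "data collected / purpose" statements
p, r, f1 = precision_recall_f1(tp=120, fp=55, fn=20)
print(f"precision={p:.2%}, recall={r:.2%}, F-measure={f1:.2%}")

# consistency check on the paper's reported values
p_rep, r_rep = 0.6853, 0.8561
print(f"F-measure implied by reported precision/recall: {2*p_rep*r_rep/(p_rep+r_rep):.2%}")
# ≈ 76.13%, matching the reported figure
```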

Keywords: privacy, policies, user behavior, computer human interaction

Procedia PDF Downloads 292
29233 Logistic Regression Model versus Additive Model for Recurrent Event Data

Authors: Entisar A. Elgmati

Abstract:

Recurrent infant diarrhea is studied using daily data collected in Salvador, Brazil, over one year and three months. A logistic regression model is fitted instead of Aalen's additive model, using the same covariates that were used in the analysis with the additive model. The model gives reasonably similar results to those of the additive regression model. In addition, the problem of the estimated conditional probabilities not being constrained between zero and one in the additive model is solved here. Also, martingale residuals, which have been used to judge the goodness of fit of the additive model, are shown to be useful for judging the goodness of fit of the logistic model.
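
A minimal sketch of the modelling idea follows: each child-day is a binary outcome regressed on covariates via logistic regression, so the fitted probabilities are automatically constrained to (0, 1). The covariate names and the simulated data are placeholders, not the Salvador study data or its covariates.

```python
# Hedged sketch: logistic regression for daily recurrent-event data. The
# simulated child-day records below are hypothetical stand-ins for the real data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 5000                                           # hypothetical child-days
age_months = rng.uniform(0, 36, n)
prior_episode = rng.integers(0, 2, n)              # indicator of a recent previous episode
logit = -2.5 - 0.02 * age_months + 0.8 * prior_episode
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"y": y, "age_months": age_months, "prior_episode": prior_episode})
fit = smf.logit("y ~ age_months + prior_episode", data=df).fit(disp=False)
print(fit.summary())

probs = fit.predict(df)
print("all fitted probabilities in (0, 1):", bool(((probs > 0) & (probs < 1)).all()))
```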

Keywords: additive model, cumulative probabilities, infant diarrhoea, recurrent event

Procedia PDF Downloads 617
29232 Estimation of the Parameters of Muskingum Methods for the Prediction of the Flood Depth in the Moudjar River Catchment

Authors: Fares Laouacheria, Said Kechida, Moncef Chabi

Abstract:

The objective of the study was hydrological routing modelling for the continuous monitoring of the hydrological situation in the Moudjar river catchment, especially during floods, with the Hydrologic Engineering Center–Hydrologic Modelling System (HEC-HMS). HEC-GeoHMS was used to transfer data from a geographic information system (GIS) to HEC-HMS for delineating and modelling the catchment in order to estimate the runoff volume, which is used as input to the hydrological routing model. Two hydrological routing models were used, namely the Muskingum and Muskingum-Cunge routing models. A comparison between the parameters of the Muskingum and Muskingum-Cunge routing models in HEC-HMS was used for modelling flood routing in the Moudjar river catchment and for determining the relationship between these parameters and the physical characteristics of the river. The results indicate that the effects of input parameters such as the weighting factor "X" and travel time "K" on the output results are significant, and that the Muskingum routing model was more sensitive to the input parameters than the Muskingum-Cunge routing model. This study can contribute to understanding and improving knowledge of the mechanisms of river floods, especially in ungauged river catchments.
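
To make the roles of K and X concrete, the sketch below implements the standard Muskingum routing recursion; the inflow hydrograph and parameter values are hypothetical, not those calibrated for the Moudjar catchment in HEC-HMS.

```python
# Standard Muskingum routing equations. The outflow at the next step is
# O[t+1] = c0*I[t+1] + c1*I[t] + c2*O[t], with c0, c1, c2 derived from the
# travel time K, weighting factor X and time step dt (c0 + c1 + c2 = 1).
import numpy as np

def muskingum_route(inflow, K, X, dt):
    """Route an inflow hydrograph through a reach with Muskingum parameters K, X."""
    denom = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / denom
    c1 = (dt + 2.0 * K * X) / denom
    c2 = (2.0 * K * (1.0 - X) - dt) / denom
    outflow = np.zeros_like(inflow, dtype=float)
    outflow[0] = inflow[0]                       # assume an initial steady state
    for t in range(len(inflow) - 1):
        outflow[t + 1] = c0 * inflow[t + 1] + c1 * inflow[t] + c2 * outflow[t]
    return outflow

# hypothetical triangular flood hydrograph (m3/s) at 1-hour steps
inflow = np.concatenate([np.linspace(10, 100, 6), np.linspace(100, 10, 12)])
print(muskingum_route(inflow, K=2.0, X=0.2, dt=1.0).round(1))
```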

Keywords: HEC-HMS, hydrological modelling, Muskingum routing model, Muskingum-Cunge routing model

Procedia PDF Downloads 258
29231 From Industry 4.0 to Agriculture 4.0: A Framework to Manage Product Data in Agri-Food Supply Chain for Voluntary Traceability

Authors: Angelo Corallo, Maria Elena Latino, Marta Menegoli

Abstract:

The agri-food value chain involves various stakeholders with different roles. All of them abide by national and international rules and leverage marketing strategies to advance their products. Food products and the related processing phases carry with them a large amount of data that is often not used to inform the final customer. Some of these data, if fittingly identified and used, can enhance the single company and/or the whole supply chain, creating a match between marketing techniques and voluntary traceability strategies. Moreover, buying models have recently changed: customers are attentive to wellbeing and food quality. Food citizenship and food democracy have emerged, leveraging transparency, sustainability, and food information needs. The Internet of Things (IoT) and Analytics, some of the innovative technologies of Industry 4.0, have a significant impact on the market and will act as a main thrust towards a genuine '4.0 change' for agriculture. However, realizing a traceability system is not simple because of the complexity of the agri-food supply chain, the many actors involved, different business models, environmental variations impacting products and/or processes, and extraordinary climate changes. In order to support companies engaged in a traceability path, a Framework to Manage Product Data in the Agri-Food Supply Chain for Voluntary Traceability was conceived, starting from business model analysis and the related business processes. Studying each process task and leveraging modelling techniques makes it possible to identify the information held by different actors along the agri-food supply chain. IoT technologies for data collection and Analytics techniques for data processing supply information useful for increasing intra-company efficiency and competitiveness in the market. All the information recovered can be presented through IT solutions and mobile applications, making it accessible to the company, the entire supply chain, and the consumer, with a view to guaranteeing transparency and quality.

Keywords: agriculture 4.0, agri-food supply chain, industry 4.0, voluntary traceability

Procedia PDF Downloads 135
29230 Investigation of the Relationship between Personality Components and Tendency to Addiction to Domestic Violence

Authors: Mohamad Reza Khodabakhsh

Abstract:

Violence against women is a historical phenomenon; although it takes different forms and types, it is common across societies and cultures, and this type of violence occurs in physical, psychological, financial, and sexual dimensions. It is the cause of many social deviations and endangers the family, the most important social institution. This research investigates the relationship between personality characteristics and the tendency to addiction and domestic violence. One hundred fifty women and one hundred fifty men were selected by convenience sampling. The men had been admitted to drug addiction camps, and the women were drawn from domestic violence cases. Questionnaires on addiction tendency, the NEO five-factor personality traits, and attitudes toward violence against women were used. Data were analyzed with SPSS 20 using descriptive statistics (mean and standard deviation) and inferential statistics (correlation and analysis of variance) at the p ≤ 0.05 significance level. The results showed that there is a significant relationship between personality traits and the tendency to addiction and domestic violence.

Keywords: personality, addiction, domestic violence, family

Procedia PDF Downloads 87
29229 Artificial Intelligence Assisted Sentiment Analysis of Hotel Reviews Using Topic Modeling

Authors: Sushma Ghogale

Abstract:

With the surge in user-generated content, feedback, and reviews on the internet, it has become possible and important to know consumers' opinions about products and services. These data are important both for potential customers and for the businesses providing the services. Data from social media are attracting significant attention, and social media has become the most prominent channel for expressing unregulated opinions. Prospective customers look for reviews from experienced customers before deciding to buy a product or service. Several websites provide a platform for users to post their feedback for the provider and for potential customers. However, the biggest challenge in analyzing such data is extracting latent features and providing term-level analysis. This paper proposes an approach that uses topic modeling to classify reviews into topics and sentiment analysis to mine the opinions. The approach can analyze and classify the latent topics mentioned by reviewers on business sites, review sites, or social media, using topic modeling to identify the importance of each topic, followed by sentiment analysis to assess the satisfaction level for each topic. It provides a classification of hotel reviews using multiple machine learning techniques and compares different classifiers for mining the opinions of user reviews through sentiment analysis. The experiment concludes that the Multinomial Naïve Bayes classifier produces higher accuracy than the other classifiers.
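
A minimal sketch of such a pipeline is shown below: LDA surfaces latent topics in the review text, and a Multinomial Naive Bayes classifier predicts sentiment. The tiny toy corpus and labels are placeholders for the real hotel-review dataset and preprocessing.

```python
# Minimal sketch of the pipeline described above: LDA topic modeling on term
# counts, then Multinomial Naive Bayes for sentiment. Toy data only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.naive_bayes import MultinomialNB

reviews = [
    "clean room friendly staff great breakfast",
    "dirty bathroom rude staff terrible service",
    "great location comfortable bed helpful reception",
    "noisy room poor wifi awful breakfast",
]
labels = [1, 0, 1, 0]                       # 1 = positive, 0 = negative (hypothetical)

counts = CountVectorizer().fit(reviews)
X = counts.transform(reviews)

# topic modelling: each review becomes a distribution over latent topics
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print("topic mixture of first review:", lda.transform(X)[0].round(2))

# sentiment classification on the term counts
clf = MultinomialNB().fit(X, labels)
test = counts.transform(["friendly staff but noisy room"])
print("predicted sentiment:", clf.predict(test)[0])
```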

Keywords: latent Dirichlet allocation, topic modeling, text classification, sentiment analysis

Procedia PDF Downloads 88
29228 Translanguaging In Preschools: New Evidence from Polish-English Bilingual Children

Authors: Judyta Pawliszko

Abstract:

The study draws on the theoretical framework of translanguaging. It investigates translanguaging patterns and how meaning-making processes among bilingual children in preschool are affected by the use of two different languages. Eight months of observation and 200 hours of vocal recordings of children (3-6 years old) provide data on the bilingual children's linguistic repertoire, on why children translanguage, and on how they achieve understanding through the strategic use of the two languages. The data gathered point to translanguaging as a practice that maximizes meaning-making processes among preschool bilingual children.

Keywords: translanguaging, bilingualism, preschool, Polish-English bilingual children

Procedia PDF Downloads 93
29227 Column Studies on Chromium(VI) Adsorption onto Kala Jamun (Syzygium cumini L.) Seed Powder

Authors: Sumi Deka, Krishna Gopal Bhattacharyya

Abstract:

This paper evaluates the industrial use of Kala Jamun (Syzygium cumini L.) seed powder (KSP) for the continuous adsorption of Cr(VI) in a column adsorption process. Adsorption of Cr(VI) onto Kala Jamun (Syzygium cumini L.) seed powder has been examined with variation of (a) the bed depth of the adsorbent, (b) the flow rate, and (c) the Cr(VI) concentration. The results showed that both adsorption and regeneration of Cr(VI) on KSP can effectively occur in the column mode of adsorption. On increasing the bed depth, the adsorption of Cr(VI) onto KSP increases, whereas on increasing the flow rate and the Cr(VI) concentration, the adsorption decreases. The results of the column studies were also fitted to the Bed Depth Service Time (BDST) model. The BDST model was appropriate for designing the column for industrial purposes.
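
For readers unfamiliar with the BDST model, the sketch below shows the usual workflow: service time t is linear in bed depth Z, t = (N0/(C0·v))·Z − (1/(Ka·C0))·ln(C0/Cb − 1), so the adsorption capacity N0 and rate constant Ka follow from the slope and intercept of a straight-line fit. The bed depths, service times, and operating values are hypothetical, not the KSP column data.

```python
# Hedged sketch of a Bed Depth Service Time (BDST) analysis with hypothetical
# breakthrough data. N0 and Ka are recovered from a linear fit of service time
# against bed depth.
import numpy as np

Z = np.array([5.0, 10.0, 15.0])          # bed depth, cm (hypothetical)
t = np.array([90.0, 210.0, 330.0])       # service time to breakthrough, min (hypothetical)

C0 = 20.0     # inlet Cr(VI) concentration, mg/L (assumed)
Cb = 1.0      # breakthrough concentration, mg/L (assumed)
v = 1.5       # linear flow velocity, cm/min (assumed)

slope, intercept = np.polyfit(Z, t, 1)
N0 = slope * C0 * v                              # adsorption capacity of the bed, mg/L
Ka = np.log(C0 / Cb - 1.0) / (-intercept * C0)   # rate constant, L/(mg*min)
print(f"N0 = {N0:.1f} mg/L, Ka = {Ka:.5f} L/(mg*min)")
```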

Keywords: bed-depth-service-time, continuous adsorption, Cr(VI), KSP

Procedia PDF Downloads 244
29226 Towards a Complete Automation Feature Recognition System for Sheet Metal Manufacturing

Authors: Bahaa Eltahawy, Mikko Ylihärsilä, Reino Virrankoski, Esko Petäjä

Abstract:

Sheet metal processing is automated, but the step from product models to production machine control still requires human intervention. This may cause time-consuming bottlenecks in the production process and increase the risk of human error. In this paper, we present a system which automatically recognizes features from the CAD model of a sheet metal product. Using these features, the system produces a complete model of the particular sheet metal product. The model is then used as input for the sheet metal processing machine. The system is currently implemented, is capable of recognizing more than 11 of the most common sheet metal structural features, and the procedure is fully automated. This provides remarkable savings in production time and protects against human error. This paper presents the developed system architecture, the applied algorithms, and the system software implementation and testing.

Keywords: feature recognition, automation, sheet metal manufacturing, CAD, CAM

Procedia PDF Downloads 340
29225 Distributed Automation System Based Remote Monitoring of Power Quality Disturbance on LV Network

Authors: Emmanuel D. Buedi, K. O. Boateng, Griffith S. Klogo

Abstract:

Electrical distribution networks are prone to power quality disturbances originating from the complexity of the distribution network, the mode of distribution (overhead or underground), and the types of loads used by customers. Data on the types of disturbances present and their frequency of occurrence are needed for economic evaluation and hence for finding a solution to the problem. Utility companies have resorted to using secondary power quality devices, such as smart meters, to help gather the required data. Even though this approach is easier to adopt, data gathered from these devices may not serve the required purpose, since the installation of these devices in the electrical network usually does not conform to available power quality monitor (PQM) placement methods. This paper presents the design of a PQM that is capable of integrating into an existing DAS infrastructure to take advantage of available placement methodologies. The monitoring component of the design is implemented and installed to monitor an existing LV network. Data from the monitor are analyzed and presented. A portion of the LV network of the Electricity Company of Ghana is modeled in MATLAB-Simulink and analyzed under various earth fault conditions. The results presented show the ability of the PQM to detect and analyze PQ disturbances such as voltage sag and overvoltage. By adopting a placement methodology and installing these nodes, utilities are assured of accurate and reliable information with respect to the quality of power delivered to consumers.
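
As a hedged illustration of how a monitoring node can flag sags and overvoltages, the sketch below computes a sliding one-cycle RMS of a sampled waveform and compares it with typical per-unit thresholds (below 0.9 pu for a sag, above 1.1 pu for an overvoltage/swell). The synthetic 50 Hz waveform and the threshold choices are assumptions, not the paper's actual detection logic.

```python
# Hedged sketch: sliding-RMS detection of voltage sags and overvoltages on a
# synthetic waveform. Thresholds and signal parameters are assumed values.
import numpy as np

fs, f0, v_nom = 5000, 50, 230.0                     # sample rate, mains frequency, nominal RMS
t = np.arange(0, 1.0, 1.0 / fs)
v = np.sqrt(2) * v_nom * np.sin(2 * np.pi * f0 * t)
v[int(0.4 * fs):int(0.6 * fs)] *= 0.7               # inject a 70% sag between 0.4 s and 0.6 s

n_cycle = fs // f0                                  # samples per cycle
rms = np.sqrt(np.convolve(v**2, np.ones(n_cycle) / n_cycle, mode="same"))
pu = rms / v_nom

valid = slice(n_cycle, len(pu) - n_cycle)           # ignore edge effects of the moving window
sag = pu[valid] < 0.9
swell = pu[valid] > 1.1
print(f"sag detected for {sag.sum() / fs:.2f} s, overvoltage for {swell.sum() / fs:.2f} s")
```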

Keywords: power quality, remote monitoring, distributed automation system, economic evaluation, LV network

Procedia PDF Downloads 336
29224 Doing Cause-and-Effect Analysis Using an Innovative Chat-Based Focus Group Method

Authors: Timothy Whitehill

Abstract:

This paper presents an innovative chat-based focus group method for collecting qualitative data to construct a cause-and-effect analysis in business research. The method was developed in response to the research and data collection challenges posed by the Covid-19 outbreak in the United Kingdom during 2020-21. This paper discusses the methodological approach and builds a contemporary argument for its effectiveness in exploring cause-and-effect relationships in the context of focus group research, systems thinking, and problem structuring methods. The pilot for this method was conducted between October 2020 and March 2021 and collected more than 7,000 words of chat-based data, which were used to construct a consensus-drawn cause-and-effect analysis. The method was developed in support of an ongoing Doctorate in Business Administration (DBA) thesis, which is using Design Science Research methodology to operationalize organisational resilience in UK construction sector firms.

Keywords: cause-and-effect analysis, focus group research, problem structuring methods, qualitative research, systems thinking

Procedia PDF Downloads 205
29223 Bridge Health Monitoring: A Review

Authors: Mohammad Bakhshandeh

Abstract:

Structural Health Monitoring (SHM) is a crucial and necessary practice that plays a vital role in ensuring the safety and integrity of critical structures, and in particular, bridges. The continuous monitoring of bridges for signs of damage or degradation through Bridge Health Monitoring (BHM) enables early detection of potential problems, allowing for prompt corrective action to be taken before significant damage occurs. Although all monitoring techniques aim to provide accurate and decisive information regarding the remaining useful life, safety, integrity, and serviceability of bridges, understanding the development and propagation of damage is vital for maintaining uninterrupted bridge operation. Over the years, extensive research has been conducted on BHM methods, and experts in the field have increasingly adopted new methodologies. In this article, we provide a comprehensive exploration of the various BHM approaches, including sensor-based, non-destructive testing (NDT), model-based, and artificial intelligence (AI)-based methods. We also discuss the challenges associated with BHM, including sensor placement and data acquisition, data analysis and interpretation, cost and complexity, and environmental effects, through an extensive review of relevant literature and research studies. Additionally, we examine potential solutions to these challenges and propose future research ideas to address critical gaps in BHM.

Keywords: structural health monitoring (SHM), bridge health monitoring (BHM), sensor-based methods, machine-learning algorithms, model-based techniques, sensor placement, data acquisition, data analysis

Procedia PDF Downloads 74
29222 Comparative Study of the Earth Land Surface Temperature Signatures over Ota, South-West Nigeria

Authors: Moses E. Emetere, M. L. Akinyemi

Abstract:

Agricultural activities in South-West Nigeria are adversely affected by the global increase in temperature. The unpredictable surface temperature of the area has increased health challenges, among other social impacts. Satellite surface temperature data were compared with ground data from a Davis weather station. The differential heating of the lower atmosphere was represented mathematically. A numerical predictive model was propounded to forecast future surface temperatures.

Keywords: numerical predictive model, surface temperature, satellite data, ground data

Procedia PDF Downloads 455
29221 An MIPSSTWM-based Emergency Vehicle Routing Approach for Quick Response to Highway Incidents

Authors: Siliang Luan, Zhongtai Jiang

Abstract:

The risk of highway incidents is commonly recognized as a major concern for transportation authorities due to their hazardous consequences and negative influence. It is crucial for emergency management decision makers to respond to these unpredictable events as soon as possible. In this paper, we focus on path planning for emergency vehicles, one of the most significant processes for avoiding congestion and reducing rescue time. A Mixed-Integer Linear Programming with Semi-Soft Time Windows Model (MIPSSTWM) is formulated to plan an optimal route, considering the time consumed on both the arcs and the nodes of the urban road network and the highway network, which is especially relevant in developing countries with enormous populations. Here, the arcs represent road segments, and the nodes include the intersections of the urban road network and the on-ramps and off-ramps of the highway network. An attempt has been made in this research to develop a comprehensive and executable strategy for emergency vehicle routing in heavy traffic conditions. The proposed Cuckoo Search (CS) algorithm is designed by imitating the obligate brood parasitism of cuckoos and Lévy Flights (LF) to solve this hard combinatorial problem. Using a Chinese city as our case study, the numerical results demonstrate that the approach applied in this paper outperforms the previous method, which does not consider the nodes of the road network, for a real-world situation. Meanwhile, the CS algorithm also shows better accuracy and validity than the traditional algorithm.
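
A generic sketch of the Cuckoo Search idea is given below: new candidate solutions are generated by Lévy-flight steps around existing nests, and a fraction of the worst nests is abandoned each generation. The toy minimizes a simple continuous function; the paper's MIPSSTWM formulation and routing-specific solution encoding are not reproduced here.

```python
# Generic Cuckoo Search sketch with Lévy-flight steps (Mantegna's algorithm),
# applied to a toy continuous objective. Parameter values are illustrative.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    """Mantegna's algorithm for Lévy-distributed step lengths."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(obj, dim=2, n_nests=15, pa=0.25, iters=200, seed=5):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fitness = np.array([obj(x) for x in nests])
    best = nests[fitness.argmin()].copy()
    for _ in range(iters):
        for i in range(n_nests):
            # Lévy flight around the current nest, biased toward the best nest
            candidate = nests[i] + 0.01 * levy_step(dim, rng=rng) * (nests[i] - best)
            j = rng.integers(n_nests)            # compare with a randomly chosen nest
            if obj(candidate) < fitness[j]:
                nests[j], fitness[j] = candidate, obj(candidate)
        # abandon a fraction pa of the worst nests and rebuild them at random
        worst = fitness.argsort()[-int(pa * n_nests):]
        nests[worst] = rng.uniform(-5, 5, (len(worst), dim))
        fitness[worst] = [obj(x) for x in nests[worst]]
        best = nests[fitness.argmin()].copy()
    return best, fitness.min()

sphere = lambda x: float(np.sum(x ** 2))
best, best_val = cuckoo_search(sphere)
print("best solution:", best.round(4), "objective:", round(best_val, 6))
```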

Keywords: emergency vehicle, path planning, CS algorithm, urban traffic management and urban planning

Procedia PDF Downloads 65
29220 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms

Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson

Abstract:

This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on the deployment of total field magnetometer arrays. Our research has focused on the development of a vertically-integrated suite of platforms all utilizing common data acquisition, data processing and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings the sensor arrays are deployed from either a hydrodynamic bottom-following wing towed from a surface vessel or from a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system providing immediate access to data and meta-data for remote processing, analysis and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment. Additionally, we present examples from a range of terrestrial and marine settings as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
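
For context, dipole-based classification rests on the point-dipole forward model B(r) = (μ0/4π)(3(m·r̂)r̂ − m)/|r|³, with a total-field magnetometer approximately measuring the projection of the anomaly onto the ambient field direction. The sketch below evaluates that model along a survey line; the source position, moment, and geometry are hypothetical values for illustration, not the paper's survey data.

```python
# Hedged sketch of the point-dipole forward model used in dipole-based
# classification of magnetic anomalies. All scenario values are assumptions.
import numpy as np

MU0 = 4 * np.pi * 1e-7            # vacuum permeability (T*m/A)

def dipole_field(r, m):
    """Magnetic flux density (tesla) of a point dipole with moment m at offsets r."""
    r = np.atleast_2d(r)
    rnorm = np.linalg.norm(r, axis=1, keepdims=True)
    rhat = r / rnorm
    return MU0 / (4 * np.pi) * (3 * rhat * (rhat @ m)[:, None] - m) / rnorm**3

# hypothetical buried source: 5 A*m^2 vertical moment, 3 m below the sensor line
moment = np.array([0.0, 0.0, 5.0])
x = np.linspace(-10, 10, 201)
sensors = np.column_stack([x, np.zeros_like(x), np.full_like(x, 3.0)])  # offsets from source (m)

ambient_dir = np.array([0.0, 0.2, -0.98])
ambient_dir /= np.linalg.norm(ambient_dir)
anomaly_nT = dipole_field(sensors, moment) @ ambient_dir * 1e9          # total-field anomaly (nT)
print(f"peak anomaly along the line: {anomaly_nT.max():.1f} nT")
```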

Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection

Procedia PDF Downloads 453
29219 Study on Connecting Method of Box Pontoons

Authors: Young-Jun You, Youn-Ju Jeong, Min-Su Park, Du-Ho Lee

Abstract:

Due to many practical constraints, a large box-type floating structure is inevitably constructed by connecting many pontoons. When a floating structure is made of concrete, a concrete shear key with a saw-tooth shape is often used to carry the shear force. Match casting of the shear key and precise construction at sea are very important for making two separate pontoons behave as one body, but these are not easy tasks and may increase construction time and cost. To solve this problem, a one-way shear key is studied in this paper for connections where there is some difference between the upward and downward shear forces. It has only one inclined plane and can resist shear force in one direction. The large shear force is resisted by the concrete that forms the inclined plane, and the small shear force is resisted by steel bars. This system can reduce the manufacturing cost of individual pontoons and the time and cost of constructing a floating structure at sea. In this paper, a feasibility study of the one-way shear key system is performed by comparison with a design example.

Keywords: connection, floating container terminal, pontoon, pre-stressing, shear key

Procedia PDF Downloads 307
29218 New NIR System for Detecting the Internal Disorder and Quality of Apple Fruit

Authors: Eid Alharbi, Yaser Miaji

Abstract:

The importance of fruit quality and freshness is paramount in today's life. In this study, an automatic online sorting system for fresh apple fruit, based on internal disorder, has been developed using near-infrared (NIR) spectroscopic technology. An automatic conveyor belt system along with a sorting mechanism was constructed. To check the internal quality of the apple fruit, each apple was exposed to NIR radiation in the range 650-1300 nm, and the data were collected in the form of absorption spectra. The collected data were compared to the reference (data of a known sample), analyzed, and an electronic signal was passed to the sorting system. The sorting system then separated the apple fruit samples according to the electronic signal passed to it. It was found that the absorption of NIR radiation in the range 930-950 nm was higher in the internally defective samples than in healthy samples. On the basis of this higher absorption of NIR radiation in the 930-950 nm region, the online sorting system was constructed.
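
A hedged sketch of the sorting decision described above: average the absorbance in the 930-950 nm band and compare it with a threshold derived from reference (known healthy) samples. The synthetic spectra and the threshold rule are illustrative assumptions, not the actual calibration of the built system.

```python
# Hedged sketch: band-average comparison in the 930-950 nm region to flag
# internally defective apples. Spectra and threshold are synthetic placeholders.
import numpy as np

wavelengths = np.arange(650, 1301, 5)                  # nm

def band_mean(spectrum, lo=930, hi=950):
    mask = (wavelengths >= lo) & (wavelengths <= hi)
    return spectrum[mask].mean()

rng = np.random.default_rng(6)
baseline = 0.3 + 0.0001 * (wavelengths - 650)          # smooth background absorbance
healthy = baseline + 0.01 * rng.normal(size=wavelengths.size)
defective = healthy + 0.15 * np.exp(-((wavelengths - 940) / 15.0) ** 2)  # extra 930-950 nm absorption

threshold = band_mean(healthy) + 0.05                  # reference mean plus an assumed margin
for name, spec in [("healthy", healthy), ("defective", defective)]:
    decision = "reject" if band_mean(spec) > threshold else "accept"
    print(f"{name}: band mean = {band_mean(spec):.3f} -> {decision}")
```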

Keywords: mechatronics design, NIR, fruit quality, spectroscopic technology

Procedia PDF Downloads 388
29217 Novel NIR System for Detection of Internal Disorder and Quality of Apple Fruit

Authors: Eid Alharbi, Yaser Miaji

Abstract:

The importance of fruit quality and freshness is paramount in today's life. In this study, an automatic online sorting system for fresh apple fruit, based on internal disorder, has been developed using near-infrared (NIR) spectroscopic technology. An automatic conveyor belt system along with a sorting mechanism was constructed. To check the internal quality of the apple fruit, each apple was exposed to NIR radiation in the range 650-1300 nm, and the data were collected in the form of absorption spectra. The collected data were compared to the reference (data of a known sample), analyzed, and an electronic signal was passed to the sorting system. The sorting system then separated the apple fruit samples according to the electronic signal passed to it. It was found that the absorption of NIR radiation in the range 930-950 nm was higher in the internally defective samples than in healthy samples. On the basis of this higher absorption of NIR radiation in the 930-950 nm region, the online sorting system was constructed.

Keywords: mechatronics design, NIR, fruit quality, spectroscopic technology

Procedia PDF Downloads 376
29216 A Study on the Measurement of Spatial Mismatch and the Influencing Factors of “Job-Housing” in Affordable Housing from the Perspective of Commuting

Authors: Daijun Chen

Abstract:

Affordable housing is subsidized by the government to meet the housing demand of low- and middle-income urban residents in the process of urbanization and to alleviate the housing inequality caused by market-based housing reforms. It is a recognized fact that the living conditions of the beneficiaries have improved as subsidized housing has been constructed. However, affordable housing is mostly located in the suburbs, where the surrounding urban functions and infrastructure are incomplete, resulting in a "jobs-housing" spatial mismatch. The main reason for this problem is that the residents of affordable housing are more sensitive to the spatial location of their residence, yet their ability to choose and control that location is relatively weak, which leads to higher commuting costs. Their real cost of living has therefore not been effectively reduced. In this regard, 92 subsidized housing communities in Nanjing, China, are selected as the research sample in this paper. The residents of affordable housing and the spatio-temporal characteristics of their commuting behavior are identified based on location-based service (LBS) data. Based on spatial mismatch theory, spatial mismatch indicators such as commuting distance and commuting time are established to measure the degree of spatial mismatch of subsidized housing in different districts of Nanjing. Furthermore, a geographically weighted regression model is used to analyze the factors influencing the spatial mismatch of affordable housing in terms of the provision of employment opportunities, traffic accessibility, and supporting service facilities, using spatial, functional, and other multi-source spatio-temporal big data. The results show that the spatial mismatch of affordable housing in Nanjing generally presents a "concentric circle" pattern, decreasing from the central urban area to the periphery. The factors affecting the spatial mismatch of affordable housing differ across spatial zones; the main ones are the number of enterprises within 1 km of the affordable housing district and the shortest distance to a subway station, while low spatial mismatch is associated with the diversity of services and facilities. Based on this, a spatial optimization strategy for different levels of spatial mismatch in subsidized housing is proposed, and feasible suggestions for the later site selection of subsidized housing are provided. The study aims to avoid or mitigate the impact of "spatial mismatch," promote the "spatial adaptation" of "jobs-housing," and truly improve the overall welfare level of affordable housing residents.
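
At its core, a geographically weighted regression fits a separate distance-weighted least-squares model at each location, so coefficients can vary over space. The minimal sketch below illustrates that idea; the coordinates, covariates (enterprises within 1 km, distance to the nearest metro station), bandwidth, and kernel are synthetic assumptions, not the Nanjing data or the study's calibration.

```python
# Minimal hand-rolled GWR sketch: Gaussian distance weights and local weighted
# least squares at each of 92 hypothetical community locations.
import numpy as np

rng = np.random.default_rng(7)
n = 92                                         # number of communities (as in the study)
coords = rng.uniform(0, 30, (n, 2))            # km, hypothetical locations
enterprises_1km = rng.poisson(40, n)
metro_dist_km = rng.uniform(0.2, 8.0, n)
# synthetic commuting-based mismatch index
mismatch = 10 - 0.05 * enterprises_1km + 0.8 * metro_dist_km + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), enterprises_1km, metro_dist_km])
y = mismatch

def gwr_coefficients(i, bandwidth_km=8.0):
    """Weighted least-squares coefficients at the i-th location (Gaussian kernel)."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth_km) ** 2)
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

local_betas = np.array([gwr_coefficients(i) for i in range(n)])
print("range of the local metro-distance coefficient:",
      local_betas[:, 2].min().round(2), "to", local_betas[:, 2].max().round(2))
```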

Keywords: affordable housing, spatial mismatch, commuting characteristics, spatial adaptation, welfare benefits

Procedia PDF Downloads 91
29215 Active Features Determination: A Unified Framework

Authors: Meenal Badki

Abstract:

We address the issue of active feature determination, where the objective is to determine the set of examples on which additional data (such as lab tests) needs to be gathered, given a large number of examples with some features (such as demographics) and some examples with all the features (such as the complete Electronic Health Record). We note that certain features may be more costly, unique, or laborious to gather. Our proposal is a general active learning approach that is independent of classifiers and similarity metrics. It allows us to identify examples that differ from the full data set and obtain all the features for the examples that match. Our comprehensive evaluation shows the efficacy of this approach, which is driven by four authentic clinical tasks.
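
One simple way to operationalize "identify examples that differ from the full data set" is to rank partially observed examples by their distance, on the shared features, from the pool that already has complete records, and spend the acquisition budget on the most novel ones. The sketch below illustrates that idea only; the distance criterion, feature dimensions, and data are assumptions, and the paper's framework is explicitly classifier- and metric-agnostic.

```python
# Hedged sketch of a dissimilarity-based acquisition ranking for active feature
# determination. All data and the nearest-neighbour criterion are assumptions.
import numpy as np

rng = np.random.default_rng(8)
shared_dim = 4                                     # e.g. demographic features
full_pool = rng.normal(0, 1, (200, shared_dim))    # examples that already have all features
partial = rng.normal(0.5, 1.5, (1000, shared_dim)) # examples with only the shared features

def acquisition_ranking(partial, full_pool):
    """Rank partially observed examples by distance to their nearest fully observed example."""
    dists = np.linalg.norm(partial[:, None, :] - full_pool[None, :, :], axis=2)
    nearest = dists.min(axis=1)
    return np.argsort(nearest)[::-1]               # most "novel" examples first

budget = 50                                        # how many lab tests can be afforded (assumed)
selected = acquisition_ranking(partial, full_pool)[:budget]
print("indices of examples selected for full feature collection:", selected[:10])
```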

Keywords: feature determination, classification, active learning, sample-efficiency

Procedia PDF Downloads 57
29214 Banana Peels as an Eco-Sorbent for Manganese Ions

Authors: M. S. Mahmoud

Abstract:

This study was conducted to evaluate manganese removal from aqueous solution using banana peel activated carbon (BPAC). Batch experiments were carried out to determine the influence of parameters such as pH, biosorbent dose, initial metal ion concentration, and contact time on the biosorption process. From these investigations, a high percentage removal of manganese, 97.4%, is observed at pH 5.0, biosorbent dose 0.8 g, initial concentration 20 ppm, temperature 25 ± 2 °C, stirring rate 200 rpm, and contact time 2 h. The equilibrium concentrations and the adsorption capacities at equilibrium were fitted to the Langmuir and Freundlich isotherm models; the Langmuir isotherm was found to represent the measured adsorption data well, implying monolayer adsorption on the BPAC surface. Raw groundwater samples were collected from the Baharmos groundwater treatment plant network at Embaba and Manshiet Elkanater City/District, Giza, Egypt, for treatment with BPAC under the best conditions reached in the first phase. The treatment with BPAC could reduce the iron and manganese content of the raw groundwater by 91.4% and 97.1%, respectively, and the treatment process decreased the total bacterial count of the groundwater sample, at both 22 °C and 37 °C, by 85.7% and 82.4%, respectively. BPAC was also characterized using SEM and FTIR spectroscopy.
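
A hedged sketch of fitting the two isotherms mentioned above with non-linear least squares is shown below; the (Ce, qe) points and the starting parameter guesses are hypothetical placeholders, not the measured BPAC data.

```python
# Hedged sketch: fitting Langmuir and Freundlich isotherms to hypothetical
# equilibrium data and comparing goodness of fit via R^2.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    return qmax * kl * ce / (1.0 + kl * ce)

def freundlich(ce, kf, n):
    return kf * ce ** (1.0 / n)

# hypothetical equilibrium concentrations (mg/L) and adsorbed amounts (mg/g)
ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
qe = np.array([1.8, 3.1, 4.9, 6.8, 8.3, 8.9])

(qmax, kl), _ = curve_fit(langmuir, ce, qe, p0=[10.0, 0.5])
(kf, n), _ = curve_fit(freundlich, ce, qe, p0=[2.0, 2.0])

for name, pred in [("Langmuir", langmuir(ce, qmax, kl)), ("Freundlich", freundlich(ce, kf, n))]:
    ss_res = np.sum((qe - pred) ** 2)
    ss_tot = np.sum((qe - qe.mean()) ** 2)
    print(f"{name}: R^2 = {1 - ss_res / ss_tot:.4f}")
print(f"Langmuir qmax = {qmax:.2f} mg/g, KL = {kl:.3f} L/mg")
```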

Keywords: biosorption, banana peels, isothermal models, manganese

Procedia PDF Downloads 358
29213 Charter versus District Schools and Student Achievement: Implications for School Leaders

Authors: Kara Rosenblatt, Kevin Badgett, James Eldridge

Abstract:

There is a preponderance of information regarding the overall effectiveness of charter schools and their ability to increase academic achievement compared to traditional district schools. Most research on the topic is focused on comparing long- and short-term outcomes, academic achievement in mathematics and reading, and locale (i.e., urban vs. rural). While lingering unanswered questions regarding effectiveness continue to loom for school leaders, data on charter schools suggest that enrollment increases by 10% annually and that charter schools educate more than 2 million U.S. students across 40 states each year. Given the increasing share of U.S. students educated in charter schools, it is important to better understand possible differences in student achievement, defined in multiple ways, for students in charter schools and for those in Independent School District (ISD) settings in the state of Texas. Data were retrieved from the Texas Education Agency's (TEA) repository, which includes data organized annually and available on the TEA website. Specific data points and definitions of achievement were based on characterizations of achievement found in the relevant literature. Specific data points included, but were not limited to, graduation rate, student performance on standardized testing, and teacher-related factors such as experience and longevity in the district. Initial findings indicate some similarities with the current literature on long-term student achievement in English/Language Arts; however, the findings differ substantially from other recent research related to long-term student achievement in social studies. There are also a number of interesting findings related to differences in achievement between students in charters and ISDs and within different types of charter schools in Texas. In addition to the findings, implications for leadership in different settings will be explored.

Keywords: charter schools, ISDs, student achievement, implications for PK-12 school leadership

Procedia PDF Downloads 115