Search results for: data reduction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 28325

26645 Design and Development of Tandem Dynamometer for Testing and Validation of Motor Performance Parameters

Authors: Vedansh More, Lalatendu Bal, Ronak Panchal, Atharva Kulkarni

Abstract:

The project aims at developing a cost-effective test bench capable of testing and validating the complete powertrain package of an electric vehicle. An Emrax 228 high-voltage synchronous motor was selected as the prime mover for the study. A tandem-type dynamometer was developed, comprising two loading methods: inertial, using standard inertia rollers, and absorptive, using a separately excited DC generator with resistive coils. Absorptive loading of the prime mover was achieved by implementing a converter circuit that controls the duty cycle of the input field voltage. This control was effective in changing the magnetic flux, and hence the generated voltage, which was ultimately dropped across resistive coils assembled in a load bank in an all-parallel configuration. The prime mover and loading elements were connected via a chain drive with a 2:1 reduction ratio, which allows flexibility in the placement of components and a relaxed rating of the DC generator. The development will not only aid in the determination of essential characteristics such as torque-RPM, power-RPM, torque factor, RPM factor, heat loads of devices, and battery pack state-of-charge efficiency, but also provide a significant financial advantage over existing dynamometers through its cost-effective design.
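The chain-drive arithmetic behind such a rig can be sketched briefly. This is a minimal illustration, not the authors' data-reduction code: it assumes the generator side is the geared-down (slower) side of the 2:1 drive, and ignores chain losses unless an efficiency is supplied.

```python
import math

def referred_to_motor(torque_gen_nm, speed_gen_rpm, ratio=2.0, efficiency=1.0):
    """Refer generator-side measurements back to the prime mover through a
    chain drive. With a 2:1 reduction the generator shaft is assumed to turn
    at half the motor speed and carry (ideally) twice the motor torque, so
    power is conserved apart from drive losses. Numbers are illustrative."""
    speed_motor_rpm = speed_gen_rpm * ratio
    torque_motor_nm = torque_gen_nm / ratio / efficiency
    power_w = torque_motor_nm * speed_motor_rpm * 2 * math.pi / 60
    return torque_motor_nm, speed_motor_rpm, power_w

# Example: 100 N·m measured at the generator shaft turning 1500 RPM
t, n, p = referred_to_motor(100.0, 1500.0)
```

With a lossless drive, the power computed at the motor side equals the generator-side power, which is a convenient sanity check when logging test-bench data.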

Keywords: absorptive load, chain drive, chordal action, DC generator, dynamometer, electric vehicle, inertia rollers, load bank, powertrain, pulse width modulation, reduction ratio, road load, testbench

Procedia PDF Downloads 221
26644 Procedure Model for Data-Driven Decision Support Regarding the Integration of Renewable Energies into Industrial Energy Management

Authors: M. Graus, K. Westhoff, X. Xu

Abstract:

Climate change drives change in all aspects of society. While the expansion of renewable energies proceeds, general studies have not been enough to convince industry of the potential of demand-side management to reinforce smart-grid considerations in its operational business. In this article, a procedure model for case-specific, data-driven decision support in industrial energy management, based on a holistic data analytics approach, is presented. The model is executed on the example of a strategic decision problem: integrating renewable energies into industrial energy management. This question arises from considerations of changing the electricity contract model from a standard rate to volatile energy prices corresponding to the energy spot market, which is increasingly affected by renewable energies. The procedure model corresponds to a data analytics process consisting of a data model, analysis, simulation, and optimization step. This procedure helps to quantify the potential of sustainable production concepts based on data from a factory. The model is validated with data from a printer, in analogy to a simple production machine. The overall goal is to establish smart-grid principles in industry via the transformation from knowledge-driven to data-driven decisions within manufacturing companies.
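The core tariff comparison behind the contract decision can be illustrated in a few lines. All prices and the load profile below are invented for illustration; the paper's model derives such inputs from factory data through the full analytics pipeline.

```python
# Toy comparison of a flat-rate contract vs. hourly spot prices for one day
# of machine load. A negative saving means the flat rate is cheaper for this
# profile; the decision-support step would then explore load shifting.
load_kwh   = [5, 5, 5, 8, 12, 15, 15, 12, 8, 5, 5, 5]    # 2-hour blocks
spot_price = [0.18, 0.15, 0.12, 0.20, 0.28, 0.30,
              0.26, 0.22, 0.19, 0.16, 0.14, 0.13]         # EUR/kWh (made up)
flat_price = 0.21                                          # EUR/kWh (made up)

flat_cost = flat_price * sum(load_kwh)
spot_cost = sum(e * p for e, p in zip(load_kwh, spot_price))
saving = flat_cost - spot_cost
```

The point of the sketch is that the answer depends entirely on how the load profile lines up with price peaks, which is exactly what the simulation and optimization steps of the procedure model are meant to quantify.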

Keywords: data analytics, green production, industrial energy management, optimization, renewable energies, simulation

Procedia PDF Downloads 431
26643 A Comparative Analysis of Various Companding Techniques Used to Reduce PAPR in VLC Systems

Authors: Arushi Singh, Anjana Jain, Prakash Vyavahare

Abstract:

Recently, Li-Fi (light fidelity), based on the VLC (visible light communication) technique, has been launched and is up to 100 times faster than Wi-Fi. The 5G mobile communication system is now proposed to use VLC-OFDM as its transmission technique. The VLC system, which operates on visible light, is attractive for efficient spectrum use and easy intensity modulation through LEDs. The reason for the high speed of VLC is the LED, which can flicker incredibly fast (on the order of MHz). Another advantage of employing LEDs is that they act as low-pass filters, resulting in no out-of-band emission. The VLC system falls under the category of 'green technology' for utilizing LEDs. At present, OFDM is used for high data rates, interference immunity, and high spectral efficiency. In spite of these advantages, OFDM suffers from a large PAPR, ICI among carriers, and frequency offset errors. Since the data transmission technique used in the VLC system is OFDM, the system inherits the drawbacks of both OFDM and VLC: non-linearity due to the non-linear characteristics of the LED, and the PAPR of OFDM, which drives the high-power amplifier into its non-linear region. This paper focuses on the reduction of PAPR in VLC-OFDM systems. Many techniques have been applied to reduce PAPR: clipping introduces distortion in the carrier; selective mapping wastes bandwidth; partial transmit sequences are very complex due to the exponentially increasing number of sub-blocks. The paper discusses three companding techniques, namely µ-law, A-law, and advanced A-law companding. The analysis shows that the advanced A-law companding technique reduces the PAPR of the signal by adjusting the companding parameter within its range. VLC-OFDM systems are the future of wireless communication, but non-linearity in VLC-OFDM is a severe issue. This paper discusses techniques to reduce PAPR, one of the non-linearities of the system. The companding techniques presented provide better results without increasing the complexity of the system.
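As a rough illustration of the PAPR problem and the companding remedy, the sketch below builds a high-PAPR multicarrier-like signal and applies a classic µ-law compander. This is the generic µ-law formula, not the advanced A-law scheme the paper proposes, and the synthetic signal is only a stand-in for a real OFDM waveform.

```python
import math, random

def papr_db(signal):
    """Peak-to-average power ratio of a real-valued signal, in dB."""
    powers = [x * x for x in signal]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def mu_law_compand(signal, mu=255.0):
    """Classic mu-law compander: boosts small amplitudes relative to peaks,
    raising the average power while the peak stays at the normalisation
    level, which lowers PAPR at the cost of added distortion."""
    v = max(abs(x) for x in signal)  # normalisation level
    return [math.copysign(v * math.log(1 + mu * abs(x) / v) / math.log(1 + mu), x)
            for x in signal]

random.seed(0)
# A rough stand-in for an OFDM envelope: many cosine carriers, random phases
n, carriers = 256, 64
phases = [random.random() * 2 * math.pi for _ in range(carriers)]
ofdm = [sum(math.cos(2 * math.pi * k * t / n + phases[k])
            for k in range(carriers)) for t in range(n)]

before = papr_db(ofdm)
after = papr_db(mu_law_compand(ofdm))
```

Because the peak maps to itself while smaller samples are amplified, the companded PAPR is always lower than the original for a non-constant signal, which is the effect all three companding techniques in the paper exploit.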

Keywords: non-linear companding techniques, peak to average power ratio (PAPR), visible light communication (VLC), VLC-OFDM

Procedia PDF Downloads 280
26642 Dissimilarity-Based Coloring for Symbolic and Multivariate Data Visualization

Authors: K. Umbleja, M. Ichino, H. Yaguchi

Abstract:

In this paper, we propose a coloring method for multivariate data visualization using parallel coordinates, based on dissimilarity and tree-structure information gathered during hierarchical clustering. The proposed method is an extension of proximity-based coloring, which suffers from a few undesired side effects when the hierarchical tree structure is not a balanced tree. We describe the algorithm for assigning colors based on dissimilarity information, show the application of the proposed method on three commonly used datasets, and compare the results with proximity-based coloring. We found the proposed method to be especially beneficial for symbolic data visualization, where many individual objects have already been aggregated into a single symbolic object.
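One way to read the core idea — hue spacing proportional to dissimilarity along the dendrogram leaf order — can be sketched as follows. The leaf order, the dissimilarity function, and the use of HSV hues are all illustrative assumptions on my part, not the authors' exact algorithm.

```python
import colorsys

def dissimilarity_colors(leaf_order, dissim):
    """Assign an RGB color to each object so that the hue gap between
    neighbours in the dendrogram leaf order is proportional to their
    dissimilarity: similar neighbours get near-identical colors, a large
    jump in dissimilarity produces a visible color jump."""
    gaps = [dissim(a, b) for a, b in zip(leaf_order, leaf_order[1:])]
    total = sum(gaps) or 1.0
    hues, acc = [0.0], 0.0
    for g in gaps:
        acc += g
        hues.append(acc / total)
    # spread hues over 2/3 of the wheel (red .. blue) so the first and
    # last objects stay visually distinct
    return {obj: colorsys.hsv_to_rgb(h * 2 / 3, 1.0, 1.0)
            for obj, h in zip(leaf_order, hues)}

# toy 1-D objects already in dendrogram leaf order: three close, one far
colors = dissimilarity_colors([1.0, 1.2, 1.3, 4.0],
                              lambda a, b: abs(a - b))
```

In the toy run the three close objects receive nearly the same reddish hue, while the outlier lands near blue, which is the behaviour an unbalanced tree breaks under purely proximity-based coloring.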

Keywords: data visualization, dissimilarity-based coloring, proximity-based coloring, symbolic data

Procedia PDF Downloads 165
26641 Uncertainty in Building Energy Performance Analysis at Different Stages of the Building’s Lifecycle

Authors: Elham Delzendeh, Song Wu, Mustafa Al-Adhami, Rima Alaaeddine

Abstract:

Over the last 15 years, prediction of energy consumption has become a common practice and necessity at different stages of the building's lifecycle, particularly at the design and post-occupancy stages, for planning and maintenance purposes. This is due to the ever-growing response of governments to sustainability and the reduction of CO₂ emissions in the building sector. However, there is a level of uncertainty in the estimation of energy consumption in buildings. The accuracy of energy consumption predictions is directly related to the precision of the initial inputs used in the energy assessment process. In this study, multiple cases of large non-residential buildings at the design, construction, and post-occupancy stages are investigated. The energy consumption process and inputs, and the actual and predicted energy consumption of the cases, are analysed. The findings of this study identify and evidence various parameters that cause uncertainty in the prediction of energy consumption in buildings, such as modelling, location data, and occupant behaviour. In addition, the unavailability and insufficiency of energy-consumption-related inputs at different stages of the building's lifecycle are classified and categorized. Understanding the roots of uncertainty in building energy analysis will help energy modellers and energy simulation software developers reach more accurate energy consumption predictions in buildings.

Keywords: building lifecycle, efficiency, energy analysis, energy performance, uncertainty

Procedia PDF Downloads 132
26640 The Impact of Data Science on Geography: A Review

Authors: Roberto Machado

Abstract:

We conducted a systematic review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses methodology, analyzing 2,996 studies and synthesizing 41 of them to explore the evolution of data science and its integration into geography. By employing optimization algorithms, we accelerated the review process, significantly enhancing the efficiency and precision of literature selection. Our findings indicate that data science has developed over five decades, facing challenges such as the diversified integration of data and the need for advanced statistical and computational skills. In geography, the integration of data science underscores the importance of interdisciplinary collaboration and methodological innovation. Techniques like large-scale spatial data analysis and predictive algorithms show promise in natural disaster management and transportation route optimization, enabling faster and more effective responses. These advancements highlight the transformative potential of data science in geography, providing tools and methodologies to address complex spatial problems. The relevance of this study lies in the use of optimization algorithms in systematic reviews and the demonstrated need for deeper integration of data science into geography. Key contributions include identifying specific challenges in combining diverse spatial data and the necessity for advanced computational skills. Examples of connections between these two fields encompass significant improvements in natural disaster management and transportation efficiency, promoting more effective and sustainable environmental solutions with a positive societal impact.

Keywords: data science, geography, systematic review, optimization algorithms, supervised learning

Procedia PDF Downloads 16
26639 Current Account on Teaching Psychology and Career Psychology in Portuguese Higher Education

Authors: Sivia Amado Cordeiro, Bruna Rodrigues, Maria Do Ceu Taveira, Catia Marques, Iris Oliveira, Ana Daniela Silva, Cristina Costa-Lobo

Abstract:

This work analyses the teaching of Psychology in Portugal and, particularly, the teaching of Career Psychology, reflecting on the changes that have occurred to date. The educational offerings of 31 Portuguese higher education institutions that teach Psychology, 12 public and 19 private, were analysed. The three degrees of study were considered, namely bachelor's, master's, and doctoral. The analysis focused on the curricular plans of the different degrees in Psychology made available online by the institutions. Through them, we identified the curricular units with themes related to the teaching of Career Psychology. The results show the existence of 89 higher-education Psychology courses in Portugal, distributed across the three degrees of study. Concerning the teaching of Career Psychology, 49 curricular units with themes dedicated to this area of knowledge were registered: 16 curricular units in the bachelor's degree, 31 in the master's degree, and two in the doctoral degree. A reduction in the number of degrees in Psychology over the last nine years in Portugal was observed. We discuss the current situation of Psychology teaching, particularly the teaching of Career Psychology. The aim is to stimulate reflection on future perspectives of Psychology teaching and, specifically, specialized training in Career Psychology in Portugal.

Keywords: career psychology, higher education, psychology, Portugal

Procedia PDF Downloads 332
26638 Efficacy of Microbial Metabolites Obtained from Saccharomyces cerevisiae as Supplement for Quality Milk Production in Dairy Cows

Authors: Sajjad ur Rahman, Mariam Azam, Mukarram Bashir, Seemal Javaid, Aoun Muhammad, Muhammad Tahir, Jawad, Hannan Khan, Muhammad Zohaib

Abstract:

Partially fermented soya hulls and wheat bran processed with Saccharomyces cerevisiae (DL-22 S/N) were evaluated as a natural supplement for quality milk production. Saccharomyces cerevisiae (DL-22 S/N) was grown under in-vivo conditions and processed through two-step fermentation with the substrates. The extra-pure metabolites (XPM) were dried and sieved to a 1 mm particle size for supplementation of pelleted feed. Two groups of Holstein Friesian cows, each with 8 animals of similar age and lactation, were given the experimental concentrates. Group A was fed daily with 12 g of XPM and 22% protein-pelleted feed, while Group B received no metabolites in its feed. Over the thirty-nine-day trial, improvements were observed in overall health, body score, milk protein, milk fat, ash, solids-not-fat (SNF), yield, and the incidence rate of mastitis. The collected data revealed an improvement in milk production of 2.02 liters/head/day, a reduction (3.75%) in milk fat, and an increase in milk SNF of around 0.58%. The ash content ranged between 6.4-7.5%. The incidence of mastitis was reduced to less than 2%.

Keywords: microbial metabolites, Saccharomyces cerevisiae, milk production, fermentation, post-biotic metabolites, immunity

Procedia PDF Downloads 82
26637 Developing Structured Sizing Systems for Manufacturing Ready-Made Garments of Indian Females Using Decision Tree-Based Data Mining

Authors: Hina Kausher, Sangita Srivastava

Abstract:

In India, there is a lack of a standard, systematic sizing approach for producing ready-made garments. Garment manufacturing companies use their own size tables, created by modifying international sizing charts for ready-made garments. The purpose of this study is to tabulate anthropometric data covering the variety of figure proportions in both height and girth. Data on 3,000 subjects were collected through an anthropometric survey of females between the ages of 16 and 80 years from several states of India, to produce a sizing system suitable for clothing manufacture and retailing. These data were used for the statistical analysis of body measurements, the formulation of sizing systems, and body measurement tables. The factor analysis technique was used to filter the control body dimensions from a large number of variables, and decision tree-based data mining was used to cluster the data. A standard, structured sizing system can facilitate pattern grading and garment production. Moreover, it can improve buying ratios and upgrade size allocations to retail segments.
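A single node of such a decision tree can be sketched with a toy stature sample. The data and the variance-based split criterion below are illustrative stand-ins for the study's actual decision tree-based mining, which works on many control dimensions at once.

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def best_split(heights):
    """One node of a decision tree: pick the threshold on a control
    dimension (here stature, in cm) that minimises the pooled within-group
    variance of the two resulting size groups."""
    xs = sorted(heights)
    best = None
    for i in range(1, len(xs)):
        thr = (xs[i - 1] + xs[i]) / 2
        left, right = xs[:i], xs[i:]
        score = (len(left) * variance(left) + len(right) * variance(right)) / len(xs)
        if best is None or score < best[1]:
            best = (thr, score)
    return best[0]

# toy stature sample with two obvious size groups
sample = [150, 152, 153, 155, 168, 170, 171, 174]
threshold = best_split(sample)  # midpoint between the two groups
```

Applying such splits recursively on the control dimensions selected by factor analysis is what partitions the anthropometric data into homogeneous size groups.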

Keywords: anthropometric data, data mining, decision tree, garments manufacturing, sizing systems, ready-made garments

Procedia PDF Downloads 130
26636 Design of Black-Seed Pulp Biomass-Derived New Bio-Sorbent by Combining Methods of Mineral Acids and High Temperature for Arsenic Removal

Authors: Mozhgan Mohammadi, Arezoo Ghadi

Abstract:

Arsenic is a known threat to the environment. The aim of this research is therefore to assess arsenic removal efficiency from an aqueous solution with a new biosorbent composed of black seed pulp (BSP). To treat the BSP, two methods (treatment with mineral acids and exposure to high temperature) were combined, and the resulting biosorbent was called BSP-activated/carbonized. BSP-activated and BSP-carbonized sorbents were also prepared, using HCl and a 400°C temperature respectively, to compare the results of all three methods. Subsequently, adsorption parameters such as pH, initial ion concentration, biosorbent dosage, contact time, and temperature were assessed. The combined method provided the highest adsorption capacity, with up to ~99% arsenic removal observed for BSP-activated/carbonized at pH 7.0 and 40°C. The adsorption capacities for BSP-carbonized and BSP-activated were 87.92% (pH 7, 60°C) and 78.50% (pH 6, 90°C), respectively. Moreover, the adsorption kinetics data best fit the pseudo-second-order model. The maximum biosorption capacity, by the Langmuir isotherm model, was also recorded for BSP-activated/carbonized (53.47 mg/g). Notably, arsenic adsorption on the studied biosorbents is spontaneous and proceeds through chemisorption, with an endothermic biosorption process and a reduction of random collisions in the solid-liquid phase.
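The two fitted models named in the abstract can be written out directly. q_max = 53.47 mg/g is the capacity reported above for BSP-activated/carbonized; the affinity constant K_L, the rate constant k2, and the concentrations below are invented for illustration only.

```python
def langmuir_q(ce, q_max, k_l):
    """Langmuir isotherm: equilibrium uptake q_e (mg/g) at equilibrium
    concentration c_e (mg/L), q_e = q_max * K_L * c_e / (1 + K_L * c_e)."""
    return q_max * k_l * ce / (1 + k_l * ce)

def pseudo_second_order_qt(t, q_e, k2):
    """Pseudo-second-order kinetics: uptake at time t,
    q_t = k2 * q_e^2 * t / (1 + k2 * q_e * t), approaching q_e as t grows."""
    return k2 * q_e ** 2 * t / (1 + k2 * q_e * t)

q_eq = langmuir_q(ce=10.0, q_max=53.47, k_l=0.5)     # mg/g at equilibrium
q_30 = pseudo_second_order_qt(t=30.0, q_e=q_eq, k2=0.01)  # uptake at t = 30
```

Fitting these two expressions to measured q-versus-c and q-versus-t data is what yields the 53.47 mg/g capacity and the pseudo-second-order conclusion reported in the abstract.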

Keywords: black seed pulp, bio-sorbents, treatment of sorbents, adsorption isotherms

Procedia PDF Downloads 90
26635 The Robot Physician's (RP-7) Management and Care in Unstable ICU Oncology Patients

Authors: Alisher Agzamov, Hanan Al Harbi

Abstract:

BACKGROUND: The timely assessment and treatment of ICU surgical and medical oncology patients is important for oncology surgeons, medical oncologists, and intensivists. We hypothesized that the use of the Robot Physician (RP-7) for ICU management and care can improve ICU physicians' rapid response to unstable ICU oncology patients. METHODS: This is a prospective study using a before-after, cohort-control design to test the effectiveness of the RP. We used the RP to make multidisciplinary ICU rounds and to attend emergency cases. Data concerning several aspects of the RP interaction were documented, including the latency of the response, the problem being treated, the intervention that was ordered, and the type of information gathered using the RP. The effect of the RP on ICU length of stay (LOS) and cost was assessed. RESULTS: The use of the RP was associated with a reduction in the latency of attending physician face-to-face response for routine and urgent pages compared to conventional care (RP: 10.2 +/- 3.3 minutes vs conventional: 220 +/- 80 minutes). The response latencies to oncology emergencies (8.0 +/- 2.8 vs 150 +/- 55 minutes) and to respiratory failure (12 +/- 4 vs 110 +/- 45 minutes) were reduced (P < .001), as was the LOS for patients with AML (5 days) and ARDS (10 days). There was an increase in ICU occupancy of 20% compared with the pre-robot era, and there were ICU cost savings of KD 2.5 million attributable to the use of the RP. CONCLUSION: The use of the RP enabled rapid face-to-face intensivist response to unstable ICU oncology patients and resulted in decreased ICU cost and LOS.

Keywords: robot physician, oncology patients, RP-7 in ICU management, cost and ICU occupancy

Procedia PDF Downloads 77
26634 A Framework on Data and Remote Sensing for Humanitarian Logistics

Authors: Vishnu Nagendra, Marten Van Der Veen, Stefania Giodini

Abstract:

Effective humanitarian logistics operations are a cornerstone of successful disaster relief. To be effective, however, they need to be demand-driven and supported by adequate data for prioritization. Without this data, operations are carried out in an ad hoc manner and eventually become chaotic. The current availability of geospatial data helps in creating models for predictive damage and vulnerability assessment, which can give logisticians an understanding of the nature and extent of disaster damage. This translates into actionable information on the demand for relief goods, the state of the transport infrastructure, and, subsequently, the priority areas for relief delivery. However, due to the unpredictable nature of disasters, the accuracy of these models needs improvement, which can be achieved using remote sensing data from UAVs (unmanned aerial vehicles) or satellite imagery, both of which come with certain limitations. This research addresses the need for a framework that combines data from different sources to support humanitarian logistics operations and prediction models. The focus is on developing a workflow to combine data from satellites and UAVs after a disaster strikes. A three-step approach is followed: first, the data requirements for logistics activities are made explicit through semi-structured interviews with field logistics workers. Second, the limitations of current data collection tools are analyzed to develop workaround solutions, following a systems design approach. Third, the data requirements and the developed workaround solutions are fitted together into a coherent workflow. The outcome of this research will provide a new method for logisticians to obtain immediately accurate and reliable data to support data-driven decision making.

Keywords: unmanned aerial vehicles, damage prediction models, remote sensing, data driven decision making

Procedia PDF Downloads 373
26633 A Portable Cognitive Tool for Engagement Level and Activity Identification

Authors: Terry Teo, Sun Woh Lye, Yufei Li, Zainuddin Zakaria

Abstract:

Wearable devices such as electroencephalography (EEG) headsets hold immense potential for monitoring and assessing a person's task engagement, especially at remote or online sites. Research into their use in measuring an individual's cognitive state while performing task activities is therefore expected to increase. Despite the growing number of EEG studies of brain functioning, key challenges remain in adopting EEG for real-time operations. These include limited portability, long preparation time, high channel dimensionality, intrusiveness, and the level of accuracy in acquiring neurological data. This paper proposes an approach using 4-6 EEG channels to determine the cognitive states of a subject undertaking a set of passive and active monitoring tasks, with air traffic controller (ATC) dynamic tasks used as a proxy. The work found that, using the channel reduction and identifier algorithm, good trend adherence of 89.1% can be obtained between a commercially available 14-channel Emotiv EPOC+ EEG headset and a carefully selected, reduced set of 4-6 channels. The approach can also identify different levels of engagement, ranging from general ad hoc monitoring to repeated active monitoring activities involving information search, extraction, and memory.

Keywords: assessment, neurophysiology, monitoring, EEG

Procedia PDF Downloads 72
26632 The Role of Tax Management Components in Creating Value or Increasing Risk of Tehran Stock Exchange Firms

Authors: Fereshteh Darash

Abstract:

Reflective tax management corresponds to agency theory, since it shapes managers' motivation for tax management actions and their short-term and long-term consequences. The selection of a tax strategy therefore contributes to the future tax and financial position of the firm. The aim of the present research is to evaluate the effect of tax management components on the risk-taking of firms listed on the Tehran stock exchange, using regression analysis. Results show that the effective tax rate, tax risk, and tax planning have no significant effect on a firm's future risk. The results suggest that stakeholders assess the effective tax rate and delays in tax payment in line with their own benefits: they tend to accept a higher risk cost in exchange for reduced tax payments and the benefits of higher liquidity in the current period. Hence, the effective tax rate and tax risk have no significant effect on the future risk of the firm. Moreover, tax planning yields no information regarding the predictability of future profits and, as a result, has no significant effect on the future risk of the firm, since the specific goals of financial reporting take priority for stakeholders; they make investment decisions regardless of the firm's data analysis and are less inclined to purchase stocks in a purely rational manner.

Keywords: tax management, tax effective rate, tax risk, tax planning, firm risk

Procedia PDF Downloads 128
26631 Facility Data Model as Integration and Interoperability Platform

Authors: Nikola Tomasevic, Marko Batic, Sanja Vranes

Abstract:

Emerging Semantic Web technologies can be seen as the next step in the evolution of intelligent facility management systems, particularly through the increased use of open-source and/or standardized concepts for data classification and semantic interpretation. To deliver such facility management systems, a comprehensive integration and interoperability platform in the form of a facility data model is a prerequisite. In this paper, one possible modelling approach to such an integrative facility data model, based on the ontology modelling concept, is presented. The complete ontology development process is described, from input data acquisition through ontology concept definition to ontology population. First, a core facility ontology was developed, representing the generic facility infrastructure and comprising the common facility concepts relevant from the facility management perspective. To develop the data model of a specific facility infrastructure, the core facility ontology was first extended and then populated. For the development of full-blown facility data models, Malpensa and Fiumicino airports in Italy, two major European air-traffic hubs, were chosen as a test-bed platform. Furthermore, the way these ontology models supported the integration and interoperability of the overall airport energy management system is analyzed as well.

Keywords: airport ontology, energy management, facility data model, ontology modeling

Procedia PDF Downloads 444
26630 Role of Process Parameters on Pocket Milling with Abrasive Water Jet Machining Technique

Authors: T. V. K. Gupta, J. Ramkumar, Puneet Tandon, N. S. Vyas

Abstract:

Abrasive water jet machining (AWJM) is an unconventional machining process well known for machining hard-to-cut materials. Primary research on the process has focused on through-cutting, and very limited literature is available on pocket milling using AWJM. The present work is an attempt to use this process for milling applications, considering a set of process parameters. Four input parameters that researchers have considered for part separation are selected for this application: abrasive size, flow rate, standoff distance, and traverse speed. Pockets of definite size are machined to investigate surface roughness, material removal rate, and pocket depth. Based on the experimental data for SS304 material, it is observed that higher traverse speeds give a better finish, because of the reduction in particle energy density, and a lower depth is also observed. Increases in standoff distance and abrasive flow rate reduce the rate of material removal, as the jet loses its focus and collisions occur among the particles. ANOVA for each individual output parameter has been studied to identify the significant process parameters.
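The significance test named above is a standard one-way ANOVA, which can be shown in miniature. The surface-roughness readings below are invented numbers chosen to mirror the trend reported in the abstract (finer finish at higher traverse speed); they are not the paper's measurements.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: the between-group mean square
    divided by the within-group mean square. A large F means the factor
    (here traverse speed) explains far more variation than noise does."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

roughness = [
    [6.1, 6.3, 6.0],   # low traverse speed: rougher surface (µm, made up)
    [5.2, 5.4, 5.1],   # medium traverse speed
    [4.3, 4.2, 4.5],   # high traverse speed: finer finish
]
f_stat = one_way_anova_f(roughness)
```

Comparing the resulting F value against the critical F for the chosen significance level is how each input parameter is judged significant or not for each output response.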

Keywords: abrasive flow rate, surface finish, abrasive size, standoff distance, traverse speed

Procedia PDF Downloads 297
26629 Protective Effect of Levetiracetam on Aggravation of Memory Impairment in Temporal Lobe Epilepsy by Phenytoin

Authors: Asher John Mohan, Krishna K. L.

Abstract:

Objectives: (1) To assess the extent of memory impairment induced by phenytoin (PHT) at normal and reduced doses in temporal lobe epileptic mice. (2) To evaluate the protective effect of levetiracetam (LEV) against the aggravation of memory impairment by PHT in temporal lobe epileptic mice. Materials and Methods: Albino mice of either sex (n=36) were used for the study over a period of 64 days. Convulsions were induced by intraperitoneal administration of pilocarpine 280 mg/kg every 6th day. A radial arm maze (RAM) was employed to evaluate memory impairment every 7th day. The anticonvulsant and memory impairment effects were assessed for PHT at normal and reduced doses, both alone and in combination with LEV. RAM error scores and convulsive scores were the parameters considered for this study. Brain acetylcholinesterase and glutamate were determined, along with histopathological studies of the frontal cortex. Results: Administration of PHT for 64 days aggravated memory impairment in temporal lobe epileptic mice. Although reducing the PHT dose decreased the degree of memory impairment, it also decreased the anticonvulsant potency. The combination with LEV not only corrected the impaired memory but also restored the potency lost by reducing the dose of the antiepileptic drug. These findings were confirmed with enzyme and neurotransmitter levels, in addition to histopathological studies. Conclusion: This study thus lays a foundation for combining a nootropic anticonvulsant with an antiepileptic drug to curb the adverse effect of memory impairment associated with temporal lobe epilepsy. However, further extensive research is required before this approach can be incorporated into disease therapy.

Keywords: anti-epileptic drug, Phenytoin, memory impairment, Pilocarpine

Procedia PDF Downloads 310
26628 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam

Authors: Sahand Golmohammadi, Sana Hosseini Shirazi

Abstract:

Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various rock engineering classification methods are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and the determination of support systems for underground structures in rock, including tunnels. This method requires six main parameters of the rock mass, namely the rock quality designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw), and stress reduction factor (SRF). In this regard, to achieve a reasonable and optimal design, identifying the parameters governing the stability of such structures is one of the most important goals and most necessary actions in rock engineering. It is therefore necessary to study the relationships between the parameters of a system, how they interact with each other, and, ultimately, the whole system. In this research, we have attempted to determine the most effective parameters (key parameters) among the six Q-system parameters using the rock engineering system (RES) method, in order to improve the relationships between the parameters in the calculation of the Q-value. The RES method determines the degree of cause and effect of a system's parameters by constructing an interaction matrix. In this research, the geomechanical data collected from the water conveyor tunnel of Azad Dam were used to build the interaction matrix of the Q-system. For this purpose, instead of the conventional coding methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix was coded using a statistical analysis of the data, determining the correlation coefficients between them, so that the effect of each parameter on the system is evaluated with greater certainty. The results show that the interaction matrix provides a reasonable estimate of the effective parameters in the Q-system: among the six parameters, SRF and Jr exert the maximum and minimum influence on the system (cause), respectively, while RQD and Jw are the most and least influenced by the system (effect), respectively. Therefore, by developing this method, a more accurate rock mass classification can be obtained by weighting the required parameters in the Q-system.
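For reference, the unweighted Q-value that the RES analysis aims to refine is Barton's standard product of three quotients. The ratings below are illustrative values, not data from the Azad Dam tunnel.

```python
def barton_q(rqd, jn, jr, ja, jw, srf):
    """Barton's Q-value: the product of three quotients describing relative
    block size (RQD/Jn), inter-block shear strength (Jr/Ja), and active
    stress (Jw/SRF)."""
    return (rqd / jn) * (jr / ja) * (jw / srf)

# illustrative ratings for a jointed, moderately stressed rock mass
q = barton_q(rqd=75, jn=9, jr=1.5, ja=2, jw=1.0, srf=2.5)
```

Weighting these six inputs by their RES cause-effect interaction intensities, rather than treating them equally, is the improvement the paper proposes.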

Keywords: Q-system, rock engineering system, statistical analysis, rock mass, tunnel

Procedia PDF Downloads 65
26627 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices

Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu

Abstract:

Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing complications of CKD. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazards regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk of developing CKD. All models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predicted chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data.
The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data is difficult to obtain.
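As a rough illustration of the non-laboratory-only setting, the sketch below trains a plain SGD logistic regression (standing in for the paper's Random Forest/XGBoost models) on synthetic age/sex/BMI/waist features; all data, scalings, and thresholds here are invented for the example:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=300):
    """Plain SGD logistic regression -- a simple stand-in for RF/XGBoost."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            g = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return lambda x: sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Synthetic non-laboratory features only: scaled age, sex, BMI, waist circumference
random.seed(0)
X, y = [], []
for _ in range(200):
    age, bmi = random.uniform(20, 80), random.uniform(18, 40)
    X.append([age / 100, random.randint(0, 1), bmi / 50, random.uniform(0.5, 0.9)])
    y.append(1 if age + bmi > 85 else 0)      # toy CKD-risk ground truth
predict = train_logistic(X, y)
accuracy = sum((predict(xi) > 0.5) == yi for xi, yi in zip(X, y)) / len(y)
```

The point of the sketch is the workflow, not the model choice: a classifier restricted to cheaply collected features can still be trained and screened for accuracy before laboratory data are available.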

Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction

Procedia PDF Downloads 100
26626 Assessing the Impact of Quinoa Cultivation Adopted to Produce a Secure Food Crop and Poverty Reduction by Farmers in Rural Pakistan

Authors: Ejaz Ashraf, Raheel Babar, Muhammad Yaseen, Hafiz Khurram Shurjeel, Nosheen Fatima

Abstract:

The main purpose of this study was to assess farmers' adoption of quinoa cultivation after they had been taught through a training and visit extension approach. In the 21st century, population structure, climate change, food requirements, and people's eating habits are changing rapidly. In this scenario, farmers must play a key role in sustainable crop development and production by adopting new crops that may also help overcome food insecurity and reduce poverty in rural areas. Quinoa cultivation in Pakistan is at an early stage, and there is a need to raise awareness among farmers to grow the crop. In mid-2015, a training and visit extension approach was used to raise awareness and convince farmers to grow quinoa in the area. During the training and visit extension program, 80 farmers were randomly selected for training in quinoa cultivation. Later on, these farmers trained 60 more farmers living in their neighborhoods. After six months, a survey was conducted with all 140 farmers to assess the impact of the training and visit program on respondents' adoption of the quinoa crop. The survey instrument was developed with the help of a literature review and other experts on the crop. The validity and reliability of the instrument were checked before full data collection. The data were analyzed using SPSS. Multiple regression analysis was used to interpret the survey results, which indicated that factors such as information/training and changes in agronomic and plant protection practices play a key role in respondents' adoption of quinoa cultivation. In addition, the model explains more than 50% of the variation in respondents' adoption level. It is concluded that farmers need timely information and improved knowledge of agronomic and plant protection practices to adopt cultivation of the quinoa crop in the area.
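The regression step described above (multiple regression with adoption level as the response, carried out in SPSS in the study) can be sketched in code; the predictors and data below are synthetic stand-ins for the survey variables, not the study's dataset:

```python
import random

def ols(X, y):
    """Ordinary least squares via normal equations (X includes an intercept column).
    Returns the coefficient vector and the R-squared of the fit."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    Xty = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    # Gauss-Jordan elimination on the augmented system
    A = [row[:] + [v] for row, v in zip(XtX, Xty)]
    for i in range(p):
        piv = A[i][i]
        A[i] = [a / piv for a in A[i]]
        for r in range(p):
            if r != i:
                A[r] = [a - A[r][i] * b for a, b in zip(A[r], A[i])]
    beta = [A[i][p] for i in range(p)]
    yhat = [sum(b * x for b, x in zip(beta, row)) for row in X]
    ybar = sum(y) / n
    r2 = 1 - sum((yi - yh) ** 2 for yi, yh in zip(y, yhat)) / \
             sum((yi - ybar) ** 2 for yi in y)
    return beta, r2

# Synthetic survey: adoption driven by training and practice-change scores
random.seed(1)
X, y = [], []
for _ in range(100):
    training = random.uniform(0, 10)     # information / training exposure
    practices = random.uniform(0, 10)    # change in agronomic / plant-protection practices
    adoption = 1.0 + 0.6 * training + 0.3 * practices + random.gauss(0, 0.5)
    X.append([1.0, training, practices])  # leading 1.0 is the intercept
    y.append(adoption)
beta, r2 = ols(X, y)
```

The R² value plays the same role as the "more than 50% of variation explained" figure reported for the study's model.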

Keywords: farmers, quinoa, adoption, contact, training and visit

Procedia PDF Downloads 349
26625 Road Accidents Bigdata Mining and Visualization Using Support Vector Machines

Authors: Usha Lokala, Srinivas Nowduri, Prabhakar K. Sharma

Abstract:

Useful information has been extracted from road accident data in the United Kingdom (UK), using data analytics methods, for avoiding possible accidents in rural and urban areas. This analysis makes use of several methodologies, such as data integration, support vector machines (SVM), correlation machines, and multinomial goodness. The entire datasets were imported from the UK traffic department with due permission. The information extracted from these huge datasets forms a basis for several predictions, which in turn help avoid unnecessary memory lapses. Since the data are expected to grow continuously over time, this work primarily proposes a new framework model which can be trained on, and adapt itself to, new data and make accurate predictions. This work also throws some light on the use of the SVM methodology for text classifiers built from the obtained traffic data. Finally, it emphasizes the uniqueness and adaptability of the SVM methodology as appropriate for this kind of research work.
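A minimal sketch of the SVM step, using a hand-rolled linear SVM trained with the Pegasos sub-gradient method on toy accident records; the feature encoding, labels, and severity rule are illustrative assumptions, not the UK dataset:

```python
import random

def pegasos_svm(X, y, lam=0.01, epochs=200):
    """Linear SVM via the Pegasos sub-gradient method (labels in {-1, +1}).
    The bias is folded into the weights through a trailing constant feature."""
    w, t = [0.0] * len(X[0]), 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            if margin < 1:
                w = [(1 - eta * lam) * wj + eta * yi * xj for wj, xj in zip(w, xi)]
            else:
                w = [(1 - eta * lam) * wj for wj in w]
    return w

# Toy accident records: [scaled speed limit, rural flag, night flag, bias term]
random.seed(2)
X, y = [], []
for _ in range(300):
    speed = random.uniform(20, 70) / 70
    rural, night = random.randint(0, 1), random.randint(0, 1)
    X.append([speed, rural, night, 1.0])
    y.append(1 if speed + 0.3 * rural > 0.9 else -1)   # toy severity rule
w = pegasos_svm(X, y)
accuracy = sum((sum(wj * xj for wj, xj in zip(w, xi)) > 0) == (yi > 0)
               for xi, yi in zip(X, y)) / len(y)
```

In practice, a library implementation would be used on the real records; the sketch only shows how a linear decision boundary over encoded accident attributes is fitted and evaluated.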

Keywords: support vector machines (SVM), machine learning (ML), Department for Transport (DfT)

Procedia PDF Downloads 266
26624 A Relational Data Base for Radiation Therapy

Authors: Raffaele Danilo Esposito, Domingo Planes Meseguer, Maria Del Pilar Dorado Rodriguez

Abstract:

As far as we know, there is still no commercial solution available that allows managing, openly and configurably according to user needs, the huge amount of data generated in a modern Radiation Oncology Department. Currently available information management systems are mainly focused on Record & Verify and clinical data, and only to a small extent on physical data. This results in a partial and limited use of the actually available information. In the present work, we describe the implementation at our department of a centralized information management system based on a web server. Our system manages both information generated during patient planning and treatment and information of general interest for the whole department (i.e., treatment protocols, quality assurance protocols, etc.). Our objective is to be able to analyze all the available data in a simple and efficient way and thus to obtain quantitative evaluations of our treatments. This would allow us to improve our workflow and protocols. To this end, we have implemented a relational database which allows us to use all the available information in a practical and efficient way. As always, we use only license-free software.
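A minimal sketch of such a relational schema, using Python's license-free sqlite3 module; the table and column names (patient, plan, QA measurement) are illustrative assumptions, not the department's actual schema:

```python
import sqlite3

# In-memory schema sketch: patients, treatment plans, and physics QA
# measurements, linked so clinical and physical data can be queried together.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE plan (
    id INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES patient(id),
    technique TEXT,
    prescribed_dose_gy REAL
);
CREATE TABLE qa_measurement (
    id INTEGER PRIMARY KEY,
    plan_id INTEGER REFERENCES plan(id),
    gamma_pass_rate REAL
);
""")
con.execute("INSERT INTO patient VALUES (1, 'anonymized')")
con.execute("INSERT INTO plan VALUES (1, 1, 'VMAT', 60.0)")
con.execute("INSERT INTO qa_measurement VALUES (1, 1, 98.5)")

# Quantitative evaluation across tables: mean QA pass rate per technique
row = con.execute("""
    SELECT p.technique, AVG(q.gamma_pass_rate)
    FROM plan p JOIN qa_measurement q ON q.plan_id = p.id
    GROUP BY p.technique
""").fetchone()
```

The join is the point: once planning and QA data live in related tables, quantitative evaluations of treatments reduce to ordinary SQL queries.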

Keywords: information management system, radiation oncology, medical physics, free software

Procedia PDF Downloads 231
26623 A Study of Safety of Data Storage Devices of Graduate Students at Suan Sunandha Rajabhat University

Authors: Komol Phaisarn, Natcha Wattanaprapa

Abstract:

This is survey research with the objective of studying the safety of the data storage devices used by graduate students of the 2013 academic year at Suan Sunandha Rajabhat University. Data were collected by a questionnaire on the safety of data storage devices according to the CIA (confidentiality, integrity, availability) principle. A sample of 81 was drawn from the population by purposive sampling. The results show that most graduate students of the 2013 academic year at Suan Sunandha Rajabhat University use handy drives (USB flash drives) to store their data, and that the safety level of the devices is good.

Keywords: security, safety, storage devices, graduate students

Procedia PDF Downloads 347
26622 Assessing the Geothermal Parameters by Integrating Geophysical and Geospatial Techniques at Siwa Oasis, Western Desert, Egypt

Authors: Eman Ghoneim, Amr S. Fahil

Abstract:

Many regions in Egypt are facing a reduction in crop productivity due to environmental degradation. One factor in crop deterioration is the unsustainable drainage of surface water, leading to salinized soil conditions. Egypt has exerted time and effort to identify solutions that mitigate the surface water drawdown problem and its resulting effects by exploring renewable and sustainable sources of energy. Siwa Oasis represents one of the most favorable regions in Egypt for geothermal exploitation since it hosts an evident cluster of superficial thermal springs. Some of these hot springs are characterized by high surface temperatures and bottom hole temperatures (BHT), ranging from 20 °C to 40 °C and from 21 °C to 121.7 °C, respectively. The depth to the Precambrian basement rock is commonly greater than 440 m, ranging from 440 m to 4,724.4 m. These features make the Siwa Oasis locality suitable for industrial processes and geothermal power production. In this study, BHT data from 27 deep oil wells were processed by applying the widely used Horner and Gulf of Mexico correction methods to obtain formation temperatures. BHT data, commonly used in geothermal studies, remain the most abundant and readily available source of subsurface temperature information. Outcomes of the present work indicated a geothermal gradient ranging from 18 to 42 °C/km, a heat flow ranging from 24.7 to 111.3 mW·m⁻², and a thermal conductivity of 1.3–2.65 W·m⁻¹·K⁻¹. Remote sensing thermal infrared, topographic, geologic, and geothermal data were utilized to produce geothermal potential maps for the Siwa Oasis.
Important physiographic variables (including surface elevation, lineament density, drainage density), geological and geophysical parameters (including land surface temperature, depth to basement, bottom hole temperature, magnetic, geothermal gradient, heat flow, thermal conductivity, and main rock units) were incorporated into GIS to produce a geothermal potential map (GTP) for the Siwa Oasis region. The model revealed that both the northeastern and southeastern sections of the study region are of high geothermal potential. The present work showed that combining bottom-hole temperature measurements and remote sensing data with the selected geospatial methodologies is a useful tool for geothermal prospecting in geologically and tectonically comparable settings in Egypt and East Africa. This work has implications for identifying sustainable resources needed to support food production and renewable energy resources.
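The basic gradient and heat-flow arithmetic behind these figures can be sketched as below; the surface temperature and well depth are illustrative values, and in practice the BHT would first be corrected (e.g. by the Horner method) as described above:

```python
def geothermal_gradient(bht_c, surface_temp_c, depth_m):
    """Geothermal gradient in deg C per km from a corrected bottom hole temperature."""
    return (bht_c - surface_temp_c) / depth_m * 1000.0

def heat_flow(gradient_c_per_km, conductivity_w_m_k):
    """Conductive heat flow q = k * dT/dz; (W/m/K) * (K/km) comes out in mW/m^2."""
    return conductivity_w_m_k * gradient_c_per_km

# Illustrative well: hottest reported BHT, assumed 25 C surface temp, 3 km depth
grad = geothermal_gradient(bht_c=121.7, surface_temp_c=25.0, depth_m=3000.0)
q = heat_flow(grad, conductivity_w_m_k=2.0)
```

With these assumed inputs the gradient falls inside the 18-42 °C/km range reported for the study area.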

Keywords: BHT, geothermal potential map, geothermal gradient, heat flow, thermal conductivity, satellite imagery, GIS

Procedia PDF Downloads 106
26621 Depolymerization of Lignin in Sugarcane Bagasse by Hydrothermal Liquefaction to Optimize Catechol Formation

Authors: Nirmala Deenadayalu, Kwanele B. Mazibuko, Lethiwe D. Mthembu

Abstract:

Sugarcane bagasse is the residue obtained after the extraction of sugar from sugarcane. The main aim of this work was to produce catechol from sugarcane bagasse. The optimization of catechol production was investigated using a Box-Behnken design of experiments. The sugarcane bagasse was heated in a Parr reactor at a set temperature. The reactions were carried out at different temperatures (100-250 °C), catalyst loadings (1%-10% KOH (m/v)), and reaction times (60-240 min) at 17 bar pressure. The solid and liquid fractions were then separated by vacuum filtration. The liquid fraction was analyzed for catechol using high-pressure liquid chromatography (HPLC) and characterized for functional groups using Fourier transform infrared spectroscopy (FTIR). The optimized condition for catechol production was 175 °C, 240 min, and 10% KOH, with a catechol yield of 79.11 ppm. Since the maximum time was 240 min and the maximum loading 10% KOH, a further series of experiments was conducted at 175 °C, 260 min, and 20% KOH; this yielded 2.46 ppm catechol, a large reduction in the catechol produced. The HPLC peak for catechol was obtained at 2.5 min for both the standards and the samples. The FTIR peak at 1750 cm⁻¹ was due to the C=C vibration band of the aromatic ring in the catechol present, for both the standard and the samples. The peak at 3325 cm⁻¹ was due to the hydrogen-bonded phenolic OH vibration bands of the catechol. An ANOVA was also performed on the experimental data to identify the factors that most affected the amount of catechol produced.
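A Box-Behnken design for three factors places pairs of factors at their low/high levels with the remaining factor at its midpoint, plus center runs. The generator below sketches this for the three factors of the study; the midpoint levels are simple midpoints assumed for illustration:

```python
from itertools import combinations

def box_behnken(levels, n_center=1):
    """Box-Behnken design: each pair of factors at coded -1/+1 with the
    rest at 0, plus center runs. `levels` maps factor -> (low, mid, high)."""
    names = list(levels)
    coded = []
    for i, j in combinations(range(len(names)), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                pt = [0] * len(names)
                pt[i], pt[j] = a, b
                coded.append(pt)
    coded += [[0] * len(names) for _ in range(n_center)]
    # Map coded -1/0/+1 back to actual factor settings
    return [{n: levels[n][c + 1] for n, c in zip(names, pt)} for pt in coded]

runs = box_behnken({
    "temperature_C": (100, 175, 250),
    "koh_percent": (1, 5.5, 10),
    "time_min": (60, 150, 240),
})
```

For three factors this gives 12 edge runs plus the center point, i.e. 13 runs, whose measured catechol yields then feed the ANOVA described above.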

Keywords: catechol, sugarcane bagasse, lignin, hydrothermal liquefaction

Procedia PDF Downloads 92
26620 Simulation of a Cost Model Response Requests for Replication in Data Grid Environment

Authors: Kaddi Mohammed, A. Benatiallah, D. Benatiallah

Abstract:

The data grid is a technology whose emergence has brought new challenges, such as the heterogeneity and availability of various geographically distributed resources, fast data access, minimizing latency, and fault tolerance. Researchers interested in this technology address the problems of the various related systems, such as task scheduling, load balancing, and replication. The latter is an effective solution for achieving good performance in terms of data access, grid resource use, and data availability and cost. In a system with duplication, a coherence protocol is used to impose some degree of synchronization between the various copies and some order on updates. In this project, we present an approach for placing replicas that minimizes the response cost of read and write requests, and we implement our model in a simulation environment. The placement techniques are based on a cost model which depends on several factors, such as bandwidth, data size, and storage nodes.
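A toy version of such a cost model, in which reads are served by the nearest replica while writes must propagate to every copy to keep them coherent, can be sketched as follows; the cost weights and parameter values are illustrative assumptions, not the paper's model:

```python
def best_replica_count(size_mb, bandwidth_mb_s, reads, writes,
                       max_replicas, read_speedup=0.5):
    """Pick the replica count minimizing total response cost: each added
    replica shortens reads (a nearer copy exists) but lengthens write
    propagation (every copy must be updated)."""
    transfer = size_mb / bandwidth_mb_s          # one transfer, in seconds
    best = None
    for n in range(1, max_replicas + 1):
        read_cost = reads * transfer * read_speedup ** (n - 1)
        write_cost = writes * transfer * n
        cost = read_cost + write_cost
        if best is None or cost < best[1]:
            best = (n, cost)
    return best

# Read-heavy workload: replication pays until write propagation dominates
n, cost = best_replica_count(size_mb=100, bandwidth_mb_s=10,
                             reads=1000, writes=10, max_replicas=8)
```

The trade-off is the essential point: for a read-heavy workload the optimum sits at several replicas, while a write-heavy workload pushes it back toward a single copy.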

Keywords: response time, query, consistency, bandwidth, storage capacity, CERN

Procedia PDF Downloads 265
26619 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method that the data science community has widely used. It serves two main tasks: displaying results by coloring items according to their class or feature value, and, in a forensic setting, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, in which all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, in which a cluster's area is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While this approach is highly scalable, points could be mapped to the exact same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, using the newly obtained embedding.
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 139
26618 Prompt Design for Code Generation in Data Analysis Using Large Language Models

Authors: Lu Song Ma Li Zhi

Abstract:

With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries, particularly showcasing significant application potential in the data analysis domain. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Without modifying the internal parameters of the large models, prompt engineering techniques can rapidly adapt these models to new domains. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. By carefully designing prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, thereby greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach includes requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. Experimental results show that with this prompt design strategy, large language models perform exceptionally well in multiple data analysis tasks, generating high-quality code and significantly shortening the data analysis cycle. This method provides an efficient and convenient tool for the data analysis field and demonstrates the enormous potential of large language models in practical applications.
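A minimal sketch of the kind of structured prompt the strategy describes, combining a precise natural-language task description, explicit coverage of the available data, and an output-format instruction; the template fields and wording are illustrative assumptions, not the paper's actual prompts:

```python
def build_analysis_prompt(task, columns, constraints=None):
    """Assemble a structured prompt asking an LLM to emit runnable analysis code."""
    lines = [
        "You are a data analysis assistant. Reply with Python code only.",
        f"Task: {task}",
        "Available columns: " + ", ".join(columns),
    ]
    if constraints:
        lines.append("Constraints: " + "; ".join(constraints))
    lines.append("Return a complete script; do not explain the code in prose.")
    return "\n".join(lines)

prompt = build_analysis_prompt(
    task="Compute monthly revenue per region and plot the trend.",
    columns=["date", "region", "revenue"],
    constraints=["use pandas", "handle missing revenue values"],
)
```

The generated code would then be executed, and its errors or outputs fed back into a revised prompt, implementing the immediate feedback-and-adjustment loop described above.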

Keywords: large language models, prompt design, data analysis, code generation

Procedia PDF Downloads 21
26617 Graphene-reinforced Metal-organic Framework Derived Cobalt Sulfide/Carbon Nanocomposites as Efficient Multifunctional Electrocatalysts

Authors: Yongde Xia, Laicong Deng, Zhuxian Yang

Abstract:

Developing cost-effective electrocatalysts for the oxygen reduction reaction (ORR), oxygen evolution reaction (OER), and hydrogen evolution reaction (HER) is vital in energy conversion and storage applications. Herein, we report a simple method for the synthesis of graphene-reinforced cobalt sulfide/carbon nanocomposites and the evaluation of their performance for typical electrocatalytic reactions. Nanocomposites of cobalt sulfide embedded in N, S co-doped porous carbon and graphene (CoS@C/Graphene) were generated via the simultaneous sulfurization and carbonization of one-pot synthesized graphite oxide-ZIF-67 precursors. The obtained CoS@C/Graphene nanocomposite was characterized by X-ray diffraction, Raman spectroscopy, thermogravimetric analysis coupled with mass spectrometry, scanning electron microscopy, transmission electron microscopy, X-ray photoelectron spectroscopy, and gas sorption. It was found that cobalt sulfide nanoparticles were homogeneously dispersed in the in-situ formed N, S co-doped porous carbon/graphene matrix. The CoS@C/10Graphene composite not only shows excellent electrocatalytic activity toward the ORR, with a high onset potential of 0.89 V, a four-electron pathway, and superior durability (maintaining 98% of the current after continuously running for around 5 hours), but also exhibits good performance for the OER and HER, due to the improved electrical conductivity, increased number of catalytic active sites, and connectivity between the electrocatalytically active cobalt sulfide and the carbon matrix. This work offers a new approach for the development of novel multifunctional nanocomposites for the next generation of energy conversion and storage applications.

Keywords: MOF derivative, graphene, electrocatalyst, oxygen reduction reaction, oxygen evolution reaction, hydrogen evolution reaction

Procedia PDF Downloads 46
26616 Evaluation and Control of Cracking for Bending Rein-forced One-way Concrete Voided Slab with Plastic Hollow Inserts

Authors: Mindaugas Zavalis

Abstract:

Analysis of experimental test data on one-way bending reinforced concrete slabs from various scientific articles revealed that voided slabs with a grid of hollow plastic inserts inside have smaller mechanical and physical parameters compared to continuous cross-section (solid) slabs. The reinforced concrete slab is negatively influenced by the hollow plastic inserts, which form a grid of voids in the middle of the cross-sectional area of the slab. The formed grid of voids reduces the slab's stiffness, which affects the slab's serviceability parameters, such as deflection and cracking. A primary investigation of the data established during experiments illustrates that cracks occur sooner on the tensile surface of a voided slab under bending than on a solid slab. This means that the cracking bending moment of the voided slab is smaller than that of the solid slab, and the reduction can vary in the range of 14-40%. The reduction in cracking resistance can be controlled by changing many factors: the shape of the hollow plastic insert, the insert height, the spacing between inserts, the use of prestressed reinforcement, the reinforcement bar diameter, the slab effective depth, the bottom concrete cover thickness, the effective cross-section of the concrete area around the reinforcement, etc. The mentioned parameters are used to evaluate crack width and crack spacing, but the existing analytical methods for the cracking evaluation of voided slabs with plastic inserts are not very exact, and the cracking evaluation results in this paper are higher than the results of the analyzed experiments. Therefore, analytical calculations were carried out against experimental bending tests of voided reinforced concrete slabs with hollow plastic inserts to find and propose corrections for the cracking evaluation of such slabs.
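The order of magnitude of the cracking-moment reduction can be illustrated with elementary section properties: voids near mid-depth reduce the second moment of area and hence the elastic section modulus in M_cr = f_ctm·Z. The circular-void idealization and all dimensions below are assumptions for illustration, not the tested slabs:

```python
import math

def cracking_moment(f_ctm_mpa, width_mm, depth_mm, void_d_mm=0.0, n_voids=0):
    """Cracking moment (kNm) of a rectangular strip; circular voids centred
    on the neutral axis reduce the second moment of area."""
    i_solid = width_mm * depth_mm ** 3 / 12.0
    i_voids = n_voids * math.pi * void_d_mm ** 4 / 64.0
    z = (i_solid - i_voids) / (depth_mm / 2.0)   # section modulus, mm^3
    return f_ctm_mpa * z / 1e6                    # N*mm -> kNm

# Assumed 1 m wide, 300 mm deep strip, f_ctm = 2.9 MPa, five 180 mm voids
m_solid = cracking_moment(2.9, width_mm=1000, depth_mm=300)
m_voided = cracking_moment(2.9, width_mm=1000, depth_mm=300,
                           void_d_mm=180, n_voids=5)
reduction = 1.0 - m_voided / m_solid
```

This toy geometry gives a reduction of roughly 11%; real inserts (non-circular shapes, closer spacing, shifted neutral axis) push the loss toward the 14-40% range observed experimentally.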

Keywords: voided slab, cracking, hollow plastic insert, bending, one-way reinforced concrete, serviceability

Procedia PDF Downloads 63