Search results for: delays resulting from two separate causes at the same time
3375 Sports Preference Intervention as a Predictor of Sustainable Participation of At-Risk Teenagers in Ibadan Metropolis, Ibadan, Nigeria
Authors: Felix Olajide Ibikunle
Abstract:
Introductory Statement: Sustainable participation of teenagers in sports requires deliberate and concerted plans and a managerial policy rooted in the philosophy of “catch them young.” At-risk teenagers need proper integration into societal aspirations: this direction will go a long way towards steering them away from security breaches and towards nuisance-free lifestyles. Basic Methodology: The population consisted of children between 13 and 19 years old. A proportionate sampling technique, at 60%, was adopted to select seven zones out of the 11 geo-political zones in the Ibadan metropolis. Qualitative information and interviews were used to collect the needed data. The majority of the teenagers were out of school, street hawkers, motor park touts, or unserious vocational apprentices. These groups have the potential for security breaches in the metropolis and beyond. Five hundred and thirty-four (534) respondents were used for the study. They were drawn from the Ojoo, Akingbile and Moniya axis = 72; Agbowo, Ajibode and Apete axis = 74; Akobo, Basorun and Idi-ape axis = 79; Wofun, Monatan and Iyana-Church axis = 78; Molete, Oke-ado and Oke-Bola axis = 75; Beere, Odinjo and Elekuro axis = 77; Eleyele, Ologuneru and Alesinloye axis = 79. Major Findings: Multiple regression and percentages were used to analyze the independent variables. The respondents' average age was 15.6 years, and 100% were male. The instrument (questionnaire) used yielded reliability coefficients for sport preference (r = 0.72), intervention (r = 0.68), and sustainable participation (r = 0.70). The relative contribution of sport preference to the participation of at-risk teenagers produced an F-ratio of 1.067; the contribution of the sport intervention to the participation of at-risk teenagers produced an F-ratio of 12.095, which was significant; and sustainable participation of at-risk teenagers produced an F-ratio of 1.062, which was significant. Closing Statement: The respondents' sport preference stimulated their participation in sports. The intervention exposed at-risk teenagers to coaching, which activated their interest and participation in sports. At the same time, sustainable participation contributed positively to evolving at-risk teenagers' participation in their preferred sport.
Keywords: sport, preference, intervention, teenagers, sustainable, participation and risk teenagers
Procedia PDF Downloads 79
3374 RPM-Synchronous Non-Circular Grinding: An Approach to Enhance Efficiency in Grinding of Non-Circular Workpieces
Authors: Matthias Steffan, Franz Haas
Abstract:
The production process of grinding is one of the last steps in a value-added manufacturing chain. Within this step, workpiece geometry and surface roughness are determined. Up to this process stage, considerable costs and energy have already been spent on components. According to the current state of the art, therefore, large safety reserves are calculated in order to guarantee process capability. Especially for non-circular grinding, this fact leads to considerable losses of process efficiency. With present technology, the various non-circular geometries on a workpiece must be ground sequentially in an oscillating process in which the X- and Q-axes of the machine are coupled. With the approach of RPM-Synchronous Non-Circular Grinding, such workpieces can be machined in an ordinary plunge grinding process. To this end, the rotational rates of the workpiece and the grinding wheel are kept in a fixed ratio. A non-circular grinding wheel is used to transfer its geometry onto the workpiece. The authors use a worldwide unique machine tool that was especially designed for this technology. Very high revolution rates on the workpiece spindle (up to 4500 rpm) are mandatory for the success of this grinding process. The grinding approach is performed in a two-step process. For roughing, a highly porous vitrified bonded grinding wheel with medium grain size is used. It ensures high specific material removal rates for efficiently producing the non-circular geometry on the workpiece. This process step is adaptively controlled by a force control algorithm, which uses data acquired from a three-component force sensor located in the dead centre of the tailstock. For finishing, a grinding wheel with a fine grain size is used. Roughing and finishing are performed consecutively within the same clamping of the workpiece with two locally separated grinding spindles. The approach of RPM-Synchronous Non-Circular Grinding shows great efficiency enhancement in non-circular grinding. For the first time, three-dimensional non-circular shapes can be ground, which opens up various fields of application. The automotive industry in particular shows great interest in this emerging trend in finishing machining.
Keywords: efficiency enhancement, finishing machining, non-circular grinding, rpm-synchronous grinding
Procedia PDF Downloads 283
3373 Efficacy of Topical Ectoin Therapy for Acute Radiodermatitis Associated with Breast Cancer Radiotherapy: A Randomized Controlled Study
Authors: Nagwa E. Abd Elazim, Maha S. El-naggar, Rania H. Mohamed, Sara M. Awad
Abstract:
Background: Radiodermatitis is a common side effect of radiation therapy for breast cancer. However, there is currently no consensus on an effective standard therapy for the prevention and management of radiation dermatitis. Topical ectoine has demonstrated efficacy in the treatment of atopic dermatitis owing to its anti-inflammatory activity. Objective: To evaluate the efficacy of topical ectoine in comparison to traditional topical dexpanthenol treatment in the management of acute radiodermatitis in breast cancer patients undergoing adjuvant radiotherapy. Methods: Fifty patients were randomized to use either dexpanthenol 0.5% cream (25 patients) or ectoine 7% cream (25 patients), applied twice daily to the irradiated area during the radiation period and continued for 2 weeks after cessation of radiotherapy. Assessment of radiation skin toxicity using the Common Terminology Criteria for Adverse Events (CTCAE) v4.0, radiation-associated symptoms, and adverse events was undertaken weekly during radiotherapy and 2 weeks after the end of radiotherapy. Results: Topical ectoine showed some clinical benefit over dexpanthenol, as shown by a delayed time to onset (at week 3 versus week 2, respectively) and a larger number of patients who reached grade 0 at the end of treatment (64% vs. 48%, respectively). The clinical symptoms of pain (p = 0.003) and itching (p = 0.001) attributable to radiation were less pronounced with ectoine than with dexpanthenol. Burning and hyperpigmentation were the most common side effects with ectoine. However, no significant difference between the dexpanthenol and ectoine treatments was found in any of the side effects (p = 0.1). Conclusion: Ectoine was overall more effective in improving radiation dermatitis than topical dexpanthenol in breast cancer patients. Ectoine could be proposed as a preventive or curative treatment for patients undergoing postoperative irradiation for breast cancer. Further clinical studies with a larger number of patients are recommended to confirm these preliminary results.
Keywords: breast cancer, dexpanthenol, ectoine, radiation dermatitis
Procedia PDF Downloads 131
3372 Analysis of the Role of Population Ageing on Crosstown Roads' Traffic Accidents Using Latent Class Clustering
Authors: N. Casado-Sanz, B. Guirao
Abstract:
The population aged 65 and over is projected to double in the coming decades. As a result, the driver population is expected to grow, and in the near future all countries will be faced with population aging of varying intensity and within unique time frames. This is the greatest challenge facing industrialized nations, and due to this fact the study of the relationship of dependency between population aging and road safety is becoming increasingly relevant. Although the deterioration of driving skills in the elderly has been analyzed in depth, to our knowledge few research studies have focused on the road infrastructure and the mobility of this particular group of users. In Spain, crosstown roads have one of the highest fatality rates. These rural routes have a higher percentage of elderly people, who are more dependent on driving due to the absence or limitations of urban public transportation. Analysing road safety on these routes is very complex because of the variety of features, the dispersion of the data, and the complete lack of related literature. The objective of this paper is to identify key factors that cause traffic accidents. The units of analysis were accidents involving fatalities or serious injuries on Spanish crosstown roads during the period 2006-2015. Latent class cluster analysis was applied as a preliminary tool for segmentation of the accidents, considering population aging as the main input among other socioeconomic indicators. Subsequently, a linear regression analysis was carried out to estimate the degree of dependence between the accident rate and the variables that define each group. The results show that segmenting the data is very useful and provides further information. Additionally, the results revealed the clear influence of the aging variable in the clusters obtained. Other variables related to infrastructure and mobility levels, such as the crosstown road layout and the traffic intensity, also appeared to be key factors in the causality of road accidents.
Keywords: cluster analysis, population ageing, rural roads, road safety
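As an illustration of the workflow described above, the sketch below clusters accident records and then regresses an outcome on the cluster-defining variables. It is an assumption-laden stand-in rather than the authors' code: a Gaussian mixture model with BIC-based model selection substitutes for the latent class model, and all column names and data are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' code): cluster accident records, then
# regress a placeholder accident count on the cluster-defining variables.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical accident-level indicators (ageing index, traffic intensity, layout score)
X = pd.DataFrame({
    "ageing_index": rng.normal(120, 30, 500),
    "traffic_aadt": rng.normal(4000, 1500, 500),
    "layout_score": rng.integers(1, 5, 500),
})

# Choose the number of latent classes by BIC (Gaussian mixture as an LCA stand-in)
models = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(2, 7)]
best = min(models, key=lambda m: m.bic(X))
X["cluster"] = best.predict(X)

# Within each cluster, regress a placeholder accident count on the indicators
y = rng.poisson(3, 500)  # hypothetical accident counts, not study data
for c, grp in X.groupby("cluster"):
    feats = grp[["ageing_index", "traffic_aadt", "layout_score"]]
    reg = LinearRegression().fit(feats, y[grp.index])
    print(f"cluster {c}: n={len(grp)}, R^2={reg.score(feats, y[grp.index]):.2f}")
```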
Procedia PDF Downloads 111
3371 An Exploration of German Tourists' Market Demand Towards Ethiopian Tourist Destinations
Authors: Dagnew Dessie Mengie
Abstract:
The purpose of this study was to investigate German tourists' demand for Ethiopian tourism destinations. The author has made every effort to identify the differences in German visitors' preferences for Ethiopia compared with the Egyptian, Kenyan, Tanzanian, and South African tourism sectors if they were invited to visit at the same time. However, international tourism demand for Ethiopia currently lags behind these African countries. Therefore, to offer demand-driven tourism products, the Ethiopian government and tour and travel operators need to understand the important factors that affect international tourists' decision to visit Ethiopian tourist destinations. The aim of this study was to analyze German tourists' demand for Ethiopian destinations, and the researcher sought to identify German tourists' preferences for Ethiopian tourist destinations compared to the above-mentioned African countries. Both quantitative and qualitative research methods were used to collect and analyse data for this study. The most significant data were collected through primary methods, i.e., surveys and interviews, which yielded a large number of responses and feedback from nine active German tourists, 12 Ethiopian tourism officials, four African embassies, and four well-functioning private tour companies, while secondary data were collected from books, journals, previous research, and websites. Based on the analysis of the information gathered from the interviews and questionnaires, the study disclosed that the majority of German tourists do not have a high demand for Ethiopian tourist destinations, for the following reasons: (1) Many Germans are fascinated by adventure and safari experiences or simply want to lie on the beach and relax; these interests have led them to look for other African countries that offer such options. (2) Poor infrastructure and transport problems contribute to the decreasing number of German tourists in the country. (3) Marketing by the Ethiopian Tourism Authority and its delegates is inadequate in advertising and in addressing the above concerns raised by the tourists.
Keywords: environmental benefits of tourism, social benefits of tourism, economic benefits of tourism, political factors on tourism
Procedia PDF Downloads 40
3370 Thermodynamic Analysis and Experimental Study of Agricultural Waste Plasma Processing
Authors: V. E. Messerle, A. B. Ustimenko, O. A. Lavrichshev
Abstract:
A large amount of manure and its irrational use negatively affect the environment. Compared with biomass fermentation, plasma processing of manure makes it possible to intensify the process of obtaining fuel gas, which consists mainly of synthesis gas (CO + H₂), and to increase plant productivity by 150–200 times. This is achieved due to the high temperature in the plasma reactor and a multiple reduction in waste processing time. This paper examines the plasma processing of biomass using the example of dried mixed animal manure (dung with a moisture content of 30%). Characteristic composition of dung, wt.%: H₂O – 30, C – 29.07, H – 4.06, O – 32.08, S – 0.26, N – 1.22, P₂O₅ – 0.61, K₂O – 1.47, CaO – 0.86, MgO – 0.37. The thermodynamic code TERRA was used to numerically analyze dung plasma gasification and pyrolysis. Plasma gasification and pyrolysis of dung were analyzed in the temperature range 300–3,000 K at a pressure of 0.1 MPa for the following thermodynamic systems: 100% dung + 25% air (plasma gasification) and 100% dung + 25% nitrogen (plasma pyrolysis). Calculations were conducted to determine the composition of the gas phase, the degree of carbon gasification, and the specific energy consumption of the processes. At an optimum temperature of 1,500 K, which provides both complete gasification of dung carbon and the maximum yield of combustible components (99.4 vol.% during dung gasification and 99.5 vol.% during pyrolysis), as well as decomposition of toxic compounds of furan, dioxin, and benz(a)pyrene, the following composition of combustible gas was obtained, vol.%: CO – 29.6, H₂ – 35.6, CO₂ – 5.7, N₂ – 10.6, H₂O – 17.9 (gasification) and CO – 30.2, H₂ – 38.3, CO₂ – 4.1, N₂ – 13.3, H₂O – 13.6 (pyrolysis). The specific energy consumption of gasification and pyrolysis of dung at 1,500 K is 1.28 and 1.33 kWh/kg, respectively. An installation with a DC plasma torch with a rated power of 100 kW and a plasma reactor with a dung capacity of 50 kg/h was used for the dung processing experiments. The dung was gasified in an air plasma jet (or a nitrogen plasma jet during pyrolysis), which provided a mass-average temperature in the reactor volume of at least 1,600 K. The organic part of the dung was gasified, and the inorganic part of the waste was melted. For pyrolysis and gasification of dung, the specific energy consumption was 1.5 kWh/kg and 1.4 kWh/kg, respectively. The maximum temperature in the reactor reached 1,887 K. At the outlet of the reactor, a gas of the following composition was obtained, vol.%: CO – 25.9, H₂ – 32.9, CO₂ – 3.5, N₂ – 37.3 (pyrolysis in nitrogen plasma); CO – 32.6, H₂ – 24.1, CO₂ – 5.7, N₂ – 35.8 (air plasma gasification). The specific heat of combustion of the combustible gas formed during pyrolysis and plasma-air gasification of the agricultural waste is 10,500 and 10,340 kJ/kg, respectively. Comparison of the integral indicators of dung plasma processing showed satisfactory agreement between calculation and experiment.
Keywords: agricultural waste, experiment, plasma gasification, thermodynamic calculation
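For readers who want to see how a heating value follows from a gas composition, the sketch below estimates the lower heating value of the reported gasification product gas from textbook molar heats of combustion of CO and H₂. This is only a rough mixture estimate under the stated assumptions; the abstract's reported heating value may be defined on a different basis (for example per kilogram of feedstock, or including minor species), so exact agreement is not expected.

```python
# Rough sketch: lower heating value (per kg of gas) of the air-plasma gasification
# product reported in the abstract, using standard molar heats of combustion.
# The constants are textbook values; minor/unlisted species are ignored.
composition = {"CO": 0.326, "H2": 0.241, "CO2": 0.057, "N2": 0.358}  # mole fractions
molar_mass = {"CO": 28.01, "H2": 2.016, "CO2": 44.01, "N2": 28.013}   # g/mol
lhv_molar = {"CO": 283.0, "H2": 241.8, "CO2": 0.0, "N2": 0.0}          # kJ/mol (LHV)

energy_per_mol = sum(x * lhv_molar[s] for s, x in composition.items())        # kJ per mol of mixture
mass_per_mol = sum(x * molar_mass[s] for s, x in composition.items()) / 1000  # kg per mol of mixture
print(f"estimated LHV ~ {energy_per_mol / mass_per_mol:.0f} kJ/kg of product gas")
```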
Procedia PDF Downloads 40
3369 Liquid Nitrogen as Fracturing Method for Hot Dry Rocks in Kazakhstan
Authors: Sotirios Longinos, Anna Loskutova, Assel Tolegenova, Assem Imanzhussip, Lei Wang
Abstract:
Hot dry rock (HDR) has substantial potential as a thermal energy source. It has been exploited through hydraulic fracturing, a well-developed technique for creating enhanced geothermal systems (EGS), to extract heat and generate electricity. These days, LN2 is being tested as an environmentally friendly fracturing fluid to generate densely interconnected crevices and thereby augment heat-exchange efficiency and production. This study experimentally examines the efficacy of LN2 cryogenic fracturing for granite samples from Kazakhstan using the immersion method. A comparison of two different experimental modes is carried out. The first mode is rock heating followed by liquid nitrogen treatment (heating with freezing time), and the second mode is repeated heating combined with liquid nitrogen treatment (heating with LN2 freezing-thawing cycles). The experimental results indicated that with multiple heating and LN2-treatment cycles, the permeability of granite first improves with an increasing number of cycles and later reaches a plateau after a certain number of cycles. On the other hand, density, P-wave velocity, uniaxial compressive strength, elastic modulus, and tensile strength show a downward trend with increasing heating and treatment cycles. The thermal treatment cycles do not seem to have an obvious effect on the Poisson's ratio. The rate of change of the granite rock properties decreases as the number of cycles increases; the deterioration of granite primarily happens within the first few cycles. The heating temperature during the cycles has an important influence on the deterioration of granite. More specifically, mechanical deterioration and permeability improvement become more pronounced as the heating temperature increases. LN2 fracturing offers many advantages over conventional fracturing methods, such as low water consumption, no need for chemical additives, and reduced reservoir damage. Based on the experimental observations, LN2 can work as a promising waterless fracturing fluid to stimulate hot dry rock reservoirs.
Keywords: granite, hydraulic fracturing, liquid nitrogen, Kazakhstan
Procedia PDF Downloads 162
3368 ROSgeoregistration: Aerial Multi-Spectral Image Simulator for the Robot Operating System
Authors: Andrew R. Willis, Kevin Brink, Kathleen Dipple
Abstract:
This article describes a software package called ROSgeoregistration intended for use with the Robot Operating System (ROS) and the Gazebo 3D simulation environment. ROSgeoregistration provides tools for the simulation, test, and deployment of aerial georegistration algorithms and is available at github.com/uncc-visionlab/rosgeoregistration. A model creation package is provided which downloads multi-spectral images from the Google Earth Engine database and, if necessary, incorporates these images into a single, possibly very large, reference image. Additionally, a Gazebo plugin is provided which uses the real-time sensor pose and an image formation model to generate simulated imagery from the specified reference image, along with related plugins for UAV-relevant data. The novelty of this work is threefold: (1) this is the first system to link the massive multi-spectral imaging database of Google's Earth Engine to the Gazebo simulator, (2) this is the first example of a system that can simulate geospatially and radiometrically accurate imagery from multiple sensor views of the same terrain region, and (3) integration with other UAS tools creates a new holistic UAS simulation environment to support UAS system and subsystem development where real-world testing would generally be prohibitive. Sensed imagery and ground-truth registration information are published to client applications, which can receive imagery synchronously with telemetry from other payload sensors, e.g., IMU, GPS/GNSS, barometer, and windspeed sensor data. To highlight functionality, we demonstrate ROSgeoregistration for simulating Electro-Optical (EO) and Synthetic Aperture Radar (SAR) image sensors and an example use case for developing and evaluating image-based UAS position feedback, i.e., pose estimation for image-based Guidance, Navigation and Control (GNC) applications.
Keywords: EO-to-EO, EO-to-SAR, flight simulation, georegistration, image generation, robot operating system, vision-based navigation
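To illustrate how a client might consume such synchronized data, the following minimal ROS 1 (rospy) sketch subscribes to an image stream and GPS telemetry and pairs them with an approximate time synchronizer. The topic names are hypothetical placeholders and are not taken from the ROSgeoregistration package.

```python
#!/usr/bin/env python
# Minimal ROS 1 sketch (not part of ROSgeoregistration itself): receive simulated
# imagery approximately synchronized with GPS telemetry and log the pairing.
import rospy
import message_filters
from sensor_msgs.msg import Image, NavSatFix

def callback(image_msg, fix_msg):
    # Timestamps come from the simulated sensors; here we only log them.
    rospy.loginfo("image %.3f s, fix (%.6f, %.6f)",
                  image_msg.header.stamp.to_sec(),
                  fix_msg.latitude, fix_msg.longitude)

def main():
    rospy.init_node("georegistration_client")
    image_sub = message_filters.Subscriber("/uav/camera/image_raw", Image)  # hypothetical topic
    fix_sub = message_filters.Subscriber("/uav/gps/fix", NavSatFix)         # hypothetical topic
    sync = message_filters.ApproximateTimeSynchronizer([image_sub, fix_sub],
                                                       queue_size=10, slop=0.05)
    sync.registerCallback(callback)
    rospy.spin()

if __name__ == "__main__":
    main()
```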
Procedia PDF Downloads 104
3367 Determination of the Volatile Organic Compounds, Antioxidant and Antimicrobial Properties of Microwave-Assisted Green Extracted Ficus Carica Linn Leaves
Authors: Pelin Yilmaz, Gizemnur Yildiz Uysal, Elcin Demirhan, Belma Ozbek
Abstract:
The edible fig plant, Ficus carica Linn, belongs to the Moraceae family, and its leaves are mainly considered agricultural waste after harvesting. It has been demonstrated in the literature that fig leaves have appealing properties such as high vitamin, fiber, amino acid, organic acid, and phenolic or flavonoid content. The extraction of these valuable products has therefore gained importance. Microwave-assisted extraction (MAE) is a method that uses microwave energy to heat the solvent, thereby transferring the bioactive compounds from the sample to the solvent. The main advantage of MAE is the rapid extraction of bioactive compounds. In the present study, MAE was applied to extract the bioactive compounds from Ficus carica L. leaves, and the effects of microwave power (180-900 W), extraction time (60-180 s), and solvent-to-sample ratio (10-30 mL/g) on the antioxidant properties of the leaves were investigated. Then, the volatile organic component profile was determined at the selected extraction point. Additionally, antimicrobial studies were carried out to determine the minimum inhibitory concentration of the microwave-extracted leaves. As a result, according to the data obtained from the experimental studies, the highest antimicrobial activity was obtained under the process parameters of 540 W, 180 s, and a 20 mL/g solvent-to-sample ratio. The volatile organic compound profile showed that the main compound was isobergapten, which belongs to the furanocoumarin family and exhibits anticancer, antioxidant, and antimicrobial activity besides promoting bone health. Acknowledgments: This work has been supported by the Yildiz Technical University Scientific Research Projects Coordination Unit under project number FBA-2021-4409. The authors would like to acknowledge the financial support from the Tubitak 1515 - Frontier R&D Laboratory Support Programme.
Keywords: Ficus carica Linn leaves, volatile organic component, GC-MS, microwave extraction, isobergapten, antimicrobial
Procedia PDF Downloads 80
3366 System Identification of Building Structures with Continuous Modeling
Authors: Ruichong Zhang, Fadi Sawaged, Lotfi Gargab
Abstract:
This paper introduces a wave-based approach for system identification of high-rise building structures with a pair of seismic recordings, which can be used to evaluate structural integrity and detect damage in post-earthquake structural condition assessment. The approach is fundamentally based on the wave features of generalized impulse and frequency response functions (GIRF and GFRF), i.e., wave responses at one structural location to an impulsive motion at another reference location, in the time and frequency domains respectively. With a pair of seismic recordings at the two locations, GFRF is obtainable as the Fourier spectral ratio of the two recordings, and GIRF is then found by inverse Fourier transformation of GFRF. With an appropriate continuous model for the structure, a closed-form solution of GFRF, and subsequently GIRF, can also be found in terms of wave transmission and reflection coefficients, which are related to the structural physical properties above the impulse location. Matching the two sets of GFRF and/or GIRF from the recordings and from the model helps identify structural parameters such as wave velocity or shear modulus. For illustration, this study examines the ten-story Millikan Library in Pasadena, California, with recordings of the Yorba Linda earthquake of September 3, 2002. The building is modelled as piecewise continuous layers, with which GFRF is derived as a function of such building parameters as impedance, cross-sectional area, and damping. GIRF can then be found in closed form for some special cases and numerically in general. Not only does this study reveal the influential building parameters in the wave features of GIRF and GFRF, it also shows some system-identification results, which are consistent with other vibration- and wave-based results. Finally, this paper discusses the effectiveness of the proposed model in system identification.
Keywords: wave-based approach, seismic responses of buildings, wave propagation in structures, construction
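The core signal-processing step described above, GFRF as a spectral ratio and GIRF as its inverse transform, can be sketched as follows. This is an illustrative implementation, not the authors' code; the water-level regularization used to stabilize the spectral division is an assumption, since the abstract does not specify one.

```python
# Illustrative sketch: GFRF as the Fourier spectral ratio of two recordings and
# GIRF as its inverse FFT, with an assumed water-level floor on the reference
# spectrum to keep the division stable.
import numpy as np

def gfrf_girf(rec_out, rec_ref, dt, water_level=0.05):
    n = len(rec_ref)
    X_out = np.fft.rfft(rec_out, n)
    X_ref = np.fft.rfft(rec_ref, n)
    floor = water_level * np.abs(X_ref).max()
    X_ref_reg = np.where(np.abs(X_ref) < floor,
                         floor * np.exp(1j * np.angle(X_ref)), X_ref)
    gfrf = X_out / X_ref_reg            # generalized frequency response function
    girf = np.fft.irfft(gfrf, n)        # generalized impulse response function
    freqs = np.fft.rfftfreq(n, dt)
    times = np.arange(n) * dt
    return freqs, gfrf, times, girf

# Example with synthetic records sampled at 100 Hz
dt = 0.01
t = np.arange(0, 40, dt)
ref = np.random.default_rng(1).normal(size=t.size)
out = np.convolve(ref, np.exp(-t[:200]) * np.sin(2 * np.pi * 1.7 * t[:200]),
                  mode="full")[:t.size]
freqs, gfrf, times, girf = gfrf_girf(out, ref, dt)
```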
Procedia PDF Downloads 233
3365 Fatal Road Accident Causer's Driving Aptitude in Hungary
Authors: A. Juhász, M. Fogarasi
Abstract:
Those who cause fatal traffic accidents are traumatized, which negatively influences their cognitive functions and their personality. In order to clarify how much the trauma of causing a fatal accident affects their driving skills and personality traits, the results of a psychological aptitude test and a personality test of drivers who carelessly caused fatal accidents and of drivers who did not cause any accidents were compared. The sample (N = 354) consists of randomly selected drivers from the Transportation Aptitude and Examination Centre database who caused fatal accidents (fatal group, n = 177) or did not cause accidents (control group, n = 177). The aptitude tests were taken between 2014 and 2019. The comparison of the two groups was done according to three aspects: 1. categories of aptitude (suitable, restricted, unsuited); 2. categories of causes (ability, personality, ability and personality) within the restricted or unsuited categories (altogether: non-suitable subgroups); 3. categories of ability and personality within the non-suitable subgroups regardless of the cause category. Within ability deficiency, the two groups include those whose ability factor is impaired or limited; the same holds for personality failure. Compared to the control group, the number of restricted drivers causing fatal accidents is significantly higher (p < .001), and the number of unsuited drivers is higher at a tendency level (p = .06). Compared to the control group, in the fatal non-suitable subgroup the proportion of restricted suitability and of unsuitability due exclusively to ability factors is significantly lower (p < .001). Restricted suitability and unsuitability due to personality factors are more prevalent in the fatal non-suitable subgroup (p < .001). Incapacity due to a combination of ability and personality factors is also significantly higher in the fatal group (p = .002). Compared to the control group, combined ability and personality factors are also significantly more frequent in the fatal non-suitable subgroup (p < .001). Overall, the control group is more eligible for driving than the drivers who caused fatalities. Ability and personality factors are significantly more frequent among fatal accident causers who are non-suitable for driving. Moreover, the concomitance of ability and personality factors occurs almost exclusively among drivers who caused fatal accidents. Further investigation is needed to understand the causes and how the aptitude test results of the fatal group could improve over time.
Keywords: aptitude, unsuited, fatal accident, ability, personality
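For context, a comparison of aptitude-category frequencies between two groups of this kind can be run as in the sketch below. The counts are placeholders, chosen only so that each row sums to the reported group size of 177, and the chi-square test is an assumed choice; the abstract does not state which test produced its p-values.

```python
# Minimal sketch of the kind of group comparison reported above. The counts are
# hypothetical placeholders, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

#                 suitable  restricted  unsuited
table = np.array([[90,        60,        27],    # fatal group   (n = 177, hypothetical)
                  [130,       35,        12]])   # control group (n = 177, hypothetical)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```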
Procedia PDF Downloads 143
3364 Carbon-Based Electrodes for Parabens Detection
Authors: Aniela Pop, Ianina Birsan, Corina Orha, Rodica Pode, Florica Manea
Abstract:
A carbon nanofiber-epoxy composite electrode has been investigated through voltammetric and amperometric techniques in order to detect parabens in aqueous solutions. The occurrence of these preservative compounds in the environment as emerging pollutants has been extensively studied in recent decades, and consequently, a rapid and reliable method for their quantification is required. In this study, methylparaben (MP) and propylparaben (PP) were chosen as representatives of the paraben class. The individual electrochemical detection of each paraben was successfully performed. Their electrochemical oxidation occurred at the same potential value. Therefore, their simultaneous quantification should be assessed electrochemically only as a general index of the paraben class, i.e., a cumulative signal corresponding to both MP and PP in solution. The influence of pH on the electrochemical signal was studied. Varying the pH between 1.3 and 9.0 allowed shifting the detection potential to a smaller value, which is highly desirable for electroanalysis. Also, the signal became better defined and higher sensitivity was achieved. Differential pulse voltammetry and square-wave voltammetry were exploited under the optimum pH conditions to improve the electroanalytical performance for paraben detection, and the operating conditions, i.e., the step potential, modulation amplitude and frequency, were selected. Chronoamperometry, applied as the simplest electrochemical detection method, led to worse sensitivity, probably due to a fouling effect on the electrode surface. The best electroanalytical performance was achieved by the pulsed voltammetric techniques, but the selection of the electrochemical technique depends on the concrete practical application. Good reproducibility of the voltammetry-based method using the carbon nanofiber-epoxy composite electrode was determined, and no interference effect was found for the cation and anion species that are common in the water matrix. Besides these characteristics, the long lifetime of the electrode gives the carbon nanofiber-epoxy composite electrode great potential for practical applications.
Keywords: carbon nanofiber-epoxy composite electrode, electroanalysis, methylparaben, propylparaben
Procedia PDF Downloads 225
3363 Characterisation of Chitooligomers Prepared with the Aid of Cellulase, Xylanase and Chitosanase
Authors: Anna Zimoch-Korzycka, Dominika Kulig, Andrzej Jarmoluk
Abstract:
The aim of this study was to obtain chitooligosaccharides with better functional properties from chitosan using three different enzyme preparations and to compare the products of enzymatic hydrolysis. Commercially available cellulase (CL), xylanase (X) and chitosanase (CS) preparations were used to investigate hydrolytic activity on low-molecular-weight chitosan (CH) with a degree of deacetylation (DD) of 75-85%. It has been reported that CL and X have side activities of other enzymes, such as β-glucanase or β-glucosidase, and that the CS enzyme has a side activity of chitinase. Each preparation was used at 1000 U of activity and under the same reaction conditions. The degree of deacetylation and the molecular weight of chitosan were determined using titration and viscometric methods, respectively. The hydrolytic activity of the enzyme preparations on chitosan was monitored by dynamic viscosity measurements. After a 4 h reaction with stirring, the solutions were filtered and the chitosan oligomers were separated with methanol solution into two fractions: precipitate (A) and supernatant (B). Fourier-transform infrared spectroscopy was used to characterize the structural changes of the chitosan oligomer fractions and the initial chitosan. Furthermore, the solubility of the lyophilized hydrolytic mixture (C) and of the two chitooligomer fractions (A, B) from each enzymatic hydrolysis was assayed. The antioxidant activity of the chitosan oligomers was evaluated as DPPH free radical scavenging activity. The dynamic viscosity measured after addition of the enzyme preparation to the chitosan solution decreased dramatically over time in the sample with X in comparison to the solution without enzyme. For the mixtures with CL and CS, lower viscosities were also recorded, but not as low as the ones with X. The A and B fractions obtained by xylanase hydrolysis were characterized by the most similar viscosities, 15 mPa·s and 9 mPa·s, respectively. Structural changes of the chitosan oligomers A, B, and C, and their differences related to the various enzyme preparations used, were confirmed. Fraction A could not be filtered, so its water solubility was not recorded. The solubility of the supernatants was approximately 95%, higher than that of the hydrolytic mixture. The DPPH radical scavenging effect of the A, B, and C samples was highest for the X products, at approximately 13%, 17%, and 19%, respectively. In summary, a mixture of chitooligomers may be useful for the design of edible protective coatings due to its improved biophysical properties.
Keywords: cellulase, xylanase, chitosanase, chitosan, chitooligosaccharides
Procedia PDF Downloads 326
3362 Feasibility Study on the Application of Waste Materials for Production of Sustainable Asphalt Mixtures
Authors: Farzaneh Tahmoorian, Bijan Samali, John Yeaman
Abstract:
Road networks have been expanding all over the world during the past few decades to meet the increasing freight volumes created by population growth and industrial development. At the same time, the rate of generation of solid wastes in society is increasing with population growth, technological development, and changes in people's lifestyles. Thus, the management of solid wastes has become an acute problem. Accordingly, there is a need for greater efficiency in the construction and maintenance of road networks and in reducing the overall cost, especially the cost of natural materials such as aggregates. An efficient means of reducing the construction and maintenance costs of road networks is to replace natural (virgin) materials with secondary, recycled materials. Recycling will also help to reduce pressure on landfills and the demand for extraction of natural virgin materials, thus ensuring sustainability. Application of solid wastes in asphalt layers reduces not only the environmental issues associated with waste disposal but also the demand for virgin materials, which subsequently results in sustainability. Therefore, this research aims to investigate the feasibility of applying waste materials such as glass, construction and demolition wastes, etc. as alternative materials in pavement construction, particularly flexible pavements. To this end, various combinations of different waste materials in certain percentages are considered in designing the asphalt mixture. One of the goals of this research is to determine the optimum percentage of all these materials in the mixture. This is done through a series of tests to evaluate the volumetric properties and resilient modulus of the mixture. The information and data collected from these tests are used to select adequate samples for further assessment through advanced tests, such as the triaxial dynamic test and the fatigue test, in order to investigate the asphalt mixture's resistance to permanent deformation and cracking. This paper presents the results of these investigations on the application of waste materials in asphalt mixtures for the production of a sustainable asphalt mix.
Keywords: asphalt, glass, pavement, recycled aggregate, sustainability
Procedia PDF Downloads 236
3361 Development and Validation of a Carbon Dioxide TDLAS Sensor for Studies on Fermented Dairy Products
Authors: Lorenzo Cocola, Massimo Fedel, Dragiša Savić, Bojana Danilović, Luca Poletto
Abstract:
An instrument for the detection and evaluation of gaseous carbon dioxide in the headspace of closed containers has been developed in the context of the Packsensor Italian-Serbian joint project. The device is based on Tunable Diode Laser Absorption Spectroscopy (TDLAS) with a Wavelength Modulation Spectroscopy (WMS) technique in order to accomplish a non-invasive measurement inside closed containers of fermented dairy products (yogurts and fermented cheese in cups and bottles). The purpose of this instrument is the continuous monitoring of carbon dioxide concentration during incubation and storage of products over the whole shelf life of the product, in the presence of different microorganisms. The instrument's optical front end has been designed to be integrated into a thermally stabilized incubator. An embedded computer provides processing of spectral artifacts and storage of an arbitrary set of calibration data, allowing properly calibrated measurements on many samples (cups and bottles) of the different shapes and sizes commonly found in retail distribution. A calibration protocol has been developed in order to be able to calibrate the instrument in the field, including on containers which are notoriously difficult to seal properly. This calibration protocol is described and evaluated against reference measurements obtained through an industry-standard (sampling) carbon dioxide metering technique. Several sets of validation test measurements on different containers are reported. Two test recordings of carbon dioxide concentration evolution are shown as examples of instrument operation. The first demonstrates the ability to monitor rapid yeast growth in a contaminated sample through the increase of headspace carbon dioxide. The other experiment shows the dissolution transient with a non-saturated liquid medium in the presence of a carbon-dioxide-rich headspace atmosphere.
Keywords: TDLAS, carbon dioxide, cups, headspace, measurement
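As background on the WMS technique mentioned above, the sketch below performs a generic software lock-in demodulation of a detector trace at the second harmonic (2f) of the laser modulation, the quantity that is typically mapped to gas concentration through calibration. It is a textbook-style illustration under assumed parameters, not the instrument's actual processing chain.

```python
# Generic WMS sketch (assumed parameters, not the Packsensor firmware): software
# lock-in demodulation of a detector trace at 2f of the laser modulation.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def wms_2f_demodulate(detector, fs, f_mod, cutoff_hz=1000.0):
    t = np.arange(detector.size) / fs
    ref_x = np.cos(2 * np.pi * 2 * f_mod * t)     # in-phase 2f reference
    ref_y = np.sin(2 * np.pi * 2 * f_mod * t)     # quadrature 2f reference
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    x = sosfiltfilt(sos, detector * ref_x)
    y = sosfiltfilt(sos, detector * ref_y)
    return np.hypot(x, y)                         # phase-insensitive 2f magnitude

# Synthetic example: 10 kHz modulation sampled at 1 MHz with a weak 2f component
fs, f_mod = 1.0e6, 10.0e3
t = np.arange(0, 0.02, 1 / fs)
rng = np.random.default_rng(0)
detector = (1.0 + 0.05 * np.cos(2 * np.pi * f_mod * t)
            + 0.002 * np.cos(2 * np.pi * 2 * f_mod * t)
            + 1e-4 * rng.standard_normal(t.size))
r2f = wms_2f_demodulate(detector, fs, f_mod)      # scales with the CO2 absorption
```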
Procedia PDF Downloads 324
3360 Transformation of Aluminum Unstable Oxyhydroxides in Ultrafine α-Al2O3 in Presence of Various Seeds
Authors: T. Kuchukhidze, N. Jalagonia, Z. Phachulia, R. Chedia
Abstract:
Ceramics obtained on the basis of aluminum oxide have a wide application range because of their unique properties, for example, wear resistance, dielectric characteristics, and the ability to operate at high temperatures and in corrosive atmospheres. Low-temperature synthesis of α-Al2O3 is an energy-efficient process and is relevant to developing technologies for corundum ceramics fabrication. In the present work, the possibilities of low-temperature transformation of oxyhydroxides into α-Al2O3 in the presence of small amounts of rare-earth element compounds (as well as Th and Re) are discussed. Unstable aluminium oxyhydroxides were obtained by hydrolysis of aluminium isopropoxide, nitrates, sulphate, and chloride in an alkaline environment at temperatures of 80-90ºC. β-Al(OH)3 was obtained from aluminium powder by ultrasonic treatment. Drying of the oxyhydroxide sol was conducted in the presence of various types of seeds, whose amount reached 0.1-0.2 wt.%. Neodymium, holmium, thorium, lanthanum, cerium, gadolinium and dysprosium nitrates and rhenium carbonyls were used as seeds and were added to the sol specimens in amounts of 0.1-0.2 wt.% calculated on the metals. Annealing of the obtained gels was carried out at 70–1100ºC for 2 h. The unseeded specimen transforms into α-Al2O3 at 1100ºC. At this temperature, in the presence of lanthanum and gadolinium, the transformation proceeds to 70-85%. In the presence of thorium, stabilization of the γ- and θ-phases takes place. It is established that thorium inhibits α-phase formation at 1100ºC, whereas in all other doped specimens the α-phase forms at lower temperatures (1000-1050ºC). During the work, the following devices were used: an X-ray diffractometer DRON-3M (Cu-Kα, Ni filter, 2º/min), a high-temperature vacuum furnace OXY-GON, electronic scanning microscopes Nikon ECLIPSE LV 150 and NMM-800TRF, a planetary mill Pulverisette 7 premium line, a SHIMADZU Dynamic Ultra Micro Hardness Tester DUH-211S, and an Analysette 12 DynaSizer.
Keywords: α-Alumina, combustion, phase transformation, seeding
Procedia PDF Downloads 393
3359 Comparison of the Postoperative Analgesic Effects of Morphine, Paracetamol, and Ketorolac in Patient-Controlled Analgesia in the Patients Undergoing Open Cholecystectomy
Authors: Siamak Yaghoubi, Vahideh Rashtchi, Marzieh Khezri, Hamid Kayalha, Monadi Hamidfar
Abstract:
Background and objectives: Effective postoperative pain management in abdominal surgeries, which are painful procedures, plays an important role in reducing postoperative complications and increasing patient satisfaction. There are many techniques for pain control, one of which is Patient-Controlled Analgesia (PCA). The aim of this study was to compare the analgesic effects of morphine, paracetamol and ketorolac in patients undergoing open cholecystectomy using the PCA method. Material and Methods: This randomized controlled trial was performed on 330 ASA (American Society of Anesthesiologists) I-II patients (three equal groups, n = 110) who were scheduled for elective open cholecystectomy in Shahid Rjaee hospital of Qazvin, Iran, from August 2013 until September 2015. All patients were managed with general anesthesia using the TIVA (Total Intravenous Anesthesia) technique. The control group received morphine with a maximum dose of 0.02 mg/kg/h, the paracetamol group received paracetamol with a maximum dose of 1 mg/kg/h, and the ketorolac group received ketorolac with a maximum daily dose of 60 mg, all using the IV-PCA method. Pain, nausea, hemodynamic variables (BP and HR), pruritus, arterial oxygen desaturation, patient satisfaction and pain score were measured every two hours for 8 hours following the operation in all groups. Results: There were no significant differences in demographic data between the three groups. There was a statistically significant difference in the mean pain score at all times between the morphine and paracetamol, morphine and ketorolac, and paracetamol and ketorolac groups (p < 0.001). Results indicated a reduction with time in the mean level of postoperative pain in all three groups. At all times, the mean level of pain in the ketorolac group was less than that in the other two groups (p < 0.001). Conclusion: According to the results of this study, ketorolac is more effective than morphine and paracetamol for postoperative pain control in patients undergoing open cholecystectomy using the PCA method.
Keywords: analgesia, cholecystectomy, ketorolac, morphine, paracetamol
Procedia PDF Downloads 197
3358 A Systematic Review of the Predictors, Mediators and Moderators of the Uncanny Valley Effect in Human-Embodied Conversational Agent Interaction
Authors: Stefanache Stefania, Ioana R. Podina
Abstract:
Background: Embodied Conversational Agents (ECAs) are revolutionizing education and healthcare by offering cost-effective, adaptable, and portable solutions. Research on the Uncanny Valley effect (UVE) involves various embodied agents, including ECAs; however, despite efforts to achieve the optimal level of anthropomorphism, there is no consensus on how to overcome the uncanniness problem. Objectives: This systematic review aims to identify the user characteristics, agent features, and context factors that influence the UVE. Additionally, this review provides recommendations for creating effective ECAs and conducting proper experimental studies. Methods: We conducted a systematic review following the PRISMA 2020 guidelines. We included quantitative, peer-reviewed studies that examined human-ECA interaction. We identified 17,122 relevant records from the ACM Digital Library, IEEE Xplore, Scopus, ProQuest, and Web of Science. The quality assessment of the predictors, mediators, and moderators adheres to the guidelines set by prior systematic reviews. Results: Based on the included studies, it can be concluded that females and younger people perceive ECAs as more attractive, although inconsistent findings exist in the literature. ECAs characterized by extraversion, emotional stability, and agreeableness are considered more attractive. Facial expressions also play a role in the UVE, with some studies indicating that ECAs with more facial expressions are considered more attractive, although this effect is not consistent across all studies. Few studies have explored contextual factors, but these are nonetheless crucial: the interaction scenario and the exposure time are important circumstances in human-ECA interaction. Conclusions: The findings highlight a growing interest in ECAs, which have seen significant developments in recent years. Given this evolving landscape, investigating the risk of the UVE is a promising line of research.
Keywords: human-computer interaction, uncanny valley effect, embodied conversational agent, systematic review
Procedia PDF Downloads 81
3357 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics
Authors: Titus A. Beu
Abstract:
Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out for its high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints on efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of structural and dynamical properties of PEI with experimental data. In a second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In a third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects which are crucial for the realistic modeling of DNA-PEI polyplexes, such as options for treating electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500,000 beads) shed light on the mechanism and provide time scales for DNA polyplex formation in dependence on PEI chain size and protonation pattern. The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles, rather than on changes of the DNA-strand curvature. The insights gained are expected to be of significant help in designing effective gene-delivery applications.
Keywords: DNA condensation, gene delivery, polyethyleneimine, molecular dynamics
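One standard step in deriving CG potentials from atomistic distributions, of the kind used when building such a force field, is direct Boltzmann inversion. The sketch below shows that step alone on a synthetic bond-length distribution; the actual parameterization described above involves further iteration and property matching that are not shown here.

```python
# Minimal sketch of direct Boltzmann inversion: U(q) = -kB*T*ln p(q), applied to a
# synthetic distribution. This illustrates the technique, not the paper's full
# MARTINI parameterization workflow.
import numpy as np

kB = 0.0083145  # gas constant in kJ/(mol*K), i.e. Boltzmann constant in MD units
T = 300.0       # temperature in K

def boltzmann_invert(histogram):
    """Convert a normalized bond-length (or angle) distribution into a potential."""
    p = np.asarray(histogram, dtype=float)
    p = np.where(p > 0, p, np.nan)   # avoid log(0) where the state is unsampled
    U = -kB * T * np.log(p)
    return U - np.nanmin(U)          # shift so the minimum of the potential is zero

# Example with a synthetic Gaussian bond-length distribution (nm)
bins = np.linspace(0.30, 0.60, 61)
hist = np.exp(-0.5 * ((bins - 0.47) / 0.03) ** 2)
hist /= hist.sum()
U = boltzmann_invert(hist)           # tabulated potential over `bins`, in kJ/mol
```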
Procedia PDF Downloads 120
3356 Makerspaces as Centers of Innovation: An Assessment of the Impact of Technology Incubation Centers in Nigeria
Authors: Bisi Olawoyin
Abstract:
The idea of knowledge sharing facilitated by the internet and complemented by a collaborative offline process in the form of shared workshops called makerspaces has become an attractive economic development agenda worldwide. Towards this end, Nigeria has established a number of Technology Incubation Centers (TICs) across the country with a view to using them as institutional mechanisms for commercializing research and development results, thus helping to promote venture creation and economic development. This study therefore examines the impact of nurturing by the TICs on the performance of selected incubated enterprises that have grown into medium-scale businesses in different sectors of the economy. The objective is to determine the extent to which the process of incubation has contributed to their growth in relation to similar businesses that developed outside the TICs. Six enterprises nurtured by TICs and six others developed outside them were selected for the study. Data were collected for the twelve enterprises covering their first five years of operation. Performance in terms of annual turnover, market share, and product range was analysed using scatter diagrams plotting these variables against time and comparing TIC and non-TIC enterprises. Results showed an initial decline in performance for most of the incubatees in the first two years, due to sluggish adjustment to the withdrawal of subsidies enjoyed at the TICs. However, four of them were able to catch up with improved performance and surpass their non-TIC counterparts consistently from the third year. Analysis of year-on-year performance also showed average growth rates of 7% and 5%, respectively, for TIC and non-TIC enterprises. The study therefore concludes that TICs have a great role to play in nurturing new, innovative businesses, but sees the need for government to address the provision of critical facilities, especially electricity and utilities, which constitute critical cost components for businesses. It must also address the issue of investment grants and loans, including the development of technology/industrial parks, to boost business survival.
Keywords: entrepreneurship, incubation, innovation, makerspaces
Procedia PDF Downloads 221
3355 Relocation of Livestock in Rural Canakkale Province Using Remote Sensing and GIS
Authors: Melis Inalpulat, Tugce Civelek, Unal Kizil, Levent Genc
Abstract:
Livestock production is one of the most important components of the rural economy. Due to urban expansion, rural areas close to expanding cities transform into urban districts over time. However, legislation places restrictions on livestock farming in such administrative units, since these operations tend to create environmental concerns like odor problems resulting from excessive manure production. Therefore, the existing animal operations should be moved away from the settlement areas. This paper focuses on the determination of suitable lands for livestock production in Canakkale province of Turkey using remote sensing (RS) data and GIS techniques. To achieve this goal, Formosat 2 and Landsat 8 imagery, the ASTER DEM, 1:25000-scale soil maps, village boundaries, and village livestock inventory records were used. The study was conducted using a suitability analysis, which evaluates the land in terms of limitations and potentials, and the suitability range was categorized as Suitable (S) and Non-Suitable (NS). Limitations included the distances from main roads and crossroads, water resources and settlements, while potentials were appropriate values of slope, land use capability and land use/land cover status. Village-based S land distribution results were presented and compared with the livestock inventories. Results showed that approximately 44230 ha is inappropriate because of the distance limitations for roads, etc. (NS). Moreover, according to the LULC map, 71052 ha consists of forests, olive and other orchards, and thus may not be suitable for building such structures (NS). In comparison, it was found that there is a total of 1228 ha of S land within the study area. The village-based findings indicated that in some villages livestock production continues in NS areas. Finally, it was suggested that organized livestock zones could be constructed to serve more than one village, after a detailed analysis complemented by political decisions, the opinions of the local people, etc.
Keywords: GIS, livestock, LULC, remote sensing, suitable lands
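A minimal sketch of the overlay logic behind such a suitability analysis is given below, combining raster layers into a Suitable/Non-Suitable mask with simple thresholds. The threshold values and land-cover codes are placeholders, not the limits actually used in the study.

```python
# Illustrative raster-overlay sketch (not the authors' GIS workflow). All
# thresholds, codes and data are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(42)
shape = (500, 500)                        # hypothetical raster grid
slope = rng.uniform(0, 30, shape)         # percent slope from the DEM
dist_road = rng.uniform(0, 3000, shape)   # metres to nearest main road/crossroad
dist_water = rng.uniform(0, 3000, shape)  # metres to nearest water resource
dist_settlement = rng.uniform(0, 5000, shape)
lulc = rng.integers(1, 6, shape)          # 1=cropland ... 5=forest/orchard (hypothetical codes)

suitable = (
    (slope < 12)                  # gentle enough terrain
    & (dist_road > 250)           # keep clear of main roads and crossroads
    & (dist_water > 300)          # protect water resources
    & (dist_settlement > 500)     # odour buffer around settlements
    & np.isin(lulc, [1, 2])       # exclude forest, olive and other orchards
)
print(f"Suitable share of the grid: {suitable.mean():.1%}")
```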
Procedia PDF Downloads 298
3354 Spatiotemporal Variability in Rainfall Trends over Sinai Peninsula Using Nonparametric Methods and Discrete Wavelet Transforms
Authors: Mosaad Khadr
Abstract:
Knowledge of the temporal and spatial variability of rainfall trends is of great concern for efficient water resource planning and management. In this study, annual, seasonal and monthly rainfall trends over the Sinai Peninsula were analyzed using absolute homogeneity tests, the nonparametric Mann–Kendall (MK) test and Sen's slope estimator. The homogeneity of the rainfall time series was examined using four absolute homogeneity tests, namely the Pettitt test, the standard normal homogeneity test, the Buishand range test, and the von Neumann ratio test. Further, the sequential change in the trend of annual and seasonal rainfall was examined using the sequential MK (SQMK) method. Then, trend analysis based on the discrete wavelet transform (DWT) technique in conjunction with the SQMK method was performed. The spatial patterns of the detected rainfall trends were investigated using geostatistical and deterministic spatial interpolation techniques. The results of applying the Mann–Kendall test to the data series (at the 5% significance level) highlighted that rainfall was generally decreasing in January, February, March, November and December, as well as in wet-season and annual rainfall. A significant decreasing trend in winter and annual rainfall was inferred based on the Mann–Kendall rank statistics and the linear trend. Further, the discrete wavelet transform (DWT) analysis reveals that, in general, intra- and inter-annual events (up to 4 years) are more influential in affecting the observed trends. The nature of the trend captured by both methods is similar for all of the cases. On the basis of the spatial trend analysis, significant rainfall decreases were also noted at the investigated stations. Overall, significant downward trends in winter and annual rainfall over the Sinai Peninsula were observed during the study period.
Keywords: trend analysis, rainfall, Mann–Kendall test, discrete wavelet transform, Sinai Peninsula
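For reference, the trend statistics named above can be computed as in the following sketch, which implements the Mann–Kendall Z statistic and Sen's slope for a single series. It assumes no serial correlation, omits the tie correction, and runs on a synthetic series rather than the study's data.

```python
# Minimal Mann-Kendall test and Sen's slope (no tie correction, no pre-whitening).
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return the MK Z statistic, two-sided p-value, and Sen's slope for one series."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0     # variance of S without ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    sen = np.median([(x[j] - x[i]) / (j - i)
                     for i in range(n - 1) for j in range(i + 1, n)])
    return z, p, sen

# Synthetic annual rainfall series (mm) with an imposed downward trend
rng = np.random.default_rng(7)
rain = 120 - 0.6 * np.arange(40) + rng.normal(0, 15, 40)
z, p, sen_slope = mann_kendall(rain)
print(f"Z = {z:.2f}, p = {p:.3f}, Sen's slope = {sen_slope:.2f} mm/yr")
```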
Procedia PDF Downloads 170
3353 A Study on How to Influence Players Interactive Behavior of Victory or Defeat in Party Games
Authors: Shih-Chieh Liao, Cheng-Yan Shuai
Abstract:
"Party game" is a game mode that enables players to maintain a good social and interactive experience. The common game modes include Teamwork, Team competitive, Independent competitive, Battle Royale. Party games are defined as a game with easy rules, easy to play, quickly spice up a party, and support four to six players. It also needs to let the player feel satisfied no matter victory or defeat. However, players may feel negative or angry when the game is imbalanced, especially when they play with teammates. Some players care about winning or losing, and they will blame it on the game mechanics. What is more serious is that the player will cause the argument, which is unnecessary. These behaviors that trigger quarrels and negative emotions often originate from the player's determination of the victory and the ratio of victory during the competition. In view of this, our research invited a group of subjects to the experiment, which is going to inspect player’s emotions by Electromyography (EMG) and Electrodermal Activity (EDA) when they are playing party games with others. When a player wins or loses, the negative and positive feeling will be recorded from the game beginning to the end. At the same time, physiologic and emotional reactions are also being recorded in each part of the game. The game will be designed as telling the interaction when players are in the quest of a party game. The experiment content includes the emotional changes affected by the physiological values of game victory and defeat between “player against friend” and “player against stranger.” Through this experiment, the balance between winners and losers lies in the basis of good game interaction and game interaction in the game and explore the emotional positive and negative effects caused by the result of the party game. The result shows that “player against friend” has a significant negative emotion and significant positive emotion at “player against stranger.” According to the result, the player's experience will be affected with winning rate or form when they play the party game. We suggest the developer balance the game with our experiment method to let players get a better experience.Keywords: party games, biofeedback, emotional responses, user experience, game design
Procedia PDF Downloads 163
3352 Study of Surface Water Quality in the Wadi El Harrach for Its Use in the Artificial Groundwater Recharge of the Mitidja, North Algeria
Authors: M. Meddi, A. Boufekane
Abstract:
The Mitidja coastal groundwater, which extends over an area of 1450 km2, is a strategic resource in the Algiers region. The high dependence of the regional economy on the use of this groundwater forces us to have recourse to its artificial recharge from the Wadi El Harrach in its upstream part. This system of artificial recharge has shown its effectiveness in the development of water resources, as reported in work carried out in several regions of the world. The objectives of this study are to increase the water reserves through infiltration; to raise the water level, and maintain good water quality, in wells and boreholes; to reduce losses to the sea; and to address seawater intrusion by maintaining the balance of the freshwater-saltwater interface in the downstream part of the groundwater basin. After analyzing the situation, it was noticed that qualitative monitoring of the wadi water intended for groundwater recharge had to be carried out. For this purpose, during three successive years (2010, 2011, and 2012), water was sampled monthly in the upstream part of the Wadi El Harrach for chemical analysis. The variation of the suspended sediment concentration was also measured. This monitoring aims to characterize the water quality and avoid clogging in the proposed recharge area. The laboratory analyses performed over the three years showed good chemical quality, but the waters are heavily loaded with suspended matter. We noticed that these fine particles come from the grinding of limestone at a sandpit located upstream of the proposed recharge area. This problem can be solved by taking the water supply upstream of the sandpit. For the recharge, we propose the method of dual-use wells, which can be used for both water supply and extraction. This solution is inexpensive in our case and could easily be implemented, as wells have already been drilled in the upstream part. Over time, this solution increases the piezometric level and also reduces groundwater contamination by saltwater in the downstream part.
Keywords: water quality, artificial groundwater recharge, Mitidja, North Algeria
Procedia PDF Downloads 2873351 Variations in Heat and Cold Waves over Southern India
Authors: Amit G. Dhorde
Abstract:
It is now well established that global surface air temperatures have increased significantly during the period following the industrial revolution. One of the main predictions of climate change is that the occurrence of extreme weather events will increase in the future. In many regions of the world, high-temperature extremes have already begun to occur with rising frequency. The main objective of the present study is to understand spatial and temporal changes in days with heat and cold wave conditions over southern India. The study area covers the part of India that lies south of the Tropic of Cancer. To fulfill the objective, daily maximum and minimum temperature data for 80 stations were collected for the period 1969-2006 from the National Data Center of the India Meteorological Department. After assessing the homogeneity of the data, 62 stations were finally selected for the study. Heat and cold waves were classified as slight, moderate, and severe based on the criteria given by the India Meteorological Department. For every year, the number of days experiencing heat and cold wave conditions was computed, and these counts were analyzed with linear regression to detect any existing trend. Further, the study period was divided into four decades to investigate the decadal frequency of heat and cold waves. The results revealed that the average annual temperature over southern India shows an increasing trend, which signifies warming over this area. Slight cold waves during the winter season have been decreasing at the majority of stations, and moderate cold waves show a similar pattern at most stations, indicating warming winters over the region. Other extreme indices, such as extremely hot days, hot days, very cold nights, and cold nights, were also analyzed; this analysis revealed that nights are becoming warmer and that days are getting warmer over some regions as well.Keywords: heat wave, cold wave, southern India, decadal frequency
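The station-level trend analysis described above can be sketched as follows; this is a minimal Python illustration that assumes a fixed temperature threshold and synthetic daily data, not the IMD heat-wave criteria or the actual 1969-2006 records.

```python
# Illustrative sketch (assumed threshold and synthetic data): counting heat
# wave days per year from daily maximum temperature and fitting a linear
# trend, in the spirit of the station-level analysis described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1969, 2007)

def annual_heat_wave_days(daily_tmax, threshold=40.0):
    """Count days whose maximum temperature exceeds a fixed threshold.
    The 40 deg C cut-off is a placeholder, not the IMD criterion."""
    return int(np.sum(daily_tmax > threshold))

counts = []
for i, _ in enumerate(years):
    # Synthetic daily maxima for one station (deg C); a real analysis would use IMD data.
    daily_tmax = rng.normal(loc=33.0 + 0.02 * i, scale=4.0, size=365)
    counts.append(annual_heat_wave_days(daily_tmax))

slope, intercept, r, p, stderr = stats.linregress(years, np.array(counts))
print(f"trend = {slope:.3f} days/year, p = {p:.3f}")
```

Repeating this per station and per wave category (slight, moderate, severe) yields the spatial trend maps and decadal frequencies the abstract refers to.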
Procedia PDF Downloads 1283350 Reinforcement Learning For Agile CNC Manufacturing: Optimizing Configurations And Sequencing
Authors: Huan Ting Liao
Abstract:
In a typical manufacturing environment, computer numerical control (CNC) machining is essential for automating production through precise computer-controlled tool operations, significantly enhancing efficiency and ensuring consistent product quality. However, traditional CNC production lines often rely on manual loading and unloading, limiting operational efficiency and scalability. Although automated loading systems have been developed, they frequently lack sufficient intelligence and configuration efficiency, requiring extensive setup adjustments for different products and impacting overall productivity. This research addresses the job shop scheduling problem (JSSP) in CNC machining environments, aiming to minimize total completion time (makespan) and maximize CNC machine utilization. We propose a novel approach using reinforcement learning (RL), specifically the Q-learning algorithm, to optimize scheduling decisions. The study simulates the JSSP, incorporating robotic arm operations, machine processing times, and work order demand allocation to determine optimal processing sequences. The Q-learning algorithm enhances machine utilization by dynamically balancing workloads across CNC machines, adapting to varying job demands and machine states. This approach offers robust solutions for complex manufacturing environments by automating decision-making processes for job assignments. Additionally, we evaluate various layout configurations to identify the most efficient setup. By integrating RL-based scheduling optimization with layout analysis, this research aims to provide a comprehensive solution for improving manufacturing efficiency and productivity in CNC-based job shops. The proposed method's adaptability and automation potential promise significant advancements in tackling dynamic manufacturing challenges.Keywords: job shop scheduling problem, reinforcement learning, operations sequence, layout optimization, q-learning
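A minimal sketch of the Q-learning idea outlined above, applied to a simplified job-assignment variant of the problem: jobs arrive in a fixed order and the agent chooses which of two machines receives each job so as to keep the makespan low. The processing times, state encoding, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Q-learning on a toy job-assignment problem: minimize makespan by choosing
# which machine processes each job. Data and hyperparameters are hypothetical.
import random
from collections import defaultdict

JOBS = [4, 2, 7, 3, 5, 1, 6, 2]   # processing times (hypothetical)
N_MACHINES = 2
ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.95, 0.2, 5000

def discretize(loads, job_idx):
    # State: which job is next plus which machine is currently less loaded.
    lighter = min(range(N_MACHINES), key=lambda m: loads[m])
    return (job_idx, lighter)

Q = defaultdict(float)

for _ in range(EPISODES):
    loads = [0] * N_MACHINES
    for j, p in enumerate(JOBS):
        s = discretize(loads, j)
        if random.random() < EPSILON:                      # epsilon-greedy exploration
            a = random.randrange(N_MACHINES)
        else:
            a = max(range(N_MACHINES), key=lambda m: Q[(s, m)])
        old_makespan = max(loads)
        loads[a] += p
        reward = -(max(loads) - old_makespan)              # penalize makespan growth
        if j + 1 < len(JOBS):
            s_next = discretize(loads, j + 1)
            best_next = max(Q[(s_next, m)] for m in range(N_MACHINES))
        else:
            best_next = 0.0                                # terminal state
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])

# Greedy rollout with the learned action values.
loads = [0] * N_MACHINES
for j, p in enumerate(JOBS):
    s = discretize(loads, j)
    a = max(range(N_MACHINES), key=lambda m: Q[(s, m)])
    loads[a] += p
print("machine loads:", loads, "makespan:", max(loads))
```

A full JSSP formulation would extend the state with machine availability times, robotic-arm status, and outstanding work-order demand, but the update rule and epsilon-greedy action selection remain the same.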
Procedia PDF Downloads 243349 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
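The ensemble strategy described above, training several classifiers and averaging their predicted probabilities, can be sketched as follows; the synthetic features stand in for the weather and pollutant inputs, and the model settings are assumptions rather than the framework's actual configuration.

```python
# Hedged sketch of the ensemble idea: three classifiers (logistic regression,
# random forest, neural network) are trained on the same features and their
# predicted probabilities are averaged via soft voting. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder features: e.g. season, weekday flag, forecast temperature/wind,
# previous-day pollutant levels. Label: 1 = unhealthy air quality expected.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=42)),
    ],
    voting="soft",          # average the predicted class probabilities
)
ensemble.fit(X_train, y_train)
print(f"held-out accuracy: {ensemble.score(X_test, y_test):.2%}")
```

Retraining this ensemble as new observations arrive corresponds to the self-adjusting behaviour the framework describes, and the random forest's feature importances offer one route to the "mean decrease in accuracy" ranking of predictor variables.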
Procedia PDF Downloads 1273348 Sustainability Analysis and Quality Assessment of Rainwater Harvested from Green Roofs: A Review
Authors: Mst. Nilufa Sultana, Shatirah Akib, Muhammad Aqeel Ashraf, Mohamed Roseli Zainal Abidin
Abstract:
Most people today are aware that global climate change is not just a scientific theory but a fact with worldwide consequences. Global climate change is driven by rapid urbanization, industrialization, high population growth, and the current vulnerability of climatic conditions. Water is becoming scarce as a result of global climate change. To mitigate the problems arising from global climate change and its drought effects, harvesting rainwater from green roofs, an environmentally friendly and versatile technology, is becoming one of the preferred options and is gaining attention in Malaysia. This paper addresses the sustainability of green roofs and examines the quality of water harvested from green roofs in comparison to rainwater, taking into account the factors that affect the quality of such water, for example roofing materials, climatic conditions, rainfall frequency, and the first flush. A green roof was installed at the Humid Tropic Centre (HTC), the study site of the monitoring programme for the urban Stormwater Management Manual for Malaysia (MSMA) Eco-Hydrological Project in Kuala Lumpur, and the harvested rainwater was evaluated on the basis of four parameters: conductivity, dissolved oxygen (DO), pH, and temperature. These parameters were found to fall between Class I and Class III of the Interim National Water Quality Standards (INWQS) and the Water Quality Index (WQI). Some preliminary treatment, such as disinfection and filtration, would likely improve these parameters to Class I. This review clearly indicates that more research is needed on other microbiological and chemical quality parameters to ensure that the harvested water is suitable for use as potable water for domestic purposes. The change in all physical, chemical, and microbiological parameters with respect to storage time will be a major focus of future studies in this field.Keywords: Green roofs, INWQS, MSMA-SME, rainwater harvesting, water treatment, water quality parameter, WQI
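A minimal sketch of the screening step described above, checking measured parameters against class limits, is shown below; the numeric ranges are placeholders for illustration only and are not the published INWQS class boundaries.

```python
# Illustrative sketch only: screening harvested-water readings against class
# thresholds. The limits below are placeholders for demonstration and are NOT
# the actual INWQS class boundaries; real work should use the published standard.
READINGS = {"pH": 6.8, "DO_mg_L": 5.9, "conductivity_uS_cm": 180.0, "temperature_C": 27.5}

# Hypothetical "Class I acceptable range" per parameter: (low, high).
CLASS_I_RANGES = {
    "pH": (6.5, 8.5),
    "DO_mg_L": (5.0, 20.0),
    "conductivity_uS_cm": (0.0, 1000.0),
    "temperature_C": (0.0, 35.0),
}

def outside_class_i(readings, ranges):
    """Return the parameters that fall outside the assumed Class I ranges."""
    return [p for p, v in readings.items() if not (ranges[p][0] <= v <= ranges[p][1])]

violations = outside_class_i(READINGS, CLASS_I_RANGES)
print("meets assumed Class I limits" if not violations else f"outside limits: {violations}")
```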
Procedia PDF Downloads 5333347 Rate of Force Development, Net Impulse and Modified Reactive Strength as Predictors of Volleyball Spike Jump Height among Young Elite Players
Authors: Javad Sarvestan, Zdenek Svoboda
Abstract:
Force-time (F-T) curve characteristics are widely referenced as the main indicators of athletic jump performance. Nevertheless, to the best of the authors' knowledge, no investigation has examined in depth the relationship between F-T curve variables and real-game jump performance among elite volleyball players. To this end, this study was designed to investigate the association between F-T curve variables, including movement timings, force, velocity, power, rate of force development (RFD), modified reactive strength index (RSImod), and net impulse, and spike jump height under real-game circumstances. Twelve young elite volleyball players performed three countermovement jumps (CMJ) and three spike jumps under real-game circumstances with 1-minute rest intervals to prevent fatigue. The Shapiro-Wilk test confirmed the normality of the data distribution, and Pearson's product-moment correlation test showed significant correlations between CMJ height and peak RFD (r = 0.85), average RFD (r = 0.81), RSImod (r = 0.88), and concentric net impulse (r = 0.98), as well as significant correlations between spike jump height and peak RFD (r = 0.73), average RFD (r = 0.80), RSImod (r = 0.62), and concentric net impulse (r = 0.71). Multiple regression analysis further showed that these factors contribute strongly to predicting CMJ (98%) and spike jump (77%) heights. The outcomes of this study confirm that RFD, concentric net impulse, and RSImod values can be used to precisely monitor and track volleyball attackers' explosive strength, the efficiency of the muscular stretch-shortening cycle, and ultimately spike jump height. Volleyball coaches and trainers are therefore advised to focus closely on their athletes' progression, and on the impact of strength training, by observing and tracking F-T curve variables such as RFD, net impulse, and RSImod.Keywords: net impulse, reactive strength index, rate of force development, stretch-shortening cycle
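The force-time variables named above can be derived from a force-plate recording roughly as sketched below; the synthetic force trace, body mass, and sampling rate are assumptions, and the computation follows the standard impulse-momentum and RSImod definitions rather than the authors' exact processing pipeline.

```python
# Minimal sketch (synthetic force trace) of deriving net impulse, take-off
# velocity, jump height, peak RFD and RSImod from a CMJ force-time recording.
import numpy as np

FS = 1000.0                     # sampling rate (Hz), assumed
G = 9.81
mass = 75.0                     # athlete body mass (kg), hypothetical
t = np.arange(0, 1.0, 1 / FS)   # 1 s from movement onset to take-off

# Crude synthetic vertical ground reaction force: body weight plus a propulsion bump.
force = mass * G + 900.0 * np.exp(-((t - 0.7) ** 2) / (2 * 0.05 ** 2))

net_force = force - mass * G
net_impulse = np.sum(net_force) / FS            # N*s over the movement
takeoff_velocity = net_impulse / mass           # impulse-momentum theorem
jump_height = takeoff_velocity ** 2 / (2 * G)   # from projectile motion
time_to_takeoff = t[-1] - t[0]
rsi_mod = jump_height / time_to_takeoff         # modified reactive strength index
peak_rfd = np.max(np.gradient(force, t))        # steepest rise of the F-T curve

print(f"net impulse: {net_impulse:.1f} N*s, jump height: {jump_height:.2f} m, "
      f"RSImod: {rsi_mod:.2f}, peak RFD: {peak_rfd:.0f} N/s")
```

With real force-plate data, the same quantities would be computed over the identified movement phases (unweighting, braking, propulsion), which is where the concentric net impulse reported in the abstract comes from.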
Procedia PDF Downloads 1353346 GBKMeans: A Genetic Based K-Means Applied to the Capacitated Planning of Reading Units
Authors: Anderson S. Fonseca, Italo F. S. Da Silva, Robert D. A. Santos, Mayara G. Da Silva, Pedro H. C. Vieira, Antonio M. S. Sobrinho, Victor H. B. Lemos, Petterson S. Diniz, Anselmo C. Paiva, Eliana M. G. Monteiro
Abstract:
In Brazil, the National Electric Energy Agency (ANEEL) establishes that electrical energy companies are responsible for measuring and billing their customers. Among these regulations, it is defined that a company must bill its customers within 27-33 days; if a relocation or a change of period is required, the consumer must be notified in writing before the start of the billing period. To make it easier to organize a workday's measurements, these companies create a reading plan. Such plans group customers into reading groups, each visited by an employee responsible for measuring consumption and billing. Creating a plan efficiently and optimally is a capacitated clustering problem with constraints related to homogeneity and compactness, that is, the employee's workload and the geographical position of the consumer units. Today this planning is done manually by several experts with experience of the geographic layout of the region; it takes many days to complete, and because it is a human activity, there is no guarantee that the best plan will be found. In this paper, the GBKMeans method presents a technique based on K-Means and genetic algorithms for creating capacitated clusters that respect the established constraints in an efficient and balanced manner, minimizing the cost of relocating consumer units and the time required to create the final plan. The results obtained by the presented method are compared with the current planning of a real city, showing an improvement of 54.71% in the standard deviation of workload and 11.97% in the compactness of the groups.Keywords: capacitated clustering, k-means, genetic algorithm, districting problems
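A hedged sketch of the hybrid genetic/K-Means idea on a toy capacitated-clustering instance appears below; the data, fitness weights, and genetic operators are illustrative assumptions, not the GBKMeans implementation evaluated in the paper.

```python
# Toy capacitated clustering: consumer units have 2-D coordinates and a
# workload, and a chromosome assigns each unit to one of K reading groups.
# Fitness rewards compact groups and penalizes workload imbalance.
import numpy as np

rng = np.random.default_rng(1)
N_UNITS, K, POP, GENERATIONS = 120, 4, 60, 200
coords = rng.uniform(0, 10, size=(N_UNITS, 2))     # consumer-unit positions (hypothetical)
workload = rng.uniform(1, 3, size=N_UNITS)         # reading effort per unit (hypothetical)
target_load = workload.sum() / K

def fitness(assign):
    spread, imbalance = 0.0, 0.0
    for k in range(K):
        members = coords[assign == k]
        if len(members) == 0:
            return -1e9                              # empty group: invalid plan
        centroid = members.mean(axis=0)
        spread += np.sum((members - centroid) ** 2)  # compactness term
        imbalance += abs(workload[assign == k].sum() - target_load)
    return -(spread + 10.0 * imbalance)              # weights are arbitrary choices

def kmeans_refine(assign):
    # K-Means-style local step: move each unit to its nearest group centroid.
    if len(np.unique(assign)) < K:                   # skip if any group would be empty
        return assign
    centroids = np.array([coords[assign == k].mean(axis=0) for k in range(K)])
    dists = ((coords[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(dists, axis=1)

population = [rng.integers(0, K, N_UNITS) for _ in range(POP)]
for _ in range(GENERATIONS):
    scores = np.array([fitness(ind) for ind in population])
    order = np.argsort(scores)[::-1]
    parents = [population[i] for i in order[:POP // 2]]       # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = rng.choice(len(parents), 2, replace=False)
        cut = rng.integers(1, N_UNITS)
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])  # one-point crossover
        mutate = rng.random(N_UNITS) < 0.02
        child[mutate] = rng.integers(0, K, mutate.sum())              # mutation
        if rng.random() < 0.3:
            child = kmeans_refine(child)                              # hybrid local step
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print("group workloads:", [round(workload[best == k].sum(), 1) for k in range(K)])
```

In a real reading-plan setting, the fitness would additionally account for relocation costs and hard workload capacities, but the hybrid loop of selection, crossover, mutation, and K-Means refinement is the core of the approach described above.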
Procedia PDF Downloads 198