Search results for: microscopic model
1153 Tiebout and Crime: How Crime Affects the Income Tax Capacity
Authors: Nik Smits, Stijn Goeminne
Abstract:
Despite the extensive literature on the relation between crime and migration, not much is known about how crime affects the tax capacity of local communities. This paper empirically investigates whether the Flemish local income tax base yield is sensitive to changes in the local crime level. The underlying assumptions are threefold. In a Tiebout world, rational voters holding the local government accountable for the safety of its citizens move out when the local level of security deviates too far from what they want it to be (first assumption). If migration is due to crime, then the wealthier citizens are expected to move first (second assumption). Looking for a place elsewhere implies transaction costs, which the wealthier citizens are more likely to be able to pay. As a consequence, the average income per capita, and thus the income distribution, will be affected, which in turn will influence the local income tax base yield (third assumption). The decreasing average income per capita, if not compensated by increasing earnings of the citizens who stay or of the new citizens entering the locality, must result in a decreasing local income tax base yield. In the absence of compensation from a higher level of government, decreasing local tax revenues could prove to be disastrous for a crime-ridden municipality. When communities do not succeed in curbing the number of offences, this can be the onset of a cumulative process of urban deterioration. A spatial panel data model containing several proxies for the local level of crime in 306 Flemish municipalities covering the period 2000-2014 is used to test the relation between crime and the local income tax base yield. In addition to this direct relation, the underlying assumptions are investigated as well. Preliminary results show a modest but positive relation between local violent crime rates and the efflux of citizens, persisting up to a two-year lag. This positive effect is dampened by increasing crime rates in neighboring municipalities. The change in violent crimes, and to a lesser extent thefts and extortions, reduces the influx of citizens with a one-year lag. Again, this effect is diminished by external effects from neighboring municipalities, meaning that increasing crime rates in neighboring municipalities (especially violent crimes) have a positive effect on the local influx of citizens. Crime also has a depressing effect on the average income per capita within a municipality, whereas increasing crime rates in neighboring municipalities increase it. Notwithstanding the previous results, crime does not seem to significantly affect the local tax base yield. The results suggest that the depressing effect of crime on the income base is compensated by a limited but wealthier influx of new citizens.
Keywords: crime, local taxes, migration, Tiebout mobility
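As a rough illustration of the kind of specification the abstract describes, a spatial lag panel model with neighbor effects might take the following form; the notation here is assumed for illustration, not taken from the paper:

y_{it} = \rho \sum_{j \neq i} w_{ij} y_{jt} + \mathbf{x}_{it}' \boldsymbol{\beta} + \theta \sum_{j \neq i} w_{ij} c_{jt} + \mu_i + \lambda_t + \varepsilon_{it}

where y_{it} is the outcome for municipality i in year t (e.g., efflux of citizens or tax base yield), x_{it} contains the local crime proxies and controls, c_{jt} is crime in neighboring municipalities weighted by the spatial weights matrix W = [w_{ij}], and \mu_i and \lambda_t are municipality and year fixed effects. The dampening by neighboring crime reported above corresponds to the spatially lagged crime term.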
Procedia PDF Downloads 309
1152 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: cost prediction, machine learning, project management, random forest, neural networks
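As an illustration of the two-model comparison described above, the minimal sketch below trains a Random Forest and a small neural network on activity-level records and compares their errors; the input file and all column names are hypothetical placeholders, not taken from the study.

# Sketch: Random Forest vs. neural network for cost overrun prediction.
# Assumes a CSV of activity-level records with a numeric "cost_overrun"
# target; feature names are invented for illustration.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("activities.csv")  # hypothetical input file
X = df[["planned_cost", "duration_days", "scope_changes", "material_delay_days"]]
y = df["cost_overrun"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
print("RF MAE:", mean_absolute_error(y_test, rf.predict(X_test)))
print(dict(zip(X.columns, rf.feature_importances_)))  # candidate cost drivers

scaler = StandardScaler().fit(X_train)  # neural networks need scaled inputs
nn = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
nn.fit(scaler.transform(X_train), y_train)
print("NN MAE:", mean_absolute_error(y_test, nn.predict(scaler.transform(X_test))))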
Procedia PDF Downloads 62
1151 Computational Fluid Dynamics Design and Analysis of Aerodynamic Drag Reduction Devices for a Mazda T3500 Truck
Authors: Basil Nkosilathi Dube, Wilson R. Nyemba, Panashe Mandevu
Abstract:
In highway driving, over 50 percent of the power produced by the engine is used to overcome aerodynamic drag, which is a force that opposes a body’s motion through the air. Aerodynamic drag, and thus fuel consumption, increases rapidly at speeds above 90 km/h. It is desirable to minimize fuel consumption. Aerodynamic drag reduction in highway driving is the best approach to minimizing fuel consumption and reducing the negative impacts of greenhouse gas emissions on the natural environment. Fuel economy is the ultimate concern of automotive development. This study aims to design and analyze drag-reducing devices for a Mazda T3500 truck, namely the cab roof and rear (trailer tail) fairings. The aerodynamic effects of adding these add-on devices were subsequently investigated. To accomplish this, two 3D CAD models of the Mazda truck were designed using the Design Modeler: one with these add-on devices and the other without. The models were exported to ANSYS Fluent for computational fluid dynamics analysis; no wind tunnel tests were performed. A fine mesh with more than 10 million cells was applied in the discretization of the models. The realizable k-ε turbulence model with enhanced wall treatment was used to solve the Reynolds-Averaged Navier-Stokes (RANS) equations. In order to simulate highway driving conditions, the simulations were run at a speed of 100 km/h. The effects of these devices were also investigated for low-speed driving. The drag coefficients for both models were obtained from the numerical calculations. By adding the cab roof and rear (trailer tail) fairings, the simulations show a significant reduction in aerodynamic drag at higher speed. The results show that the greatest drag reduction is obtained when both devices are used. Visuals from post-processing show that the rear fairing minimized the low-pressure region at the rear of the trailer when moving at highway speed. The rear fairing achieved this by streamlining the turbulent airflow, thereby delaying airflow separation. For lower speeds, there were no significant differences in drag coefficients between the two models (original and modified). The results show that these devices can be adopted for improving the aerodynamic efficiency of the Mazda T3500 truck at highway speeds.
Keywords: aerodynamic drag, computational fluid dynamics, fluent, fuel consumption
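For context, the drag coefficient reported by such simulations follows directly from the computed drag force. The short sketch below shows the relationship; the force and frontal-area values are invented for illustration, only the 100 km/h speed comes from the abstract.

# Cd = 2*F_d / (rho * v^2 * A): how a drag coefficient is obtained
# from a simulated drag force. All numbers below are illustrative.
rho = 1.225            # air density, kg/m^3
v = 100 / 3.6          # 100 km/h in m/s
A = 5.0                # hypothetical frontal area, m^2
for label, force in [("baseline", 2100.0), ("with fairings", 1700.0)]:  # N, invented
    cd = 2 * force / (rho * v**2 * A)
    print(f"{label}: Cd = {cd:.3f}")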
Procedia PDF Downloads 141
1150 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose
Authors: Kumar Shashvat, Amol P. Bhondekar
Abstract:
Among the five senses, smell is the most evocative and the least understood. Odor testing has remained mysterious, and odor data unfamiliar to most practitioners. The problem of odor recognition and classification is important to solve. The ability to smell and predict whether a product is still of use or has become unfit for consumption motivates casting this problem as a model. The general industrial standard for this classification is color-based; however, odor can be a better classifier than color and, if incorporated into a machine, would be highly useful. For cataloging the odor of peas, trees, and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely used in many applications, but they are incapable of making effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they are able to handle problems such as settings where the variability in the range of possible input vectors is enormous. Generative models are integrated into machine learning either for modeling data directly or as an intermediate step to forming a probability density function. The algorithms Linear Discriminant Analysis and the Naive Bayes classifier have been used for classification of the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to discover a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. The main advantage of using generative models is that they make stronger assumptions about the data, specifically about the distribution of predictors given the response variables. The electronic instrument used for artificial odor sensing and classification is an electronic nose. This device is designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results have been evaluated using the performance measures accuracy, precision, and recall. The experimental results show that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset.
Keywords: odor classification, generative models, naive bayes, linear discriminant analysis
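A minimal sketch of the two-classifier comparison described above, using scikit-learn on placeholder e-nose sensor readings; the array shapes, labels, and data are assumptions, not the study's cashew dataset.

# Compare LDA and Gaussian Naive Bayes on e-nose feature vectors.
# X holds one row of sensor responses per odor sample (synthetic here).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                # 200 samples x 8 sensors (synthetic)
y = (X[:, :4].sum(axis=1) > 0).astype(int)   # synthetic fresh/spoiled label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("NB", GaussianNB())]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name, accuracy_score(y_te, pred),
          precision_score(y_te, pred), recall_score(y_te, pred))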
Procedia PDF Downloads 391
1149 Using GIS and AHP Model to Explore the Parking Problem in Khomeinishahr
Authors: Davood Vatankhah, Reza Mokhtari Malekabadi, Mohsen Saghaei
Abstract:
The function of urban transportation systems depends on the existence of the required infrastructure, the appropriate placement of different components, and the cooperation of these components with each other. Establishing neighborhood parking facilities in order to prevent long-term and inappropriate parking of cars in the alleys is one of the most effective operations in reducing the crowding and density of neighborhoods. Every place with a certain land use attracts a number of daily trips from throughout the city. A large percentage of the people visiting these places make these trips by their own cars; therefore, they need a space to park their cars. The amount of this need depends on the land use and travel demand of the place. This study aims at investigating the spatial distribution of public parking spaces, determining the effective factors in locating them, and combining these factors in a GIS environment in Khomeinishahr of Isfahan city. Ultimately, the study intends to create an appropriate pattern for locating parking spaces, determining the parking demand of the traffic areas, choosing proper places for providing the required public parking spaces, and proposing new spots in order to improve the quality and quantity of the city's public parking provision. Regarding the method, the study is applied in purpose and analytic-descriptive in nature. The population of the study includes people of the center of Khomeinishahr, which is located northwest of Isfahan, covering about 5000 hectares of geographic area with a population of 241,318 people. In order to determine the sample size, the Cochran formula was used, and according to the population of 26,483 people of the studied area, 231 questionnaires were used. Data analysis was carried out with SPSS software. After estimating the required parking space, the effective criteria in locating public parking spaces were first weighted using the Analytic Hierarchy Process (AHP) in Arc GIS software. Then, appropriate places for establishing parking spaces were determined by the fuzzy Ordered Weighted Averaging (OWA) method. The results indicated that the locating of parking spaces in Khomeinishahr has not been carried out appropriately, and the per capita parking supply is not adequate in relation to the population and demand. In addition to the present parking lots, 1434 parking lots are needed in the study area each day; therefore, there is no logical proportion between parking demand and the number of parking lots in Khomeinishahr.
Keywords: GIS, locating, parking, Khomeinishahr
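To illustrate the AHP weighting step mentioned above, the sketch below derives criterion weights from a pairwise comparison matrix via the principal eigenvector; the 3x3 matrix and criterion names are an invented example, not the study's actual criteria.

# AHP: weights = principal eigenvector of the pairwise comparison
# matrix, plus a consistency ratio check. Matrix values are invented.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],    # e.g., access vs. land cost vs. demand
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                        # normalized criterion weights
n = A.shape[0]
ci = (vals.real[k] - n) / (n - 1)   # consistency index
cr = ci / 0.58                      # random index for n=3 is 0.58
print("weights:", np.round(w, 3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable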
Procedia PDF Downloads 311
1148 Modelling High Strain Rate Tear Open Behavior of a Bilaminate Consisting of Foam and Plastic Skin Considering Tensile Failure and Compression
Authors: Laura Pytel, Georg Baumann, Gregor Gstrein, Corina Klug
Abstract:
Premium cars often coat the instrument panels with a bilaminate consisting of a soft foam and a plastic skin. The coating is torn open during passenger airbag deployment under high strain rates. Characterizing and simulating the top coat layer is crucial for predicting the attenuation that delays the airbag deployment, affecting the design of the restraint system, and for reducing the need for simulation adjustments through expensive physical component testing. Up to now, bilaminates used within cars have been modelled either by using a two-dimensional shell formulation for the whole coating system as one, which misses the interaction of the two layers, or by combining a three-dimensional foam layer with a two-dimensional skin layer but omitting the foam in significant parts like the expected tear line area and the hinge, where high compression is expected. In both cases, the properties of the coating causing the attenuation are not considered. Further, at present, the available material information, such as the failure dependencies of the two layers at strain rates of up to 200 1/s, is insufficient. The velocity of the passenger airbag flap during an airbag shot has been measured at about 11.5 m/s during first ripping; digital image correlation evaluation showed resulting strain rates of above 1500 1/s. This paper provides a high strain rate material characterization of a bilaminate consisting of a thin polypropylene foam and a thermoplastic olefin (TPO) skin, and the creation of validated material models. With the help of a Split Hopkinson tension bar, strain rates of 1500 1/s were within reach. The experimental data was used to calibrate and validate a more physical modelling approach for the forced ripping of the bilaminate. In the presented model, the three-dimensional foam layer is continuously tied to the two-dimensional skin layer, allowing failure in both layers at any possible position. The simulation results show closer agreement in terms of the trajectory of the flaps and their velocity during ripping. The resulting attenuation of the airbag deployment, measured by the contact force between airbag and flaps, increases and provides usable data for dimensioning modules of an airbag system.
Keywords: bilaminate ripping behavior, high strain rate material characterization and modelling, induced material failure, TPO and foam
Procedia PDF Downloads 72
1147 Assessing Online Learning Paths in a Learning Management System Using a Data Mining and Machine Learning Approach
Authors: Alvaro Figueira, Bruno Cabral
Abstract:
Nowadays, students are used to being assessed through an online platform. Educators have moved on from a period in which they endured the transition from paper to digital. The use of a diversified set of question types that range from quizzes to open questions is currently common in most university courses. In many courses today, the evaluation methodology also fosters the students’ online participation in forums, the download and upload of modified files, or even participation in group activities. At the same time, new pedagogical theories that promote the active participation of students in the learning process, and the systematic use of problem-based learning, are being adopted using an eLearning system for that purpose. However, although there can be a lot of feedback from these activities to students, it is usually restricted to the assessment of well-defined online tasks. In this article, we propose an automatic system that informs students of abnormal deviations from a 'correct' learning path in the course. Our approach is based on the idea that obtaining this information earlier in the semester may provide students and educators with an opportunity to resolve an eventual problem regarding the student’s current online actions towards the course. Our goal is to prevent situations that have a significant probability of leading to a poor grade and, eventually, to failing. In the major learning management systems (LMS) currently available, the interaction between the students and the system itself is registered in log files in the form of records that mark the beginning of actions performed by the user. Our proposed system uses that logged information to derive new information: the time each student spends on each activity, the time and order of the resources used by the student and, finally, the online resource usage pattern. Then, using the grades assigned to the students in previous years, we build a learning dataset that is used to feed a machine learning meta-classifier. The produced classification model is then used to predict the grade a learning path is heading towards in the current year. This approach serves not only the teacher but also the student, who receives automatic feedback on her current situation, with past years as a perspective. Our system can be applied to online courses that integrate the use of an online platform that stores user actions in a log file and that has access to other students’ evaluations. The system is based on a data mining process on the log files and on a self-feedback machine learning algorithm that works paired with the Moodle LMS.
Keywords: data mining, e-learning, grade prediction, machine learning, student learning path
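A rough sketch of the log-to-features step described above: deriving per-activity time spent from Moodle-style event logs and training a classifier on historical grades. The log schema, file names, and the choice of gradient boosting as a stand-in for the meta-classifier are assumptions for illustration.

# Derive time-per-activity features from LMS event logs, then train a
# classifier on historical grade bands. Column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

logs = pd.read_csv("moodle_log.csv", parse_dates=["timestamp"])  # user, activity, timestamp
logs = logs.sort_values(["user", "timestamp"])
# time spent on an action = gap until the user's next logged action
logs["spent"] = -logs.groupby("user")["timestamp"].diff(-1).dt.total_seconds()
features = logs.pivot_table(index="user", columns="activity",
                            values="spent", aggfunc="sum", fill_value=0)

grades = pd.read_csv("past_grades.csv", index_col="user")["grade_band"]
X = features.loc[grades.index]
clf = GradientBoostingClassifier().fit(X, grades)  # stand-in for the meta-classifier
# predict the grade band a current learning path is heading towards
print(clf.predict(features.tail(5)))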
Procedia PDF Downloads 124
1146 Winkler Springs for Embedded Beams Subjected to S-Waves
Authors: Franco Primo Soffietti, Diego Fernando Turello, Federico Pinto
Abstract:
Shear waves that propagate through the ground impose deformations that must be taken into account in the design and assessment of buried longitudinal structures such as tunnels, pipelines, and piles. Conventional engineering approaches for seismic evaluation often rely on a Euler-Bernoulli beam model supported by a Winkler foundation. This approach, however, falls short in capturing the distortions induced when the structure is subjected to shear waves. To overcome these limitations, the present work proposes an analytical solution based on a Timoshenko beam, including transverse and rotational springs. The present research proposes ground springs derived as closed-form analytical solutions of the equations of elasticity, including the seismic wavelength. These proposed springs extend the applicability of previous plane-strain models. By considering variations in displacements along the longitudinal direction, the presented approach ensures the springs do not approach zero at low frequencies. This characteristic makes them suitable for assessing pseudo-static cases, which typically govern structural forces in kinematic interaction analyses. The results obtained, validated against the existing literature and a 3D finite element model, reveal several key insights: i) the cutoff frequency significantly influences transverse and rotational springs; ii) neglecting displacement variations along the structure axis (i.e., assuming plane-strain deformation) results in unrealistically low transverse springs, particularly for wavelengths shorter than the structure length; iii) disregarding lateral displacement components in rotational springs and neglecting variations along the structure axis leads to inaccurately low spring values, misrepresenting interaction phenomena; iv) transverse springs exhibit a notable drop in resonance frequency, followed by increasing damping as frequency rises; v) rotational springs show minor frequency-dependent variations, with radiation damping occurring beyond resonance frequencies, starting from negative values. This comprehensive analysis sheds light on the complex behavior of embedded longitudinal structures subjected to shear waves and provides valuable insights for seismic assessment.
Keywords: shear waves, Timoshenko beams, Winkler springs, soil-structure interaction
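For reference, a static Timoshenko beam resting on transverse and rotational Winkler springs is commonly governed by equations of the following form; the notation here is the standard one, assumed rather than taken from the paper, whose frequency-dependent springs would replace the constants k_w and k_\theta:

\kappa G A \left( \frac{d^2 w}{dx^2} - \frac{d\varphi}{dx} \right) - k_w \, w + q = 0

E I \frac{d^2 \varphi}{dx^2} + \kappa G A \left( \frac{dw}{dx} - \varphi \right) - k_\theta \, \varphi = 0

where w is the transverse deflection, \varphi the cross-section rotation, \kappa G A the shear rigidity, E I the bending rigidity, q the load imposed by the passing S-wave, and k_w, k_\theta the transverse and rotational spring constants.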
Procedia PDF Downloads 63
1145 Postfeminism, Femvertising and Inclusion: An Analysis of Changing Women's Representation in Contemporary Media
Authors: Saveria Capecchi
Abstract:
In this paper, the results of qualitative content research on postfeminist female representation in contemporary Western media (advertising, television series, films, social media) are presented. Female role models spectacularized in media culture are an important part of the development of social identities and could inspire new generations. Postfeminist cultural texts have given rise to heated debate among gender and media studies scholars. There are those who claim they are commercial products seeking to sell feminism to women, a feminism whose political and subversive role is completely distorted and linked to the commercial interests of the cosmetics, fashion, fitness and cosmetic surgery industries, in which women’s ‘power’ lies mainly in their power to seduce. There are those who consider them feminist manifestos because they represent independent ‘modern women’ free from male control who aspire to achieve professionally and overcome gender stereotypes like that of the ‘housewife-mother’. Major findings of the research show that feminist principles have been gradually absorbed by the cultural industry and adapted to its commercial needs, resulting in the dissemination of contradictory values. On the one hand, in line with feminist arguments, patriarchal ideology is condemned and the concepts of equality and equal opportunity between men and women are promoted. On the other hand, feminist principles and demands are ascribed to individualism, which translates into the slogan: women are free to decide for themselves, even to objectify their own bodies. In particular, it is observed that the femvertising trend in the media industry is changing female representation, moving away from classic stereotypes: the feminine beauty ideal of slenderness, emphasized in the media since the seventies, is ultimately challenged by the ‘curvy’ body model, which is considered to be more inclusive and based on the concept of ‘natural beauty’. Another aspect of change is the ‘anti-romantic’ revolution performed by some heroines in television drama and the film industry, who are not in search of Prince Charming. In conclusion, although femvertising tends to simplify and trivialize the concepts characterizing fourth-wave feminism (‘intersectionality’ and ‘inclusion’), it is also a tendency that enables the challenging of media imagery largely based on male viewpoints, interests and desires.
Keywords: feminine beauty ideal, femvertising, gender and media, postfeminism
Procedia PDF Downloads 154
1144 Health State Utility Values Related to COVID-19 Pandemic Using EQ-5D: A Systematic Review and Meta-Analysis
Authors: Xu Feifei
Abstract:
The prevalence of COVID-19 is currently the biggest challenge to improving people's quality of life. Its impact on health-related quality of life (HRQoL) is highly uncertain and has not been summarized so far. The aim of the present systematic review was to assess and provide an up-to-date analysis of the impact of the COVID-19 pandemic on the HRQoL of participants who had been infected, participants who had not been infected but were isolated, frontline workers, participants with different diseases, and the general population. An electronic search of the literature in PubMed databases was performed from 2019 to July 2022 (without date restriction). The PRISMA guideline methodology was employed, and data regarding HRQoL were extracted from eligible studies. Articles were included if they met the following inclusion criteria: (a) reporting data collection of health state utility values (HSUVs) related to COVID-19 from 2019 to 2021; (b) English language and peer-reviewed journals; (c) original HSUV data; and (d) using the EQ-5D tool to quantify HRQoL. To identify studies that reported the effects of COVID-19, data on the overall HSUVs of participants who had the outcome were collected and analyzed using a one-group meta-analysis. As a result, thirty-two studies fulfilled the inclusion criteria and were therefore included in the systematic review. A total of 45,295 participants, providing 219 means of HSUVs during COVID-19, were included in this systematic review. The range of utility is from 0.224 to 1. The included studies covered participants from Europe (n=16), North America (n=4), Asia (n=10), South America (n=1), and Africa (n=1). Twelve articles reported on the HRQoL of participants who had been infected with COVID-19 (range of overall HSUVs from 0.6125 to 0.863). Two studies reported on frontline workers (range of overall HSUVs from 0.82 to 0.93). Seven of the articles examined participants who had not been infected with COVID-19 but suffered from morbidities during the pandemic (range of overall HSUVs from 0.5 to 0.96). Thirteen studies reported on the HRQoL of respondents who had not been infected with COVID-19 and had no morbidities (range of overall HSUVs from 0.64 to 0.964). Moreover, eighteen articles reported the overall HSUVs during the COVID-19 pandemic in different population groups. The estimate of overall HSUVs for the population with direct COVID-19 experience (n=1333) was 0.751 (95% CI 0.670 - 0.832, I² = 98.64%); the estimate for the frontline population (n=610) was 0.906 (95% CI 0.854 - 0.957, I² = 98.61%); for participants with different diseases (n=132) it was 0.768 (95% CI 0.515 - 1.021, I² = 99.26%); and for the general population without infection history (n=29,892) it was 0.825 (95% CI 0.766 - 0.885, I² = 99.69%). In conclusion, taking these results into account, this systematic review might confirm that COVID-19 has a negative impact on the HRQoL of the infected population and the illness population. It provides practical value for cost-effectiveness model analysis of health states related to COVID-19.
Keywords: COVID-19, health-related quality of life, meta-analysis, systematic review, utility value
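The one-group pooling reported above typically follows a random-effects procedure. Below is a minimal DerSimonian-Laird sketch; the study means and standard errors are invented placeholders, not the review's data.

# Random-effects (DerSimonian-Laird) pooling of mean utilities.
# Study means and standard errors below are invented placeholders.
import numpy as np

y = np.array([0.75, 0.82, 0.70, 0.88])    # per-study mean HSUVs
se = np.array([0.02, 0.03, 0.04, 0.02])   # per-study standard errors
w = 1 / se**2                             # fixed-effect weights
q = np.sum(w * (y - np.sum(w * y) / w.sum())**2)
k = len(y)
tau2 = max(0.0, (q - (k - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
w_re = 1 / (se**2 + tau2)                 # random-effects weights
pooled = np.sum(w_re * y) / w_re.sum()
ci = 1.96 / np.sqrt(w_re.sum())
i2 = max(0.0, (q - (k - 1)) / q) * 100    # heterogeneity statistic, %
print(f"pooled={pooled:.3f} (95% CI {pooled-ci:.3f}-{pooled+ci:.3f}), I2={i2:.1f}%")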
Procedia PDF Downloads 83
1143 A Machine Learning Approach for Efficient Resource Management in Construction Projects
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management
Procedia PDF Downloads 41
1142 The Effect of Swirl on the Flow Distribution in Automotive Exhaust Catalysts
Authors: Piotr J. Skusiewicz, Johnathan Saul, Ijhar Rusli, Svetlana Aleksandrova, Stephen F. Benjamin, Miroslaw Gall, Steve Pierson, Carol A. Roberts
Abstract:
The application of turbocharging in automotive engines leads to swirling flow entering the catalyst. The behaviour of this type of flow within the catalyst has yet to be adequately documented. This work discusses the effect of swirling flow on the flow distribution in automotive exhaust catalysts. Compressed air supplied to a moving-block swirl generator allowed for swirling flow with variable intensities to be generated. Swirl intensities were measured at the swirl generator outlet using single-sensor hot-wire probes. The swirling flow was fed into diffusers with total angles of 10°, 30° and 180°. Downstream of the diffusers, a wash-coated diesel oxidation catalyst (DOC) of length 143.8 mm, diameter 76.2 mm and nominal cell density of 400 cpsi was fitted. Velocity profiles were measured at the outlet sleeve about 30 mm downstream of the monolith outlet using single-sensor hot-wire probes. Wall static pressure was recorded using a multi-tube manometer connected to pressure taps positioned along the diffuser walls. The results show that as swirl is increased, more of the flow is directed towards the diffuser walls. The velocity decreases around the centre-line and maximum velocities are observed close to the outer radius of the monolith for all flow rates. At the maximum swirl intensity, reversed flow was recorded near the centre of the monolith. Wall static pressure measurements in the 180° diffuser indicated no pressure recovery as the flow enters the diffuser. This is indicative of flow separation at the inlet to the diffuser. To gain insight into the flow structure, CFD simulations have been performed for the 180° diffuser for a flow rate of 63 g/s. The geometry of the model consists of the complete assembly from the upstream swirl generator to the outlet sleeve. Modelling of the flow in the monolith was achieved using the porous medium approach, where the monolith with parallel flow channels is modelled as a porous medium that resists the flow. A reasonably good agreement was achieved between the experimental and CFD results downstream of the monolith. The CFD simulations allowed visualisation of the separation zones and central toroidal recirculation zones that occur within the expansion region at certain swirl intensities, which are highlighted.
Keywords: catalyst, computational fluid dynamics, diffuser, hot-wire anemometry, swirling flow
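The porous medium approach mentioned above typically represents the monolith channels through a momentum sink of the Darcy-Forchheimer form, as implemented in solvers such as Fluent; the specific coefficients used in the study are not given here and would be fitted to the monolith's measured pressure drop:

S_i = - \left( \frac{\mu}{\alpha} v_i + C_2 \frac{\rho}{2} |v| v_i \right)

where S_i is the momentum source in direction i, \alpha the permeability, and C_2 the inertial resistance coefficient. For a monolith with parallel channels, the resistances transverse to the channel axis are set very high so that flow is channeled axially.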
Procedia PDF Downloads 304
1141 Integrating Data Mining with Case-Based Reasoning for Diagnosing Sorghum Anthracnose
Authors: Mariamawit T. Belete
Abstract:
Cereal production and marketing are the means of livelihood for millions of households in Ethiopia. However, cereal production is constrained by technical and socio-economic factors. Among the technical factors, cereal crop diseases are the major contributors to low yield. The aim of this research is to develop an integrated data mining and knowledge-based system for sorghum anthracnose disease diagnosis that assists agriculture experts and development agents in making timely decisions. The anthracnose diagnosis system gathers information from the Melkassa Agricultural Research Center and attempts to score the anthracnose severity scale. The empirical research is designed around data exploration, modeling, and confirmatory procedures for hypothesis testing and prediction to draw sound conclusions. WEKA (Waikato Environment for Knowledge Analysis) was employed for the modeling. Knowledge-based systems encompass a variety of approaches depending on the knowledge representation method; case-based reasoning (CBR) is one of the popular approaches used in knowledge-based systems. CBR is a problem-solving strategy that uses previous cases to solve new problems. The system utilizes hidden knowledge extracted by employing clustering algorithms, specifically K-means clustering, from a sampled anthracnose dataset. Clustered cases with centroid values are mapped to jCOLIBRI, and then the integrator application is created using NetBeans with JDK 8.0.2. The important parts of a case-based reasoning model are case retrieval, the similarity measuring stage; reuse, which allows the domain expert to adapt the retrieved case solution to the current case; revise, to test the solution; and retain, to store the confirmed solution in the case base for future use. Evaluation of the system was done for both system performance and user acceptance. For testing the prototype, seven test cases were used. The experimental results show that the system achieves average precision and recall values of 70% and 83%, respectively. User acceptance testing was also performed involving five domain experts, and an average acceptance of 83% was achieved. Although the results of this study are promising, further investigation of hybrid approaches such as rule-based reasoning and a pictorial retrieval process is recommended.
Keywords: sorghum anthracnose, data mining, case based reasoning, integration
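A minimal sketch of the data-mining-plus-CBR pattern described above: K-means clustering of historical cases followed by nearest-case retrieval. The feature data is synthetic, and the retrieval is a simple Euclidean stand-in for the similarity stage the real system delegates to jCOLIBRI.

# K-means clustering of past cases, then retrieve the most similar case
# within the query's cluster (the CBR "retrieve" step).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
cases = rng.random((60, 4))            # 60 past cases x 4 symptom features
km = KMeans(n_clusters=3, n_init=10, random_state=1).fit(cases)

query = rng.random(4)                  # new case to diagnose
cluster = km.predict(query[None])[0]   # restrict search to nearest cluster
members = cases[km.labels_ == cluster]
dists = np.linalg.norm(members - query, axis=1)
best = members[np.argmin(dists)]       # most similar past case (reuse step)
print("cluster", cluster, "nearest case", np.round(best, 2))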
Procedia PDF Downloads 83
1140 The Applications and Effects of the Career Courses of Taiwanese College Students with LEGO® SERIOUS PLAY®
Authors: Payling Harn
Abstract:
LEGO® SERIOUS PLAY® is a facilitated workshop method for thinking and problem-solving. Participants build symbolic and metaphorical brick models in response to tasks given by the facilitator and present these models to other participants. LEGO® SERIOUS PLAY® applies the positive psychological mechanisms of flow and positive emotions to help participants perceive self-experience and unknown facts and to increase life happiness through building brick models and narrating stories. At present, LEGO® SERIOUS PLAY® is often utilized for facilitating professional identity and strategy development to assist workers in career development. The researcher aims to apply LEGO® SERIOUS PLAY® to the career courses of college students in order to promote their career abilities. This study aimed to use the facilitative method of LEGO® SERIOUS PLAY® to develop career courses for college students and then explore their effects on Taiwanese college students' positive and negative emotions, career adaptability, and career hope. The researcher regarded strength as the core concept and used the facilitative mode of LEGO® SERIOUS PLAY® to develop the eight-week career courses, covering ‘emotion of college life’, ‘career highlights’, ‘career strengths’, ‘professional identity’, ‘business model’, ‘career coping’, ‘strength guiding principles’, ‘career visions’, ‘career hope’, etc. The researcher will adopt a problem-oriented teaching method to assign tasks according to the weekly theme and use the facilitative mode of LEGO® SERIOUS PLAY® to guide participants in responding to the tasks by building brick models. Participants will then conduct group discussions and reports and write weekly reflection journals. Participants will be 24 second-year college students, who will attend LEGO® SERIOUS PLAY® career courses for 2 hours a week. The researcher used the ‘Career Adaptability Scale’ and the ‘Career Hope Scale’ to conduct a pre-test and a post-test, administered one week before the courses started and one day after the courses ended, respectively. Repeated measures one-way ANOVA was adopted for analyzing the data. The results revealed that the participants showed significant immediate positive effects in career adaptability and career hope. The researcher hopes to construct a model of LEGO® SERIOUS PLAY® career courses through this study and to make a substantial contribution to future career teaching and research on LEGO® SERIOUS PLAY®.
Keywords: LEGO® SERIOUS PLAY®, career courses, strength, positive and negative affect, career hope
Procedia PDF Downloads 254
1139 Walking Cadence to Attain a Minimum of Moderate Aerobic Intensity in People at Risk of Cardiovascular Diseases
Authors: Fagner O. Serrano, Danielle R. Bouchard, Todd A. Duhame
Abstract:
Walking cadence (steps/min) is an effective way to prescribe exercise so that an individual can reach a moderate intensity, which is recommended to optimize health benefits. To our knowledge, there is no study on the walking cadence required to reach a moderate intensity for people who present chronic conditions or risk factors for chronic conditions such as cardiovascular disease (CVD). The objectives of this study were: 1) to identify the walking cadence needed for people at risk of CVD to reach a moderate intensity, and 2) to develop and test an equation using clinical variables to help professionals working with individuals at risk of CVD estimate the walking cadence needed to reach moderate intensity. Ninety-one people presenting a minimum of two risk factors for CVD completed a medically supervised graded exercise test to assess maximum oxygen consumption at the first visit. The last visit consisted of recording walking cadence using a foot pod Garmin FR-60 and a Polar heart rate monitor, aiming to get participants to reach 40% of their maximal oxygen consumption, measured using a portable metabolic cart on an indoor flat surface. The equation to predict the walking cadence needed to reach moderate intensity in this sample was developed as follows: the sample was randomly split in half, and the equation was developed with one half of the participants and validated using the other half. Body mass index, height, stride length, leg height, body weight, fitness level (VO2max), and self-selected cadence (over 200 meters) were measured objectively. Mean walking cadence to reach moderate intensity for people at risk of CVD aged 64.3 ± 10.3 years was 115.8 ± 10.3 steps per minute. Body mass index, height, body weight, fitness level, and self-selected cadence were associated with walking cadence at moderate intensity in bivariate analyses (r ranging from 0.22 to 0.52; all P values ≤ 0.05). Using linear regression analysis including all clinical variables associated in the bivariate analyses, body weight was the significant predictor of walking cadence for reaching a moderate intensity (ß=0.24; P=.018), explaining 13% of the variance in walking cadence to reach moderate intensity. The regression model created was Y = 134.4 - 0.24 × body weight (kg). Our findings suggest that people presenting two or more risk factors for CVD reach moderate intensity while walking at a cadence above the one officially recommended for healthy adults (116 steps per minute vs. 100 steps per minute).
Keywords: cardiovascular disease, moderate intensity, older adults, walking cadence
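Applying the reported regression model is straightforward; a quick sketch using the equation quoted above:

# Predicted cadence (steps/min) to reach moderate intensity, from the
# regression model reported in the abstract: Y = 134.4 - 0.24 x weight.
def moderate_intensity_cadence(body_weight_kg: float) -> float:
    return 134.4 - 0.24 * body_weight_kg

# e.g., an 80 kg individual: 134.4 - 0.24 * 80 = 115.2 steps/min
print(moderate_intensity_cadence(80.0))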
Procedia PDF Downloads 444
1138 Driving Environmental Quality through Fuel Subsidy Reform in Nigeria
Authors: O. E. Akinyemi, P. O. Alege, O. O. Ajayi, L. A. Amaghionyediwe, A. A. Ogundipe
Abstract:
Nigeria, as an oil-producing developing country in Africa, is one of the many countries that have been subsidizing the consumption of fossil fuel. Despite the numerous advantages of this policy, ranging from increased energy access, fostering economic and industrial development, and protecting poor households from oil price shocks to political considerations, among others, it has been found to impose economic costs, be wasteful and inefficient, create price distortions, discourage investment in the energy sector, and contribute to environmental pollution. These negative consequences, coupled with the fact that the policy has not been very successful at achieving some of its stated objectives, led a number of organisations and countries, such as the Group of 7 (G7), the World Bank, the International Monetary Fund (IMF), the International Energy Agency (IEA), and the Organisation for Economic Co-operation and Development (OECD), among others, to call for a global effort towards reforming fossil fuel subsidies. This call became necessary in view of seeking ways to harmonise certain existing policies which may, by design, hamper current efforts at tackling environmental concerns such as climate change, in addition to driving a green growth strategy and low carbon development in achieving sustainable development. The energy sector is identified as playing a vital role. This study thus investigates the prospects of using fuel subsidy reform as a viable tool in driving an economy that de-emphasizes carbon growth in Nigeria. The method used is the Johansen and Engle-Granger two-step co-integration procedure, applied in order to investigate the existence or otherwise of a long-run equilibrium relationship for the period 1971 to 2011. Its theoretical framework is rooted in the Environmental Kuznets Curve (EKC) hypothesis. In developing three case scenarios (subsidy payment, no subsidy payment, and effective subsidy), findings from the study supported evidence of a long-run sustainable equilibrium model. Also, estimation results reflected that the first and second scenarios do not significantly influence the indicator of environmental quality. The implication is that in reforming fuel subsidy to drive environmental quality in an economy like Nigeria, a strong and effective regulatory framework (the measure that was interacted with fuel subsidy to yield effective subsidy) is essential.
Keywords: environmental quality, fuel subsidy, green growth, low carbon growth strategy
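A minimal sketch of the Engle-Granger two-step procedure named above, using statsmodels on placeholder series; the variables are synthetic stand-ins for the study's environmental-quality and subsidy series.

# Engle-Granger two-step co-integration: (1) estimate the long-run
# relation by OLS, (2) test the residuals for a unit root with ADF.
# Note: statsmodels.tsa.stattools.coint applies the proper Engle-Granger
# critical values; plain ADF on residuals is shown here for illustration.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200))      # I(1) regressor (synthetic)
y = 0.8 * x + rng.normal(size=200)       # co-integrated with x by construction

step1 = sm.OLS(y, sm.add_constant(x)).fit()    # long-run equilibrium relation
adf_stat, pvalue, *_ = adfuller(step1.resid)   # step 2: ADF on residuals
print(f"ADF={adf_stat:.2f}, p={pvalue:.3f}")   # small p suggests co-integration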
Procedia PDF Downloads 328
1137 Supply Chain Design: Criteria Considered in Decision Making Process
Authors: Lenka Krsnakova, Petr Jirsak
Abstract:
Prior research on facility location in the supply chain is mostly focused on the improvement of mathematical models. This is because supply chain design has long been an area of operational research that emphasizes mainly quantitative criteria. Qualitative criteria are still highly neglected within supply chain design research. Facility location in the supply chain has become a multi-criteria decision-making problem rather than a single-criterion decision due to changing market conditions. Thus, both qualitative and quantitative criteria have to be included in the decision-making process. The aim of this study is to emphasize the importance of qualitative criteria as key parameters of relevant mathematical models. We examine which criteria are taken into consideration when Czech companies decide about their facility location. A literature review of the criteria used in the facility location decision-making process creates a theoretical background for the study. The data collection was conducted through a questionnaire survey. The questionnaire was sent to manufacturing and business companies of all sizes (small, medium, and large enterprises) with representation in the Czech Republic within the following sectors: automotive, toys, the clothing industry, electronics, and the pharmaceutical industry. A comparison is made between the criteria that prevail in current research and those considered important by companies in the Czech Republic. Despite the number of articles focused on supply chain design, only a minority of them consider qualitative criteria, and they rarely treat supply chain design as a multi-criteria decision-making problem. Preliminary results of the questionnaire survey indicate that companies in the Czech Republic see qualitative criteria and their impact on the facility location decision as crucial. Qualitative criteria such as company strategy, quality of the working environment, or future development expectations are confirmed to be considered by Czech companies. This study confirms that qualitative criteria can significantly influence whether a particular location could or could not be the right place for a logistics facility. The research has two major limitations: researchers who focus on improving mathematical models mostly do not mention the criteria that enter the model; and the Czech supply chain managers selected important criteria from a group of 18 available criteria and assigned them importance weights, which does not necessarily mean that these criteria were taken into consideration when the last facility location was chosen, but rather reflects how they perceive them today. Since the study confirmed the necessity of future research on how qualitative criteria influence the decision-making process about facility location, the authors have already started in-depth interviews with participating companies to reveal how the inclusion of qualitative criteria in facility location decisions influences company performance.
Keywords: criteria influencing facility location, Czech Republic, facility location decision-making, qualitative criteria
Procedia PDF Downloads 329
1136 Effects of Bone Marrow Derived Mesenchymal Stem Cells (MSC) in Acute Respiratory Distress Syndrome (ARDS) Lung Remodeling
Authors: Diana Islam, Juan Fang, Vito Fanelli, Bing Han, Julie Khang, Jianfeng Wu, Arthur S. Slutsky, Haibo Zhang
Abstract:
Introduction: MSC delivery in preclinical models of ARDS has demonstrated significant improvements in lung function and recovery from acute injury. However, the role of MSC delivery in ARDS-associated pulmonary fibrosis is not well understood. Some animal studies using bleomycin-, asbestos-, and silica-induced pulmonary fibrosis show that MSC delivery can suppress fibrosis, while other animal studies using radiation-induced pulmonary fibrosis and liver and kidney fibrosis models show that MSC delivery can contribute to fibrosis. Hypothesis: The beneficial and deleterious effects of MSC in ARDS are modulated by the lung microenvironment at the time of MSC delivery. Methods: To induce ARDS, a two-hit mouse model of hydrochloric acid (HCl) aspiration (day 0) and mechanical ventilation (MV) (day 2) was used. HCl and injurious MV generated fibrosis within 14-28 days. 0.5×10⁶ mouse MSCs were delivered (via both intratracheal and intravenous routes) either in the active inflammatory phase (day 2) or during the remodeling phase (day 14) of ARDS (mouse fibroblasts or PBS were used as controls). Lung injury was assessed using an inflammation score and elastance measurement. Pulmonary fibrosis was assessed using a histological score, tissue collagen level, and collagen expression. In addition, the alveolar epithelial (E) and mesenchymal (M) marker expression profiles were also measured. All measurements were taken at days 2, 14, and 28. Results: MSC delivery 2 days after HCl exacerbated lung injury and fibrosis compared to HCl alone, while day-14 delivery showed protective effects. However, in the absence of HCl, MSC significantly reduced the injurious MV-induced fibrosis. HCl injury suppressed E markers and up-regulated M markers. MSC delivery 2 days after HCl further amplified M marker expression, indicating a role in myofibroblast proliferation/activation, while with day-14 delivery, E marker up-regulation was observed, indicating a role in epithelial restoration. Conclusions: Early MSC delivery can be protective against injurious MV. Late MSC delivery during the repair phase may also aid in recovery. However, early MSC delivery during the exudative inflammatory phase of HCl-induced ARDS can result in pro-fibrotic profiles. It is critical to understand the interaction between MSC and the lung microenvironment before MSC-based therapies are utilized for ARDS.
Keywords: acute respiratory distress syndrome (ARDS), mesenchymal stem cells (MSC), hydrochloric acid (HCl), mechanical ventilation (MV)
Procedia PDF Downloads 671
1135 Impacts of Public Insurance on Health Access and Outcomes: Evidence from India
Authors: Titir Bhattacharya, Tanika Chakraborty, Prabal K. De
Abstract:
Maternal and child health continue to be a significant policy focus in developing countries, including India. An emerging model in health care is the creation of public-private partnerships. Since the construction of physical infrastructure is costly, governments at various levels have tried to implement social health insurance schemes where a trust calculates insurance premiums and medical payments. Typically, qualifying families get full subsidization of the premium and access to private hospitals, in addition to low-cost public hospitals, for their tertiary care needs. We analyze one such pioneering social insurance scheme in the Indian state of Andhra Pradesh (AP). The Rajiv Aarogyasri program (RA) was introduced by the Government of AP on a pilot basis in 2007 and implemented in 2008. In this paper, we first examine the extent to which access to reproductive health care changed. For example, the RA scheme reimburses hospital deliveries, leading us to expect an increase in institutional deliveries, particularly in private hospitals. Second, we expect an increase in institutional deliveries to also improve child health outcomes. Hence, we estimate whether the program reduced infant and child mortality. We use District Level Health Survey data to create annual birth cohorts from 2000-2015. Since AP was the only state in which such a state insurance program was implemented, the neighboring states constituted a plausible control group. Combined with the policy timing and the year of birth, we employ a difference-in-differences strategy to identify the effects of RA on the residents of AP. We perform several checks against threats to identification, including testing for pre-treatment trends between the treatment and control states. We find that the policy significantly lowered infant and child mortality in AP. We also find that deliveries in private hospitals increased and those in government hospitals decreased, showing a substitution effect of the relative price change. Finally, as expected, out-of-pocket costs declined for the treatment group. However, we do not find any significant effects for usual preventive care such as vaccination, showing that the benefits of insurance schemes targeted at the tertiary level may not trickle down to the primary care level.
Keywords: public health insurance, maternal and child health, public-private choice
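The difference-in-differences strategy described above can be written, under assumed notation rather than the paper's own, as a regression of the form:

Y_{ist} = \beta_0 + \beta_1 \, AP_s + \beta_2 \, Post_t + \beta_3 \, (AP_s \times Post_t) + X_{ist}' \gamma + \varepsilon_{ist}

where Y_{ist} is the outcome (e.g., infant death or institutional delivery) for child i in state s and birth year t, AP_s indicates residence in Andhra Pradesh, Post_t indicates birth cohorts after the 2008 rollout, and \beta_3 is the difference-in-differences estimate of the program effect. The pre-treatment trend check mentioned above tests that treated and control states evolved similarly before the rollout.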
Procedia PDF Downloads 98
1134 Prediction of Sepsis Illness from Patients' Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis
Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante
Abstract:
The systems that record patient care information, known as Electronic Medical Records (EMR), and those that monitor patients' vital signs, such as heart rate, body temperature, and blood pressure, have been extremely valuable for the effectiveness of patient treatment. Many studies have used data from EMRs and patients' vital signs to predict illnesses. Among them, we highlight those that intend to predict, classify, or at least identify patterns of sepsis in patients under vital signs monitoring. Sepsis is an organ dysfunction caused by a dysregulated host response to an infection, and it affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Preceding works usually combined medical, statistical, mathematical, and computational models to develop detection methods for early prediction, attaining high accuracies while using the smallest number of variables. Among other techniques, there are studies using survival analysis, expert systems, machine learning, and deep learning that reached strong results. In our research, patients are modeled as points moving each hour in an n-dimensional space, where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point is calculated using the median of all patients’ variables at sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector), and the second derivative (acceleration vector) of the variables to evaluate their behavior. We then construct a prediction model based on a Long Short-Term Memory (LSTM) network, including these derivatives as explanatory variables. The accuracy of the prediction 6 hours before the time of sepsis, considering only the vital signs, reached 83.24%; by including the position, velocity, and acceleration vectors, we obtained 94.96%. The data are collected from the Medical Information Mart for Intensive Care (MIMIC) database, a public database that contains vital signs, laboratory test results, observations, notes, and so on, from more than 60,000 patients.
Keywords: dynamic analysis, long short-term memory, prediction, sepsis
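A rough sketch of the feature construction and model described above: hourly vitals plus their first and second discrete differences feeding an LSTM classifier. The array shapes, hyperparameters, and synthetic labels are illustrative assumptions, not the paper's configuration.

# Build position/velocity/acceleration features from hourly vitals and
# train an LSTM classifier. Shapes and hyperparameters are illustrative.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
vitals = rng.normal(size=(500, 24, 6))    # 500 patients x 24 hours x 6 vitals
labels = rng.integers(0, 2, size=500)     # synthetic sepsis labels

vel = np.diff(vitals, axis=1, prepend=vitals[:, :1])   # first derivative
acc = np.diff(vel, axis=1, prepend=vel[:, :1])         # second derivative
X = np.concatenate([vitals, vel, acc], axis=-1)        # 18 features per hour

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24, 18)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=3, batch_size=32, verbose=0)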
Procedia PDF Downloads 127
1133 Examination of Relationship between Internet Addiction and Cyber Bullying in Adolescents
Authors: Adem Peker, Yüksel Eroğlu, İsmail Ay
Abstract:
As information and communication technologies have become embedded in the everyday life of adolescents, both their possible benefits and their risks to adolescents are being identified. Information and communication technologies provide opportunities for adolescents to connect with peers and to access information. However, as with other social connections, users of information and communication devices have the potential to meet and interact with others in harmful ways. One emerging example of such interaction is cyber bullying. Cyber bullying occurs when someone uses information and communication technologies to harass or embarrass another person. Cyber bullying can take the form of malicious text messages and e-mails, spreading rumours, and excluding people from online groups. Cyber bullying has been linked to psychological problems for cyber bullies and victims. Therefore, it is important to determine how internet addiction contributes to cyber bullying. Building on this question, this study takes a closer look at the relationship between internet addiction and cyber bullying. For this purpose, based on a descriptive relational model, it was hypothesized that loss of control, excessive desire to stay online, and negativity in social relationships, which are dimensions of internet addiction, would be associated positively with cyber bullying and victimization. Participants were 383 high school students (176 girls and 207 boys; mean age, 15.7 years). Internet addiction was measured using the Internet Addiction Scale. The Cyber Victim and Bullying Scale was utilized to measure cyber bullying and victimization. The scales were administered to the students in groups in their classrooms. In this study, stepwise regression analyses were utilized to examine the relationships between the dimensions of internet addiction and cyber bullying and victimization. Before applying stepwise regression analysis, the assumptions of regression were verified. According to the stepwise regression analysis, cyber bullying was predicted by loss of control (β=.26, p<.001) and negativity in social relationships (β=.13, p<.001). These variables accounted for 9% of the total variance, with loss of control explaining the higher percentage (8%). On the other hand, cyber victimization was predicted by loss of control (β=.19, p<.001) and negativity in social relationships (β=.12, p<.001). These variables altogether accounted for 8% of the variance in cyber victimization, with loss of control the best predictor (7% of the total variance). The results of this study demonstrated that, as expected, loss of control and negativity in social relationships predicted cyber bullying and victimization positively. However, excessive desire to stay online did not emerge as a significant predictor of either cyber bullying or victimization. Consequently, this study enhances our understanding of the predictors of cyber bullying and victimization, since the results suggest that internet addiction is related to cyber bullying and victimization.
Keywords: cyber bullying, internet addiction, adolescents, regression
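For readers unfamiliar with the procedure, a compact forward-stepwise selection loop of the kind reported above might look as follows; the data is synthetic, standing in for the study's three internet addiction dimensions as candidate predictors.

# Forward stepwise regression: repeatedly add the candidate predictor
# with the smallest p-value until none is significant. Data is synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(383, 3))                              # 3 addiction dimensions
y = 0.4 * X[:, 0] + 0.2 * X[:, 2] + rng.normal(size=383)   # outcome

candidates, selected = [0, 1, 2], []
while candidates:
    pvals = {j: sm.OLS(y, sm.add_constant(X[:, selected + [j]])).fit().pvalues[-1]
             for j in candidates}
    best = min(pvals, key=pvals.get)
    if pvals[best] >= 0.05:
        break
    selected.append(best)
    candidates.remove(best)
print("selected predictors:", selected)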
Procedia PDF Downloads 3141132 Comparison of Traditional and Green Building Designs in Egypt: Energy Saving
Authors: Hala M. Abdel Mageed, Ahmed I. Omar, Shady H. E. Abdel Aleem
Abstract:
This paper describes in detail a commercial green building that has been designed and constructed in Marsa Matrouh, Egypt. The balance between the building and a sustainable environment has been taken into consideration in its design and construction. The building consists of one floor with a 3 m height and a 2810 m2 area, while the envelope area is 1400 m2. The construction fulfills natural-ventilation requirements. Glass curtain walls make up about 50% of the building, and the window area is 300 m2. The glazing consists of 6 mm greenish-gray tinted tempered glass as the outer lite, 6 mm safety glass as the inner lite, and a 16 mm thick dehydrated air space. Visible-light transmission of 50%, a solar factor of 0.26, a shading coefficient of 0.67, and a thermal insulation U-value of 1.3 W/m2.K meet the performance requirements. Optimum electrical distribution for the lighting system, air conditioning, and other electrical loads has been carried out. The power and quantity of each lamp type and the energy consumption of the lighting system are investigated. The design of the air-conditioning system is based on summer and winter outdoor conditions. Ventilated and air-conditioned spaces and fresh-air rates are determined. Variable Refrigerant Flow (VRF) is the air-conditioning system used in this building. The VRF outdoor units are located on the roof of the building and connected to indoor units through refrigerant piping; indoor units are distributed in all building zones through ducts and air outlets to ensure efficient air distribution. The green building's energy consumption is evaluated monthly over one year and compared with the energy consumed under non-green conditions using the Hourly Analysis Program (HAP) model. The comparison shows that the total energy consumed per year in the green building is about 1,103,221 kWh, while the non-green consumption is about 1,692,057 kWh. In other words, the total annual energy cost is reduced from $136,581 to $89,051. Thus, the energy saving, and consequently the cost saving, of this green construction is about 35%. In addition, 13 points are awarded by applying one of the most popular worldwide green-energy certification programs (Leadership in Energy and Environmental Design, LEED) as a rating system for the green construction. It is concluded that this green building ensures sustainability, saves energy and offers optimum energy performance at minimum cost.Keywords: energy consumption, energy saving, green building, leadership in energy and environmental design, sustainability
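As a quick arithmetic check of the quoted 35% figure, both the energy and the cost reductions follow directly from the abstract's own numbers:

```latex
\[
\frac{1{,}692{,}057 - 1{,}103{,}221}{1{,}692{,}057} \approx 0.348,
\qquad
\frac{\$136{,}581 - \$89{,}051}{\$136{,}581} \approx 0.348
\]
```

Both ratios round to the reported 35% saving, so the energy and cost figures are mutually consistent.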
Procedia PDF Downloads 3031131 Degradation and Detoxification of Tetracycline by Sono-Fenton and Ozonation
Authors: Chikang Wang, Jhongjheng Jian, Poming Huang
Abstract:
Among a wide variety of pharmaceutical compounds, tetracycline antibiotics are one of the largest groups extensively used in human and veterinary medicine to treat and prevent bacterial infections. Because tetracycline is water-soluble, biologically active, stable, and bio-refractory, its release to the environment threatens aquatic life and increases the risk posed by antibiotic-resistant pathogens. In practice, due to its antibacterial nature, tetracycline cannot be effectively destroyed by traditional biological methods. Hence, in this study, two advanced oxidation processes, ozonation and the sono-Fenton process, were applied individually to degrade tetracycline and to investigate their feasibility. The effects of operational variables on tetracycline degradation, the release of nitrogen, and the change in toxicity are also reported. The initial tetracycline concentration was 50 mg/L. To evaluate degradation by ozonation, ozone gas was produced by an ozone generator (Model LAB2B, Ozonia) and introduced into the reactor at different flows (25 - 500 mL/min), pH levels (pH 3 - pH 11), and reaction temperatures (15 - 55°C). In the sono-Fenton system, an ultrasonic transducer (Microson VCX 750, USA) operated at 20 kHz, combined with H₂O₂ (2 mM) and Fe²⁺ (0.2 mM), was run at different pH levels (pH 3 - pH 11), aeration gases and flows (air and oxygen; 0.2 - 1.0 L/min), tetracycline concentrations (10 - 200 mg/L), reaction temperatures (15 - 55°C), and ultrasonic powers (25 - 200 W). Ultrasound alone was ineffective for tetracycline degradation, with efficiencies lower than 10% after 60 min of reaction. The contribution of Fe²⁺ and H₂O₂ to the degradation was significant: the maximum degradation efficiency in the sono-Fenton process was as high as 91.3%, followed by 45.8% mineralization. The effect of the initial pH level was insignificant from pH 3 to pH 6, but degradation decreased significantly above pH 7. Increasing the ultrasonic power slightly increased the degradation efficiency, indicating that hydroxyl radicals dominated the oxidation of tetracycline. The effects of aeration with air or oxygen at different flows and of reaction temperature were insignificant. Ozonation showed better degradation efficiencies, with the optimum condition found at pH 3, 100 mL O₃/min, and 25°C, giving 94% degradation and 60% mineralization. The toxicity of tetracycline decreased significantly owing to its mineralization. In addition, less than 10% of the nitrogen content was released to the solution phase as NH₃-N, and most of the degraded tetracycline could not be fully mineralized to CO₂. The results indicate that both the sono-Fenton process and ozonation can effectively degrade tetracycline and reduce its toxicity under favorable conditions. The costs of the two systems need to be investigated further to assess their feasibility for tetracycline degradation.Keywords: degradation, detoxification, mineralization, ozonation, sono-Fenton process, tetracycline
Procedia PDF Downloads 2701130 Dexamethasone Treatment Deregulates Proteoglycans Expression in Normal Brain Tissue
Authors: A. Y. Tsidulko, T. M. Pankova, E. V. Grigorieva
Abstract:
High-grade gliomas are the most frequent and most aggressive brain tumors, characterized by active invasion of tumor cells into the surrounding brain tissue, where the extracellular matrix (ECM) plays a crucial role. Disruption of the ECM can affect the effectiveness and side effects of anticancer drugs and can also be involved in tumor relapses. The anti-inflammatory agent dexamethasone is a common drug used during high-grade glioma treatment to alleviate cerebral edema. Although dexamethasone is widely used in the clinic, its effects on the ECM of normal brain tissue remain poorly investigated. Proteoglycans (PGs) are a major component of the extracellular matrix in the central nervous system. In our work, we studied the effects of dexamethasone on the ECM proteoglycans (syndecan-1, glypican-1, perlecan, versican, brevican, NG2, decorin, biglycan, lumican) using RT-PCR in an experimental animal model. Proteoglycans in the rat brain were shown to have age-specific expression patterns. In the early postnatal rat brain (8-day-old rat pups), overall PG expression was quite high, with biglycan, decorin, and syndecan-1 the most expressed. The overall transcriptional activity of PGs in the adult rat brain is 1.5-fold lower than in the postnatal brain, and the expression pattern changes as well, with biglycan, decorin, syndecan-1, glypican-1, and brevican becoming almost equally expressed. These PG expression patterns create a specific tissue microenvironment that differs between the developing and the adult brain. A dexamethasone regimen close to the one used in the clinic during high-grade glioma treatment significantly affects proteoglycan expression: overall PG transcriptional activity increased 1.5- to 2-fold after treatment. The most up-regulated PGs were biglycan, decorin, and lumican, and the PG expression pattern in the adult brain after treatment became quite close to that of the developing brain. The microenvironment of developing tissues promotes cell proliferation, while in adult tissues proliferation is usually suppressed; the changes occurring in the adult brain after dexamethasone treatment may therefore lead to re-activation of cell proliferation due to signals from the changed microenvironment. Taken together, the data show that dexamethasone treatment significantly affects the normal brain ECM, creating a microenvironment appropriate for tumor cell proliferation, and thus can reduce the effectiveness of anticancer treatment and promote tumor relapses. This work has been supported by the Russian Science Foundation (RSF Grant 16-15-10243).Keywords: dexamethasone, extracellular matrix, glioma, proteoglycan
Procedia PDF Downloads 2001129 Application of Deep Learning and Ensemble Methods for Biomarker Discovery in Diabetic Nephropathy through Fibrosis and Propionate Metabolism Pathways
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Diabetic nephropathy (DN) is a major complication of diabetes, with fibrosis and propionate metabolism playing critical roles in its progression. Identifying biomarkers linked to these pathways may provide novel insights into DN diagnosis and treatment. This study aims to identify biomarkers associated with fibrosis and propionate metabolism in DN, to analyze the biological pathways and regulatory mechanisms of these biomarkers, and to develop a machine learning model that predicts DN-related biomarkers and validates their functional roles. Publicly available transcriptome datasets related to DN (GSE96804 and GSE104948) were obtained from the GEO database (https://www.ncbi.nlm.nih.gov/gds), and 924 propionate metabolism-related genes (PMRGs) and 656 fibrosis-related genes (FRGs) were identified. The analysis began with the extraction of DN-differentially expressed genes (DN-DEGs) and propionate metabolism-related DEGs (PM-DEGs), followed by their intersection with the fibrosis-related genes to identify key intersected genes. Instead of relying on traditional models, we employed a combination of deep neural networks (DNNs) and ensemble methods such as Gradient Boosting Machines (GBM) and XGBoost to enhance feature selection and biomarker discovery. Recursive feature elimination (RFE) was coupled with these advanced algorithms to refine the selection of the most critical biomarkers. Functional validation was conducted using convolutional neural networks (CNNs) for gene set enrichment and immune-infiltration analysis, revealing seven significant biomarkers: SLC37A4, ACOX2, GPD1, ACE2, SLC9A3, AGT, and PLG. These biomarkers are involved in critical biological processes such as fatty acid metabolism and glomerular development, providing a mechanistic link to DN progression. Furthermore, a TF–miRNA–mRNA regulatory network was constructed using natural language processing models to identify 8 transcription factors and 60 miRNAs that regulate these biomarkers, while a drug–gene interaction network revealed potential therapeutic targets such as UROKINASE–PLG and ATENOLOL–AGT. This integrative approach, leveraging deep learning and ensemble models, not only enhances the accuracy of biomarker discovery but also offers new perspectives on DN diagnosis and treatment, specifically targeting the fibrosis and propionate metabolism pathways.Keywords: diabetic nephropathy, deep neural networks, gradient boosting machines (GBM), XGBoost
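A minimal sketch of the RFE-plus-XGBoost selection step is given below; the expression matrix and labels are random placeholders standing in for the processed GEO data, and the hyperparameters are illustrative assumptions rather than the study's settings.

```python
# Sketch: recursive feature elimination wrapped around a gradient-boosted
# classifier to shortlist candidate biomarker genes from expression data.
import numpy as np
from sklearn.feature_selection import RFE
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))    # placeholder: 120 samples x 500 genes
y = rng.integers(0, 2, size=120)   # placeholder: DN vs. control labels

selector = RFE(XGBClassifier(n_estimators=200, max_depth=3),
               n_features_to_select=7,  # mirroring the seven biomarkers
               step=0.1)                # drop 10% of features per round
selector.fit(X, y)
print("selected gene indices:", np.flatnonzero(selector.support_))
```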
Procedia PDF Downloads 141128 Enhancement of Shelflife of Malta Fruit with Active Packaging
Authors: Rishi Richa, N. C. Shahi, J. P. Pandey, S. S. Kautkar
Abstract:
Citrus fruits rank third in area and production after banana and mango in India, and sweet oranges are the second most cultivated citrus fruit in the country. Andhra Pradesh, Maharashtra, Karnataka, Punjab, Haryana, Rajasthan, and Uttarakhand are the main sweet-orange-growing states. Citrus fruits occupy a leading position in the fruit trade of Uttarakhand, accounting for about 14.38% of the total area under fruits and contributing nearly 17.75% of the total fruit production. Malta is grown in most of the hill districts of Uttarakhand. Malta common enjoys high acceptability due to its attractive colour, distinctive flavour, and taste. Excellent-quality fruits are generally available for only one or two months; however, due to its short shelf life, Malta cannot be stored for a long time under ambient conditions and cannot be transported to distant places. Continuous loss of water adversely affects the quality of Malta during storage and transportation, and the methods of picking, packaging, and cold storage strongly influence moisture loss. Climatic conditions such as ambient temperature, relative humidity, wind (aeration), and microbial attack greatly influence the rate of moisture loss and the quality; therefore, different agro-climatic zones will have different moisture-loss patterns. The rate of moisture loss can be taken as a quality parameter in combination with one or more parameters such as relative humidity and aeration. The moisture content of fruits and vegetables determines their freshness; hence, it is important to maintain the initial moisture status of fruits and vegetables for a prolonged period after harvest. Keeping all these points in view, an effort was made to store Malta under ambient conditions. In this study, response surface methodology and experimental design were applied to optimize the independent variables and enhance the shelf life of Malta stored for four months. A Box-Behnken design with 12 factorial points and 5 replicates at the centre point was used to build a model for predicting and optimizing the storage process parameters. The independent parameters, viz., scavenger (3, 4 and 5 g), polythene thickness (75, 100 and 125 gauge), and fungicide concentration (100, 150 and 200 ppm), were selected and analyzed. A 5 g scavenger, 125-gauge polythene, and a 200 ppm fungicide solution are the optimized values for storage, which may enhance shelf life up to 4 months.Keywords: Malta fruit, scavenger, packaging, shelf life
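For illustration, the 17-run design (12 edge points plus 5 centre replicates) can be generated as sketched below, assuming the pyDOE2 package and a linear mapping of the coded levels onto the stated factor ranges.

```python
# Sketch: Box-Behnken design for the three storage factors; the linear
# coded-to-actual mapping is an assumption for illustration.
import numpy as np
from pyDOE2 import bbdesign

coded = bbdesign(3, center=5)           # 17 runs x 3 factors, levels in {-1, 0, +1}
lows  = np.array([3.0, 75.0, 100.0])    # scavenger (g), thickness (gauge), fungicide (ppm)
highs = np.array([5.0, 125.0, 200.0])
actual = lows + (coded + 1) / 2 * (highs - lows)  # map coded levels to factor units
print(actual)
```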
Procedia PDF Downloads 2811127 Technical and Economic Potential of Partial Electrification of Railway Lines
Authors: Rafael Martins Manzano Silva, Jean-Francois Tremong
Abstract:
Electrification of railway lines increases the speed, power, capacity, and energy efficiency of rolling stock. However, the electrification process is complex and costly. An electrification project is not just about the design of the catenary: it also includes the installation of the structures around electrification, such as substations, electrical isolation, signalling, telecommunication, and civil engineering structures. France has more than 30,000 km of railways, of which only 53% are electrified. The remaining 47% are served by diesel locomotives and represent only 10% of the traffic (tonne-km). For this reason, a new type of electrification, less expensive than the conventional one, is needed to enable the modernization of these railways. One solution could be the use of hybrid trains. This technology opens up new opportunities for less expensive infrastructure development, such as the partial electrification of railway lines. On a partially electrified railway, these hybrid trains could be supplied either by the catenary or by an on-board energy storage system (ESS): the on-board ESS would cover the energy needs of the train along the non-electrified zones, while in electrified zones the catenary would feed the train and recharge the on-board ESS. This paper deals with identifying the technical and economic potential of the partial electrification of railway lines. The study provides different electrification scenarios in which the most expensive places to electrify are covered instead by the on-board ESS. The target is to reduce the cost of new electrification projects, i.e., to reduce the cost of electrification infrastructure without increasing the cost of the rolling stock. The scenarios are constructed as a function of the electrification cost of each structure: this cost varies considerably because the installation of catenary supports in tunnels, bridges, and viaducts is much more expensive than in other zones of the railway. These scenarios are used to describe the power supply system and to choose between the catenary and the on-board energy storage depending on the position of the train on the railway, as sketched in the example below. To identify the influence of each partial electrification scenario on the sizing of the on-board ESS, a model of the railway line and of the rolling stock is developed for a real case: a railway line located in the south of France. The energy consumption and the power demanded at each point of the line for each power supply (catenary or on-board ESS) are provided at the end of the simulation. Finally, the cost of a partial electrification is obtained by adding the civil engineering costs of the zones to be electrified to the cost of the on-board ESS. The study of the technical and economic potential ends with the identification of the most economically interesting electrification scenario.Keywords: electrification, hybrid, railway, storage
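The following toy sketch shows the shape of the scenario-cost computation; the segment data, the cost figures, and the sizing rule (the ESS must cover the longest run of consecutive non-electrified segments) are all illustrative assumptions, not the paper's model.

```python
# Toy scenario cost: electrify the cheap segments, bridge the expensive
# ones (tunnels, viaducts) with on-board storage. Hypothetical figures.
segments = [
    {"name": "plain",   "civil_cost": 1.0e6, "energy_kwh": 400, "electrify": True},
    {"name": "tunnel",  "civil_cost": 4.0e6, "energy_kwh": 150, "electrify": False},
    {"name": "viaduct", "civil_cost": 3.5e6, "energy_kwh": 120, "electrify": False},
    {"name": "plain2",  "civil_cost": 1.2e6, "energy_kwh": 500, "electrify": True},
]
ESS_COST_PER_KWH = 600.0  # assumed storage cost, EUR/kWh

# Size the ESS for the longest run of consecutive non-electrified segments.
ess_kwh, run = 0.0, 0.0
for seg in segments:
    run = 0.0 if seg["electrify"] else run + seg["energy_kwh"]
    ess_kwh = max(ess_kwh, run)

civil = sum(s["civil_cost"] for s in segments if s["electrify"])
total = civil + ess_kwh * ESS_COST_PER_KWH
print(f"ESS size: {ess_kwh:.0f} kWh, scenario cost: {total:,.0f} EUR")
```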
Procedia PDF Downloads 4321126 Spatial Architecture Impact in Mediation Open Circuit Voltage Control of Quantum Solar Cell Recovery Systems
Authors: Moustafa Osman Mohammed
Abstract:
Photocurrent generation through multi-exciton processes underpins ultra-high-efficiency solar cells based on self-assembled quantum dot (QD) nanostructures. Nanocrystal quantum dots offer a great enhancement of solar cell efficiency through the use of quantum confinement to tune absorbance across the solar spectrum and to enable multi-exciton generation. Based on theoretical predictions, QDs have the potential to raise system efficiency beyond 50%. In solar cell devices, an intermediate band is formed by the electron levels of the quantum dot system. The spatial architecture explores how a solar cell can deliver not only a high open-circuit voltage (> 1.7 V) but also large short-circuit currents due to the efficient absorption of sub-bandgap photons. In the proposed QD system, the structure allows the barrier material to absorb wavelengths below 700 nm, while multi-photon processes in the quantum dots absorb wavelengths up to 2 µm. The electronic model is flexible enough to describe the atomic and molecular structure and the material properties, so that the energy bandgaps of the barrier and of the quantum dots can be tuned to their respective optimum values. In terms of energy conversion, the efficiency and cost of the electronic structure outperform those of a pair of multi-junction solar cells, as obtained in rigorous tests to quantify the errors. The milestone toward achieving the claimed high-efficiency device is controlling the bandgap alignment between the barrier material and the quantum dot system within the design limits. Despite this remarkable potential for high photocurrent generation, the achievable open-circuit voltage (Voc) is fundamentally limited by non-radiative recombination processes in QD solar cells. The voltage recovery system is compared theoretically with the experimental Voc variation against the upper limit obtained from a one-diode model for cells with different bandgaps (Eg), as classified in the proposed spatial architecture. The opportunity for Voc improvement is estimated at more than 1 V by using smaller QDs in QD solar cell recovery systems, confined to micro- and nano-scale operating states.Keywords: nanotechnology, photovoltaic solar cell, quantum systems, renewable energy, environmental modeling
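To make the one-diode Voc upper limit concrete, a small sketch follows; every numeric value (ideality factor, current densities) is an illustrative assumption, not data from the study.

```python
# One-diode estimate of the open-circuit voltage:
# Voc = (n * k_B * T / q) * ln(J_sc / J_0 + 1), all inputs assumed.
import numpy as np

k_B = 1.380649e-23     # Boltzmann constant, J/K
q   = 1.602176634e-19  # elementary charge, C
T   = 300.0            # cell temperature, K
n   = 1.0              # ideality factor (assumed)
J_sc = 350.0           # short-circuit current density, A/m^2 (assumed)
J_0  = 1e-16           # saturation current density, A/m^2 (assumed)

V_oc = n * k_B * T / q * np.log(J_sc / J_0 + 1)
print(f"one-diode Voc estimate: {V_oc:.2f} V")  # ~1.1 V for these inputs
```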
Procedia PDF Downloads 1581125 The Effective Use of the Network in the Distributed Storage
Authors: Mamouni Mohammed Dhiya Eddine
Abstract:
This work studies the exploitation of the high-speed networks of clusters for distributed storage. Parallel applications running on clusters require both high-performance communication between nodes and efficient access to the storage system. Many studies on network technologies have led to the design of dedicated cluster architectures with very fast communication between computing nodes. Efficient distributed storage in clusters has essentially been achieved by adding parallelization mechanisms so that the server(s) may sustain an increased workload. In this work, we propose to improve the performance of distributed storage systems in clusters by efficiently using the underlying high-performance network to access distant storage systems. The main question we address is: do the high-speed networks of clusters fit the requirements of transparent, efficient, and high-performance access to remote storage? We show that storage requirements are very different from those of parallel computation. High-speed cluster networks were designed to optimize communication between the nodes of a parallel application; we study their use in a very different context, storage in clusters, where client-server models are generally used to access remote storage (for instance NFS, PVFS, or LUSTRE). Our experimental study, based on the use of the GM programming interface of the MYRINET high-speed network for distributed storage, raised several interesting problems. First, the specific memory utilization in the layers of the storage access system does not easily fit the traditional memory model of high-speed networks. Second, the client-server models used for distributed storage have specific requirements for message control and event processing that are not handled by existing interfaces. We propose different solutions to the communication control problems at the file-system level and show that a modification of the network programming interface is required; data transfer issues also require an adaptation of the operating system. We detail several proposals for network programming interfaces that make them easier to use in the context of distributed storage. The integration of flexible data-transfer handling into the new programming interface MYRINET/MX is finally presented. Performance evaluations show that its usage in the context of storage, as well as of other types of applications, is easy and efficient.Keywords: distributed storage, remote file access, cluster, high-speed network, MYRINET, zero-copy, memory registration, communication control, event notification, application programming interface
Procedia PDF Downloads 2221124 Analysis of Waterjet Propulsion System for an Amphibious Vehicle
Authors: Nafsi K. Ashraf, C. V. Vipin, V. Anantha Subramanian
Abstract:
This paper reports the design of a waterjet propulsion system for an amphibious vehicle, based on the circulation distribution over the camber line for the sections of the impeller and stator. In contrast with the conventional waterjet design, the inlet duct is straight, so that water enters parallel to and in line with the nozzle exit. The extended nozzle after the stator bowl makes the flow more axial, further improving thrust delivery. A waterjet works on the principle of volume flow rate through the system and, unlike a propeller, it is an internal-flow system. The major difference between the propeller and the waterjet occurs in the flow passing the actuator. Though a ducted propeller could constitute the equivalent of waterjet propulsion, in a realistic situation the nozzle area of the waterjet would be proportionately larger relative to the inlet area and the propeller disc area. Moreover, the flow rate through the impeller disk is controlled by the nozzle area. For these reasons, the waterjet design is based on pump systems rather than on propellers, and it is therefore important to bring out the characteristics of the flow from this point of view. The analysis is carried out using computational fluid dynamics. The design of the waterjet propulsion adapts the axial-flow pump design, and the performance analysis was done with a three-dimensional computational fluid dynamics (CFD) code. Given the varying environmental conditions, the need for high discharge and low head, and the space confinement of the given amphibious vehicle, an axial pump design is suitable. The major problem with the inlet velocity distribution is the large variation of velocity in the circumferential direction, which gives rise to heavy blade loading that varies with time. The cavitation criteria have also been taken into account as per hydrodynamic pump design practice. Generally, a waterjet propulsion system can be divided into the inlet, the pump, the nozzle, and the steering device; the pump further comprises an impeller and a stator. Analytical and numerical approaches, such as a RANSE solver, have been used to understand the performance of the designed waterjet propulsion system. Unlike in the case of propellers, the analysis was based on the head-flow curve together with the efficiency and power curves. The modeling of the impeller is performed using the rigid-body motion approach, and the realizable k-ϵ model is used for turbulence modeling. Appropriate boundary conditions are applied to the domain, and domain-size and grid-dependence studies are carried out.Keywords: amphibious vehicle, CFD, impeller design, waterjet propulsion
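To illustrate the volume-flow-rate principle stated above, a small momentum-flux estimate follows; all numbers are illustrative assumptions rather than the vehicle's actual design values.

```python
# Idealized momentum-flux estimate of waterjet thrust: T = rho * Q * (Vj - Vi),
# with the jet velocity obtained from continuity, Vj = Q / A_n.
rho = 1025.0   # seawater density, kg/m^3
Q   = 0.45     # volume flow rate through the pump, m^3/s (assumed)
A_n = 0.03     # nozzle exit area, m^2 (assumed)
V_i = 5.0      # inlet (vehicle) speed, m/s (assumed)

V_j = Q / A_n                    # jet velocity at the nozzle exit
thrust = rho * Q * (V_j - V_i)   # rate of change of momentum flux
print(f"jet velocity: {V_j:.1f} m/s, thrust: {thrust / 1000:.1f} kN")
```

This is the standard idealized relation; losses in the inlet duct, pump, and nozzle would reduce the delivered thrust in practice.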
Procedia PDF Downloads 231