Search results for: ferris wheel model
9319 Design & Development of a Static-Thrust Test-Bench for Aviation/UAV Based Piston Engines
Authors: Syed Muhammad Basit Ali, Usama Saleem, Irtiza Ali
Abstract:
Internal combustion engines have been pioneers in the aviation industry; piston engines have powered aircraft propulsion from propeller-driven bi-planes to turbo-prop, commercial, and cargo airliners. To provide an adequate amount of thrust, a piston engine rotates the propeller at a specific rpm, allowing enough mass airflow. Thrust is the only forward-acting force of an aircraft and is what allows heavier-than-air bodies to fly; its correct measurement depends on the mathematical model and the variables included in it. Test benches have been a benchmark in the aerospace industry for analysing results before a flight and have paramount significance in reliability and safety engineering. Calculation of thrust from a piston engine also depends on environmental changes, the diameter of the propeller, and the density of air. The project is centered on piston engines used in the aviation industry for light aircraft and UAVs. A static thrust test bench involves various units, each performing a designed purpose to monitor and display. Static thrust tests are performed on the ground, so safety concerns hold paramount importance. The execution of this study involves research, design, manufacturing, and results based on reverse engineering, initiating from virtual design, analytical analysis, and simulations. The final results were evaluated by various methods, such as correlation between a conventional mass-spring scale and a digital load cell. On average, we recorded 17.5 kg of thrust (25+ engine run-ups, around 40 hours of engine run), only a 10% deviation from the analytically calculated thrust, providing 90% accuracy.
Keywords: aviation, aeronautics, static thrust, test bench, aircraft maintenance
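As a rough illustration of the kind of analytical thrust estimate the abstract refers to, ideal momentum (actuator-disk) theory relates static thrust to shaft power, air density, and propeller disk area. The sketch below is illustrative only and is not the authors' model; the power, diameter, and density values are assumptions, and a real propeller delivers well below this ideal upper bound, which is why bench measurement against the analytical model is needed.

```python
# Ideal static thrust upper bound from actuator-disk (momentum) theory: T = (2*rho*A*P^2)**(1/3).
# All numbers below are assumptions for illustration, not figures from the study.
import math

power_w = 7500.0          # assumed shaft power delivered to the propeller [W]
diameter_m = 0.80         # assumed propeller diameter [m]
rho = 1.225               # air density at sea level [kg/m^3]

disk_area = math.pi * diameter_m**2 / 4.0
thrust_n = (2.0 * rho * disk_area * power_w**2) ** (1.0 / 3.0)   # ideal (lossless) static thrust [N]
print(f"ideal static thrust ~ {thrust_n:.0f} N ~ {thrust_n / 9.81:.1f} kgf")
```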
Procedia PDF Downloads 414
9318 Smallholder Farmers’ Adaptation Strategies and Socioeconomic Determinants of Climate Variability in Boset District, Oromia, Ethiopia
Authors: Hurgesa Hundera, Samuel Shibeshibikeko, Tarike Daba, Tesfaye Ganamo
Abstract:
The study aimed at examining the ongoing adaptation strategies used by smallholder farmers in response to climate variability in Boset district. It also assessed the socioeconomic factors that influence the choice of adaptation strategies of smallholder farmers to climate variability risk. To attain the objectives of the study, both primary and secondary sources of data were employed. The primary data were obtained through a household questionnaire, key informant interviews, focus group discussions, and observations, while secondary data were acquired through desk review. Questionnaires were distributed to and filled in by 328 respondents, who were identified through a systematic random sampling technique. Descriptive statistics and a binary logistic regression model were applied in this study as the main analytical methods. The findings of the study reveal that the sample households have utilized multiple adaptation strategies in response to climate variability, such as cropping early-maturing crops, planting drought-resistant crops, growing mixed crops on the same farmland, and others. The results of the binary logistic model revealed that education, sex, age, family size, off-farm income, farm experience, access to climate information, access to farm inputs, and farm size were significant and key factors determining farmers’ choice of adaptation strategies to climate variability in the study area. To enable effective adaptation measures, the Ministry of Agriculture and Natural Resources, with its regional bureaus and offices and concerned non-governmental organizations, should consider climate variability in their planning and budgeting at all levels of decision-making.
Keywords: adaptation strategies, Boset district, climate variability, smallholder farmers
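To make the estimation step concrete, a binary logistic regression of the kind described can be fitted as sketched below. This is not the authors' code; the data file, column names, and the binary adoption indicator are hypothetical placeholders for the determinants listed in the abstract.

```python
# Minimal sketch: probability that a household adopts a given adaptation strategy (1/0)
# as a function of the socioeconomic determinants named in the abstract.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("boset_households.csv")   # hypothetical survey export (328 rows)

model = smf.logit(
    "adopts_strategy ~ education + sex + age + family_size + off_farm_income"
    " + farm_experience + climate_info_access + farm_input_access + farm_size",
    data=df,
).fit()
print(model.summary())   # coefficients and p-values; odds ratios follow from exp(coef)
```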
Procedia PDF Downloads 87
9317 Commercialization of Smallholder Rice Producers and Its Determinants in Ethiopia
Authors: Abebaw Assaye, Seiichi Sakurai, Marutama Atsush, Dawit Alemu
Abstract:
Rice is considered a strategic agricultural commodity targeting national food security and import substitution in Ethiopia, and a number of initiatives have been put in place to ensure the growth and development of the rice sector in the country. This study assessed factors that influence smallholder farmers' level of rice commercialization in Ethiopia. The required data were generated from 594 randomly sampled rice producers, selected using multi-stage sampling techniques from four major rice-producing regional states. Both descriptive and econometric methods were used to analyze the data. We adopted the ordered probit model to analyze factors determining output commercialization in the rice market. The ordered probit model results showed that the sex of the household head, educational status of the household head, credit use, proportion of irrigated land cultivated, membership in social groups, and land dedicated to rice production significantly and positively influence the probability of being commercial-oriented. Conversely, the age of the household head, total cultivated land, and distance to the main market were found to have a negative influence. These findings suggest that promoting productivity-increasing technologies, development of irrigation facilities, strengthening of social institutions, and facilitating access to credit are crucial for enhancing the commercialization of rice in the study area. Since agricultural land is limited, intensified farming through promoting improved rice technologies and mechanized farming could be an option to enhance the marketable surplus and increase the level of rice market participation.
Keywords: rice, commercialization, Tobit, ordered probit, Ethiopia
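An ordered probit with commercialization level as an ordered outcome can be estimated as in the sketch below. The data file, column names, and category coding are assumptions; only the general technique follows the abstract.

```python
# Sketch of an ordered probit for a low/medium/high rice commercialization level,
# using the determinants mentioned in the abstract; variable names are hypothetical.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("rice_producers.csv")      # hypothetical 594-household survey
y = df["commercialization_level"].astype(
    pd.CategoricalDtype(categories=["low", "medium", "high"], ordered=True))
X = df[["head_sex", "head_education", "credit_use", "irrigated_share",
        "social_group_member", "rice_land", "head_age", "total_land", "market_distance"]]

res = OrderedModel(y, X, distr="probit").fit(method="bfgs")
print(res.summary())
```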
Procedia PDF Downloads 83
9316 Model Evaluation of Thermal Effects Created by Cell Membrane Electroporation
Authors: Jiahui Song
Abstract:
The use of very high electric fields (~100 kV/cm or higher) with pulse durations in the nanosecond range has been a recent development. The electric pulses have been used as tools to generate electroporation, which has many biomedical applications. Most studies of electroporation have ignored possible thermal effects because of the small duration of the applied voltage pulses. However, membrane temperature gradients ranging from 0.2×10⁹ to 10⁹ K/m have been predicted. This research focuses on thermal gradients as drivers of electroporative enhancement, even though the actual temperature values might not change appreciably from their equilibrium levels. The dynamics of pore formation under an externally applied electric field is studied on the basis of molecular dynamics (MD) simulations using the GROMACS package. Different temperatures are assigned to various regions to simulate the appropriate temperature gradients. GROMACS provides the force fields for the lipid membrane, which is taken to comprise dipalmitoyl-phosphatidyl-choline (DPPC) molecules. The water model mimics the aqueous environment surrounding the membrane. Velocities of water and membrane molecules are generated randomly at each simulation run according to a Maxwellian distribution. For statistical significance, a total of eight MD simulations are carried out with different starting molecular velocities for each simulation. MD simulation shows no pore is formed in a 10-ns snapshot for a DPPC membrane set at a uniform temperature of 295 K after a 0.4 V/nm electric field is applied. A nano-sized pore is clearly seen in a 10-ns snapshot of the same geometry but with the top and bottom membrane surfaces kept at temperatures of 300 and 295 K, respectively. For the same applied electric field, the formation of nanopores is clearly demonstrated, but only in the presence of a temperature gradient. The MD simulation results show enhanced electroporative effects arising from thermal gradients. The study suggests the temperature gradient is a secondary driver, with the electric field being the primary cause of electroporation.
Keywords: nanosecond, electroporation, thermal effects, molecular dynamics
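The Maxwellian velocity initialization mentioned in the abstract amounts to drawing each Cartesian velocity component from a zero-mean Gaussian whose width is set by the target temperature. The sketch below illustrates that idea in isolation; it is not part of the GROMACS workflow, and the particle count and temperature are assumed.

```python
# Sampling initial velocities from a Maxwell-Boltzmann distribution at temperature T:
# each component v_x, v_y, v_z ~ N(0, sqrt(kB*T/m)) for a particle of mass m.
import numpy as np

kB = 1.380649e-23            # Boltzmann constant [J/K]
T = 295.0                    # assumed target temperature [K]
m_water = 2.99e-26           # mass of one water molecule [kg]
n_particles = 10_000         # assumed number of molecules

rng = np.random.default_rng(seed=1)
sigma = np.sqrt(kB * T / m_water)
velocities = rng.normal(0.0, sigma, size=(n_particles, 3))   # [m/s]

# sanity check: mean kinetic energy per particle should be close to (3/2) kB T
mean_ke = 0.5 * m_water * (velocities**2).sum(axis=1).mean()
print(mean_ke / (1.5 * kB * T))      # ~ 1.0
```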
Procedia PDF Downloads 82
9315 Water Re-Use Optimization in a Sugar Platform Biorefinery Using Municipal Solid Waste
Authors: Leo Paul Vaurs, Sonia Heaven, Charles Banks
Abstract:
Municipal solid waste (MSW) is a virtually unlimited source of lignocellulosic material in the form of a waste paper/cardboard mixture, which can be converted into fermentable sugars via cellulolytic enzyme hydrolysis in a biorefinery. The extraction of the lignocellulosic fraction and its preparation, however, are energy- and water-demanding processes. The waste water generated is a rich organic liquor with a high chemical oxygen demand that can be partially cleaned, while generating biogas, in an upflow anaerobic sludge blanket bioreactor and be further re-used in the process. In this work, an experiment was designed to determine the critical contaminant concentrations in water affecting either anaerobic digestion or enzymatic hydrolysis by simulating multiple water re-circulations. It was found that re-using the same water more than 16.5 times could decrease the hydrolysis yield by up to 65% and led to complete disaggregation of the granules. Due to the complexity of the water stream, the contaminant(s) responsible for the performance decrease could not be identified, but the decrease was suspected to be caused by sodium, potassium, and lipid accumulation for the anaerobic digestion (AD) process and by heavy metal build-up for enzymatic hydrolysis. The experimental data were incorporated into a Water Pinch technology-based model that was used to optimize water re-utilization in the modelled system, reducing fresh water requirements and wastewater generation while ensuring all processes performed at an optimal level. Multiple scenarios were modelled in which sub-process requirements were evaluated in terms of importance, operational costs, and impact on the CAPEX. The best compromise between water usage, AD, and enzymatic hydrolysis yield was determined for each assumed level of contaminant degradation by the anaerobic granules. Results from the model will be used to build the first MSW-based biorefinery in the USA.
Keywords: anaerobic digestion, enzymatic hydrolysis, municipal solid waste, water optimization
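The build-up of a recalcitrant contaminant over repeated water re-use cycles can be pictured with a simple mass balance, as sketched below. This is only an illustration of the recirculation logic, not the Water Pinch model from the study; the input load, removal efficiency, and failure threshold are assumed values and are not calibrated to the 16.5 re-uses reported above.

```python
# Toy mass balance: concentration after each re-circulation when every cycle adds a fixed
# contaminant load and the treatment step removes only a fraction of it. Numbers are assumptions.
load_per_cycle = 50.0      # mg/L of contaminant added per processing cycle
removal = 0.05             # fraction removed by the treatment step each cycle
threshold = 500.0          # assumed concentration at which hydrolysis/AD performance degrades

conc = 0.0
for cycle in range(1, 31):
    conc = (conc + load_per_cycle) * (1.0 - removal)
    if conc > threshold:
        print(f"threshold exceeded after {cycle} re-uses (c = {conc:.0f} mg/L)")
        break
else:
    print(f"steady state ~ {conc:.0f} mg/L after 30 cycles")
```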
Procedia PDF Downloads 320
9314 Sensitivity Analysis and Solitary Wave Solutions to the (2+1)-Dimensional Boussinesq Equation in Dispersive Media
Authors: Naila Nasreen, Dianchen Lu
Abstract:
This paper explores the dynamical behavior of the (2+1)-dimensional Boussinesq equation, a nonlinear water wave equation used to model wave packets in dispersive media with weak nonlinearity. This equation depicts how long waves made in shallow water propagate under the influence of gravity. The (2+1)-dimensional Boussinesq equation combines the two-way propagation of the classical Boussinesq equation with the dependence on a second spatial variable, as occurs in the two-dimensional Kadomtsev-Petviashvili equation. It provides a description of the head-on collision of oblique waves and possesses some interesting properties. The governing model is treated with the assistance of the Riccati equation mapping method, a relatively simple integration tool. The solutions have been extracted in different forms: solitary wave solutions as well as hyperbolic and periodic solutions. Moreover, a sensitivity analysis is demonstrated for the wave profiles of the designed dynamical structural system, where the soliton wave velocity and wave number parameters regulate the water wave singularity. In addition to being helpful for elucidating nonlinear partial differential equations, the method in use reproduces previously extracted solutions and extracts fresh exact solutions. Assuming the right values for the parameters, various graphs of different shapes are sketched to provide information about the visual form of the obtained results. This paper’s findings support the efficacy of the approach taken in enhancing nonlinear dynamical behavior. We believe this research will be of interest to a wide variety of engineers who work with engineering models. The findings show the effectiveness, simplicity, and generalizability of the chosen computational approach, even when applied to complicated systems in a variety of fields, especially in ocean engineering.
Keywords: (2+1)-dimensional Boussinesq equation, solitary wave solutions, Riccati equation mapping approach, nonlinear phenomena
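For reference, one commonly studied form of the governing model is written below; sign and coefficient conventions vary across the literature, so this should be read as a representative form rather than the exact equation used by the authors.

```latex
% A representative (2+1)-dimensional Boussinesq equation for u(x, y, t);
% \alpha and \beta set the nonlinearity and dispersion, and signs differ between papers.
\[
  u_{tt} - u_{xx} - u_{yy} - \alpha \,(u^{2})_{xx} - \beta \, u_{xxxx} = 0 .
\]
```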
Procedia PDF Downloads 101
9313 The Role of University in High-Level Human Capital Cultivation in China’s West Greater Bay Area
Authors: Rochelle Yun Ge
Abstract:
Universities have played an active role in the country’s development in China. There has been increasing research interest in the development of higher education cooperation, talent cultivation and attraction, and innovation in regional development. The Triple Helix model, which indicates that regional innovation and development can be engendered by collaboration among university, industry, and government, is often adopted as the research framework. Research using the Triple Helix model emphasizes the active and often leading role of the university in the knowledge-based economy. Within this framework, universities are conceptualized as key institutions of knowledge production, transmission, and transference, potentially making critical contributions to regional development. Recent research is almost uniformly consistent in identifying high-level research labour (i.e., doctoral and post-doctoral researchers and academics) as important actors in the innovation ecosystem, given the cross-geographical human capital and resources they bring. In 2019, the development of the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) was officially launched as an important strategy by the Chinese government to boost the regional development of the Pearl River Delta and to support the realization of the “One Belt One Road” strategy. Human capital formation is at the center of this plan. One of the strategic goals of the GBA development is to evolve into an international educational hub and innovation center with high-level talents. A number of policies have been issued to attract and cultivate human resources in different GBA cities, in particular high-level R&D (research and development) talents such as doctoral and post-doctoral researchers. To better understand the development of the high-level talent hub in the GBA, more empirical consideration should be given to exploring approaches to talent cultivation and attraction in the GBA. What remains to be explored are the ways to better attract, train, support, and retain these talents in a cross-systems context. This paper aims to investigate the role of the university in human capital development under China’s national agenda of GBA integration through the lens of universities and actors. Two flagship comprehensive universities are selected as the cases, and 30 interviews with university officials, research leaders, post-doctoral researchers, and doctoral candidates are used for analysis. In particular, we look at the following: In what ways have universities aligned their strategies and practices to the Chinese government’s GBA development strategy? What strategies and practices have been developed by universities for the cultivation and attraction of high-level research labour? And what impacts have the universities made on regional development? The main arguments of this research highlight the specific ways in which universities in smaller sub-regions can collaborate in high-level human capital formation and the role policy can play in facilitating such collaborations.
Keywords: university, human capital, regional development, triple-helix model
Procedia PDF Downloads 113
9312 Modeling Sorption and Permeation in the Separation of Benzene/Cyclohexane Mixtures through Styrene-Butadiene Rubber Crosslinked Membranes
Authors: Hassiba Benguergoura, Kamal Chanane, Sâad Moulay
Abstract:
Pervaporation (PV), a membrane-based separation technology, has gained much attention because of its energy-saving capability and low cost, especially for the separation of azeotropic or close-boiling liquid mixtures. There are two crucial issues for the industrial application of the pervaporation process. The first is developing membrane materials and tailoring membrane structure to obtain high pervaporation performance. The second is modeling pervaporation transport for a better understanding of the above-mentioned structure–pervaporation relationship. Many models have been proposed to predict the mass transfer process; among them, the solution-diffusion model is the most widely used in describing pervaporation transport, including the preferential sorption, diffusion, and evaporation steps. For modeling pervaporation transport, the permeation flux, which depends on the solubility and diffusivity of the components in the membrane, should be obtained first. Traditionally, the solubility is calculated according to the Flory–Huggins theory. Separation of the benzene (Bz)/cyclohexane (Cx) mixture is industrially significant. Numerous papers have focused on the Bz/Cx system to assess the PV properties of membrane materials. Membranes with both high permeability and selectivity are desirable for practical application. Several new polymers have been prepared to obtain both high permeability and selectivity. Styrene-butadiene rubber (SBR) dense membranes, cross-linked by chloromethylation, were used in the separation of benzene/cyclohexane mixtures. The impact of the chloromethylation reaction, as a new method of cross-linking SBR, on the pervaporation performance has been reported. In contrast to vulcanization with sulfur, the cross-linking takes place on the styrene units of the polymeric chains via a methylene bridge. The partial pervaporative (PV) fluxes of benzene/cyclohexane mixtures in styrene-butadiene rubber (SBR) were predicted using Fick's first law. The predicted partial fluxes and PV separation factor, obtained by integrating Fick's law over the benzene concentration, agreed well with the experimental data. The effects of feed concentration and operating temperature on the permeation flux predicted by this model are investigated. The predicted permeation fluxes are in good agreement with the experimental data at lower benzene concentrations in the feed, but at higher benzene concentrations, the model overestimates the permeation flux. The predicted and experimental permeation fluxes all increase with increasing operating temperature. Solvent sorption levels for benzene/cyclohexane mixtures in an SBR membrane were determined experimentally. The results showed that the sorption levels were strongly affected by the feed composition. The Flory–Huggins equation generates a higher R-squared coefficient for the sorption selectivity.
Keywords: benzene, cyclohexane, pervaporation, permeation, sorption modeling, SBR
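The flux prediction described above rests on Fick's first law integrated across the membrane thickness; the relations below restate that idea in generic form. The symbols (flux J_i, diffusivity D_i, concentration C_i, membrane thickness L) are generic, and the concentration-dependent diffusivity is only indicative of the usual treatment, not the authors' exact expressions.

```latex
% Fick's first law for component i and its integrated form over the membrane thickness L,
% allowing the diffusivity to depend on the local concentration.
\[
  J_i = -D_i(C_i)\,\frac{dC_i}{dz},
  \qquad
  J_i = \frac{1}{L}\int_{C_{i,\mathrm{perm}}}^{C_{i,\mathrm{feed}}} D_i(C)\,dC .
\]
```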
Procedia PDF Downloads 327
9311 Comparative Study of Pixel and Object-Based Image Classification Techniques for Extraction of Land Use/Land Cover Information
Authors: Mahesh Kumar Jat, Manisha Choudhary
Abstract:
Rapid population and economic growth have resulted in large-scale land use/land cover (LULC) changes. Changes in the biophysical properties of the Earth's surface and their impact on climate are of primary concern nowadays. Different approaches, ranging from location-based relationships to modelling Earth surface–atmosphere interaction through techniques like surface energy balance (SEB), have been used in the recent past to examine the relationship between changes in Earth surface land cover and climatic characteristics like temperature and precipitation. A remote sensing-based model, the Surface Energy Balance Algorithm for Land (SEBAL), has been used to estimate the surface heat fluxes over the Mahi Bajaj Sagar catchment (India) from 2001 to 2020. Landsat ETM and OLI satellite data are used to model the SEB of the area. Changes in observed precipitation and temperature, obtained from the India Meteorological Department (IMD), have been correlated with changes in surface heat fluxes to understand the relative contributions of LULC change to changes in these climatic variables. Results indicate a noticeable impact of LULC changes on climatic variables, which is aligned with the respective changes in SEB components. Results suggest that precipitation increases at a rate of 20 mm/year. The maximum and minimum temperatures decrease and increase at 0.007 °C/year and 0.02 °C/year, respectively. The average temperature increases at 0.009 °C/year. Changes in latent heat flux and sensible heat flux correlate positively with precipitation and temperature, respectively. Variation in surface heat fluxes influences the climate parameters and provides an adequate explanation for the observed climatic changes. SEB modelling is therefore helpful for understanding LULC change and its impact on climate.
Keywords: remote sensing, GIS, object based, classification
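SEBAL-type models rest on the surface energy balance, in which the latent heat flux is obtained as the residual of the other terms; the relation below summarizes that closure. The symbols are the conventional ones and are not tied to the specific Landsat processing chain used in the study.

```latex
% Surface energy balance closure used by SEBAL-type models: net radiation R_n is partitioned
% into soil heat flux G, sensible heat flux H, and latent heat flux (taken as the residual).
\[
  R_n = G + H + \lambda ET
  \quad\Longrightarrow\quad
  \lambda ET = R_n - G - H .
\]
```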
Procedia PDF Downloads 132
9310 The Effect of Recycling on Price Volatility of Critical Metals in the EU (2010-2019): An Application of Multivariate GARCH Family Models
Authors: Marc Evenst Jn Jacques, Sophie Bernard
Abstract:
Electrical and electronic applications, as well as rechargeable batteries, are common in any economy. They also contain a number of important and valuable metals. It is critical to investigate the impact of these new material or volume sources on metal market dynamics. This paper investigates the impact of responsible recycling within the European region on metal price volatility. As far as we know, no empirical studies have been conducted to assess the role of metal recycling in metal market price volatility. The goal of this paper is to test the claim that metal recycling helps to cushion price volatility. A set of circular economy indicators/variables, namely 1) annual total trade values of recycled metals, 2) annual volume of scrap traded, 3) circular material use rate, and 4) information about recycling, are used to estimate the volatility of monthly spot prices of regular metals. A combination of the GARCH-MIDAS model, for mixed-frequency data sampling, and a simple GARCH(1,1) model, for same-frequency variables, was adopted to examine the potential links between each variable and price volatility. We found that from 2010 to 2019, except for nickel, scrap consumption (millions of tons), scrap trade values, and the recycled material use rate had no significant impact on the price volatility of standard metals (aluminum, lead) and precious metals (gold and platinum). Worldwide interest in recycling has no impact on returns or volatility. Specific interest in metal recycling did have a link to the mean return equation for aluminum and gold, and to the volatility equation for lead and nickel.
Keywords: recycling, circular economy, price volatility, GARCH, mixed data sampling
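For readers less familiar with the volatility framework, the single-frequency GARCH(1,1) specification referenced above is written below in its standard form; the GARCH-MIDAS variant additionally scales the short-run variance by a slowly moving long-run component driven by the low-frequency (e.g., recycling) variables. The notation is generic, not the authors' exact specification.

```latex
% Standard GARCH(1,1): the conditional variance reacts to the last squared shock (\alpha)
% and to its own lag (\beta), with \omega > 0 and \alpha, \beta \ge 0.
\[
  r_t = \mu + \varepsilon_t, \qquad
  \varepsilon_t = \sigma_t z_t, \; z_t \sim \mathcal{N}(0,1), \qquad
  \sigma_t^{2} = \omega + \alpha\,\varepsilon_{t-1}^{2} + \beta\,\sigma_{t-1}^{2}.
\]
```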
Procedia PDF Downloads 57
9309 Unveiling the Dynamics of Preservice Teachers’ Engagement with Mathematical Modeling through Model Eliciting Activities: A Comprehensive Exploration of Acceptance and Resistance Towards Modeling and Its Pedagogy
Authors: Ozgul Kartal, Wade Tillett, Lyn D. English
Abstract:
Despite its global significance in curricula, mathematical modeling encounters persistent disparities in recognition and emphasis within regular mathematics classrooms and teacher education across countries with diverse educational and cultural traditions, including variations in the perceived role of mathematical modeling. Over the past two decades, increased attention has been given to the integration of mathematical modeling into national curriculum standards in the U.S. and other countries. Therefore, the mathematics education research community has dedicated significant efforts to investigate various aspects associated with the teaching and learning of mathematical modeling, primarily focusing on exploring the applicability of modeling in schools and assessing students', teachers', and preservice teachers' (PTs) competencies and engagement in modeling cycles and processes. However, limited attention has been directed toward examining potential resistance hindering teachers and PTs from effectively implementing mathematical modeling. This study focuses on how PTs, without prior modeling experience, resist and/or embrace mathematical modeling and its pedagogy as they learn about models and modeling perspectives, navigate the modeling process, design and implement their modeling activities and lesson plans, and experience the pedagogy enabling modeling. Model eliciting activities (MEAs) were employed due to their high potential to support the development of mathematical modeling pedagogy. The mathematical modeling module was integrated into a mathematics methods course to explore how PTs embraced or resisted mathematical modeling and its pedagogy. The module design included reading, reflecting, engaging in modeling, assessing models, creating a modeling task (MEA), and designing a modeling lesson employing an MEA. Twelve senior undergraduate students participated, and data collection involved video recordings, written prompts, lesson plans, and reflections. An open coding analysis revealed acceptance and resistance toward teaching mathematical modeling. The study identified four overarching themes, including both acceptance and resistance: pedagogy, affordance of modeling (tasks), modeling actions, and adjusting modeling. In the category of pedagogy, PTs displayed acceptance based on potential pedagogical benefits and resistance due to various concerns. The affordance of modeling (tasks) category emerged from instances when PTs showed acceptance or resistance while discussing the nature and quality of modeling tasks, often debating whether modeling is considered mathematics. PTs demonstrated both acceptance and resistance in their modeling actions, engaging in modeling cycles as students and designing/implementing MEAs as teachers. The adjusting modeling category captured instances where PTs accepted or resisted maintaining the qualities and nature of the modeling experience or converted modeling into a typical structured mathematics experience for students. While PTs displayed a mix of acceptance and resistance in their modeling actions, limitations were observed in embracing complexity and adhering to model principles. The study provides valuable insights into the challenges and opportunities of integrating mathematical modeling into teacher education, emphasizing the importance of addressing pedagogical concerns and providing support for effective implementation. 
In conclusion, this research offers a comprehensive understanding of PTs' engagement with modeling, advocating for a more focused discussion on the distinct nature and significance of mathematical modeling in the broader curriculum to establish a foundation for effective teacher education programs.
Keywords: mathematical modeling, model eliciting activities, modeling pedagogy, secondary teacher education
Procedia PDF Downloads 65
9308 Localized Detection of ᴅ-Serine by Using an Enzymatic Amperometric Biosensor and Scanning Electrochemical Microscopy
Authors: David Polcari, Samuel C. Perry, Loredano Pollegioni, Matthias Geissler, Janine Mauzeroll
Abstract:
ᴅ-Serine acts as an endogenous co-agonist for N-methyl-ᴅ-aspartate receptors in neuronal synapses. This makes it a key component in the development and function of a healthy brain, especially given its role in several neurodegenerative diseases such as Alzheimer’s disease and dementia. Despite such clear research motivations, the primary site and mechanism of ᴅ-serine release are still unclear. For this reason, we are developing a biosensor for the detection of ᴅ-serine utilizing a microelectrode in combination with a ᴅ-amino acid oxidase enzyme, which produces stoichiometric quantities of hydrogen peroxide in response to ᴅ-serine. For the fabrication of a biosensor with good selectivity, we use a permselective poly(meta-phenylenediamine) film to ensure only the target molecule is reacted, according to the size exclusion principle. In this work, we investigated the effect of the electrodeposition conditions on the biosensor’s response time and selectivity. Careful optimization of the fabrication process enhanced the biosensor response time. This allowed for real-time sensing of ᴅ-serine in bulk solution and also provided a means to map the efflux of ᴅ-serine in real time. This was done using scanning electrochemical microscopy (SECM) with the optimized biosensor to measure the localized release of ᴅ-serine from an agar-filled glass capillary sealed in an epoxy puck, which acted as a model system. The SECM area scan simultaneously provided information regarding the rate of ᴅ-serine flux from the model substrate, as well as the size of the substrate itself. This SECM methodology, which provides high spatial and temporal resolution, could be useful for investigating the primary site and mechanism of ᴅ-serine release in other biological samples.
Keywords: ᴅ-serine, enzymatic biosensor, microelectrode, scanning electrochemical microscopy
Procedia PDF Downloads 228
9307 Solution-Focused Wellness: An Evidence-Based Approach to Wellness Promotion
Authors: James Beauchemin
Abstract:
Research indicates that college students are experiencing mental health challenges of greater severity, and an increased number of students are seeking help. Contributing to the compromised wellness of the college student population is the prevalence of unhealthy lifestyle habits and behaviors such as alcohol consumption, tobacco use, dietary concerns, risky sexual behaviors, and lack of physical activity. Alternative approaches are needed for this population that emphasize prevention and holistic lifestyle change, mitigate mental health and wellness challenges, and alleviate strain on campus resources. This presentation will introduce a Solution-Focused Wellness (SFW) intervention model, examine wellness domains and solution-focused strategies to promote personal well-being, and provide supporting research from multiple studies that illustrate intervention effectiveness with a collegiate population. Given the subjective and personal nature of wellness, a therapeutic approach that provides the opportunity for individuals to conceptualize and operationalize wellness themselves is critical to facilitating lasting wellness-based change. Solution-Focused Brief Therapy (SFBT) is a strength-based modality defined by its emphasis on constructing solutions rather than focusing on problems and by the assumption that clients have the resources and capacity to change. SFBT has demonstrated effectiveness as a brief therapeutic intervention with the college population, in group settings and in relation to health and wellness. By integrating SFBT strategies with personal wellness, a brief intervention was developed to support college students in establishing lifestyle trends consistent with their conceptualizations of wellness. Research supports the effectiveness of the SFW model in improving college student wellness in both face-to-face and web-based formats. Outcomes of controlled and longitudinal studies will be presented, demonstrating significant improvements in perceptions of stress, life satisfaction, happiness, mental health, well-being, and resilience. Overall, there is compelling evidence that utilization of a Solution-Focused Brief Therapy approach with college students can help to improve personal wellness and establish healthy lifestyle trends, providing an effective prevention-focused strategy for college counseling centers and wellness centers to employ. Primary research objectives include: 1) establish an evidence-based approach to facilitating wellness promotion among the college student population, 2) examine the effectiveness of a Solution-Focused Wellness (SFW) intervention model in decreasing stress and improving personal wellness, mental health, life satisfaction, and resiliency, 3) investigate intervention impacts over time (e.g., 6 weeks post-intervention), and 4) demonstrate SFW intervention utility in wellness promotion and associated outcomes when compared with a no-treatment control and alternative intervention approaches.
Keywords: wellness, college students, solution-focused, prevention
Procedia PDF Downloads 72
9306 Modelling Insider Attacks in Public Cloud
Authors: Roman Kulikov, Svetlana Kolesnikova
Abstract:
Over the last decade, cloud computing technologies have rapidly become ubiquitous. Each year more and more organizations, corporations, internet services, and social networks entrust their business-sensitive information to the public cloud. Data storage in the public cloud is protected by security mechanisms such as firewalls, cryptographic algorithms, backups, etc. In this way, however, only outsider attacks can be prevented, whereas virtualization tools can be easily compromised by an insider. The protection of the public cloud's critical elements from an internal intruder remains extremely challenging. A hypervisor, also called a virtual machine manager, is a program that allows multiple operating systems (OS) to share a single hardware processor in cloud computing. One of the hypervisor's functions is to enforce access control policies. Furthermore, it prevents guest OSs from disrupting each other and from accessing each other's memory or disk space. The hypervisor is one of the most critical and vulnerable elements in cloud computing infrastructure. Nevertheless, it has been poorly protected from being compromised by an insider. By exploiting certain vulnerabilities, privilege escalation can be easily achieved in insider attacks on the hypervisor. In this way, an internal intruder who has compromised one process is able to gain control of the entire virtual machine. The consequences of insider attacks in the public cloud might therefore be more catastrophic and significant for virtual tools and sensitive data than those of outsider attacks. So far, almost no preventive security countermeasures have been developed, and little attention has been paid to developing models to assist risk mitigation strategies. In this paper, a formal model of insider attacks on the hypervisor is designed. Our analysis identifies critical hypervisor vulnerabilities that can be easily compromised by an internal intruder. Consequently, possible conditions for successful attack implementation are uncovered. Hence, the development of preventive security countermeasures can be improved on the basis of the proposed model.
Keywords: insider attack, public cloud, cloud computing, hypervisor
Procedia PDF Downloads 361
9305 Programmatic Actions of Social Welfare State in Service to Justice: Law, Society and the Third Sector
Authors: Bruno Valverde Chahaira, Matheus Jeronimo Low Lopes, Marta Beatriz Tanaka Ferdinandi
Abstract:
This paper proposes to dissect the meanings and/or directions of the State in order to present State models and elaborate a conceptual framework about its function in the legal sphere. To do so, it points out the possible contracts established between the State and Society, since the general principles immanent in them can guide the models of society in force. From this orientation arise the contracts, whose purpose is, in effect, to modify the status (the being and/or the opinion) of each of the subjects in presence - State and Society. In this logic, this paper discusses fiduciary contracts and “veredicção” (a Portuguese term for veridiction) contracts from the perspective of discourse semiotics (the Greimasian approach). Therefore, the studies focus on the issue of manifest language in unilateral and bilateral or reciprocal relations between the State and Society. Thus, under the lenses of the model of the communicative situation and discourse, the guidelines of these contractual relations will be analyzed in order to see if there is a pragmatic sanction: positive when the contract is signed between the subjects (reward), or negative when the contract between them is broken (punishment). In this way, a third path emerges which, in this specific case, passes through the subject of the third sector. In other words, the proposal, which is systemic in nature, is to analyze what happens when the contract of the welfare state is not carried out in the constitutional program on fundamental rights: education, health, housing, and others. In the structure of the exchange demanded by society according to its contractual obligations, the third way (Third Sector) then advances in the empty space left by the State. In this line, the paper presents the modalities of action of the third sector in the social sphere. Finally, the normative communication organization of these three subjects - State, Society, and Third Sector - is examined within the pragmatic model of discourse, in an attempt to understand the constant dynamics in Law and in the language of the relations established between them.
Keywords: access to justice, state, social rights, third sector
Procedia PDF Downloads 145
9304 Document-level Sentiment Analysis: An Exploratory Case Study of Low-resource Language Urdu
Authors: Ammarah Irum, Muhammad Ali Tahir
Abstract:
Document-level sentiment analysis in Urdu is a challenging Natural Language Processing (NLP) task due to the difficulty of working with lengthy texts in a language with constrained resources. Deep learning models, which are complex neural network architectures, are well-suited to text-based applications in addition to data formats like audio, image, and video. To investigate the potential of deep learning for Urdu sentiment analysis, we implemented five different deep learning models, including Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network (CNN), Convolutional Neural Network with Bidirectional Long Short-Term Memory (CNN-BiLSTM), and Bidirectional Encoder Representations from Transformers (BERT). In this study, we developed a hybrid deep learning model called BiLSTM-Single Layer Multi Filter Convolutional Neural Network (BiLSTM-SLMFCNN) by fusing the BiLSTM and CNN architectures. The proposed and baseline techniques are applied to the Urdu Customer Support data set and the IMDB Urdu movie review data set using pre-trained Urdu word embeddings that are suitable for sentiment analysis at the document level. The results of these techniques are evaluated, and our proposed model outperforms all other deep learning techniques for Urdu sentiment analysis. BiLSTM-SLMFCNN outperformed the baseline deep learning models and achieved 83%, 79%, 83%, and 94% accuracy on the small, medium, and large-sized IMDB Urdu movie review data sets and the Urdu Customer Support data set, respectively.
Keywords: urdu sentiment analysis, deep learning, natural language processing, opinion mining, low-resource language
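A hybrid of this kind, a BiLSTM whose per-token outputs feed a single convolutional layer with several filter widths, can be sketched as below. This is an illustrative reconstruction, not the authors' implementation; the vocabulary size, sequence length, layer sizes, and filter widths are assumed.

```python
# Sketch of a BiLSTM + single-layer multi-filter CNN document classifier in Keras.
# The Embedding layer is where pre-trained Urdu word embeddings would be plugged in.
from tensorflow.keras import layers, models

vocab_size, embed_dim, max_len = 50_000, 300, 400      # assumed hyperparameters

inputs = layers.Input(shape=(max_len,), dtype="int32")
x = layers.Embedding(vocab_size, embed_dim)(inputs)
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)

# one convolutional layer applied with several filter widths, each followed by max pooling
branches = []
for width in (3, 4, 5):
    branch = layers.Conv1D(64, width, activation="relu")(x)
    branches.append(layers.GlobalMaxPooling1D()(branch))

x = layers.Concatenate()(branches)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)      # binary positive/negative sentiment

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```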
Procedia PDF Downloads 72
9303 Analysis of Differentially Expressed Genes in Spontaneously Occurring Canine Melanoma
Authors: Simona Perga, Chiara Beltramo, Floriana Fruscione, Isabella Martini, Federica Cavallo, Federica Riccardo, Paolo Buracco, Selina Iussich, Elisabetta Razzuoli, Katia Varello, Lorella Maniscalco, Elena Bozzetta, Angelo Ferrari, Paola Modesto
Abstract:
Introduction: Human and canine melanoma share common clinical and histologic characteristics, making dogs a good model for comparative oncology. The identification of specific genes and a better understanding of the genetic landscape, signaling pathways, and tumor–microenvironment interactions involved in cancer onset and progression are essential for the development of therapeutic strategies against this tumor in both species. In the present study, the differential expression of genes in spontaneously occurring canine melanoma and in paired normal tissue was investigated by targeted RNAseq. Material and Methods: Total RNA was extracted from 17 canine malignant melanoma (CMM) samples and from five paired normal tissues stored in RNAlater. In order to capture the greater genetic variability, gene expression analysis was carried out using two panels (Qiagen), Human Immuno-Oncology (HIO) and Mouse Immuno-Oncology (MIO), and the MiSeq platform (Illumina). These kits allow the detection of the expression profile of 990 genes involved in the immune response against tumors in humans and mice. The data were analyzed with the CLCbio Genomics Workbench (Qiagen) software using the Canis lupus familiaris genome as a reference. Data analysis was carried out both by comparing the biological groups (tumoral vs. healthy tissues) and by comparing each neoplastic tissue vs. its paired healthy tissue; a fold change greater than two and a p-value less than 0.05 were set as the thresholds to select genes of interest. Results and Discussion: Using HIO, 63 down-regulated genes were detected; 13 of those were also down-regulated when comparing neoplastic samples vs. paired healthy tissues. Eighteen genes were up-regulated, and 14 of those were also down-regulated when comparing neoplastic samples vs. paired healthy tissues. Using the MIO, 35 down-regulated genes were detected; only four of these were down-regulated when also comparing neoplastic samples vs. paired healthy tissues. Twelve genes were up-regulated in both types of analysis. Considering the two kits, the greatest variation in fold change was in the up-regulated genes. Dogs displayed a greater genetic homology with humans than mice; moreover, the results showed that the two kits are able to detect different genes. Most of these genes have specific cellular functions or belong to certain enzymatic categories; some have already been described as correlated to human melanoma and confirm the validity of the dog as a model for the study of the molecular aspects of human melanoma.
Keywords: animal model, canine melanoma, gene expression, spontaneous tumors, targeted RNAseq
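The gene-selection step described above (fold change greater than two, p-value below 0.05) can be expressed in a few lines, as in the hypothetical sketch below; the file name and column names are assumptions, and the table layout is not the actual export of the analysis software used in the study.

```python
# Selecting differentially expressed genes with |fold change| > 2 and p < 0.05 from a
# results table assumed to contain "gene", "log2_fold_change" and "p_value" columns.
import pandas as pd

results = pd.read_csv("dge_results.csv")          # hypothetical export of the DGE analysis

significant = results[results["p_value"] < 0.05]
up = significant[significant["log2_fold_change"] >= 1.0]       # fold change >= 2
down = significant[significant["log2_fold_change"] <= -1.0]    # fold change <= 0.5

print(f"{len(up)} up-regulated and {len(down)} down-regulated genes selected")
```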
Procedia PDF Downloads 199
9302 An Inquiry into the Usage of Complex Systems Models to Examine the Effects of the Agent Interaction in a Political Economic Environment
Authors: Ujjwall Sai Sunder Uppuluri
Abstract:
Group theory is a powerful tool that researchers can use to provide a structural foundation for their agent-based models, which this paper argues are the future of the social science disciplines. More specifically, researchers can use them to apply evolutionary theory to the study of complex social systems. This paper illustrates one example of how, theoretically, an agent-based model can be formulated from the application of group theory, system dynamics, and evolutionary biology to analyze the strategies pursued by states to mitigate risk and maximize the use of resources in order to achieve the objective of economic growth. This example can be applied to other social phenomena, and this is what makes group theory so useful for the analysis of complex systems, because the theory provides the mathematical, formulaic proof for validating the complex system models that researchers build, as this paper discusses. The aim of this research is also to provide researchers with a framework that can be used to model political entities such as states in a three-dimensional space: the x-axis representing the resources (tangible and intangible) available to them, the y-axis the risks, and the z-axis the objective. There also exist other states with different constraints pursuing different strategies to climb the mountain. This mountain's environment is made up of the risks the state faces and resource endowments. The mountain is also layered, in the sense that it has multiple peaks that must be overcome to reach the tallest peak. A state that sticks to a single strategy, or pursues a strategy that is not conducive to climbing the specific peak it has reached, is not able to continue its advancement. To overcome the obstacle in its path, the state must innovate. Based on the definition of a group, we can categorize each state as being its own group. Each state is a closed system, one which is made up of micro-level agents who have their own vectors and pursue strategies (actions) to achieve sub-objectives. The state also has an identity, the inverse being anarchy and/or inaction. Finally, the agents making up a state interact with each other through competition and collaboration to mitigate risks and achieve sub-objectives that fall within the primary objective. Thus, researchers can categorize the state as an organism that reflects the sum of the output of the interactions pursued by agents at the micro level. When states compete, they employ strategies, and the state with the better strategy (reflected by the strategies pursued by its parts) is able to out-compete its counterpart to acquire some resource, mitigate some risk, or fulfil some objective. This paper will attempt to illustrate how group theory, combined with evolutionary theory and system dynamics, can allow researchers to model the long-run development, evolution, and growth of political entities through a bottom-up approach.
Keywords: complex systems, evolutionary theory, group theory, international political economy
Procedia PDF Downloads 140
9301 Implementation of Fuzzy Version of Block Backward Differentiation Formulas for Solving Fuzzy Differential Equations
Authors: Z. B. Ibrahim, N. Ismail, K. I. Othman
Abstract:
Fuzzy differential equations (FDEs) play an important role in modelling many real-life phenomena. FDEs are used to model the behaviour of problems that are subject to uncertainty and to vague or imprecise information, which constantly arise in mathematical models in various branches of science and engineering. These uncertainties have to be taken into account in order to obtain a more realistic model, and for many of these models it is often difficult, and sometimes impossible, to obtain analytic solutions. Thus, many authors have attempted to extend or modify existing numerical methods developed for solving ordinary differential equations (ODEs) into fuzzy versions suited to solving FDEs. Therefore, in this paper, we propose the development of a fuzzy version of a three-point block method based on Block Backward Differentiation Formulas (FBBDF) for the numerical solution of first-order FDEs. The three-point block FBBDF method is implemented with a uniform step size and produces three new approximations simultaneously at each integration step using the same back values. The Newton iteration of the FBBDF is formulated, and the implementation is based on the predictor and corrector formulas in PECE mode. For greater efficiency of the block method, the coefficients of the FBBDF are stored at the start of the program. The proposed FBBDF is validated through numerical results on some standard problems found in the literature, and comparisons are made with the existing fuzzy versions of the modified Simpson and Euler methods in terms of the accuracy of the approximated solutions. The numerical results show that the FBBDF method performs better in terms of accuracy than the Euler method when solving FDEs.
Keywords: block, backward differentiation formulas, first order, fuzzy differential equations
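For orientation, the backward differentiation family on which the block method is built approximates the new solution value implicitly from previous values; the familiar two-step member is shown below. The block variant in the paper extends this idea so that several new points are produced per step, and the fuzzy version applies the formulas to the lower and upper bounds of the fuzzy solution. The equation below is the standard BDF2, not the specific three-point block coefficients used by the authors.

```latex
% Standard two-step backward differentiation formula (BDF2) with step size h for y' = f(t, y):
% the new value y_{n+2} appears implicitly on both sides, hence the Newton/PECE iteration.
\[
  y_{n+2} - \tfrac{4}{3}\,y_{n+1} + \tfrac{1}{3}\,y_{n}
  = \tfrac{2}{3}\, h\, f\!\left(t_{n+2},\, y_{n+2}\right).
\]
```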
Procedia PDF Downloads 319
9300 Enhancing Nursing Teams' Learning: The Role of Team Accountability and Team Resources
Authors: Sarit Rashkovits, Anat Drach- Zahavy
Abstract:
The research considers the unresolved question regarding the link between nursing team accountability, team learning, and the resulting team performance in nursing teams. Empirical findings reveal disappointing evidence regarding improvement in healthcare safety and quality. Therefore, there is a need to advance managerial knowledge regarding the factors that enhance healthcare teams' constant, proactive improvement efforts, that is, team learning. We first aim to identify the organizational resources that are needed for team learning in nursing teams; second, to test the moderating role of nursing teams' learning resources in the team accountability-team learning link; and third, to test the moderated mediation model suggesting that nursing teams' accountability affects team performance by enhancing team learning when relevant resources are available to the team. We point to the intervening role of three team learning resources, namely time availability, team autonomy, and performance data, in the relation between team accountability and team learning, and test the proposed moderated mediation model on 44 nursing teams (462 nurses and 44 nursing managers). The results showed that, as expected, there was a positive, significant link between team accountability and team learning and the subsequent team performance when time availability and team autonomy were high rather than low. Nevertheless, the positive team accountability-team learning link was significant when team performance feedback was low rather than high. Accordingly, there was a positive mediated effect of team accountability on team performance via team learning when either time availability or team autonomy was high and the availability of team performance data was low. Nevertheless, this mediated effect was negative when time availability and team autonomy were low and the availability of team performance data was high. We conclude that nurturing team accountability is not enough for achieving nursing teams' learning and the subsequent improved team performance. Rather, there is a need to provide nursing teams with adequate time and autonomy, and to be cautious with performance feedback, as the latter may motivate nursing teams to repeat routine work strategies rather than explore improved ones.
Keywords: nursing teams' accountability, nursing teams' learning, performance feedback, teams' autonomy
Procedia PDF Downloads 264
9299 Detecting Indigenous Languages: A System for Maya Text Profiling and Machine Learning Classification Techniques
Authors: Alejandro Molina-Villegas, Silvia Fernández-Sabido, Eduardo Mendoza-Vargas, Fátima Miranda-Pestaña
Abstract:
The automatic detection of indigenous languages in digital texts is essential to promote their inclusion in digital media. Underrepresented languages, such as Maya, are often excluded from language detection tools like Google’s language-detection library, LANGDETECT. This study addresses these limitations by developing a hybrid language detection solution that accurately distinguishes Maya (YUA) from Spanish (ES). Two strategies are employed: the first focuses on creating a profile for the Maya language within the LANGDETECT library, while the second involves training a Naive Bayes classification model with two categories, YUA and ES. The process includes comprehensive data preprocessing steps, such as cleaning, normalization, tokenization, and n-gram counting, applied to text samples collected from various sources, including articles from La Jornada Maya, a major newspaper in Mexico and the only media outlet that includes a Maya section. After the training phase, a portion of the data is used to create the YUA profile within LANGDETECT, which achieves an accuracy rate above 95% in identifying the Maya language during testing. Additionally, the Naive Bayes classifier, trained and tested on the same database, achieves an accuracy close to 98% in distinguishing between Maya and Spanish, with further validation through F1 score, recall, and logarithmic scoring, without signs of overfitting. This strategy, which combines the LANGDETECT profile with a Naive Bayes model, highlights an adaptable framework that can be extended to other underrepresented languages in future research. This fills a gap in Natural Language Processing and supports the preservation and revitalization of these languages.
Keywords: indigenous languages, language detection, Maya language, Naive Bayes classifier, natural language processing, low-resource languages
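A two-class classifier of the kind described, character n-gram counts fed to a multinomial Naive Bayes model, can be set up in a few lines, as in the sketch below. It is illustrative only: the sample sentences are placeholders and the n-gram range is an assumption, not the configuration reported by the authors.

```python
# Minimal sketch of a Maya (YUA) vs. Spanish (ES) classifier using character n-grams
# and multinomial Naive Bayes; training texts here are placeholders, not real corpus data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "placeholder Maya sentence 1", "placeholder Maya sentence 2",       # e.g. from La Jornada Maya
    "placeholder Spanish sentence 1", "placeholder Spanish sentence 2",
]
labels = ["YUA", "YUA", "ES", "ES"]

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 3), lowercase=True),
    MultinomialNB(),
)
clf.fit(texts, labels)
print(clf.predict(["placeholder sentence to classify"]))
```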
Procedia PDF Downloads 18
9298 Approaching a Tat-Rev Independent HIV-1 Clone towards a Model for Research
Authors: Walter Vera-Ortega, Idoia Busnadiego, Sam J. Wilson
Abstract:
Introduction: Human Immunodeficiency Virus type 1 (HIV-1) is responsible for the acquired immunodeficiency syndrome (AIDS), a leading cause of death worldwide, infecting millions of people each year. Despite intensive research in vaccine development, therapies against HIV-1 infection are not curative, and the huge genetic variability of HIV-1 poses challenges to drug development. Current animal models for HIV-1 research present important limitations, impairing the progress of in vivo approaches. Macaques require CD8+ depletion to progress to AIDS, and the maintenance cost is high. Mice are a cheaper alternative but need to be 'humanized', and breeding is not possible. The development of an HIV-1 clone able to replicate in mice is a challenging proposal. The lack of human co-factors in mice impedes the function of the HIV-1 accessory proteins Tat and Rev, hampering HIV-1 replication. However, Tat and Rev function can be replaced by constitutive/chimeric promoters, codon-optimized proteins, and the constitutive transport element (CTE), generating a novel HIV-1 clone able to replicate in mice without disrupting the amino acid sequence of the virus. By minimally manipulating the genomic 'identity' of the virus, we propose the generation of an HIV-1 clone able to replicate in mice to assist in antiviral drug development. Methods: i) Plasmid construction: the chimeric promoters and CTE copies were cloned by PCR using lentiviral vectors as templates (pCGSW and pSIV-MPCG). Tat mutants were generated from replication-competent HIV-1 plasmids (NHG and NL4-3). ii) Infectivity assays: retroviral vectors were generated by transfection of human 293T cells and murine NIH 3T3 cells. Virus titre was determined by flow cytometry measuring GFP expression. Human B-cells (AA-2) and HeLa cells (TZMbl) were used for infectivity assays. iii) Protein analysis: Tat protein expression was determined by TZMbl assay and HIV-1 capsid by western blot. Results: We have determined that NIH 3T3 cells are able to generate HIV-1 particles. However, these are not infectious, and further analysis needs to be performed. Codon-optimized HIV-1 constructs are efficiently made in 293T cells in a Tat- and Rev-independent manner and are capable of packaging a competent genome in trans. CSGW is capable of generating infectious particles in the absence of Tat and Rev in human cells when four copies of the CTE are placed preceding the 3’LTR. HIV-1 Tat mutant clones encoding different promoters are functional during the first cycle of replication when Tat is added in trans. Conclusion: Our findings suggest that the development of an HIV-1 Tat-Rev independent clone is a challenging but achievable aim. However, further investigations need to be carried out prior to presenting our HIV-1 clone as a candidate model for research.
Keywords: codon-optimized, constitutive transport element, HIV-1, long terminal repeats, research model
Procedia PDF Downloads 308
9297 Security Model for RFID Systems
Authors: John Ayoade
Abstract:
Radio Frequency Identification (RFID) has gained a lot of popularity in all walks of life due to its usefulness and the diverse use of the technology in almost every application. However, there have been some security concerns, most especially with regard to how readers and tags can confirm each other's authenticity before confidential data is exchanged between them. In this paper, the Kerberos protocol is adopted for the mutual authentication of RFID system components in order to ensure secure communication between those components and to verify the authenticity of the communicating components.
Keywords: RFID, security, mutual authentication, Kerberos
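To make the idea of Kerberos-style mutual authentication concrete, the sketch below shows a heavily simplified exchange in which a trusted server issues a session key, the tag proves knowledge of it with a timestamped authenticator, and the reader proves itself by returning the timestamp incremented. This is an illustration of the general pattern only, with hypothetical roles and keys; real Kerberos additionally involves ticket lifetimes, realms, nonces, and clock-skew checks, and the abstract does not specify the authors' exact message formats.

```python
# Simplified Kerberos-style mutual authentication between an RFID tag and a reader,
# using a trusted server that shares a long-term key with each party. Illustrative only.
import time
from cryptography.fernet import Fernet

k_tag = Fernet.generate_key()        # long-term key shared by the tag and the trusted server
k_reader = Fernet.generate_key()     # long-term key shared by the reader and the trusted server

# 1. Trusted server issues a fresh session key: one copy encrypted for the tag,
#    one copy sealed inside a "ticket" that only the reader can open.
session_key = Fernet.generate_key()
reply_for_tag = Fernet(k_tag).encrypt(session_key)
ticket_for_reader = Fernet(k_reader).encrypt(session_key)

# 2. Tag recovers the session key and sends the ticket plus a timestamped authenticator.
session_at_tag = Fernet(k_tag).decrypt(reply_for_tag)
t0 = time.time()
authenticator = Fernet(session_at_tag).encrypt(str(t0).encode())

# 3. Reader opens the ticket, checks the authenticator, and replies with timestamp + 1
#    encrypted under the session key, proving it also holds the key (mutual authentication).
session_at_reader = Fernet(k_reader).decrypt(ticket_for_reader)
t_seen = float(Fernet(session_at_reader).decrypt(authenticator))
reply = Fernet(session_at_reader).encrypt(str(t_seen + 1).encode())

# 4. Tag verifies the reader's reply before exchanging any confidential data.
assert float(Fernet(session_at_tag).decrypt(reply)) == t0 + 1
print("mutual authentication succeeded")
```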
Procedia PDF Downloads 469
9296 Micro-Scale Digital Image Correlation-Driven Finite Element Simulations of Deformation and Damage Initiation in Advanced High Strength Steels
Authors: Asim Alsharif, Christophe Pinna, Hassan Ghadbeigi
Abstract:
The development of next-generation advanced high strength steels (AHSS) used in the automotive industry requires a better understanding of local deformation and damage development at the scale of their microstructures. This work is focused on dual-phase DP1000 steels and involves micro-mechanical tensile testing inside a scanning electron microscope (SEM) combined with digital image correlation (DIC) to quantify the heterogeneity of deformation in both ferrite and martensite and its evolution up to fracture. Natural features of the microstructure are used for the correlation, which is carried out using LaVision's Davis software. Strain localization is observed in both phases, with tensile strain values of up to 130% and 110% recorded in ferrite and martensite, respectively, just before final fracture. Damage initiation sites have been observed during deformation in martensite but could not be correlated to local strain values. A finite element (FE) model of the microstructure has then been developed using Abaqus to map stress distributions over representative areas of the microstructure, by forcing the model to deform as in the experiment using DIC-measured displacement maps as boundary conditions. A MATLAB code has been developed to automatically mesh the microstructure from SEM images and to map displacement vectors from DIC onto the FE mesh. Results show a correlation of damage initiation at the interface between ferrite and martensite with local principal stress values of about 1700 MPa in the martensite phase. Damage in ferrite is now being investigated, and the results are expected to bring new insight into damage development in DP steels.
Keywords: advanced high strength steels, digital image correlation, finite element modelling, micro-mechanical testing
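The mapping step described above, transferring DIC-measured displacement vectors onto FE mesh nodes so they can be applied as boundary conditions, typically amounts to interpolating the scattered DIC data at the node locations. The sketch below illustrates that idea in Python (the authors used MATLAB); the file names, array layouts, and the Abaqus *BOUNDARY output shown are assumptions for illustration.

```python
# Interpolating a DIC displacement field onto FE boundary nodes and writing them out as
# prescribed displacement boundary conditions; inputs are hypothetical text exports.
import numpy as np
from scipy.interpolate import griddata

dic_xy = np.loadtxt("dic_points.txt")    # (N, 2) positions of DIC measurement points
dic_uv = np.loadtxt("dic_disp.txt")      # (N, 2) measured displacement vectors (u, v)
node_xy = np.loadtxt("fe_nodes.txt")     # (M, 2) coordinates of FE boundary nodes

u = griddata(dic_xy, dic_uv[:, 0], node_xy, method="linear")
v = griddata(dic_xy, dic_uv[:, 1], node_xy, method="linear")

# one prescribed-displacement line per node and direction: node id, first dof, last dof, value
with open("dic_bc.inp", "w") as f:
    f.write("*BOUNDARY\n")
    for node_id, (ui, vi) in enumerate(zip(u, v), start=1):
        f.write(f"{node_id}, 1, 1, {ui:.6e}\n")
        f.write(f"{node_id}, 2, 2, {vi:.6e}\n")
```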
Procedia PDF Downloads 146
9295 The Impact of Emotional Intelligence on Organizational Performance
Authors: El Ghazi Safae, Cherkaoui Mounia
Abstract:
Within companies, emotions have long been overlooked as key elements of successful management systems, seen instead as factors that disturb judgment, provoke reckless acts, or negatively affect decision-making. Management systems were shaped by the Taylorist image of the worker, which made work regular and plain and treated employees as executing machines. Recently, however, in a globalized economy characterized by a variety of uncertainties, emotions have proved to be useful, even necessary, elements of high-level management. The work of Elton Mayo and Kurt Lewin revealed the importance of emotions, and since then emotions have attracted considerable attention. These studies have shown that emotions influence, directly or indirectly, many organizational processes, such as the quality of interpersonal relationships, job satisfaction, absenteeism, stress, leadership, performance, and team commitment. Emotions have thus become fundamental and indispensable to individual performance and, in turn, to management efficiency. The idea that a person's potential is associated with intellectual intelligence, measured by IQ, as the main factor of social, professional, and even sentimental success is the main assumption that needs to be questioned. The literature on emotional intelligence has made clear that success at work does not depend only on intellectual intelligence but also on other factors. Several studies investigating the impact of emotional intelligence on performance have shown that emotionally intelligent managers perform better, attain remarkable results, achieve organizational objectives, influence the mood of their subordinates, and create a friendly work environment. An improvement in the emotional intelligence of managers is therefore linked to the professional development of the organization and not only to the personal development of the manager. In this context, it is worth questioning the importance of emotional intelligence: does it impact organizational performance, and if so, how? The literature highlights that emotional intelligence is difficult to conceptualize and measure. Efforts to measure emotional intelligence have identified three prominent models: the mixed model, the ability model, and the trait model. The first is considered a cognitive skill, the second mixes emotional skills with personality-related aspects, and the third is intertwined with personality traits. Despite strong claims about the importance of emotional intelligence in the workplace, few studies have empirically examined its impact on organizational performance, because even though the concept of performance is at the heart of all evaluation processes of companies and organizations, performance remains a multidimensional concept, and many authors emphasize the vagueness that surrounds it. Given the above, this article provides an overview of the research related to emotional intelligence, focusing in particular on studies that investigated the impact of emotional intelligence on organizational performance, in order to contribute to the emotional intelligence literature, highlight its importance, and show how it affects companies' performance. Keywords: emotions, performance, intelligence, firms
Procedia PDF Downloads 108
9294 Planning Railway Assets Renewal with a Multiobjective Approach
Authors: João Coutinho-Rodrigues, Nuno Sousa, Luís Alçada-Almeida
Abstract:
Transportation infrastructure systems are fundamental in modern society and economy. However, they require modernization, maintenance, and reinforcement interventions, which demand large investments. In many countries, accumulated intervention delays arise from aging and intense use, magnified by the financial constraints of the past. The decision problem of managing the renewal of large backlogs is common to several types of important transportation infrastructures (e.g., railways, roads). This problem requires considering financial aspects as well as operational constraints under a multidimensional framework. The present research introduces a linear programming multiobjective model for managing railway infrastructure asset renewal. The model aims at minimizing three objectives: (i) the yearly investment peak, by evenly spreading investment throughout multiple years; (ii) total cost, which includes extra maintenance costs incurred from renewal backlogs; (iii) priority delays related to work start postponements on the higher-priority railway sections. Operational constraints ensure that passenger and freight services are not excessively delayed by railway line sections under intervention. Achieving a balanced annual investment plan, without compromising the total financial effort or excessively postponing the execution of the priority works, was the motivation for the research now presented. The methodology, inspired by a real case study and tested with real data, reflects aspects of the practice of an infrastructure management company and is generalizable to different types of infrastructure (e.g., railways, highways). It was conceived for treating renewal interventions in infrastructure assets, which in a railway network may be rails, ballast, sleepers, etc.; while a section is under intervention, trains must run at reduced speed, causing delays in services. The model cannot, therefore, allow an accumulation of works on the same line, which may cause excessively large delays. Similarly, the lines do not all have the same socio-economic importance or service intensity, making it necessary to prioritize the sections to be renewed. The model takes these issues into account, and its output is an optimized works schedule for the renewal project, translatable into Gantt charts. The infrastructure management company provided all the data for the first test case study and validated the parameterization. This case consists of several sections to be renewed over five years, belonging to 17 lines. A large instance was also generated, reflecting a problem of a size similar to the USA railway network (considered the largest in the world), so it is not expected that considerably larger problems will appear in real life; an average backlog of 25 years and a project horizon of ten years were considered. Despite the very large increase in the number of decision variables (200 times as many), the computational time did not increase very significantly. It is thus expected that just about any real-life problem can be treated on a modern computer, regardless of size. The trade-off analysis shows that if the decision maker allows some increase in the maximum yearly investment (i.e., degradation of objective (i)), solutions improve considerably in the remaining two objectives. Keywords: transport infrastructure, asset renewal, railway maintenance, multiobjective modeling
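A compact way to see the structure of such a model is a weighted-sum formulation with binary renewal-start variables. The Python/PuLP sketch below is only illustrative and is not the authors' model: the sections, costs, priorities, weights, and the simplified handling of backlog maintenance and priority delay are all assumptions, and the service-delay constraints are omitted.

```python
# Minimal weighted-sum sketch: x[s][t] = 1 if section s is renewed in year t.
import pulp

sections = {"A": {"cost": 10, "priority": 3, "extra_maint": 1.0},
            "B": {"cost": 6,  "priority": 1, "extra_maint": 0.5},
            "C": {"cost": 8,  "priority": 2, "extra_maint": 0.8}}   # invented data
years = range(5)
w_peak, w_cost, w_delay = 1.0, 0.1, 0.5                              # assumed trade-off weights

m = pulp.LpProblem("renewal_plan", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (sections, years), cat="Binary")
peak = pulp.LpVariable("yearly_investment_peak", lowBound=0)

for s in sections:                       # every section renewed exactly once
    m += pulp.lpSum(x[s][t] for t in years) == 1
for t in years:                          # the peak variable bounds each year's spending
    m += pulp.lpSum(sections[s]["cost"] * x[s][t] for s in sections) <= peak
# (operational service-delay constraints on works sharing a line would be added here)

# total cost = renewal cost + extra maintenance while a section waits in the backlog
total_cost = pulp.lpSum((sections[s]["cost"] + t * sections[s]["extra_maint"]) * x[s][t]
                        for s in sections for t in years)
# priority-weighted start delay
delay = pulp.lpSum(sections[s]["priority"] * t * x[s][t] for s in sections for t in years)

m += w_peak * peak + w_cost * total_cost + w_delay * delay            # weighted-sum objective
m.solve(pulp.PULP_CBC_CMD(msg=False))
for s in sections:
    start = next(t for t in years if x[s][t].value() > 0.5)
    print(f"section {s}: renew in year {start}")
print("peak yearly investment:", peak.value())
```

Varying the weights reproduces the kind of trade-off analysis described above, e.g. relaxing the peak-investment weight and observing the improvement in total cost and priority delay.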
Procedia PDF Downloads 146
9293 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling
Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed
Abstract:
The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm for its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) single-hidden-layer and (2) double-hidden-layer feedforward back-propagation networks. Results generally revealed that the GDM optimisation algorithm, with its adaptive learning capability, used a relatively shorter time in both training and validation phases than the LM and Br algorithms, though learning may not be fully consummated; this held in all instances, including the prediction of extreme flow conditions 1 day and 5 days ahead. In specific statistical terms, average model performance efficiency using the coefficient of efficiency (CE) statistic was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96% for the training and validation phases, respectively. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, the adoption of ANNs for real-time forecasting should employ training algorithms that do not carry the computational overhead of LM, which requires computing the Hessian matrix, takes protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure and forecast quality as well as mitigation of network overfitting. On the whole, it is recommended that evaluation should consider the implications of (i) data quality and quantity and (ii) transfer functions for overall network forecast performance. Keywords: streamflow, neural network, optimisation, algorithm
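The goodness-of-fit statistics cited above (CE, MAE, MAPE, MSRE) are simple to compute once observed and simulated flows are available. The sketch below uses the usual definitions, with CE taken as the Nash-Sutcliffe coefficient of efficiency; the flow values are synthetic and do not come from the study.

```python
# Forecast verification statistics for a simulated streamflow series.
import numpy as np

def forecast_scores(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = obs - sim
    ce = 1.0 - np.sum(err**2) / np.sum((obs - obs.mean())**2)   # coefficient of efficiency (Nash-Sutcliffe)
    mae = np.mean(np.abs(err))                                   # mean absolute error
    mape = 100.0 * np.mean(np.abs(err / obs))                    # mean absolute percentage error
    msre = np.mean((err / obs)**2)                               # mean squared relative error
    return {"CE": ce, "MAE": mae, "MAPE": mape, "MSRE": msre}

observed = np.array([120.0, 95.0, 80.0, 150.0, 200.0, 175.0])   # e.g., daily streamflow (m^3/s), invented
simulated = np.array([110.0, 100.0, 85.0, 140.0, 210.0, 170.0])
print(forecast_scores(observed, simulated))
```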
Procedia PDF Downloads 153
9292 Experimental Simulations of Aerosol Effect to Landfalling Tropical Cyclones over Philippine Coast: Virtual Seeding Using WRF Model
Authors: Bhenjamin Jordan L. Ona
Abstract:
Weather modification is the act of altering weather systems and has long attracted scientific interest. On the same principle, tropical cyclone mitigation experiments follow the methods of cloud seeding, with storm intensity to account for. This study will present the effects of aerosols on tropical cyclone cloud microphysics and intensity. The Weather Research and Forecasting (WRF) model, incorporating the Thompson aerosol-aware scheme, is the prime host for the aerosol-cloud microphysics calculations of cloud condensation nuclei (CCN) ingested into the tropical cyclones before they make landfall over the Philippine coast. The coupled microphysical and radiative effects of aerosols will be analyzed using numerical data conditions of Tropical Storm Ketsana (2009), Tropical Storm Washi (2011), and Typhoon Haiyan (2013), with varying CCN number concentrations per simulation per typhoon: clean maritime, polluted, and very polluted, having initial aerosol number concentrations of 300 cm-3, 1000 cm-3, and 2000 cm-3, respectively. Aerosol species such as sulphates, sea salts, black carbon, and organic carbon will be used as cloud nuclei, and mineral dust as ice nuclei (IN). To make the study as realistic as possible, the period of biomass burning due to forest fires in Indonesia starting October 2015, during which Typhoons Mujigae/Kabayan and Koppu/Lando were effectively seeded with aerosol emissions comprising mainly black carbon and organic carbon, will also be considered. The emission data to be used are from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS). The physical mechanism(s) of intensification or de-intensification of tropical cyclones will be determined from the seeding experiment analyses. Keywords: aerosol, CCN, IN, tropical cyclone
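As a back-of-the-envelope illustration of why the three CCN scenarios matter, the classical Twomey power law N = C * S^k relates activated droplet number to supersaturation and to the available CCN. The sketch below is not part of the WRF experiments: C is simply taken as each scenario's CCN concentration at 1% supersaturation and the exponent k is an assumed typical value, so the numbers are indicative only.

```python
# First-order feel for how CCN loading changes activated droplet numbers
# via the Twomey (1959) power law N = C * S^k.
scenarios = {"clean maritime": 300.0, "polluted": 1000.0, "very polluted": 2000.0}  # CCN, cm^-3
k = 0.7                 # assumed activation exponent (typical maritime-to-continental range)
supersaturation = 0.5   # percent, assumed

for name, c in scenarios.items():
    n_activated = c * supersaturation ** k   # droplets per cm^3
    print(f"{name:15s}  CCN={c:6.0f} cm^-3  activated ~{n_activated:6.0f} cm^-3")
```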
Procedia PDF Downloads 296
9291 Exploring the Relationships between Job Satisfaction, Work Engagement, and Loyalty of Academic Staff
Authors: Iveta Ludviga, Agita Kalvina
Abstract:
This paper aims to link together the concepts of job satisfaction, work engagement, trust, job meaningfulness, and loyalty to the organisation, focusing on a specific type of employment: academic jobs. The research investigates the relationships between job satisfaction, work engagement, and loyalty, as well as the impact of trust and job meaningfulness on work engagement and loyalty. The survey was conducted in one of the largest Latvian higher education institutions, and the sample was drawn from academic staff (n=326). A structured questionnaire with 44 reflective-type questions was developed to measure the constructs. Data were analysed using SPSS and SmartPLS software. A variance-based structural equation modelling (PLS-SEM) technique was used to test the model and to predict the most important factors relevant to employee engagement and loyalty. The first-order model included two endogenous constructs (loyalty and intention to stay and recommend, and employee engagement) as well as six exogenous constructs (feeling of fair treatment and trust in management; career growth opportunities; compensation, pay and benefits; management; colleagues; teamwork; and finally job meaningfulness). Job satisfaction was developed as a second-order construct, and both first- and second-order models were designed for data analysis. It was found that academics are more engaged than satisfied with their work, and the main reason was found to be job meaningfulness, which is a significant predictor of work engagement but not of job satisfaction. Compensation is not significantly related to work engagement, but only to job satisfaction. Trust was not significantly related to either engagement or satisfaction; however, it appeared to be a significant predictor of loyalty and intention to stay with the University. This paper reveals academic jobs as a specific kind of employment in which employees can be more engaged than satisfied, and highlights the specific role of job meaningfulness in university settings. Keywords: job satisfaction, job meaningfulness, higher education, work engagement
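For readers unfamiliar with the structural part of such a model, a rough stand-in can be sketched in a few lines: construct scores are formed as means of their indicators and path coefficients are estimated by standardised OLS. This is not what SmartPLS does (real PLS-SEM iteratively re-weights indicators and assesses the measurement model), and the indicator names, sample data, and effect sizes below are invented purely for illustration.

```python
# Rough stand-in for the structural paths: composite scores + standardised OLS.
import numpy as np

rng = np.random.default_rng(0)
n = 326  # sample size matching the abstract
meaningfulness = rng.normal(size=(n, 3)).mean(axis=1)          # 3 invented indicators per construct
compensation   = rng.normal(size=(n, 3)).mean(axis=1)
engagement     = 0.6 * meaningfulness + 0.1 * compensation + rng.normal(scale=0.8, size=n)
loyalty        = 0.5 * engagement + rng.normal(scale=0.8, size=n)

def standardised_betas(y, X):
    """Standardised regression coefficients (no intercept needed after z-scoring)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    return np.linalg.lstsq(Xs, ys, rcond=None)[0]

print("engagement <- meaningfulness, compensation:",
      standardised_betas(engagement, np.column_stack([meaningfulness, compensation])))
print("loyalty <- engagement:",
      standardised_betas(loyalty, engagement[:, None]))
```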
Procedia PDF Downloads 251
9290 Action Potential of Lateral Geniculate Neurons at Low Threshold Currents: Simulation Study
Authors: Faris Tarlochan, Siva Mahesh Tangutooru
Abstract:
The Lateral Geniculate Nucleus (LGN) is the relay centre in the visual pathway, as it receives most of its input from retinal ganglion cells (RGC) and sends it on to the visual cortex. Low-threshold calcium currents (IT) at the membrane are the unique indicator used to characterize the firing functionality that LGN neurons gain from RGC input. According to LGN functional requirements, such as the functional mapping of RGC to LGN, the morphologies of the LGN neurons were developed. In neurological disorders like glaucoma, the mapping between RGC and LGN is disconnected; hence, stimulating the LGN electrically using deep brain electrodes can restore LGN functionality. A computational model was developed to simulate LGN neurons with three predominant morphologies, each representing a different functional mapping of RGC to LGN. The firing of action potentials at the LGN neuron due to IT was characterized by varying the stimulation parameters, morphological parameters, and orientation. A wide range of stimulation parameters (stimulus amplitude, duration, and frequency) represents the various strengths of the electrical stimulation, combined with different morphological parameters (soma size, dendrite size, and structure). The orientation (0-180°) of the LGN neuron with respect to the stimulating electrode represents the angle at which the extracellular deep brain stimulation towards the LGN neuron is performed. A reduced dendrite structure was used in the model, following the Bush–Sejnowski algorithm, to decrease the computational time while conserving its input resistance and total surface area. The major finding is that an input potential of 0.4 V is required to produce an action potential in an LGN neuron placed at a distance of 100 µm from the electrode. From this study, it can be concluded that neuroprostheses under design would need to be capable of inducing at least 0.4 V to produce action potentials in the LGN. Keywords: Lateral Geniculate Nucleus, visual cortex, finite element, glaucoma, neuroprostheses
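A simple point-source estimate puts the reported 0.4 V threshold at 100 µm into perspective: for a monopolar electrode in a homogeneous medium the extracellular potential decays as V = I / (4*pi*sigma*r). The sketch below is not the authors' computational model; the tissue conductivity is an assumed typical value for grey matter, so the resulting current is only an order-of-magnitude estimate.

```python
# Point-source estimate of the electrode current needed to reach 0.4 V at 100 micrometres,
# plus the 1/r decay of that potential with distance.
import math

sigma = 0.3        # S/m, assumed tissue conductivity
r = 100e-6         # m, electrode-to-neuron distance from the abstract
v_threshold = 0.4  # V, potential reported to elicit an action potential

i_required = v_threshold * 4 * math.pi * sigma * r
print(f"estimated electrode current: {i_required * 1e6:.0f} uA")

for r_um in (50, 100, 200, 400):   # how the same current decays with distance
    v = i_required / (4 * math.pi * sigma * r_um * 1e-6)
    print(f"r = {r_um:3d} um -> V = {v:.2f} V")
```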
Procedia PDF Downloads 279