Search results for: system model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29632

5512 Adaptive Strategies of European Sea Bass (Dicentrarchus labrax) to Ocean Acidification and Salinity Stress

Authors: Nitin Pipralia, Amit Kumar Sinha, Gudrun de Boeck

Abstract:

Atmospheric carbon dioxide (CO2) concentrations have been increasing since the beginning of the industrial revolution due to the combustion of fossil fuels and other anthropogenic activities. Scenarios assembled by the Intergovernmental Panel on Climate Change (IPCC) predict a rise in pCO2 from today’s 380 μatm to approximately 900 μatm by the year 2100 and a further rise of up to 1900 μatm by the year 2300. A rise in pCO2 results in greater dissolution of CO2 in ocean surface water, which lowers the water pH. This decrease in ocean pH with increasing pCO2, known as ocean acidification, is considered a potential threat to marine ecosystems and is expected to affect fish as well as calcifying organisms. The situation may worsen when salinity stress is added, since migratory fish move between regions of different salinity for specific activities such as spawning. Therefore, to understand the interactive impact of these two important abiotic environmental stressors (pCO2 of 380 μatm, 900 μatm and 1900 μatm, combined with salinity gradients of 32 ppt, 10 ppt and 2.5 ppt) on the ecophysiological performance of fish, we investigated various adaptive biological responses in European sea bass (Dicentrarchus labrax), a model estuarine teleost. Overall, we hypothesized that the effect of ocean acidification would be exacerbated by shifts in ambient salinity. Oxygen consumption, ammonia metabolism, iono-osmoregulation, energy budget, ion-regulatory enzymes, hormones and pH amendments in plasma were assayed as potential indices of compensatory responses.

Keywords: ocean acidification, sea bass, pH, climate change, salinity

Procedia PDF Downloads 223
5511 STAT6 Mediates Local and Systemic Fibrosis and Type II Immune Response via Macrophage Polarization during Acute and Chronic Pancreatitis in Murine Model

Authors: Hager Elsheikh, Matthias Sendler, Juliana Glaubnitz

Abstract:

In pancreatitis, an inflammatory reaction occurs in the pancreatic secretory cells due to premature activation of proteases, leading to pancreatic self-digestion and necrotic cell death of acinar cells. Acute pancreatitis in patients is characterized by a severe immune reaction that can lead to serious complications, such as organ failure or septic shock, if left untreated. Chronic pancreatitis is a recurrence of episodes of acute pancreatitis resulting in a fibro-inflammatory immune response, in which the type 2 immune response is primarily driven by alternatively activated macrophages (AAMs) in the pancreas. One of the most important signaling pathways for M2 macrophage activation is the IL-4/STAT6 pathway. Pancreatic fibrosis is induced by the hyperactivation of pancreatic stellate cells through dysregulation of the inflammatory response, leading to further damage, autodigestion and possibly necrosis of pancreatic acinar cells. The aim of this research is to investigate the effect of STAT6 knockout on disease severity and on the development of fibrosis and wound healing in the presence of different macrophage populations, regulated by the type 2 immune response, after inducing chronic and/or acute pancreatitis in mouse models via caerulein injection. We further investigate the influence of the JAK/STAT6 signaling pathway on the balance of fibrosis and regeneration in STAT6-deficient and wild-type mice. The characterization of resident and recruited macrophages will provide insight into the influence of the JAK/STAT6 signaling pathway on infiltrating cells and, ultimately, tissue fibrosis and disease severity.

Keywords: acute and chronic pancreatitis, tissue regeneration, macrophage polarization, gastroenterology

Procedia PDF Downloads 62
5510 Factors Affecting Employee Decision Making in an AI Environment

Authors: Yogesh C. Sharma, A. Seetharaman

Abstract:

The decision-making process in humans is a complicated system influenced by a variety of intrinsic and extrinsic factors, and human decisions have a ripple effect on subsequent decisions. In this study, the scope of human decision making is limited to employees. In an organisation, a person makes a variety of decisions from the time they are hired to the time they retire. The goal of this research is to identify the various elements that influence decision making. In addition, the environment in which a decision is made is a significant aspect of the decision-making process. Employees in today's workplace use artificial intelligence (AI) systems for automation and decision augmentation, and the impact of AI systems on the decision-making process is examined in this study. The research is based on a systematic literature review. Based on gaps in the literature, limitations and the scope of future research have been identified, and a research framework has been designed to identify the various factors affecting employee decision making. Employee decision making is influenced by technological advancement, data-driven culture, human trust, decision automation-augmentation, and workplace motivation. Hybrid human-AI systems require the development of new skill sets and organisational design. Employee psychological safety and supportive leadership influence overall job satisfaction.

Keywords: employee decision making, artificial intelligence (AI) environment, human trust, technology innovation, psychological safety

Procedia PDF Downloads 103
5509 Bioinformatic Approaches in Population Genetics and Phylogenetic Studies

Authors: Masoud Sheidai

Abstract:

Biologists specializing in population genetics and phylogeny face different research tasks, such as assessing populations’ genetic variability and divergence, species relatedness, the evolution of genetic and morphological characters, and the identification of DNA SNPs with adaptive potential. To tackle these problems and reach a concise conclusion, they must use proper and efficient statistical and bioinformatic methods as well as suitable genetic and morphological characteristics. In recent years, different bioinformatic and statistical methods, which are based on various well-documented assumptions, have become the proper analytical tools in the hands of researchers. Species delineation is usually carried out with different clustering methods, like K-means clustering, based on proper distance measures chosen according to the studied features of the organisms. A well-defined species is assumed to be separated from the other taxa by molecular barcodes. Species relationships are studied using molecular markers, which are analyzed by analytical methods like multidimensional scaling (MDS) and principal coordinate analysis (PCoA). Species population structuring and genetic divergence are usually investigated by PCoA and PCA methods and a network diagram; these are based on bootstrapping of the data. The association of different genes and DNA sequences with ecological and geographical variables is determined by latent factor mixed models (LFMM) and redundancy analysis (RDA), which are based on Bayesian and distance methods. Molecular and morphological differentiating characters in the studied species may be identified by linear discriminant analysis (DA) and discriminant analysis of principal components (DAPC). We illustrate these methods and the related conclusions with examples from different edible and medicinal plant species.
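
A rough illustration of how two of the analyses named above are typically scripted (a minimal sketch, not taken from the study; the marker matrix, distance measure and number of clusters are hypothetical): K-means clustering for delineating groups, and metric MDS on a precomputed distance matrix as a PCoA-like ordination.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# Hypothetical 0/1 marker matrix: 30 individuals x 50 molecular markers
markers = rng.integers(0, 2, size=(30, 50))

# Species delineation: K-means on the marker profiles (k chosen by the analyst)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(markers)

# Pairwise distance matrix (simple mismatch distance between marker profiles)
dist = np.array([[np.mean(a != b) for b in markers] for a in markers])

# PCoA-like ordination: metric MDS on the precomputed distance matrix
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(clusters[:10])
print(coords[:3])
```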

Keywords: GWAS analysis, K-Means clustering, LFMM, multidimensional scaling, redundancy analysis

Procedia PDF Downloads 119
5508 Investigating the Performance of Machine Learning Models on PM2.5 Forecasts: A Case Study in the City of Thessaloniki

Authors: Alexandros Pournaras, Anastasia Papadopoulou, Serafim Kontos, Anastasios Karakostas

Abstract:

The air quality of modern cities is an important concern, as poor air quality contributes to human health and environmental problems. Reliable air quality forecasting has thus gained scientific and governmental attention as an essential tool that enables authorities to take proactive measures for public safety. In this study, the potential of Machine Learning (ML) models to forecast PM2.5 at the local scale is investigated in the city of Thessaloniki, the second largest city in Greece, which has been struggling with the persistent issue of air pollution. ML models, with a proven ability to address time-series forecasting, are employed to predict PM2.5 concentrations and the respective Air Quality Index (AQI) five days ahead by learning from daily historical air quality and meteorological data from 2014 to 2016, gathered from two stations with different land-use characteristics in the urban fabric of Thessaloniki. The performance of the ML models on PM2.5 concentrations is evaluated with common statistical metrics, such as R-squared (R²) and Root Mean Squared Error (RMSE), utilizing a portion of the stations’ measurements as a test set. A multi-categorical evaluation is used to assess their performance on the respective AQIs. Several conclusions were drawn from the experiments conducted. Experimenting with the models’ configurations revealed a moderate effect of various parameters and training schemes on the predictions. All models produced satisfactory results on PM2.5 concentrations. In addition, their application to stations not used in training showed that these models can perform well, indicating generalization. Moreover, their performance on the AQI was even better, showing that the ML models can be used as predictors for the AQI, which is the direct information provided to the general public.
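
A minimal sketch of the forecasting-and-scoring loop described above (the synthetic series, the lag features and the choice of a random forest regressor are illustrative assumptions, not the study's actual models or station data):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
# Hypothetical daily PM2.5 series standing in for a monitoring station
df = pd.DataFrame({"pm25": 25 + 10 * np.sin(np.arange(1000) / 30)
                            + 5 * rng.standard_normal(1000)})

def make_lagged(series, lags=7, horizon=5):
    """Lagged values as features, the value `horizon` days ahead as the target."""
    X = pd.concat({f"lag{i}": series.shift(i) for i in range(lags)}, axis=1)
    y = series.shift(-horizon).rename("target")
    data = pd.concat([X, y], axis=1).dropna()
    return data.drop(columns="target"), data["target"]

X, y = make_lagged(df["pm25"])
split = int(len(X) * 0.8)                     # chronological train/test split
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X.iloc[:split], y.iloc[:split])
pred = model.predict(X.iloc[split:])
rmse = np.sqrt(mean_squared_error(y.iloc[split:], pred))
r2 = r2_score(y.iloc[split:], pred)
print(round(rmse, 2), round(r2, 2))
```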

Keywords: air quality, AQ forecasting, AQI, machine learning, PM2.5

Procedia PDF Downloads 72
5507 A Folk Theorem with Public Randomization Device in Repeated Prisoner’s Dilemma under Costly Observation

Authors: Yoshifumi Hino

Abstract:

An infinitely repeated prisoner’s dilemma is a typical model of a teamwork situation. If both players choose costly actions and contribute to the team, then both players are better off. However, each player has an incentive to choose a selfish action. We analyze the game under costly observation: each player can observe the action of the opponent only when he pays an observation cost in that period. In reality, observation in teamwork situations is often costly. Members of some teams work in distinct rooms, areas, or countries; in those cases, they have to spend time and money to see other team members if they want to observe their actions. The costly observation assumption makes cooperation substantially more difficult because the equilibrium must satisfy the incentive constraints not only on the action but also on the observational decision. Cooperation is hardest when the stage game is a prisoner’s dilemma because players can communicate through only two actions. We examine whether or not players can cooperate with each other in the prisoner’s dilemma under costly observation. Specifically, we check whether symmetric Pareto-efficient payoff vectors in the repeated prisoner’s dilemma can be approximated by sequential equilibria (efficiency result). We show the efficiency result without any randomization device under certain circumstances, which means that players can cooperate with each other without any randomization device even if observation is costly. Next, we assume that a public randomization device is available, and we show that any feasible and individually rational payoff in the prisoner’s dilemma can be approximated by sequential equilibria under a specific situation (folk theorem). This implies that players can achieve asymmetric teamwork, such as a leadership situation, when a public randomization device is available.
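
For orientation, the objects involved in such a statement can be written in standard repeated-game notation (the notation below is ours, not necessarily the paper's; lambda denotes the per-period observation cost):

```latex
% Discounted average payoff of player i under strategy profile sigma,
% where a^t is the action profile in period t, lambda > 0 is the observation
% cost, and o_i^t = 1 if player i pays to observe the opponent in period t:
\[
  U_i(\sigma) = (1-\delta)\,\mathbb{E}_{\sigma}\!\left[\sum_{t=0}^{\infty}
  \delta^{t}\left(u_i(a^{t}) - \lambda\, o_i^{t}\right)\right]
\]
% Folk theorem statement: with a public randomization device, any payoff vector
% v that is feasible and strictly individually rational,
\[
  v \in \operatorname{co}\,\{u(a) : a \in A\}, \qquad
  v_i > \underline{v}_i \quad (i = 1, 2),
\]
% can be approximated by sequential-equilibrium payoffs as delta tends to 1,
% where \underline{v}_i is player i's minmax payoff (the mutual-defection
% payoff in the prisoner's dilemma).
```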

Keywords: costly observation, efficiency, folk theorem, prisoner's dilemma, private monitoring, repeated games

Procedia PDF Downloads 235
5506 Automated Fact-Checking by Incorporating Contextual Knowledge and Multi-Faceted Search

Authors: Wenbo Wang, Yi-Fang Brook Wu

Abstract:

The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means to address this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study introduces a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context using general-purpose, comprehensive, and authoritative data; 2) developing a search function to automatically select relevant, new, and credible references; 3) focusing on the parts of the representations of a claim and its reference that are most relevant to the fact-checking task. The experimental results demonstrate that 1) augmenting the representations of claims and references through the use of a knowledge base, combined with the multi-head attention technique, contributes to improved fact-checking performance; 2) SAC with auto-selected references outperforms existing fact-checking approaches with manually selected references. Future directions of this study include I) exploring knowledge graphs in Wikidata to dynamically augment the representations of claims and references without introducing too much noise, and II) exploring semantic relations in claims and references to further enhance fact-checking.
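
A minimal sketch of the multi-head attention idea mentioned in the results (the dimensions, pooling and two-class output are illustrative assumptions; this is not the authors' SAC implementation):

```python
import torch
import torch.nn as nn

class ClaimReferenceAttention(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, 2)    # true / false verdict

    def forward(self, claim_tokens, reference_tokens):
        # Claim tokens query the reference tokens; the attention weights
        # highlight the parts of the reference most relevant to the claim.
        attended, _ = self.attn(claim_tokens, reference_tokens, reference_tokens)
        pooled = attended.mean(dim=1)          # simple mean pooling
        return self.classifier(pooled)

model = ClaimReferenceAttention()
claim = torch.randn(8, 32, 256)       # batch of 8 claims, 32 tokens, dim 256
reference = torch.randn(8, 128, 256)  # retrieved reference passages
logits = model(claim, reference)       # shape (8, 2)
print(logits.shape)
```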

Keywords: fact checking, claim verification, deep learning, natural language processing

Procedia PDF Downloads 58
5505 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network

Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson

Abstract:

The widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods of analysis of the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come with a cost. Little research has been undertaken on how to specify optimally what data to capture, transmit, process, and store at various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the required attributes and classification to take manufacturing digital data from various sources to determine the most suitable location for data processing on the edge-cloud network. The proposed classification framework will minimise overhead in terms of network bandwidth/cost and processing time of machine tool data via efficient decision making on which dataset should be processed at the ‘edge’ and what to send to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.
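
A minimal sketch of a fast-and-frugal routing heuristic of the kind described (the cue names, their ordering and the thresholds are hypothetical, not the proposed framework's attributes):

```python
from dataclasses import dataclass

@dataclass
class DatasetProfile:
    latency_critical: bool     # is a real-time response needed?
    size_mb: float             # volume of the dataset
    privacy_sensitive: bool    # must the data stay on premises?

def route(profile: DatasetProfile) -> str:
    # Cues are checked one at a time in order of importance;
    # the first decisive cue ends the search (fast-and-frugal tree).
    if profile.latency_critical:
        return "edge"          # real-time decisions cannot wait for the cloud
    if profile.privacy_sensitive:
        return "edge"
    if profile.size_mb > 500:
        return "edge"          # too costly to transmit; pre-process locally
    return "cloud"             # otherwise, send for heavier analysis/storage

print(route(DatasetProfile(latency_critical=False, size_mb=80,
                           privacy_sensitive=False)))   # -> "cloud"
```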

Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0

Procedia PDF Downloads 174
5504 Denoising Transient Electromagnetic Data

Authors: Lingerew Nebere Kassie, Ping-Yu Chang, Hsin-Hua Huang, Chaw-Son Chen

Abstract:

Transient electromagnetic (TEM) data plays a crucial role in hydrogeological and environmental applications, providing valuable insights into geological structures and resistivity variations. However, the presence of noise often hinders the interpretation and reliability of these data. Our study addresses this issue by utilizing a FASTSNAP system for the TEM survey, which operates at different modes (low, medium, and high) with continuous adjustments to discretization, gain, and current. We employ a denoising approach that processes the raw data obtained from each acquisition mode to improve signal quality and enhance data reliability. We use a signal-averaging technique for each mode, increasing the signal-to-noise ratio. Additionally, we utilize wavelet transform to suppress noise further while preserving the integrity of the underlying signals. This approach significantly improves the data quality, notably suppressing severe noise at late times. The resulting denoised data exhibits a substantially improved signal-to-noise ratio, leading to increased accuracy in parameter estimation. By effectively denoising TEM data, our study contributes to a more reliable interpretation and analysis of underground structures. Moreover, the proposed denoising approach can be seamlessly integrated into existing ground-based TEM data processing workflows, facilitating the extraction of meaningful information from noisy measurements and enhancing the overall quality and reliability of the acquired data.
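
A minimal sketch of the two processing steps described above, signal averaging followed by wavelet thresholding (the wavelet family, decomposition level and threshold rule are illustrative assumptions, not the survey's actual processing chain):

```python
import numpy as np
import pywt

def denoise(transients: np.ndarray, wavelet="db4", level=4) -> np.ndarray:
    # Step 1: signal averaging across repeated acquisitions raises the SNR.
    averaged = transients.mean(axis=0)

    # Step 2: wavelet decomposition, soft-threshold the detail coefficients,
    # then reconstruct; this suppresses late-time noise while keeping the decay.
    coeffs = pywt.wavedec(averaged, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # robust noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(averaged)))   # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(averaged)]

# Example with synthetic data: 32 noisy repeats of an exponential decay
t = np.linspace(0, 1, 512)
stack = np.exp(-8 * t) + 0.05 * np.random.randn(32, 512)
clean = denoise(stack)
print(clean[:5])
```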

Keywords: data quality, signal averaging, transient electromagnetic, wavelet transform

Procedia PDF Downloads 80
5503 Income Inequality among Selected Entrepreneurs in Ondo State, Nigeria

Authors: O.O. Ehinmowo, A.I. Fatuase, D.F. Oke

Abstract:

Nigeria is endowed with resources that could boost the economy as well as generate income and provide jobs for the teeming populace. One of the keys to attaining this is making the environment conducive for entrepreneurs to excel in their respective enterprises so that more income can accrue to them. This study therefore examines income inequality among selected entrepreneurs in Ondo State, Nigeria, using primary data. A multistage sampling technique was used to select 200 respondents, and data were collected with the aid of a structured questionnaire and personal interviews. The data were subjected to descriptive statistics, the Lorenz curve, the Gini coefficient and a double-log regression model. Results revealed that the majority of the entrepreneurs (63%) were male and 90% were married, with an average age of 44 years. About 40% of the respondents spent at most 12 years in school, 81% had 4-6 members per household, and hairdressing (43.5%) and fashion designing (31.5%) were the most common enterprises among the sampled respondents. The findings also showed that the majority of the entrepreneurs in hairdressing, fashion designing and laundry services earned below N200,000 per annum, the majority of those in restaurants and food vending earned between N400,000 and N600,000, and the majority of those in the pure water enterprise earned N800,000 and above per annum. The Gini coefficient (0.58) indicated the presence of inequality among the entrepreneurs, which was also affirmed by the Lorenz curve. The regression results showed that gender, household size and number of employees significantly affected the income of the entrepreneurs in the study area. Therefore, more women should be encouraged into entrepreneurial businesses, and government should provide incentives and a conducive environment to bridge the disparity in the income of the entrepreneurs across their various enterprises.
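
A minimal sketch of how a Gini coefficient and Lorenz-curve points are computed from a list of incomes (the income values below are hypothetical, not the survey data):

```python
import numpy as np

def gini(incomes):
    x = np.sort(np.asarray(incomes, dtype=float))
    n = len(x)
    # Standard formula for sorted incomes: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
    return (2 * np.sum(np.arange(1, n + 1) * x)) / (n * x.sum()) - (n + 1) / n

def lorenz_points(incomes):
    x = np.sort(np.asarray(incomes, dtype=float))
    cum_income = np.cumsum(x) / x.sum()            # cumulative share of income
    cum_pop = np.arange(1, len(x) + 1) / len(x)    # cumulative share of population
    return cum_pop, cum_income

incomes = [120_000, 150_000, 180_000, 200_000, 450_000, 600_000, 850_000]
print(round(gini(incomes), 2))
print(lorenz_points(incomes))
```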

Keywords: entrepreneurs, Gini coefficient, income inequality, Lorenz curve

Procedia PDF Downloads 345
5502 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network

Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin

Abstract:

The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoué Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoué Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoué Lake up to a forecast horizon of t+10 days. Performance metrics such as the Root Mean Square Error (RMSE), the coefficient of determination (R²), the Nash-Sutcliffe Efficiency (NSE), and the Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days, and their values remain stable for forecast horizons of t+1, t+2, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoué Lake basin. Based on the evaluation indices used to assess the model's performance, a forecast horizon of t+3 days is chosen for predicting future daily water levels.
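
A minimal sketch of the Nash-Sutcliffe Efficiency used above to judge the forecasts (the observed and simulated values are hypothetical; the LSTM itself is not reproduced here):

```python
import numpy as np

def nse(observed, simulated):
    # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = np.array([1.20, 1.25, 1.31, 1.28, 1.35])   # daily water levels (m), hypothetical
sim = np.array([1.22, 1.24, 1.29, 1.30, 1.33])   # model forecasts at t+3 days
print(round(nse(obs, sim), 3))                   # values above 0.97 were reported for Nokoué Lake
```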

Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake

Procedia PDF Downloads 58
5501 The Effects of Seasonal Variation on the Microbial-N Flow to the Small Intestine and Prediction of Feed Intake in Grazing Karayaka Sheep

Authors: Mustafa Salman, Nurcan Cetinkaya, Zehra Selcuk, Bugra Genc

Abstract:

The objectives of the present study were to estimate the microbial-N flow to the small intestine and to predict the digestible organic matter intake (DOMI) in grazing Karayaka sheep based on the urinary excretion of purine derivatives (xanthine, hypoxanthine, uric acid, and allantoin), using spot urine sampling under field conditions. In the trial, 10 Karayaka sheep from 2 to 3 years of age were used. The animals grazed on pasture for ten months and were fed concentrate and vetch plus oat hay indoors for the other two months (January and February). Highly significant linear and cubic relationships (P<0.001) were found among months for the purine derivatives index, purine derivatives excretion, purine derivatives absorption, microbial-N and DOMI. Through urine sampling and determination of the levels of excreted urinary purine derivatives (PD) and the purine derivatives/creatinine ratio (PDC index), microbial-N values were estimated, and they indicated that the protein nutrition of the sheep was insufficient. In conclusion, the prediction of the protein nutrition of sheep under field conditions may be possible with the use of spot urine sampling, urinary excreted PD and the PDC index. The mean purine derivative levels in spot urine samples from the sheep were highest in June, July and October. Protein nutrition of pastured sheep may be affected by weather changes, including rainfall. Spot urine sampling may be useful in modeling the feed consumption of grazing sheep. However, further studies are required under different field conditions with different breeds of sheep to develop spot urine sampling as a model.

Keywords: Karayaka sheep, spot sampling, urinary purine derivatives, PDC index, microbial-N, feed intake

Procedia PDF Downloads 527
5500 Prospective Validation of the FibroTest Score in Assessing Liver Fibrosis in Hepatitis C Infection with Genotype 4

Authors: G. Shiha, S. Seif, W. Samir, K. Zalata

Abstract:

FibroTest (FT) is a non-invasive score of liver fibrosis that combines the quantitative results of five serum biochemical markers (alpha-2-macroglobulin, haptoglobin, apolipoprotein A1, gamma-glutamyl transpeptidase (GGT) and bilirubin), adjusted for the patient's age and sex in a patented algorithm, to generate a measure of fibrosis. FT has been validated in patients with chronic hepatitis C (CHC) (Halfon et al., Gastroenterol. Clin. Biol. (2008), 32, 6 Suppl 1, 22-39). The validation of FT in genotype 4 is not well studied. Our aim was to evaluate the performance of FibroTest in an independent prospective cohort of hepatitis C patients with genotype 4. Subjects were 122 patients with CHC. All liver biopsies were scored using the METAVIR system. The FT score was measured, and the performance of the cut-off score was assessed using a ROC curve. Among patients with advanced fibrosis, the FT matched the liver biopsy exactly in 18.6% of cases, overestimated the stage of fibrosis in 44.2% and underestimated the stage of fibrosis in 37.7% of cases. In patients with no/mild fibrosis, identical matching was detected in 39.2% of cases, with overestimation in 48.1% and underestimation in 12.7%. Overall, the test showed identical matching, overestimation and underestimation in 32%, 46.7% and 21.3% of cases, respectively. Using the ROC curve, it was found that FT at the cut-off point of 0.555 could discriminate early from advanced stages of fibrosis with an area under the ROC curve (AUC) of 0.72, sensitivity of 65%, specificity of 69%, PPV of 68%, NPV of 66% and accuracy of 67%. As the FibroTest score overestimates the stage of advanced fibrosis, it should not be considered a reliable surrogate for liver biopsy in hepatitis C infection with genotype 4.
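
A minimal sketch of how a cut-off such as 0.555 is evaluated against biopsy-based labels, i.e. AUC for discrimination plus sensitivity and specificity at the cut-off (the labels and scores below are simulated, not the study cohort):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
advanced = rng.integers(0, 2, 122)                                    # 1 = advanced fibrosis (METAVIR)
ft_score = np.clip(0.4 * advanced + rng.normal(0.4, 0.2, 122), 0, 1)  # FibroTest-like score

auc = roc_auc_score(advanced, ft_score)
pred = (ft_score >= 0.555).astype(int)                 # apply the cut-off
tn, fp, fn, tp = confusion_matrix(advanced, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(round(auc, 2), round(sensitivity, 2), round(specificity, 2))
```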

Keywords: fibrotest, chronic Hepatitis C, genotype 4, liver biopsy

Procedia PDF Downloads 411
5499 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and materials science. In this study, we compare the performance of five machine learning algorithms for predicting molecular solubility using the AqSolDB dataset: linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset, giving a total of 189 features used for training and testing for every molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
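
A minimal sketch of a feature-extraction-plus-random-forest pipeline of the kind described above (assuming RDKit and scikit-learn are available; the molecules, toy solubility classes and descriptor subset are illustrative, not the full 189-feature setup):

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import MACCSkeys, Descriptors
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles: str) -> np.ndarray:
    mol = Chem.MolFromSmiles(smiles)
    fp = MACCSkeys.GenMACCSKeys(mol)                      # MACCS key bit vector
    bits = np.zeros((fp.GetNumBits(),), dtype=float)
    DataStructs.ConvertToNumpyArray(fp, bits)
    props = np.array([Descriptors.MolWt(mol),             # a few RDKit descriptors
                      Descriptors.MolLogP(mol),
                      Descriptors.TPSA(mol)])
    return np.concatenate([bits, props])

smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCCCCCCC"]        # hypothetical molecules
labels = [1, 0, 1, 0]                                      # 1 = soluble class (toy labels)
X = np.vstack([featurize(s) for s in smiles])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict(featurize("CCN").reshape(1, -1)))
```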

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 36
5498 Re-Inhabiting the Roof: Han Slawick Covered Roof Terrace, Amsterdam

Authors: Simone Medio

Abstract:

If we observe many modern cities from above, we are typically confronted with a sea of asphalt-clad flat rooftops. In contrast to the modernist expectation of a populated flat roof, flat rooftops in modern multi-storey buildings are rarely used. On the contrary, they typify a desolate and abandoned landscape that encourages mechanical system allocation. Flat roof technology continues to be treated as a given in most multi-storey building designs, and greening is its prevalent environmental justification. This paper seeks a change in the approach to flat roofing. It makes a case for the opportunity at hand for architectonically resolute, sheltered, livable spaces that make better use of the environment at rooftop level. The researcher looks for the triggers that allow that change to happen in the design process of case study buildings. The paper begins by exploring Han Slawick's covered roof terrace in Amsterdam as a simple and essential example of transforming the flat roof into a usable, inhabitable space. It investigates the design challenges and the logistic, financial and legislative hurdles faced by the architect, and the outcomes in terms of building performance and occupant use and satisfaction. The researcher uses a grounded research methodology with direct interviews with the architect in charge of the building and the building user. Energy simulation tools and calculation of running costs are also used as further means of validating the change.

Keywords: environmental design, flat rooftop persistence, roof re-habitation, tectonics

Procedia PDF Downloads 268
5497 Ata-Manobo Tribe as Stakeholders in the Making of School Improvement Plan: Basis for Policy Recommendation

Authors: Diobein C. Flores

Abstract:

The populace of the Municipality of Talaingod is composed of the Ata-Manobo. These lumads preserve their culture, orientation and identity because the place is a stronghold of their tribe. Accordingly, the study analyzes the participation of the Ata-Manobo in the making of the school improvement plan (SIP) and recommends alternative policy options that would help strengthen their involvement. The school stakeholders, namely Ata-Manobo representatives from students, the parent-teacher association, alumni, the basic sector, municipal/barangay government units, civic/social organizations and various other government agencies, are the key participants in this study. The research used a descriptive design. The responses of the representatives were analyzed against the criteria of the Rational Model: the technical, administrative, political acceptability and economic dimensions. Policy alternative option 3, which recommends formulating a policy to capacitate stakeholders or governing council members in the making of the SIP, was identified as the most preferred option. This could strengthen the participation of the Ata-Manobo as stakeholders in planning. Hence, the formulation of an alternative policy, capacitating stakeholders in the crafting of the school improvement plan, is recommended. The suggested initiative would assist the Department of Education in forging consensus across neighborhoods during the making of the SIP. The appropriation of a definite budget for capability-building activities is also suggested. Training workshops are identified as a possible intervention to ensure that the stakeholders are equipped with the knowledge and skills needed in the making of the SIP. Indeed, equal opportunities for all stakeholders, regardless of their life circumstances, must be ensured, with the belief that people must be empowered to take advantage of them and spearhead progress in the making of the SIP.

Keywords: Ata-Manobo Tribe, stakeholders, school improvement plan, Municipality of Talaingod, Philippines

Procedia PDF Downloads 320
5496 Application of Electrochemical Impedance Spectroscopy to Monitor the Steel/Soil Interface During Cathodic Protection of Steel in Simulated Soil Solution

Authors: Mandlenkosi George Robert Mahlobo, Tumelo Seadira, Major Melusi Mabuza, Peter Apata Olubambi

Abstract:

Cathodic protection (CP) has been widely considered a suitable technique for mitigating corrosion of buried metal structures. Considerable effort has been made in developing techniques, in particular non-destructive techniques, for monitoring and quantifying the effectiveness of CP to ensure the sustainability and performance of buried steel structures. The aim of this study was to investigate the evolution of the electrochemical processes at the steel/soil interface during the application of CP on steel in simulated soil. Carbon steel was subjected to electrochemical tests, with NS4 solution used to simulate soil conditions, for 4 days before applying CP for a further 11 days. A previously modified non-destructive voltammetry technique was applied before and after the application of CP to measure the corrosion rate. Electrochemical impedance spectroscopy (EIS), in combination with mathematical modeling through equivalent electrical circuits, was applied to determine the electrochemical behavior at the steel/soil interface. The measured corrosion rate was found to have decreased from 410 µm/yr to 8 µm/yr between days 5 and 14 because of the applied CP. Equivalent electrical circuits were successfully constructed and used to adequately model the EIS results. The modeling of the obtained EIS results revealed the formation of corrosion products via a mixed activation-diffusion mechanism during the first 4 days, while the activation mechanism prevailed in the presence of CP, resulting in a protective film. X-ray diffraction analysis confirmed the presence of corrosion products and a predominant protective film corresponding to the calcareous deposit.
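
A minimal sketch of the kind of equivalent-circuit modelling used to interpret EIS spectra, here the impedance of a simple Randles cell (solution resistance Rs in series with a charge-transfer resistance Rct in parallel with a double-layer capacitance Cdl); the element values are hypothetical, not fitted to the study's data:

```python
import numpy as np

def randles_impedance(freq_hz, Rs=50.0, Rct=2000.0, Cdl=50e-6):
    omega = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    Z_cdl = 1.0 / (1j * omega * Cdl)               # double-layer capacitance branch
    return Rs + (Rct * Z_cdl) / (Rct + Z_cdl)      # Rs + (Rct || Cdl)

freqs = np.logspace(-2, 5, 50)                     # 10 mHz to 100 kHz sweep
Z = randles_impedance(freqs)
# Nyquist-plot coordinates: real part vs negative imaginary part
print(Z.real[:3], -Z.imag[:3])
```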

Keywords: carbon steel, cathodic protection, NS4 solution, voltammetry, EIS

Procedia PDF Downloads 59
5495 Maternal and Neonatal Outcomes in Women Undergoing Bariatric Surgery: A Systematic Review and Meta-Analysis

Authors: Nicolas Galazis, Nikolina Docheva, Constantinos Simillis, Kypros Nicolaides

Abstract:

Background: Obese women are at increased risk of many pregnancy complications, and bariatric surgery (BS) before pregnancy has been shown to improve some of these. Objectives: To review the current literature and quantitatively assess the obstetric and neonatal outcomes in pregnant women who have undergone BS. Search Strategy: MEDLINE, EMBASE and Cochrane databases were searched using relevant keywords to identify studies that reported on pregnancy outcomes after BS. Selection Criteria: Pregnancy outcomes in, firstly, women after BS compared to obese or BMI-matched women with no BS and, secondly, women after BS compared to the same or different women before BS. Only observational studies were included. Data Collection and Analysis: Two investigators independently collected data on study characteristics and outcome measures of interest. These were analysed using the random effects model. Heterogeneity was assessed, and sensitivity analysis was performed to account for publication bias. Main Results: The entry criteria were fulfilled by 17 non-randomised cohort or case-control studies, including seven with high methodological quality scores. In the BS group, compared to controls, there was a lower incidence of preeclampsia (OR 0.45, 95% CI 0.25-0.80; p=0.007), GDM (OR 0.47, 95% CI 0.40-0.56; p<0.001) and large neonates (OR 0.46, 95% CI 0.34-0.62; p<0.001), and a higher incidence of small neonates (OR 1.93, 95% CI 1.52-2.44; p<0.001), preterm birth (OR 1.31, 95% CI 1.08-1.58; p=0.006), admission for neonatal intensive care (OR 1.33, 95% CI 1.02-1.72; p=0.03) and maternal anaemia (OR 3.41, 95% CI 1.56-7.44; p=0.002). Conclusions: BS as a whole improves some pregnancy outcomes. Laparoscopic adjustable gastric banding does not appear to increase the rate of small neonates seen with other BS procedures. Obese women of childbearing age undergoing BS need to be aware of these outcomes.
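
A minimal sketch of random-effects pooling of odds ratios with the DerSimonian-Laird between-study variance, the kind of analysis referred to above (the study-level values below are hypothetical, not the 17 included studies):

```python
import numpy as np

def random_effects_pool(log_or, se):
    log_or, se = np.asarray(log_or, float), np.asarray(se, float)
    w = 1.0 / se**2                                    # fixed-effect weights
    fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - fixed) ** 2)              # Cochran's Q heterogeneity
    k = len(log_or)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (se**2 + tau2)                      # random-effects weights
    pooled = np.sum(w_star * log_or) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
    return np.exp(pooled), ci

# Hypothetical study-level odds ratios for one outcome, with SEs of the log ORs
ors = [0.50, 0.35, 0.60, 0.42]
ses = [0.30, 0.25, 0.40, 0.20]
print(random_effects_pool(np.log(ors), ses))
```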

Keywords: bariatric surgery, pregnancy, preeclampsia, gestational diabetes, birth weight

Procedia PDF Downloads 404
5494 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect changes in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties and are sometimes conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Then mass functions are calculated, and related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors that need to be combined, so computational efficiency could be improved. A cumulative sum (CUSUM) test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
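
A minimal sketch of two ingredients named above, a Kullback-Leibler divergence between two distributions and a CUSUM statistic that accumulates log-likelihood ratios until a threshold is crossed (Gaussian pre/post-change models and the threshold are illustrative; the evidence-theory fusion itself is not shown):

```python
import numpy as np

def kl_gaussian(mu0, s0, mu1, s1):
    # KL( N(mu0, s0^2) || N(mu1, s1^2) )
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

def cusum(samples, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0):
    g = 0.0
    for t, x in enumerate(samples):
        # log-likelihood ratio of post-change vs pre-change Gaussian models
        llr = ((x - mu0)**2 - (x - mu1)**2) / (2 * sigma**2)
        g = max(0.0, g + llr)
        if g > threshold:
            return t          # declare a change at this sample index
    return None

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])  # change at t=200
print(kl_gaussian(0, 1, 1, 1), cusum(data))
```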

Keywords: CUSUM, evidence theory, kl divergence, quickest change detection, time series data

Procedia PDF Downloads 327
5493 Assessment of Acute Cardiovascular Responses to Moderate and Vigorous Intensity Aerobic Exercises in Sedentary Adults and Amateur Athletes

Authors: Caner Yilmaz, Zuhal Didem Takinaci

Abstract:

Introduction: Today, our knowledge about the effects of physical activity performed at different intensities on the cardiovascular system is still not clear. Therefore, to contribute to the literature, in our study, sedentary individuals and amateur athletes were assessed in a single session with the aim of evaluating the cardiovascular effects of moderate- and vigorous-intensity exercise. Methods: 80 participants (40 amateur athletes and 40 sedentary young adults) took part in our study. Participants were divided into two groups: amateur athletes in Group I (mean age: 25.0 ± 3.6 yrs) and sedentary adults in Group II (mean age: 23.8 ± 3.7 yrs). Participants in both groups were assessed twice, firstly at moderate intensity (5 km/h, 30 min walking) and secondly at vigorous intensity (8 km/h, 20 min jogging). Participants' SBP (Systolic Blood Pressure), DBP (Diastolic Blood Pressure), HR (Heart Rate), SpO₂ (Oxygen Saturation), BT (Body Temperature) and RR (Respiratory Rate) were measured. Results: The findings showed that after moderate-intensity aerobic exercise, the changes in SBP, DBP, and SpO₂ were significantly greater in Group II (p < 0.05). After vigorous-intensity aerobic exercise, the changes in SBP, SpO₂, HR, and RR were significantly greater in Group II (p < 0.05). The BORG score of Group II was significantly higher after both moderate- and vigorous-intensity aerobic exercise (p < 0.05). Conclusion: The cardiovascular responses of amateur athletes remained closer to initial values, and the differences between the two groups increased in direct proportion to the intensity of the exercise. Both exercise intensities could be adequate.

Keywords: aerobic, exercise, sedentary, cardiovascular

Procedia PDF Downloads 287
5492 Governance of Energy Transitions in Developing States

Authors: Robert Lindner

Abstract:

In recent years, a multitude of international efforts, including the United Nations' aspirational 2030 Agenda for Sustainable Development, have provided new momentum to energy access and rural electrification projects aimed at combating energy poverty in developing states in Asia. Rural electrification projects promise to facilitate other sustainable development aims, such as the empowerment of local communities through the creation of economic opportunities or increased disaster resilience. This study applies a multi-level governance research framework to the cases of the ongoing energy system transitions in Myanmar and Cambodia. It explores what impact the international aid community, especially multilateral development banks and international development agencies, has on the governance of the transitions and how diverging aid donor interests shape policy making and project planning. The study is based on policy analysis and expert interviews, as well as extensive field research. It critically examines the current development trajectories and the strategies of the stakeholders involved. It concludes that institutional and technological competition between donors, as well as a lack of transparency and inclusion in the project planning and implementation phases, contributes to insufficient coordination in national energy policy making and project implementation at the local level. The study further discusses possible alternative approaches that might help to promote the spread of sustainable energy technologies.

Keywords: energy governance, developing countries, multi-level governance, energy transitions

Procedia PDF Downloads 105
5491 Design and Optimization of a Mini High Altitude Long Endurance (HALE) Multi-Role Unmanned Aerial Vehicle

Authors: Vishaal Subramanian, Annuatha Vinod Kumar, Santosh Kumar Budankayala, M. Senthil Kumar

Abstract:

This paper discusses the aerodynamic and structural design, simulation and optimization of a mini High Altitude Long Endurance (HALE) UAV. The applications of this mini HALE UAV range from aerial topographical surveys, quick first-aid supply, emergency medical blood transport, and search and relief activities to border patrol, surveillance and estimation of forest fire progression. Although classified as a mini UAV according to UVS International, our design is an amalgamation of the features of the 'mini' and 'HALE' categories, combining the light weight of the 'mini' with the high altitude ceiling and endurance of the HALE. Designed with the idea of implementation in India, it is in strict compliance with the UAS rules proposed by the office of the Director General of Civil Aviation. The plane can be completely automated or have partial override control and is equipped with an infrared camera and a multi-coloured camera with on-board storage or live telemetry, and a GPS system with geo-fencing and fail-safe measures. An additional 1.5 kg payload can be attached to three major hard points on the aircraft and can comprise delicate equipment or releasable payloads. The paper details the design, the optimization process and the simulations performed using software such as Design Foil, XFLR5, SolidWorks and Ansys.

Keywords: aircraft, endurance, HALE, high altitude, long range, UAV, unmanned aerial vehicle

Procedia PDF Downloads 392
5490 Formulation and Optimization of Topical 5-Fluorouracil Microemulsions Using Central Composite Design

Authors: Sudhir Kumar, V. R. Sinha

Abstract:

Water-in-oil topical microemulsions of 5-FU were developed and optimized using a face-centered central composite design. Topical w/o microemulsions of 5-FU were prepared using sorbitan monooleate (Span 80) and polysorbate 80 (Tween 80), with different oils such as oleic acid (OA), triacetin (TA), and isopropyl myristate (IPM). Ternary phase diagrams delineated the microemulsion region, and the face-centered central composite design helped in determining the effects of the selected variables, viz. type of oil, Smix ratio and water concentration, on responses such as drug content, globule size and viscosity of the microemulsions. The CCD showed that the factors have statistically significant effects (p<0.01) on the selected responses. The actual responses showed excellent agreement with the values predicted by the CCD, with low residual standard errors, and the optimized values were found within the predicted range. Furthermore, other characteristics of the microemulsions, such as pH and conductivity, were investigated. For the optimized microemulsion batch, ex-vivo skin flux, skin irritation and retention studies were performed and compared with a marketed 5-FU formulation. In ex-vivo skin permeation studies, higher skin retention of the drug and minimal flux were achieved for the optimized microemulsion batch compared with the marketed cream. Controlled release of the drug was achieved for the optimized batch with higher skin retention of 5-FU, which can further be utilized for the treatment of many dermatological disorders.

Keywords: 5-FU, central composite design, microemulsion, ternary phase diagram

Procedia PDF Downloads 474
5489 Efficient Production of Cell-Adhesive Motif From Human Fibronectin Domains to Design a Bio-Functionalized Scaffold for Tissue Engineering

Authors: Amina Ben Abla, Sylvie Changotade, Geraldine Rohman, Guilhem Boeuf, Cyrine Dridi, Ahmed Elmarjou, Florence Dufour, Didier Lutomski, Abdellatif Elm’semi

Abstract:

Understanding cell adhesion and interaction with the extracellular matrix (ECM) is essential for biomedical and biotechnological applications, including the development of biomaterials. In recent years, numerous biomaterials have emerged and been used in the field of tissue engineering. Nevertheless, the lack of interaction between biomaterials and cells still limits their bio-integration. Thus, the design of bioactive biomaterials to improve cell attachment and proliferation is of growing interest. In this study, a bio-functionalized material was developed, combining a synthetic polymer scaffold surface with selected domains of type III human fibronectin (FNIII-DOM) to promote cell adhesion and proliferation. A bioadhesive ligand comprising the cell-binding domains of human fibronectin, a major ECM protein that interacts with a variety of integrin cell-surface receptors and ECM proteins through specific binding domains, was engineered. FNIII-DOM was produced in an E. coli bacterial system in a 5 L fermentor with a high yield reaching 20 mg/L. The bioactivity of the produced fragment was validated by studying the adhesion of human cells. The adsorption and immobilization of FNIII-DOM onto the polymer scaffold were evaluated in order to develop an innovative biomaterial.

Keywords: biomaterials, cellular adhesion, fibronectin, tissue engineering

Procedia PDF Downloads 141
5488 Mannosylated Oral Amphotericin B Nanocrystals for Macrophage Targeting: In vitro and Cell Uptake Studies

Authors: Rudra Vaghela, P. K. Kulkarni

Abstract:

The aim of the present research was to develop oral Amphotericin B (AmB) nanocrystals (Nc) grafted with a suitable ligand in order to enhance drug transport across the intestinal epithelial barrier and, subsequently, active uptake by macrophages. AmB Nc were prepared by the liquid anti-solvent precipitation (LAS) technique. Poloxamer 188 was used to stabilize the prepared AmB Nc, which were grafted with mannose for actively targeting M cells in Peyer's patches. To prevent shedding of the stabilizer and ligand, N,N'-dicyclohexylcarbodiimide (DCC) was used as a cross-linker. The prepared AmB Nc were characterized for particle size, PDI, zeta potential, X-ray diffraction (XRD) and surface morphology using scanning electron microscopy (SEM), and evaluated for drug content, in vitro drug release and cell uptake using Caco-2 cells. The particle size of the stabilized, ligand-grafted AmB Nc was in the range of 287-417 nm, with a negative zeta potential between -18 and -25 mV. XRD studies revealed the crystalline nature of the AmB Nc. SEM studies revealed that ungrafted AmB Nc were irregular in shape with a rough surface, whereas grafted AmB Nc were rod-shaped with a smooth surface. In vitro drug release of AmB Nc was found to be 86% at the end of one hour. Cellular studies revealed higher invasion and uptake of the grafted AmB Nc by the Caco-2 cell membrane when compared to ungrafted AmB Nc. Our findings emphasize the scope for developing an oral delivery system targeting M cells in Peyer's patches.

Keywords: leishmaniasis, amphotericin B nanocrystals, macrophage targeting, LAS technique

Procedia PDF Downloads 299
5487 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies

Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey

Abstract:

Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the waters of the Earth are becoming increasingly difficult to determine because of additional uncertainty related to anthropogenic emissions. According to the sixth Intergovernmental Panel on Climate Change (IPCC) Technical Paper, on climate change and water, changes in the large-scale hydrological cycle have been related to the increase in observed temperature over several decades. Although much previous research on the effect of climate change on hydrology provides a general picture of possible global hydrological change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Among the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer-resolution output based on atmospheric physics over a region using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as input to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture and other hydrological variables of interest.
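
A minimal sketch of the statistical downscaling idea described above: fit an empirical transfer function between coarse GCM predictors and a station-scale predictand, then apply it to scenario output (the predictors and rainfall series are synthetic, and the simple linear form is only one of many regression choices used in practice):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical GCM predictors for the historical period, e.g. standardized
# anomalies of sea-level pressure, 850 hPa humidity and 500 hPa height
predictors_hist = rng.standard_normal((1000, 3))
# Hypothetical observed station rainfall (the predictand)
rain_obs = (2.0 + predictors_hist @ np.array([0.5, 1.2, -0.3])
            + 0.5 * rng.standard_normal(1000))

transfer = LinearRegression().fit(predictors_hist, rain_obs)

# Apply the trained transfer function to future-scenario GCM predictors
predictors_future = rng.standard_normal((365, 3))
rain_downscaled = transfer.predict(predictors_future)
print(rain_downscaled[:5])
```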

Keywords: climate change, downscaling, GCM, RCM

Procedia PDF Downloads 399
5486 Structure-Constructivism in the Philosophy of Mathematics

Authors: Jeansou Moun

Abstract:

This study argues that constructivism and structuralism, which have been the two important schools of mathematical philosophy since the mid-19th century, can and should be synthesized into structure-constructivism. In fact, the philosophy of mathematics is divided into more than ten schools depending on the point of view. However, the biggest trend is Platonism, which claims that mathematical objects are 'abstract entities' that exist independently of the human mind and material objects. Its opposite is constructivism, according to which mathematical objects are products of the construction of the human mind. Depending on whether the basis of the construction is a logical device, a symbolic system, or empirical perception, constructivism is subdivided into logicism, formalism, and intuitionism. These three schools are themselves further subdivided into various variants, and among them, structuralism, which emerged in the mid-20th century, is receiving the most attention. Structuralism, which emphasizes structure instead of individual objects, is divided into non-eliminative structuralism, which supports the a priori status of structure, and eliminative structuralism, which rejects any abstract entity. In this context, it is believed that structure itself is not an a priori entity but a result of the construction of the cognitive subject, and that no object has ever been given to us in its full meaning from the outset. In other words, concepts are progressively structured through a dialectical cycle between sensory perception, imagination (abstraction), concepts, judgments, and reasoning. Symbols are needed for formal operation; however, without concrete manipulation, the formal operation cannot have any meaning. When formal structuring is achieved, the reality (the object) itself is also newly structured. This is 'structure-constructivism'.

Keywords: philosophy of mathematics, platonism, logicism, formalism, constructivism, structuralism, structure-constructivism

Procedia PDF Downloads 92
5485 Epilepsy Seizure Prediction by Effective Connectivity Estimation Using Granger Causality and Directed Transfer Function Analysis of Multi-Channel Electroencephalogram

Authors: Mona Hejazi, Ali Motie Nasrabadi

Abstract:

Epilepsy is a persistent neurological disorder that affects more than 50 million people worldwide. Hence, there is a need for an efficient prediction model for making a correct diagnosis of epileptic seizures and an accurate prediction of their type. In this study, we consider how the Effective Connectivity (EC) patterns obtained from intracranial electroencephalographic (EEG) recordings reveal information about the dynamics of the epileptic brain and can be used to predict imminent seizures, as this will enable patients (and caregivers) to take appropriate precautions. We use this approach because we believe that effective connectivity begins to change near seizures, so seizures can be predicted from this feature. Results are reported on the standard Freiburg EEG dataset, which contains data from 21 patients suffering from medically intractable focal epilepsy. Six EEG channels from each patient are considered, and effective connectivity is estimated using the Directed Transfer Function (DTF) and Granger Causality (GC) methods. We concentrate on the standard deviation of effective connectivity over time, and feature changes in five brain frequency sub-bands (alpha, beta, theta, delta, and gamma) are compared. The performance obtained with the proposed scheme in predicting seizures is: an average prediction time of 50 minutes before seizure onset, a maximum sensitivity of approximately 80%, and a false positive rate of 0.33 FP/h. The DTF method is more suitable for predicting epileptic seizures, and generally the strongest results are found in the gamma and beta sub-bands. The research in this paper is significantly helpful for clinical applications, especially for the exploitation of online portable devices.
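
A minimal sketch of a pairwise Granger causality test of the kind used to build connectivity features (two synthetic channels only; the directed transfer function and the six-channel intracranial setup are not reproduced here):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):                        # y is driven by past values of x
    y[t] = 0.6 * y[t-1] + 0.4 * x[t-2] + 0.1 * rng.standard_normal()

# Test whether the second column (x) Granger-causes the first column (y)
data = np.column_stack([y, x])
res = grangercausalitytests(data, maxlag=4, verbose=False)
p_value = res[2][0]["ssr_ftest"][1]          # p-value of the F-test at lag 2
print(p_value)                               # a small p-value suggests x Granger-causes y
```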

Keywords: effective connectivity, Granger causality, directed transfer function, epilepsy seizure prediction, EEG

Procedia PDF Downloads 461
5484 The Correlation between Political Awareness and Political Participation for University Students’ “Applied Study”

Authors: Rana Mohamed

Abstract:

Although youth in Egypt were away from political life for a long time, they are able to make a tangible difference in the political arena. Purpose: This exploratory study aims to determine whether and how much the prevailing political culture influences participatory behavior, with a special focus on political awareness factors among university students in Egypt. Methodology: The study employed several quantitative and qualitative data collection methods to ensure the validity of the results, verifying the positive relationships between the levels of political awareness and political participation and between political values in society and the level of political participation among university students. To achieve the objectives of the paper in light of the available literature and data, the study adopts a systems analysis method, applying the inputs, outputs and conversions associated with the phenomenon of political participation to analyze the different factors that affect the prevailing political culture and the patterns of values in Egyptian society. Findings: The results reveal that the levels of political awareness and political participation among students were low, with a statistically significant relationship between them. In addition, the patterns of values in Egyptian culture significantly influence the levels of student participation. Therefore, the study recommends formulating policies that aim to increase awareness levels and integrate youth into the political process. Originality/Value: The importance of this academic study stems from addressing one of the central issues in political science; it measures the change in Egyptian patterns of culture and values among university students.

Keywords: political awareness, political participation, civic culture, citizenship, Egyptian universities, political knowledge

Procedia PDF Downloads 244
5483 Employees’ Perception of Organizational Communication in Oyo State Agricultural Development Programme (ADP), Nigeria

Authors: Michael Tunde Ajayi, Oluwakemi Enitan Fapojuwo

Abstract:

The study assessed employees' perception of organizational communication in the Oyo State Agricultural Development Programme and its effect on their job performance. A simple random sampling technique was used to select 120 employees, and a structured questionnaire was used for data collection. Findings showed that 66.7% of the respondents were male and 60.4% were between the ages of 31 and 40 years. Most (87.5%) of the respondents had tertiary education, and the majority (73.9%) had working experience of 5 years or less. The major perceived leadership styles used in communicating with employees were that employees were not allowed to send feedback (X=3.23), information was usually inadequately passed across to employees (X=2.52), information was given with explanation (X=2.04), leaders rarely gave information on innovation (X=1.91), and information was usually passed in the form of orders (X=1.89). The majority (61.5%) of the respondents perceived that the common communication flow used is a downward communication system. Respondents perceived that the effects of organizational communication on their job performance were that they were able to know the constraints within the organization (X=4.89), solve problems occurring in the organization (X=4.70) and achieve organizational objectives (X=4.40). However, the major constraints affecting organizational communication were that there was no cordial relationship among workers (X=3.33), receivers had poor listening skills (X=3.32) and information was not in simple form (X=3.29). There was a significant relationship between organizational communication (r=0.984, p<0.05) and employees' job performance. The study suggested that managers should encourage cordial relationships among workers in order to ease communication flow in organizations and also use adequate media of communication in order to make information readily available within organizations.

Keywords: employees’ perception, organizational communication, effects, job performance

Procedia PDF Downloads 521