Search results for: return prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3148

628 Exploring Hydrogen Embrittlement and Fatigue Crack Growth in API 5L X52 Steel Pipeline Under Cyclic Internal Pressure

Authors: Omar Bouledroua, Djamel Zelmati, Zahreddine Hafsi, Milos B. Djukic

Abstract:

Transporting hydrogen gas through the existing natural gas pipeline network offers an efficient solution for energy storage and conveyance. Hydrogen generated from excess renewable electricity can be conveyed through the API 5L steel pipelines that already exist. In recent years, there has been a growing demand for the transportation of hydrogen through existing gas pipelines. Therefore, numerical and experimental tests are required to verify and ensure the mechanical integrity of the API 5L steel pipelines that will be used for pressurized hydrogen transportation. Internal pressure loading is likely to accelerate hydrogen diffusion through the internal pipe wall and consequently accentuate the hydrogen embrittlement of steel pipelines. Furthermore, pre-cracked pipelines are susceptible to rapid failure, mainly under time-dependent cyclic pressure loading that drives fatigue crack propagation. After a number of loading cycles, the initial cracks propagate to a critical size; at this point, the remaining service life of the pipeline can be estimated and inspection intervals determined. This paper focuses on the hydrogen embrittlement of API 5L steel pipelines under cyclic pressure loading. Pressurized hydrogen gas is transported through a network of pipelines where demands at consumption nodes vary periodically. The resulting pressure profile over time is treated as a cyclic loading on the internal wall of a pre-cracked pipeline made of API 5L steel-grade material. Numerical modeling has allowed the prediction of fatigue crack evolution and the estimation of the remaining service life of the pipeline. The methodology developed in this paper is based on the ASME B31.12 standard, which outlines the guidelines for hydrogen pipelines.
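For a feel of this kind of cycle-by-cycle remaining-life estimate, the sketch below integrates a Paris-law crack-growth model until the crack reaches a critical depth. It is a minimal sketch under stated assumptions: the coefficients, geometry factor and stress range are placeholders, not the hydrogen-adjusted values an ASME B31.12 analysis would prescribe.

```python
import numpy as np

# Minimal Paris-law sketch: da/dN = C * (dK)^m, with dK for an internal
# surface crack approximated as dK = Y * d_sigma * sqrt(pi * a).
# C, m, Y and the stress range are illustrative placeholders, not values
# taken from ASME B31.12 or from the paper.
C, m = 1e-11, 3.0          # Paris coefficients for dK in MPa*sqrt(m) (assumed)
Y = 1.12                   # geometry factor (assumed)
d_sigma = 120.0            # hoop-stress range per pressure cycle, MPa (assumed)
a, a_crit = 1e-3, 8e-3     # initial / critical crack depth, m (assumed)

cycles = 0
while a < a_crit:
    dK = Y * d_sigma * np.sqrt(np.pi * a)   # stress-intensity range, MPa*sqrt(m)
    a += C * dK**m                          # crack growth this cycle, m
    cycles += 1

print(f"Estimated remaining life: {cycles} pressure cycles")
```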

Keywords: hydrogen embrittlement, pipelines, transient flow, cyclic pressure, fatigue crack growth

Procedia PDF Downloads 83
627 Off-Shore Wind Turbines: The Issue of Soil Plugging during Pile Installation

Authors: Mauro Iannazzone, Carmine D'Agostino

Abstract:

Off-shore wind turbines are currently considered a reliable source of renewable energy worldwide, and especially in the UK. Most of the operational off-shore wind turbines located in shallow waters (i.e., < 30 m) are supported on monopiles. Monopiles are open-ended steel tubes with diameters ranging between 4 and 6 m. It is expected that future off-shore wind farms will be located in water depths as great as 70 m; therefore, alternative foundation arrangements are needed. Foundations for off-shore structures normally consist of open-ended piles driven into the soil by means of impact hammers. During pile installation, the soil column inside the pile may mobilize sufficient shear strength to prevent more soil from entering the pile. This phenomenon is known as soil plugging, and it represents an important issue as it may significantly change the driving resistance of open-ended piles. In fact, if plug formation is unexpected, the installation may require more powerful and more expensive hammers. Engineers need to estimate whether the driven pile will be installed in a plugged or unplugged mode. As a consequence, a prediction of the degree of soil plugging is required in order to correctly predict the drivability of the pile. This work presents a brief review of the state of the art of pile driving and of the approaches used to predict the formation of soil plugs. In addition, a novel analytical approach is proposed, based on the vertical equilibrium of a plugged pile. Differently from previous studies, this research takes into account the enhancement of the stress within the soil plug. Finally, the work presents and discusses a series of experimental tests carried out on small-scale model piles to validate the analytical solution.
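As a hedged illustration of a vertical-equilibrium plugging check, the sketch below uses a simplified textbook-style balance, not the authors' formulation (which additionally models stress enhancement within the plug): the pile plugs if the internal shaft friction plus the plug weight can resist the end-bearing thrust on the plug base. All input values are assumptions.

```python
import numpy as np

def plugs(D_i, L_plug, gamma, K, delta_deg, q_b):
    """D_i: inner diameter (m); L_plug: plug length (m); gamma: effective
    unit weight (kN/m^3); K: lateral earth pressure coefficient;
    delta_deg: interface friction angle (deg); q_b: end bearing pressure
    at the pile toe (kPa). All values illustrative assumptions."""
    A_i = np.pi * D_i**2 / 4                      # plug base area, m^2
    sigma_v_avg = gamma * L_plug / 2              # avg vertical stress, kPa
    tau_avg = K * sigma_v_avg * np.tan(np.radians(delta_deg))
    F_friction = tau_avg * np.pi * D_i * L_plug   # internal shaft capacity, kN
    W_plug = gamma * L_plug * A_i                 # effective plug weight, kN
    Q_base = q_b * A_i                            # end bearing on plug base, kN
    # plugged mode if friction plus plug weight can resist the base thrust
    return F_friction + W_plug >= Q_base

print(plugs(D_i=4.0, L_plug=15.0, gamma=9.0, K=1.0, delta_deg=25.0, q_b=3000.0))
```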

Keywords: off-shore wind turbines, pile installation, soil plugging, wind energy

Procedia PDF Downloads 310
626 Cognitive Science Based Scheduling in Grid Environment

Authors: N. D. Iswarya, M. A. Maluk Mohamed, N. Vijaya

Abstract:

A grid is an infrastructure that allows the deployment of large volumes of distributed data from multiple locations toward a common goal. Scheduling data-intensive applications becomes challenging as the data sets involved are very large. Only two solutions exist for tackling this issue. First, computation that requires huge data sets can be transferred to the data site. Second, the required data sets can be transferred to the computation site. In the former scenario, the computation cannot be transferred, since the servers are storage/data servers with little or no computational capability; hence, the second scenario is considered for further exploration. During scheduling, transferring huge data sets from one site to another requires considerable network bandwidth. In order to mitigate this issue, this work focuses on incorporating cognitive science into scheduling. Cognitive science is the study of the human brain and its related activities. Current research mainly focuses on incorporating cognitive science into various computational modeling techniques. In this work, the problem-solving approach of the human brain is studied and incorporated into data-intensive scheduling in grid environments. Here, a cognitive engine (CE) is designed and deployed at various grid sites. The intelligent agents present in the CE help in analyzing requests and creating the knowledge base. Depending upon the link capacity, a decision is taken whether to transfer the data sets or to partition them, as sketched below. The agents predict the next request in order to serve the requesting site with data sets in advance, which reduces the data availability time and the data transfer time. The replica catalog and metadata catalog created by the agents assist in the decision-making process.
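A minimal sketch of such a transfer-versus-partition rule, assuming a simple deadline test on link capacity (an illustrative heuristic, not the paper's cognitive-engine logic):

```python
# If the full data set can be moved within a deadline given the link
# capacity, transfer it whole; otherwise split it across replica sites.
def schedule_transfer(dataset_gb, link_gbps, deadline_s, n_replica_sites):
    transfer_time_s = dataset_gb * 8 / link_gbps   # naive, no protocol overhead
    if transfer_time_s <= deadline_s:
        return "transfer whole data set"
    # partition so each chunk fits the deadline on its own link (assumes
    # replica sites have comparable link capacity)
    chunks = min(n_replica_sites,
                 int(-(-transfer_time_s // deadline_s)))  # ceiling division
    return f"partition into {chunks} chunks across replica sites"

print(schedule_transfer(dataset_gb=500, link_gbps=1.0,
                        deadline_s=1800, n_replica_sites=8))
```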

Keywords: data grid, grid workflow scheduling, cognitive artificial intelligence

Procedia PDF Downloads 391
625 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model

Authors: Didier Auroux, Vladimir Groza

Abstract:

This work is part of the STEEP Marie-Curie ITN project, and it focuses on the identification of unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The necessity of studying this problem comes from industrial milling applications, where the ability to predict and model the final surface with high accuracy is a primary task in the absence of any knowledge of the model parameters that should be used. In this framework, we propose to identify the model parameters by minimizing a cost function measuring the difference between the experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, makes it possible to determine the unknowns of the AWJM model and the optimal values that could be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that the parameter identification problem is in fact highly unstable and strongly depends on the input measurements. Regularization terms can be used effectively to deal with the presence of data noise and to improve the correctness of the identification. Based on this approach, we present 2D and 3D results for the identification of the model parameters and for the surface prediction, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain acceptable results for manufacturing and to expect the proper identification of the unknowns. This approach also allows us to extend the research to more complex cases, such as a 3D time-dependent model with variations of the jet feed speed.
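A minimal sketch of the cost-function formulation, assuming a toy analytic trench model in place of the AWJM PDE and finite-difference gradients in place of the TAPENADE-generated adjoint:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the AWJM model: a Gaussian trench profile with two
# unknown parameters (depth, width). The real model is a PDE.
x = np.linspace(-2, 2, 101)

def trench(params):
    depth, width = params
    return -depth * np.exp(-(x / width) ** 2)

true = np.array([1.5, 0.6])
data = trench(true) + 0.01 * np.random.default_rng(0).normal(size=x.size)

def cost(params, alpha=1e-3, prior=np.array([1.0, 1.0])):
    misfit = trench(params) - data
    # Tikhonov regularization stabilises the ill-posed identification
    return 0.5 * misfit @ misfit + 0.5 * alpha * np.sum((params - prior) ** 2)

result = minimize(cost, x0=np.array([1.0, 1.0]), method="L-BFGS-B",
                  bounds=[(0.1, 5.0), (0.1, 5.0)])
print(result.x)   # recovers approximately [1.5, 0.6]
```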

Keywords: Abrasive Waterjet Milling, inverse problem, model parameters identification, regularization

Procedia PDF Downloads 313
624 Quoting Jobshops Due Dates Subject to Exogenous Factors in Developing Nations

Authors: Idris M. Olatunde, Kareem B.

Abstract:

In manufacturing systems, especially job shops, service performance is a key factor that determines customer satisfaction. Service performance depends not only on the quality of the output but on the delivery lead times as well. Besides product quality enhancement, delivery lead time must be minimized for optimal patronage. Quoting accurate due dates is a sine qua non for job shop operational survival in a globally competitive environment. Quoting accurate due dates in job shops has been a herculean task that has nearly defied the many methods employed, owing to the complex job-routing nature of the system. This class of NP-hard problems admits no exact algorithm that can guarantee an optimal solution. The job shop operational problem is more complex in developing nations due to some peculiar factors. Operational complexity in job shops emanates from political instability, a poor economy, limited technological know-how, and an unpromising socio-political environment. These exogenous factors were hardly considered in previous studies on scheduling problems related to due-date determination in job shops. This study fills the gap created by past studies by developing a dynamic model that incorporates the exogenous factors for accurate determination of due dates across varying job complexities. Real data from six job shops selected from different parts of Nigeria were used to test the efficacy of the model, and the outcomes were analyzed statistically. The results of the analyses showed that the model is more promising in determining accurate due dates than the traditional models deployed by many job shops, in terms of patronage and lead-time minimization.
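Since the paper's dynamic model is not reproduced here, the sketch below illustrates the general idea with the classical total-work-content (TWK) due-date rule extended by exogenous-factor multipliers; the rule, the factor names and all values are illustrative assumptions, not the authors' fitted model.

```python
# Classical TWK rule, d = r + k * sum(p_i), inflated by multipliers for
# exogenous disruptions typical of the operating environment (the
# factors below are placeholders, not fitted values).
def quote_due_date(release_day, processing_days, k=1.5,
                   power_outage_factor=1.2, supply_delay_factor=1.1):
    flow_allowance = k * sum(processing_days)
    flow_allowance *= power_outage_factor * supply_delay_factor
    return release_day + flow_allowance

print(quote_due_date(release_day=0, processing_days=[2.0, 3.5, 1.5]))
# -> 13.86 days under the assumed factors
```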

Keywords: due dates prediction, improved performance, customer satisfaction, dynamic model, exogenous factors, job shops

Procedia PDF Downloads 410
623 Design, Synthesis and Pharmacological Investigation of Novel 2-Phenazinamine Derivatives as a Mutant BCR-ABL (T315I) Inhibitor

Authors: Gajanan M. Sonwane

Abstract:

Nowadays, the entire pharmaceutical industry is facing the challenge of increasing efficiency and innovation. The major hurdles are the growing cost of research and development and a concurrently stagnating number of new chemical entities (NCEs). Hence, the challenge is to select the most druggable targets and to search for the corresponding drug-like compounds, which must also possess the specific pharmacokinetic and toxicological properties that allow them to be developed as drugs. The present research work covers the development of new anticancer heterocycles using molecular modeling techniques. Heterocycles synthesized through such a methodology are more likely to be effective, as various physicochemical parameters have already been studied and the structure has been optimized for its best fit in the receptor. Hence, on the basis of the literature survey and considering the need to develop newer anticancer agents, new phenazinamine derivatives were designed by subjecting the nucleus to molecular modeling, viz., GQSAR analysis and docking studies. Simultaneously, these designed derivatives were subjected to in silico prediction of biological activity through PASS studies and then to in silico toxicity risk assessment. In the PASS studies, it was found that all the derivatives exhibited a good spectrum of biological activities, confirming their anticancer potential. The toxicity risk assessment revealed that all the derivatives obey Lipinski's rule. Amongst the series, compounds 4c, 5b and 6c were found to possess logP and drug-likeness values comparable with the standard Imatinib (used for the anticancer activity studies) and also with the standard drug methotrexate (used for the antimitotic activity studies). One of the most notable mutations is the threonine-to-isoleucine mutation at codon 315 (T315I), which is known to be resistant to all currently available TKIs. An enzyme assay is planned to confirm target-selective activity.
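A minimal sketch of a Lipinski rule-of-five screen of the kind reported above, using RDKit (the abstract does not name the software used; the SMILES string is imatinib's published structure, included purely as a worked example):

```python
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

# Imatinib, used here only to exercise the rule-of-five check.
smiles = "CC1=C(C=C(C=C1)NC(=O)C2=CC=C(C=C2)CN3CCN(CC3)C)NC4=NC=CC(=N4)C5=CC=CN=C5"
mol = Chem.MolFromSmiles(smiles)

violations = sum([
    Descriptors.MolWt(mol) > 500,          # molecular weight
    Crippen.MolLogP(mol) > 5,              # lipophilicity (logP)
    Lipinski.NumHDonors(mol) > 5,          # H-bond donors
    Lipinski.NumHAcceptors(mol) > 10,      # H-bond acceptors
])
print(f"Lipinski violations: {violations}")  # 0 or 1 => drug-like
```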

Keywords: drug design, tyrosine kinases, anticancer, Phenazinamine

Procedia PDF Downloads 113
622 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane

Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo

Abstract:

Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical productions. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. Firstly, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models would inherently be unable to obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction, as well as accuracy similar to the RF model, with R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
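A minimal sketch of greedy layer-wise pretraining in Keras, assuming an illustrative stack of dense autoencoder layers and random stand-in data (the authors' architecture and hyperparameters are not specified here):

```python
import numpy as np
from tensorflow import keras

X = np.random.rand(1000, 8).astype("float32")   # stand-in for DRM inputs
y = np.random.rand(1000, 6).astype("float32")   # stand-in for the 6 outputs

sizes, pretrained, inp = [64, 32, 16], [], X
for h in sizes:
    # train a one-hidden-layer autoencoder on the previous layer's output
    ae = keras.Sequential([
        keras.layers.Dense(h, activation="relu"),
        keras.layers.Dense(inp.shape[1]),
    ])
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(inp, inp, epochs=20, batch_size=32, verbose=0)
    encoder = ae.layers[0]
    pretrained.append(encoder)
    inp = encoder(inp).numpy()                  # feed encodings forward

# stack the pretrained encoders and fine-tune end to end on the targets
model = keras.Sequential(pretrained + [keras.layers.Dense(y.shape[1])])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, batch_size=32, verbose=0)
```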

Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining

Procedia PDF Downloads 82
621 Optimizing the Window Geometry Using Fractals

Authors: K. Geetha Ramesh, A. Ramachandraiah

Abstract:

In an internal building space, daylight becomes a powerful source of illumination. The challenge, therefore, is to develop means of utilizing both direct and diffuse natural light in buildings while maintaining and improving occupants' visual comfort, particularly at greater distances from the daylight-admitting windows. The geometrical features of windows in a building have a significant effect on the daylight provided. The main goal of this research is to develop an innovative window geometry that will effectively provide the daylight component adequately, together with the internal reflected component (IRC) and the external reflected component (ERC), if any. This involves the exploration of a light-redirecting system using fractal geometry for windows, in order to penetrate and distribute daylight more uniformly to greater depths, minimizing heat gain and glare, and also to reduce building energy use substantially. Of late, the creation of fractal window geometries and the daylight illuminance occurring due to such windows has become an interesting area of study. The amount of daylight can change significantly based on the window geometry and sky conditions. This leads to (i) the exploration of various fractal patterns suitable for window designs, and (ii) the quantification of the effect of a chosen fractal window based on the relationship between the fractal pattern, size, orientation and glazing properties for optimizing daylighting. There are many daylighting applications able to predict the behaviour of light in a room admitted through a traditional opening, i.e., a regular window. The conventional prediction methodology involves the evaluation of the daylight factor, the internal reflected component and the external reflected component. Having evaluated the daylight illuminance level for a conventional window, the technical performance of a fractal window for optimal daylighting is to be studied and compared with that of a regular window. The methodologies involved are highlighted in this paper.
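As a hedged illustration of the kind of fractal pattern that might be explored for a window outline, the sketch below iterates a Koch-style construction on a square aperture and reports how the perimeter and glazed area change with fractal order; the geometry and dimensions are assumptions for illustration only.

```python
import math

# Each Koch-style iteration replaces the middle third of every edge
# segment with two sides of an outward equilateral triangle, multiplying
# the perimeter by 4/3 while the glazed area grows by a diminishing amount.
def koch_window(side, iterations):
    perimeter = 4 * side              # start from a square window
    area = side * side
    segment, n_segments = side, 4
    for _ in range(iterations):
        area += n_segments * (math.sqrt(3) / 4) * (segment / 3) ** 2
        n_segments *= 4
        segment /= 3
        perimeter *= 4 / 3
    return perimeter, area

for order in range(4):
    p, a = koch_window(side=1.2, iterations=order)
    print(f"order {order}: perimeter {p:.2f} m, glazed area {a:.3f} m^2")
```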

Keywords: daylighting, fractal geometry, fractal window, optimization

Procedia PDF Downloads 296
620 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on the deep learning approach to survival modeling; however, their application to complex survival problems still needs improvement. In addition, the existing models fail to fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract the complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
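A minimal sketch of this general architecture family in Keras — an autoencoder branch as feature selector, an LSTM with a softmax attention weighting over time, and a per-cause incidence head; the layer sizes, names and loss choices are illustrative assumptions, not the CmpXRnnSurv_AE specification.

```python
from tensorflow import keras
from tensorflow.keras import layers

T, F, RISKS = 10, 12, 2                    # time steps, covariates, causes

cov = keras.Input(shape=(T, F))
# autoencoder bottleneck acting as a feature selector
encoded = layers.TimeDistributed(layers.Dense(6, activation="relu"))(cov)
recon = layers.TimeDistributed(layers.Dense(F), name="reconstruction")(encoded)

h = layers.LSTM(32, return_sequences=True)(encoded)
# attention-style weights over time (an analogue of the RIW idea)
scores = layers.Dense(1)(h)                        # (batch, T, 1)
weights = layers.Softmax(axis=1)(scores)
context = layers.Flatten()(layers.Dot(axes=1)([weights, h]))  # weighted sum
# per-cause incidence probabilities
cif = layers.Dense(RISKS, activation="sigmoid", name="cif")(context)

model = keras.Model(cov, [recon, cif])
model.compile(optimizer="adam",
              loss={"reconstruction": "mse", "cif": "binary_crossentropy"})
model.summary()
```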

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayers perceptrons (MLPs)

Procedia PDF Downloads 85
619 Executive Order as an Effective Tool in Combating Insecurities and Human Rights Violations: The Case of the Special Anti-Robbery Squad and Youths in Nigeria

Authors: Cita Ayeni

Abstract:

Following countless violations of human rights in Nigeria by the various arms and agencies of government, from the military to the federal police and other law enforcement agencies, Nigeria has been riddled with reports of acts by these agencies against its citizens, ranging from illegal arrest and imprisonment to torture, enforced disappearances, and extrajudicial killings, to mention a few. This paper focuses on SARS (Special Anti-Robbery Squad), a division of the Nigeria Police Force, and its reported threats to the people's security, particularly Nigerian youths, through continuous violence, extortion, illegal arrest and imprisonment, terror, and extrajudicial activities resulting in maiming and, in many cases, death, thus infringing on the human rights of the people it is sworn to protect. This research further analyses how the activities of SARS have, over the years, instilled fear in the average Nigerian youth, preventing free participation in daily life, education, work, and individual development, in turn impeding the realization of their full potential for growth and participation in collective national development. This research analyzes the executive order by the then Acting President (Vice-President) of Nigeria directing the overhaul of SARS, and its implementation by the Federal Police Force, to determine whether it is enough to prevent or put a stop to the continuous human rights abuse and threats to the security of individual citizens. It concludes that although the order was given with the intent to halt the various violations by SARS, and the Inspector General of Police (IGP) released a statement following the order, given the bureaucracy in Nigeria, its history of incompetence, and the tendency to return to 'business as usual' once public outcry subsides, it is most likely that adequate follow-up will not be put in place and these violations will slowly be 'swept under the rug', with SARS officials not held accountable. It is therefore recommended that the Federal Government, through the NPF and following the reforms made, in collaboration with independent human rights and civil society organizations, should periodically produce unbiased and publicly accessible reports on the implementation of these reforms and the progress made. This would go a long way toward assuring the public of the actual fulfillment of the restructuring, reducing fear among the youths, and restoring some public faith in the government.

Keywords: special anti-robbery squad, youths in Nigeria, overhaul, insecurities, human rights violations

Procedia PDF Downloads 300
618 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty

Authors: D. S. Gomes, A. T. Silva

Abstract:

Analysis of the uncertainty quantification related to the nuclear safety margins applied to nuclear reactors is an important concept for preventing future radioactive accidents. Nuclear fuel performance codes may involve tolerance levels determined by traditional deterministic models, which produce acceptable results at burnups under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation and physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions are investigated for extended irradiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the rapid increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment known as the Fuchs-Hansen model. The point-kinetics neutron equations exhibit the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for a probabilistic forecast of fuel failure. A comparison of the computational simulations and the experimental results showed acceptable agreement. The experiments carried out used pre-irradiated fuel rods subjected to a rapid energy pulse, which reproduces the behavior during a nuclear accident. The propagation of uncertainty utilizes Wilks' formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxidation layer thickness, and the cladding type.
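A minimal sketch of such a multivariate logistic regression, fitted on synthetic stand-in data with the five predictors listed above (the RIA test records themselves are not reproduced here, and all coefficients are illustrative):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 200
X = pd.DataFrame({
    "burnup_GWd_MTU": rng.uniform(20, 75, n),
    "peak_power_cal_g": rng.uniform(50, 200, n),
    "pulse_width_ms": rng.uniform(5, 80, n),
    "oxide_thickness_um": rng.uniform(5, 100, n),
    "cladding_zirlo": rng.integers(0, 2, n),     # encoded cladding type
})
# synthetic failure labels: more likely at high burnup / thick oxide layer
logit = 0.05 * X["burnup_GWd_MTU"] + 0.04 * X["oxide_thickness_um"] - 5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X, y)
print(clf.predict_proba(X.iloc[:3])[:, 1])   # predicted failure probabilities
```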

Keywords: logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation

Procedia PDF Downloads 289
617 Analog Railway Signal Object Controller Development

Authors: Ercan Kızılay, Mustafa Demi̇rel, Selçuk Coşkun

Abstract:

Railway signaling systems consist of vital products that regulate railway traffic and provide safe route arrangements and maneuvers of trains. SIL 4 signal lamps are produced by many manufacturers today. There is a need for systems that enable these signal lamps to be controlled by commands from the interlocking. These systems should act fail-safe and give error indications to the interlocking system when an unexpected situation occurs, for the safe operation of railway systems from the RAMS perspective. In the past, driving and proving the lamp in relay-based systems was typically done via signaling relays. Today, the proving of lamps is done by comparing the current values read over the return circuit against lower and upper threshold values. The goal is an analog electronic object controller that can be easily integrated with vital systems and with the signal lamp itself. During the study, the EN 50126 standard approach was considered, and the concept, definition, risk analysis, requirements, architecture, design, and prototyping were performed accordingly. FMEA (Failure Modes and Effects Analysis) and FTA (Fault Tree Analysis) were used for the safety analysis in accordance with EN 50129. Based on these analyses, the 1oo2D reactive fail-safe hardware design of a controller was researched. Electromagnetic compatibility (EMC) effects on the functional safety of the equipment, insulation coordination, and over-voltage protection were considered during hardware design according to the EN 50124 and EN 50122 standards. As vital equipment for railway signaling, railway signal object controllers should be developed according to the EN 50126 and EN 50129 standards, which identify the steps and requirements of development in accordance with the SIL 4 (Safety Integrity Level) target. As the outcome of this study, an analog railway signal object controller was developed: commands from the interlocking system are processed in driver cards, which arrange the voltage level according to the desired visibility by means of semiconductors. Additionally, prover cards evaluate the lamp current against the upper and lower thresholds. The evaluated values are processed via logic gates composed as 1oo2D by means of analog electronic technologies. This logic evaluates the voltage level of the lamp and mitigates the risk of undue dimming.
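A minimal sketch of the two-channel proving logic, assuming illustrative current thresholds (the real design implements this vote in analog hardware rather than software):

```python
# Two independent channels compare the measured lamp current against the
# lower and upper thresholds; a 1oo2D-style vote trips to the safe state
# on any disagreement. Threshold values are illustrative assumptions.
I_LOW, I_HIGH = 0.18, 0.60   # amperes, assumed proving window

def channel_ok(current_a: float) -> bool:
    """One proving channel: lamp is healthy if current is in-window."""
    return I_LOW <= current_a <= I_HIGH

def lamp_proved(ch_a_current: float, ch_b_current: float) -> bool:
    a, b = channel_ok(ch_a_current), channel_ok(ch_b_current)
    # both channels must agree the lamp is healthy; any discrepancy is
    # treated as a detected fault -> safe (dark) state
    return a and b

print(lamp_proved(0.35, 0.36))   # True: lamp proved
print(lamp_proved(0.35, 0.05))   # False: disagreement -> fail-safe
```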

Keywords: object controller, railway electronic, analog electronic, safety, railway signal

Procedia PDF Downloads 92
616 Sensitivity Analysis of the Thermal Properties in Early Age Modeling of Mass Concrete

Authors: Farzad Danaei, Yilmaz Akkaya

Abstract:

In many civil engineering applications, especially the construction of large concrete structures, the early-age behavior of concrete has proven to be a crucial problem. The uneven rise in temperature within the concrete in these constructions is the fundamental issue for quality control; therefore, developing accurate and fast temperature prediction models is essential. The thermal properties of concrete fluctuate over time as it hardens, but taking all of these fluctuations into account makes numerical models more complex. Experimental measurement of the thermal properties under laboratory conditions also cannot accurately predict the variation of these properties under site conditions. Therefore, the specific heat capacity and the thermal conductivity coefficient are two variables treated as constants in many of the previously recommended models. The proposed equations demonstrate that these two quantities decrease linearly as the cement hydrates, and that their values are related to the degree of hydration. The effects of changing the thermal conductivity and specific heat capacity values on the maximum temperature, and on the time it takes for the concrete to reach that temperature, are examined in this study using numerical sensitivity analysis, and the results are compared to models that take fixed values for these two thermal properties. The current study covers 7 different concrete mix designs with varying amounts of supplementary cementitious materials (fly ash and ground granulated blast furnace slag, GGBFS). It is concluded that the maximum temperature does not change as a result of assuming a constant conductivity coefficient, but a variable specific heat capacity must be taken into account; likewise, regarding the time at which the concrete's central node reaches its maximum temperature, a variable specific heat capacity can have a considerable effect on the final result. Also, the use of GGBFS has more influence than fly ash.
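A minimal sketch of hydration-dependent property models of this linear form; the endpoint values and reduction fractions below are illustrative assumptions, not the equations proposed in the paper.

```python
# Conductivity and specific heat decrease linearly with the degree of
# hydration alpha in [0, 1]; the fresh-concrete values and the 33% / 25%
# reduction fractions are placeholder assumptions.
def thermal_conductivity(alpha, k_fresh=2.4):       # W/(m K)
    return k_fresh * (1.0 - 0.33 * alpha)

def specific_heat(alpha, c_fresh=1100.0):           # J/(kg K)
    return c_fresh * (1.0 - 0.25 * alpha)

for alpha in (0.0, 0.5, 1.0):
    print(f"alpha={alpha:.1f}: k={thermal_conductivity(alpha):.2f} W/mK, "
          f"c={specific_heat(alpha):.0f} J/kgK")
```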

Keywords: early-age concrete, mass concrete, specific heat capacity, thermal conductivity coefficient

Procedia PDF Downloads 72
615 Gait Analysis in Total Knee Arthroplasty

Authors: Neeraj Vij, Christian Leber, Kenneth Schmidt

Abstract:

Introduction: Total knee arthroplasty is a common procedure. It is well known that the biomechanics of the knee do not fully return to their normal state after the procedure. Motion analysis has been used to study the biomechanics of the knee after total knee arthroplasty. The purpose of this scoping review is to summarize the current use of gait analysis in total knee arthroplasty and to identify the preoperative motion analysis parameters for which a systematic review aimed at determining reliability and validity may be warranted. Materials and Methods: This IRB-exempt scoping review strictly followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist. Five search engines were searched for a total of 279 articles. Articles underwent a title and abstract screening process followed by full-text screening. Included articles were placed in the following sections: the role of gait analysis as a research tool for operative decisions, other research applications for motion analysis in total knee arthroplasty, gait analysis as a tool in predicting radiologic outcomes, and gait analysis as a tool in predicting clinical outcomes. Results: Eleven articles studied gait analysis as a research tool for operative decisions. Motion analysis is currently used to study surgical approaches, surgical techniques, and implant choice. Five articles studied other research applications for motion analysis in total knee arthroplasty; these include studying the role of unicompartmental knee arthroplasty and novel physical therapy protocols aimed at optimizing postoperative care. Two articles studied motion analysis as a tool for predicting radiographic outcomes: preoperative gait analysis has identified parameters that can predict postoperative tibial component migration. Fifteen articles studied motion analysis in conjunction with clinical scores. Conclusions: There is a broad range of applications within the research domain of total knee arthroplasty, and the potential range of applications is likely larger. However, the current literature is limited by vague definitions of 'gait analysis' or 'motion analysis' and by the limited number of articles with both preoperative and postoperative functional and clinical measures. Knee adduction moment, knee adduction impulse, total knee range of motion, varus angle, cadence, stride length, and velocity have the potential for integration into composite clinical scores. A systematic review aimed at determining the validity, reliability, sensitivities, and specificities of these variables is warranted.

Keywords: motion analysis, joint replacement, patient-reported outcomes, knee surgery

Procedia PDF Downloads 88
614 Correlation between Neck Circumference and Other Anthropometric Indices as a Predictor of Obesity

Authors: Madhur Verma, Meena Rajput, Kamal Kishore

Abstract:

Background: The general view that obesity is a problem of prosperous Western countries has been refuted by substantial evidence showing that middle-income countries like India are now at the heart of an obesity explosion. Neck circumference has evolved as a promising index for measuring obesity because of the convenience of its use, even in culture-sensitive populations. Objectives: To determine whether neck circumference (NC) is associated with overweight and obesity and whether it contributes to prediction like other classical anthropometric indices. Methodology: A cross-sectional study of 1080 adults (> 19 years) selected through multi-stage random sampling between August 2013 and September 2014, using a pretested semi-structured questionnaire. After recruitment, the demographic and anthropometric parameters [BMI, waist and hip circumference (WC, HC), waist-to-hip ratio (WHR), waist-to-height ratio (WHtR), body fat percentage (BF%), and neck circumference (NC)] were recorded and calculated as per standard procedures. Analysis was done using appropriate statistical tests (SPSS, version 21). Results: The mean age of the study participants was 44.55 ± 15.65 years. The overall prevalence of overweight and obesity as per the modified criteria for Asian Indians (BMI ≥ 23 kg/m²) was 49.62% (females 51.48%; males 47.77%). The numbers of participants having high WHR, WHtR, BF%, WC and NC were 827 (76.57%), 530 (49.07%), 513 (47.5%), 537 (49.72%) and 376 (34.81%), respectively. Variation of NC, BMI and BF% with age was non-significant. In both genders, as per Pearson's correlational analysis, neck circumference was positively correlated with BMI (men, r=0.670, p < 0.05; women, r=0.564, p < 0.05), BF% (men, r=0.407, p < 0.05; women, r=0.283, p < 0.05), WC (men, r=0.598, p < 0.05; women, r=0.615, p < 0.05), HC (men, r=0.512, p < 0.05; women, r=0.523, p < 0.05), WHR (men, r=0.380, p > 0.05; women, r=0.022, p > 0.05) and WHtR (men, r=0.318, p < 0.05; women, r=0.396, p < 0.05). On ROC analysis, NC showed good discriminatory power to identify obesity (AUC for males: 0.822; for females: 0.873; p-value < 0.001), with maximum sensitivity and specificity at cut-off values of 36.55 cm for males and 34.05 cm for females. Conclusion: NC has fair validity as a community-based screener for overweight and obese individuals in the study context and also correlated well with the other classical indices.
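A minimal sketch of such a ROC analysis with a Youden-index cut-off, on synthetic stand-in data (the survey measurements are not reproduced here; Youden's J is one common way such cut-offs are chosen, though the abstract does not state the criterion used):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(7)
n = 500
obese = rng.random(n) < 0.5
# neck circumference, cm: obese group shifted upward (assumed effect size)
nc = np.where(obese, rng.normal(37.5, 2.0, n), rng.normal(34.0, 2.0, n))

auc = roc_auc_score(obese, nc)
fpr, tpr, thresholds = roc_curve(obese, nc)
youden_j = tpr - fpr                       # sensitivity + specificity - 1
best = thresholds[np.argmax(youden_j)]
print(f"AUC = {auc:.3f}, optimal NC cut-off = {best:.2f} cm")
```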

Keywords: neck circumference, obesity, anthropometric indices, body fat percentage

Procedia PDF Downloads 246
613 In silico Subtractive Genomics Approach for Identification of Strain-Specific Putative Drug Targets among Hypothetical Proteins of Drug-Resistant Klebsiella pneumoniae Strain 825795-1

Authors: Umairah Natasya Binti Mohd Omeershffudin, Suresh Kumar

Abstract:

Klebsiella pneumoniae is a Gram-negative enteric bacterium that causes nosocomial and urinary tract infections. Of particular concern is the global emergence of multidrug-resistant (MDR) strains of Klebsiella pneumoniae. Characterization of antibiotic resistance determinants at the genomic level plays a critical role in understanding, and potentially controlling, the spread of MDR pathogens. In this study, drug-resistant Klebsiella pneumoniae strain 825795-1 was investigated with extensive computational approaches aimed at identifying novel drug targets among its hypothetical proteins. We analyzed 1099 hypothetical proteins available in the genome. We used an in silico genome subtraction methodology to identify potential pathogen-specific drug targets against Klebsiella pneumoniae, employing bioinformatics tools to subtract the strain-specific paralogous and host-specific homologous sequences from the bacterial proteome. The sorted 645 proteins were further refined to identify the essential genes in the pathogenic bacterium using the Database of Essential Genes (DEG). We found 135 unique essential proteins in the target proteome that could be utilized as novel targets for the design of new drugs. Further, we identified 49 cytoplasmic proteins as potential drug targets through subcellular localization prediction. We then investigated these proteins in the DrugBank database, and 11 of the unique essential proteins showed druggability according to the FDA-approved DrugBank entries, with diverse broad-spectrum properties. The results of this study will facilitate the discovery of new drugs against Klebsiella pneumoniae.
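A minimal sketch of this subtractive funnel expressed as successive set filters over protein identifiers; the predicate functions stand in for the real tools (paralog clustering, BLASTp against the human proteome, DEG essentiality lookup, localization prediction, DrugBank queries) and are hypothetical placeholders.

```python
# Each stage narrows the candidate set, mirroring the funnel
# 1099 -> 645 -> 135 -> 49 -> 11 described in the abstract.
def subtractive_pipeline(proteome, is_paralog, has_human_homolog,
                         is_essential, localization, is_druggable):
    non_paralogs  = {p for p in proteome if not is_paralog(p)}
    pathogen_only = {p for p in non_paralogs if not has_human_homolog(p)}
    essential     = {p for p in pathogen_only if is_essential(p)}
    cytoplasmic   = {p for p in essential if localization(p) == "cytoplasm"}
    druggable     = {p for p in cytoplasmic if is_druggable(p)}
    return druggable

# toy run with dummy protein IDs and trivially simple predicates
targets = subtractive_pipeline(
    proteome={"HP_0001", "HP_0002", "HP_0003"},
    is_paralog=lambda p: p == "HP_0003",
    has_human_homolog=lambda p: False,
    is_essential=lambda p: True,
    localization=lambda p: "cytoplasm",
    is_druggable=lambda p: p == "HP_0001",
)
print(targets)   # {'HP_0001'}
```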

Keywords: pneumonia, drug target, hypothetical protein, subtractive genomics

Procedia PDF Downloads 170
612 Anaerobic Co-Digestion of Pressmud with Bagasse and Animal Waste for Biogas Production Potential

Authors: Samita Sondhi, Sachin Kumar, Chirag Chopra

Abstract:

The increase in population has resulted in excessive feedstock production, which has in turn led to the accumulation of large amounts of waste from different sources, such as crop residues, industrial waste and municipal solid waste. This situation has raised the problem of waste disposal. The parallel problem of depleting natural fossil fuel resources has motivated the production of alternative energy from the waste of different industries, so as to resolve the two issues concurrently. Biogas is a carbon-neutral fuel with applications in transportation, heating and power generation. India is a nation with an agriculture-based economy, and agro-residues are a significant source of organic waste. Consider the sugarcane industry, the second-largest agro-based industry, which produces a high quantity of sugar along with waste byproducts such as bagasse, press mud, vinasse and wastewater; currently, no efficient disposal methods for these are adopted at large scale. In line with waste-management objectives, anaerobic digestion can be considered a method to treat organic wastes. Press mud is a lignocellulosic biomass and is not suited to mono-digestion because of its complexity. Prior investigations indicated that it has potential for biogas production, but because of its biological and elemental complexity, mono-digestion was not successful. Due to its imbalanced C/N ratio and its wax content, press mud should be co-digested with another fibrous material, so that it digests properly under suitable conditions. In the first batch, mono-digestion of press mud gave low biogas production. Co-digestion of press mud with bagasse, which has the desired C/N ratio, will now be performed to optimize the blend for maximum biogas from press mud, as sketched below. In addition, with respect to sustainability, the main considerations are the monetary value of the product outcome and ecological concerns. The work is designed in such a way that the waste from the sugar industry will be digested for maximum biogas generation, and the digestate left after digestion will be characterized for use as a bio-fertilizer for soil conditioning. Given the effectiveness demonstrated by the studied mono-digestion and co-digestion setups, this approach can be considered a viable alternative for lignocellulosic waste disposal and for agricultural applications. The biogas produced from press mud can be used either for power generation or for transportation. In addition, the work initiated on waste disposal for energy production will demonstrate the balanced economic sustainability of the process development.
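A minimal sketch of the blending arithmetic behind such a co-digestion, assuming typical literature C/N values and a 25:1 target (not measurements from this study):

```python
# Two-substrate balance on a per-unit-nitrogen basis: solve
# f*cn_bagasse + (1-f)*cn_pressmud = cn_target for the bagasse fraction f.
def blend_fraction(cn_pressmud, cn_bagasse, cn_target):
    return (cn_target - cn_pressmud) / (cn_bagasse - cn_pressmud)

f = blend_fraction(cn_pressmud=15.0, cn_bagasse=90.0, cn_target=25.0)
print(f"bagasse fraction ≈ {f:.2f}")   # ≈ 0.13 under these assumptions
```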

Keywords: anaerobic digestion, carbon neutral fuel, press mud, lignocellulosic biomass

Procedia PDF Downloads 165
611 Predicting Stem Borer Density in Maize Using RapidEye Data and Generalized Linear Models

Authors: Elfatih M. Abdel-Rahman, Tobias Landmann, Richard Kyalo, George Ong’amo, Bruno Le Ru

Abstract:

Maize (Zea mays L.) is a major staple food crop in Africa, particularly in the eastern region of the continent. The maize-growing area in Africa spans over 25 million ha, and 84% of rural households in Africa cultivate maize, mainly as a means to generate food and income. Average maize yields in Sub-Saharan Africa are 1.4 t/ha, compared to a global average of 2.5–3.9 t/ha, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In East Africa, yield losses due to stem borers are currently estimated at between 12% and 40% of the total production. The objective of the present study was therefore to predict stem borer larvae density in maize fields using RapidEye reflectance data and generalized linear models (GLMs). RapidEye images were captured for a test site in Kenya (Machakos) in January and in February 2015. Stem borer larva numbers were modeled using GLMs assuming Poisson (Po) and negative binomial (NB) error distributions with a logarithmic link. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were employed to assess the models' performance using a leave-one-out cross-validation approach. Results showed that the NB models outperformed the Po ones at all study sites. RMSE and RPD ranged between 0.95 and 2.70, and between 2.39 and 6.81, respectively. Overall, all models performed similarly whether the January or the February image data were used. We conclude that reflectance data from RapidEye imagery can be used to estimate stem borer larvae density. The developed models could improve decision-making regarding the control of maize stem borers using various integrated pest management (IPM) protocols.
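A minimal sketch of the two GLM families used, fitted with statsmodels on synthetic stand-in data (five band reflectances as predictors; both families default to the log link, as in the study):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 120
X = rng.uniform(0.0, 0.5, size=(n, 5))          # 5 band reflectances
eta = 1.0 + X @ np.array([2.0, -1.5, 0.8, 0.0, -0.5])
counts = rng.poisson(np.exp(eta))               # toy larvae counts
Xc = sm.add_constant(X)

poisson_fit = sm.GLM(counts, Xc, family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(counts, Xc,
                    family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(poisson_fit.aic, negbin_fit.aic)          # compare model fits
```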

Keywords: maize, stem borers, density, RapidEye, GLM

Procedia PDF Downloads 494
610 Participation of Women in the Brazilian Paralympic Sports

Authors: Ana Carolina Felizardo Da Silva

Abstract:

People with disabilities are those who have limitations of a physical, mental, intellectual or sensory nature and who, therefore, should not be excluded or marginalized. In Brazil, the Brazilian Law for the Inclusion of People with Disabilities establishes that people with disabilities have the right to culture, sport, tourism and leisure on an equal basis with other people. Sport for people with disabilities, in its genesis, was aimed at rehabilitating men and soldiers, that is, the male figure who returned wounded from war and needed care. As it gained practitioners, the marketing dimension emerged and, subsequently, high performance: what we call Paralympic sport. We found that sport for people with disabilities was designed for men, corroborating the social idea that sport is a masculine and masculinizing environment. In this way, the inclusion of women with disabilities in sports becomes a double challenge, because they are women and have a disability. From data collected from official documents of the International Paralympic Committee, it is found that the first report on the participation of women was in 1948, in England, at Stoke Mandeville, in a championship considered the forerunner of what later came to be called the 'Paralympic Games'. However, due to the lack of information, women do not reappear in the records of the Paralympics until 40 years later, in 1984, which demonstrates a large gap in the official website's records on women in the games. Despite the great challenge, the number of women has been growing substantially. Collecting data from participants across all 16 editions of the Paralympic Games shows that in the last edition, held in Tokyo, 1,853 of the 4,400 competing athletes were women, representing 42% of the total. In this same edition, we had the largest delegation of Brazilian women, with 96 athletes out of a total of 260 Brazilian athletes. It is estimated that in the next edition, to take place in Paris in 2024, the participation of women will equal or surpass that of men. A certain invisibility of women participating in the Paralympic Games is noticeable when we access the database of the Brazilian Paralympic Committee website: it is possible to identify all women medalists of a given edition, but participating female athletes who did not win medals are not registered on the site. Regarding the participation of Brazilian women in the Paralympics, there was considerable growth in recent editions: in 2012 there were only 69 women participating, rising to 102 in 2016 and 96 in 2021. The same happened with the medalists, going from 8 Brazilians in 2012 to 33 in 2016 and 27 in 2021. In this sense, the present study aims to analyze how Brazilian women participate in the Paralympics, giving visibility and voice to female athletes. Structured interviews are being carried out with participants in the games, identifying the difficulties and potentialities these athletes experience in competition. The analysis will be carried out through Bardin's content analysis.

Keywords: paralympics, sport for people with disabilities, woman, woman in sport

Procedia PDF Downloads 69
609 Linear Decoding Applied to V5/MT Neuronal Activity on Past Trials Predicts Current Sensory Choices

Authors: Ben Hadj Hassen Sameh, Gaillard Corentin, Andrew Parker, Kristine Krug

Abstract:

Perceptual decisions about sequences of sensory stimuli often show serial dependence: the behavioural choice on one trial is often affected by the choices on previous trials. We investigated whether the neuronal signals in extrastriate visual area V5/MT on preceding trials might influence the choice on the current trial and thereby reveal the neuronal mechanisms of sequential choice effects. We analysed data from 30 single neurons recorded from V5/MT in three rhesus monkeys making sequential choices about the direction of rotation of a three-dimensional cylinder. We focused exclusively on the responses of neurons that showed significant choice-related firing (mean choice probability = 0.73) while the monkey viewed perceptually ambiguous stimuli. Application of a wavelet transform to the choice-related firing revealed differences in the frequency band of neuronal activity that depended on whether the previous trial resulted in a correct choice for an unambiguous stimulus in the neuron's preferred direction (low alpha and high beta and gamma) or in its non-preferred direction (high alpha and low beta and gamma). To probe this in further detail, we applied a regularized linear decoder to predict the choice on an ambiguous trial from the neuronal activity of the preceding unambiguous trial. Neuronal activity on a previous trial provided a significant prediction of the current choice (61% correct; 95% CI lower bound ≈ 52%), even when limiting the analysis to preceding trials that were correct and rewarded. These findings provide a potential neuronal signature of sequential choice effects in the primate visual cortex.
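A minimal sketch of a regularized linear decoder of this kind, on synthetic stand-in data in which features from trial t-1 weakly predict the binary choice on trial t (L2-penalized logistic regression is one standard choice; the authors' exact decoder and validation scheme may differ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_trials, n_features = 300, 20          # e.g. band-limited power features
X_prev = rng.normal(size=(n_trials, n_features))
w = rng.normal(size=n_features)
# choices weakly coupled to previous-trial activity (assumed effect)
choice = (X_prev @ w + 3.0 * rng.normal(size=n_trials)) > 0

decoder = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
scores = cross_val_score(decoder, X_prev, choice, cv=10)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```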

Keywords: perception, decision making, attention, decoding, visual system

Procedia PDF Downloads 131
608 Numerical Investigation of Entropy Signatures in Fluid Turbulence: Poisson Equation for Pressure Transformation from Navier-Stokes Equation

Authors: Samuel Ahamefula Mba

Abstract:

Fluid turbulence is a complex and nonlinear phenomenon that occurs in various natural and industrial processes. Understanding turbulence remains a challenging task due to its intricate nature. One approach to gaining insight into turbulence is through the study of entropy, which quantifies the disorder or randomness of a system. This research presents a numerical investigation of entropy signatures in fluid turbulence. The aim of the work is to develop a numerical framework to describe and analyse fluid turbulence in terms of entropy. The framework decomposes the turbulent flow field into different scales, ranging from large energy-containing eddies to small dissipative structures, thus establishing a correlation between entropy and other turbulence statistics. This entropy-based framework provides a powerful tool for understanding the underlying mechanisms driving turbulence and its impact on various phenomena. The work necessitates deriving the Poisson equation for pressure from the Navier-Stokes equations and using Chebyshev finite-difference techniques to resolve it effectively. For the mathematical analysis, we consider bounded domains with smooth solutions and non-periodic boundary conditions. A hybrid computational approach combining direct numerical simulation (DNS) and Large Eddy Simulation with Wall Models (LES-WM) is utilized to perform extensive simulations of turbulent flows. The potential impact ranges from industrial process optimization to improved prediction of weather patterns.
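A minimal sketch of a Chebyshev collocation solve of a Poisson problem of this type, following the standard Trefethen construction of the differentiation matrix; the right-hand side here is a toy source term rather than the velocity-gradient forcing of the true pressure-Poisson equation.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and grid x (after Trefethen,
    'Spectral Methods in MATLAB')."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 24
D, x = cheb(N)
D2 = (D @ D)[1:N, 1:N]                    # second derivative, interior only
I = np.eye(N - 1)
L = np.kron(I, D2) + np.kron(D2, I)       # 2D Laplacian, Dirichlet walls

xx, yy = np.meshgrid(x[1:N], x[1:N])
f = np.sin(np.pi * xx) * np.sin(np.pi * yy)       # toy source term
p = np.linalg.solve(L, f.ravel()).reshape(N - 1, N - 1)
print(f"max |p| = {np.abs(p).max():.4f}")  # analytic: 1/(2*pi^2) ≈ 0.0507
```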

Keywords: turbulence, Navier-Stokes equation, Poisson pressure equation, numerical investigation, Chebyshev-finite difference, hybrid computational approach, large Eddy simulation with wall models, direct numerical simulation

Procedia PDF Downloads 90
607 Rohingya Problem and the Impending Crisis: Outcome of Deliberate Denial of Citizenship Status and Prejudiced Refugee Laws in South East Asia

Authors: Priyal Sepaha

Abstract:

A refugee crisis is manifested by challenges, both for the refugees and for the asylum-giving state. The situation turns into a mega-crisis when it is compounded by prejudicial handling by the home state, inappropriate refugee laws, an exploding refugee population and, above all, no hope of any foreseeable solution or remedy. This paper studies how the rigid criteria of movement imposed by both Myanmar and the adjoining countries in the name of national security affect the capability of stateless Rohingyas to migrate and seek refuge. This theoretical study identifies the issues and the key factors and players which have precipitated the crisis. It further discusses the possible ramifications in the home, asylum-giving, and adjoining countries of their not discharging their roles aptly. Additionally, an attempt has been made to understand the scarce response given to the impending crisis by regional organizations like SAARC, ASEAN and CHOGM, as well as international organizations like the United Nations Human Rights Council, the Security Council and the Office of the High Commissioner for Refugees, in the name of inadequate monetary funds and physical resources. Based on the refugee laws and practices pertaining to the case of the Rohingyas, this paper argues that the Rohingya crisis is in dire need of an effective action plan to curb and resolve the biggest humanitarian crisis of the century. This mounting human tragedy can be mitigated permanently by strengthening existing interdependencies, and creating new ones, among all stakeholders, as further neglect can drive the countries of the Indian Subcontinent in particular, and South East Asia at large, into violent conflict over the long-awaited civil rights of the marginalized Rohingyas. Curbing this mass crisis will require coercive pressure and diplomatic persuasion on the home country to acknowledge the rights of its fleeing citizens. It further necessitates mustering adequate monetary funds and physical resources for the asylum-providing state. Additional challenges remain, such as devising mechanisms for the refugees' safe return and comprehensive planning for their holistic economic development and rehabilitation. These, however, can only come into effect with a conscious effort by the regional and international community to fulfil their assigned roles.

Keywords: asylum, citizenship, crisis, humanitarian, human rights, refugee, rohingya

Procedia PDF Downloads 131
606 CO₂ Absorption Studies Using Amine Solvents with Fourier Transform Infrared Analysis

Authors: Avoseh Funmilola, Osman Khalid, Wayne Nelson, Paramespri Naidoo, Deresh Ramjugernath

Abstract:

The increasing global atmospheric temperature is of great concern, and this has led to the development of technologies to reduce the emission of greenhouse gases into the atmosphere. Flue gas emissions from fossil fuel combustion are major sources of greenhouse gases. One of the ways to reduce the emission of CO₂ from flue gases is the post-combustion capture process, in which the gas is absorbed into suitable chemical solvents before the flue gas is emitted to the atmosphere. Alkanolamines are promising solvents for this capture process. The vapour-liquid equilibrium of CO₂-alkanolamine systems is often represented by the CO₂ loading and the partial pressure of CO₂, without considering the liquid phase. The liquid phase of this system is a complex one comprising nine species. Online analysis of the process is important for monitoring the concentrations of the reacting and product species in the liquid phase. Liquid-phase analysis of a CO₂-diethanolamine (DEA) solution was performed by attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy. A robust calibration was performed for the CO₂-aqueous DEA system prior to an online monitoring experiment. The partial least squares (PLS) regression method was used for the analysis of the calibration spectra obtained, and the resulting models were used for the prediction of DEA and CO₂ concentrations in the online monitoring experiment. The experiment was performed with a newly built recirculating experimental setup in the laboratory, consisting of a 750 ml equilibrium cell and an ATR-FTIR liquid flow cell. Measurements were performed at 400°C. The results indicated that ATR-FTIR spectroscopy combined with the PLS method is an effective tool for online monitoring of speciation.
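A minimal sketch of a PLS calibration of this kind, using synthetic stand-in spectra (the measured ATR-FTIR data are not reproduced; the component count and noise level are assumptions):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_samples, n_wavenumbers = 80, 400
conc = rng.uniform(0.0, 2.0, size=(n_samples, 2))    # [DEA, CO2], mol/L
bands = rng.normal(size=(2, n_wavenumbers))          # toy pure-component spectra
spectra = conc @ bands + 0.02 * rng.normal(size=(n_samples, n_wavenumbers))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, conc, random_state=0)
pls = PLSRegression(n_components=4)
pls.fit(X_tr, y_tr)
print(f"R^2 on held-out spectra: {pls.score(X_te, y_te):.3f}")
```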

Keywords: ATR-FTIR, CO₂ capture, online analysis, PLS regression

Procedia PDF Downloads 193
605 Teaching Practices for Subverting Significant Retentive Learner Errors in Arithmetic

Authors: Michael Lousis

Abstract:

The systematic identification of the most conspicuous and significant errors made by learners during three years of testing of their progress in learning Arithmetic was accomplished throughout the development of the Kassel Project in England and Greece. How retentive these errors were over the three years of officially provided school instruction in Arithmetic in these countries has also been shown. The learners' errors in Arithmetic stemmed from a sample comprising two hundred (200) English students and one hundred and fifty (150) Greek students. The sample was purposefully selected according to the students' participation in each testing session during the development of the three-year project, in both Arithmetic and Algebra simultaneously. Specific teaching practices for subverting these learners' errors, which were found to be retentive at the level of the nationally provided mathematical education of each country, have been devised and are presented in this study. The invention and development of these proposed teaching practices were founded on the rationale of the theoretical accounts concerning the explanation, prediction and control of the errors, on conceptual metaphor, and on an analysis that sought to identify the required cognitive components and skills of the specific tasks, in terms of Psychology and Cognitive Science as applied to information-processing. The aim of implementing these instructional practices is not only the subversion of these errors but also the achievement of mathematical competence, defined as being constituted of three elements: appropriate representations, appropriate meaning, and appropriately developed schemata. However, praxis is of paramount importance, because there is no 'real truth' independent of science, and because praxis serves as quality control when it takes the form of a cognitive method.

Keywords: arithmetic, cognitive science, cognitive psychology, information-processing paradigm, Kassel project, level of the nationally provided mathematical education, praxis, remedial mathematical teaching practices, retentiveness of errors

Procedia PDF Downloads 314
604 A 3D Cell-Based Biosensor for Real-Time and Non-Invasive Monitoring of 3D Cell Viability and Drug Screening

Authors: Yuxiang Pan, Yong Qiu, Chenlei Gu, Ping Wang

Abstract:

In the past decade, three-dimensional (3D) tumor cell models have attracted increasing interest in the field of drug screening due to their great advantages in simulating the heterogeneous tumor behavior in vivo more accurately. Drug sensitivity testing based on 3D tumor cell models can provide more reliable in vivo efficacy prediction. The gold-standard fluorescence staining cannot achieve real-time, label-free monitoring of the viability of 3D tumor cell models. In this study, a micro-groove impedance sensor (MGIS) was specially developed for dynamic and non-invasive monitoring of 3D cell viability. 3D tumor cells were trapped in the micro-grooves, with opposing gold electrodes for the in-situ impedance measurement. A change in live cell number causes an inversely proportional change in the impedance magnitude of the entire cell/Matrigel construct, reflecting the proliferation and apoptosis of the 3D cells. It was confirmed that the 3D cell viability detected by the MGIS platform is highly consistent with standard live/dead staining. Furthermore, the accuracy of the MGIS platform was demonstrated quantitatively using a 3D lung cancer model and sophisticated drug sensitivity testing. In addition, the parameters of the micro-groove impedance chip fabrication and the measurement experiments were optimized in detail. The results demonstrated that the MGIS-based 3D cell biosensor would be a promising platform for improving the efficiency and accuracy of cell-based anti-cancer drug screening in vitro.
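A minimal sketch of the stated inverse relationship, assuming the impedance magnitude |Z| scales as a constant divided by the live-cell number, so that viability can be referenced to a fully viable baseline reading; the calibration is purely illustrative.

```python
# If |Z| ~ k / n_live, then n_live ~ k / |Z| and viability relative to
# the 100%-viable baseline is z_baseline / z_now (values are examples).
def viability_percent(z_now_ohm, z_baseline_ohm):
    return 100.0 * z_baseline_ohm / z_now_ohm

print(viability_percent(z_now_ohm=5200.0, z_baseline_ohm=3900.0))  # 75.0
```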

Keywords: micro-groove impedance sensor, 3D cell-based biosensors, 3D cell viability, micro-electromechanical systems

Procedia PDF Downloads 126
603 Evaluation of NASA POWER and CRU Precipitation and Temperature Datasets over a Desert-prone Yobe River Basin: An Investigation of the Impact of Drought in the North-East Arid Zone of Nigeria

Authors: Yusuf Dawa Sidi, Abdulrahman Bulama Bizi

Abstract:

Gauge observation is often the most dependable and precise source of climate data. However, long-term records of gauge observations are unavailable in many regions of the world. In recent years, a number of gridded climate datasets with high spatial and temporal resolutions have emerged as viable alternatives to gauge-based measurements, but it is crucial to evaluate their performance thoroughly before utilising them in hydroclimatic applications. This study therefore assesses how effectively the NASA Prediction of Worldwide Energy Resources (NASA POWER) and Climate Research Unit (CRU) datasets estimate precipitation and temperature patterns within the dry region of Nigeria from 1990 to 2020. The study employs widely used statistical metrics and the Standardised Precipitation Index (SPI) to capture the monthly variability of precipitation and temperature and the inter-annual anomalies in rainfall. The findings suggest that CRU outperformed NASA POWER for monthly precipitation and for minimum and maximum temperatures, demonstrating higher correlations and much lower RMSE and MAE values. Nevertheless, NASA POWER exhibited moderate agreement with gauge observations in replicating monthly precipitation. The SPI analysis reveals that the CRU product also reflects inter-annual variations in rainfall anomalies more accurately than NASA POWER. Overall, the findings indicate that CRU is the more favourable gridded precipitation product for this region.
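The evaluation metrics named in the abstract can be sketched as follows. This is a minimal illustration assuming a single monthly series and a simple gamma-based SPI; the study's actual procedure (e.g. fitting per calendar month or per accumulation window) is not specified here, and the function names and toy values are hypothetical.

```python
import numpy as np
from scipy import stats

def error_metrics(sim, obs):
    """Correlation, RMSE and MAE between a gridded product and gauges."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    mae = np.mean(np.abs(sim - obs))
    return r, rmse, mae

def spi(precip):
    """Standardised Precipitation Index for a monthly series.

    Fits a gamma distribution to the non-zero totals and maps the mixed
    (zero-inflated) cumulative probability onto a standard normal.
    """
    precip = np.asarray(precip, float)
    q = np.mean(precip == 0)                    # probability of zero rain
    a, loc, scale = stats.gamma.fit(precip[precip > 0], floc=0)
    cdf = q + (1 - q) * stats.gamma.cdf(precip, a, loc=loc, scale=scale)
    cdf = np.clip(cdf, 1e-6, 1 - 1e-6)          # keep the probit finite
    return stats.norm.ppf(cdf)

# Toy example (mm): compare a gridded product against gauge data.
gauge = np.array([0.0, 12.5, 48.0, 110.0, 95.5, 30.0, 2.0, 0.0])
grid  = np.array([1.0, 10.0, 55.0, 98.0, 102.0, 25.0, 5.0, 0.5])
print(error_metrics(grid, gauge))
print(spi(gauge).round(2))
```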

Keywords: CRU, climate change, precipitation, SPI, temperature

Procedia PDF Downloads 83
602 Dispersion Rate of Spilled Oil in Water Column under Non-Breaking Water Waves

Authors: Hanifeh Imanian, Morteza Kolahdoozan

Abstract:

The purpose of this study is to present a mathematical expression for calculating the dispersion rate of spilled oil in the water column under non-breaking waves. To this end, a multiphase numerical model, whose hydraulic calculations had previously been validated, was applied in which the waves and the oil phase were computed concurrently. More than 200 scenarios of oil spilling in wavy waters were simulated with this model, and the outcomes were collected in a database. The recorded results were examined to identify the major parameters affecting vertical oil dispersion, and six parameters were ultimately identified as the main independent factors. Statistical tests were then conducted to identify the relationship between the dependent variable (the dispersed oil mass in the water column) and the independent variables (the water-wave characteristics, namely height, length and period, and the spilled-oil characteristics, namely density, viscosity and spilled mass). Finally, a mathematical-statistical relationship is proposed for predicting dispersed oil in marine waters. To verify the proposed relationship, a laboratory case available in the literature was selected; the rate of oil mass penetrating the water body computed by the suggested regression showed good agreement with the experimental data. The validated mathematical-statistical expression is a useful tool for predicting oil dispersion in marine oil-spill events.
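The paper does not give the functional form of the regression, but a workflow of the kind described, fitting dispersed oil mass to the six identified predictors over a database of simulated scenarios, might look like the sketch below. The power-law form, the synthetic database, and every coefficient are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # the study reports more than 200 simulated scenarios

# Synthetic stand-ins for the six identified predictors (toy ranges).
H   = rng.uniform(0.05, 0.40, n)   # wave height (m)
L   = rng.uniform(1.0, 6.0, n)     # wavelength (m)
T   = rng.uniform(0.8, 2.5, n)     # wave period (s)
rho = rng.uniform(850., 950., n)   # oil density (kg/m^3)
mu  = rng.uniform(0.01, 0.5, n)    # oil viscosity (Pa.s)
ms  = rng.uniform(0.5, 5.0, n)     # spilled oil mass (kg)

# Hypothetical "true" response used only to generate the toy database.
md = (0.3 * H**1.5 * L**-0.3 * T**-0.5 * (rho / 900)**-2 * mu**-0.2 * ms
      * rng.lognormal(0.0, 0.05, n))

# Assume a power-law relationship, which is linear in log space.
X = np.column_stack([np.ones(n), np.log(H), np.log(L), np.log(T),
                     np.log(rho), np.log(mu), np.log(ms)])
coef, *_ = np.linalg.lstsq(X, np.log(md), rcond=None)

def dispersed_mass(h, l, t, rho_o, mu_o, m_spilled):
    """Predicted dispersed oil mass (kg) from the fitted power law."""
    x = np.array([1.0, *np.log([h, l, t, rho_o, mu_o, m_spilled])])
    return float(np.exp(x @ coef))

print(round(dispersed_mass(0.2, 3.0, 1.5, 900.0, 0.1, 2.0), 3))
```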

Keywords: dispersion, marine environment, mathematical-statistical relationship, oil spill

Procedia PDF Downloads 230
601 Space Telemetry Anomaly Detection Based On Statistical PCA Algorithm

Authors: Bassem Nassar, Wessam Hussein, Medhat Mokhtar

Abstract:

The crucial concern of satellite operations is to ensure the health and safety of satellites. The worst case from this perspective is the loss of a mission, but even the more common interruptions of satellite functionality can compromise mission objectives. All data acquired from a spacecraft are known as telemetry (TM), which contains a wealth of information related to the health of all its subsystems. Each single item of information is contained in a telemetry parameter, which represents a time-variant property (i.e. a status or a measurement) to be checked. Consequently, TM monitoring systems are continually improved in order to reduce the time required to respond to changes in a satellite's state of health; rapidly grasping the current state of the satellite is essential for responding to failures as they occur. Statistical multivariate latent-variable techniques are among the vital learning tools used to tackle this problem coherently, yet extracting information from such rich data sources with advanced statistical methodologies is challenging because of the massive volume of data. To address this, this paper presents a proposed unsupervised learning algorithm based on the Principal Component Analysis (PCA) technique. The algorithm is applied to an actual remote sensing spacecraft: data from the Attitude Determination and Control System (ADCS) were acquired under two operating conditions, a normal state and a faulty state. Models were built and tested under these conditions, and the results show that the algorithm can successfully differentiate between the two operating conditions. Furthermore, the algorithm provides useful information for prediction as well as adding insight and physical interpretation to ADCS operation.
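A common way to realise such a PCA-based monitor, and only one plausible reading of the abstract, is to fit the principal subspace on telemetry from the normal state and flag samples whose squared prediction error (Q statistic) exceeds a threshold. The sketch below is a minimal version with hypothetical names and synthetic stand-in data.

```python
import numpy as np

def fit_pca(X_normal, n_components):
    """Fit PCA on telemetry gathered in the normal operating state."""
    mu = X_normal.mean(axis=0)
    sigma = X_normal.std(axis=0) + 1e-12
    Z = (X_normal - mu) / sigma
    # SVD of the standardized data gives the principal directions.
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T            # retained loadings
    return mu, sigma, P

def spe(X, mu, sigma, P):
    """Squared prediction error (Q statistic) per sample."""
    Z = (X - mu) / sigma
    residual = Z - Z @ P @ P.T         # part not explained by the model
    return np.sum(residual ** 2, axis=1)

# Usage sketch: threshold from normal data, then score new telemetry.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 8))          # stand-in for ADCS channels
mu, sigma, P = fit_pca(X_train, n_components=3)
threshold = np.percentile(spe(X_train, mu, sigma, P), 99)
X_new = rng.normal(size=(10, 8))
print(spe(X_new, mu, sigma, P) > threshold)  # True flags a suspect sample
```

Hotelling's T-squared on the retained scores is the usual companion statistic; it is omitted here for brevity.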

Keywords: space telemetry monitoring, multivariate analysis, PCA algorithm, space operations

Procedia PDF Downloads 412
600 Study on The Pile Height Loss of Tunisian Handmade Carpets Under Dynamic Loading

Authors: Fatma Abidi, Taoufik Harizi, Slah Msahli, Faouzi Sakli

Abstract:

Nine different Tunisian handmade carpets were used for the investigation. The raw material of the carpet pile yarns was wool. The influence of the structure parameters (linear density and pile height) on carpet compression was investigated. The carpets were tested under dynamic loading in order to evaluate the thickness loss and to observe carpet behavior under dynamic loads. To determine the loss of pile height under dynamic loading, the carpets' pile heights were measured. The test method followed the Tunisian standard NT 12.165 (corresponding to ISO 2094). Pile height measurements were taken and recorded at intervals up to 1000 impacts (in this study, after 50, 100, 200, 500 and 1000 impacts). The loss of pile height is calculated as the difference between the initial height and the heights measured after the reported numbers of impacts. The experimental results were statistically evaluated using the Design Expert analysis of variance (ANOVA) software. Regarding deformation, the results showed that both the structure parameters of the pile yarn and the pile height have an influence: the carpet with the higher pile and the lower pile-yarn linear density showed the worst performance. A polynomial regression analysis showed a good correlation between the loss of pile height and the number of dynamic-load impacts, and the resulting equations are in good agreement with the measured data. Because the prediction is reasonably accurate for all samples, these equations can also be used to calculate the theoretical loss of pile height for the carpet samples considered. Statistical evaluation of the experimental data showed that the pile material and the number of impacts have a significant effect on the mean thickness and on the variation of thickness loss.
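A regression of the kind the abstract describes can be sketched as follows. The measurement schedule is taken from the abstract, but the loss values, the choice of a quadratic in log10(impacts), and the function names are all hypothetical.

```python
import numpy as np

# Measurement schedule from the abstract; the pile-height losses (%)
# below are hypothetical values for one carpet sample.
impacts = np.array([50, 100, 200, 500, 1000])
loss = np.array([4.1, 5.8, 7.9, 10.6, 12.8])

# Fit a second-degree polynomial in log10(impacts), one plausible form
# for the paper's "polynomial regression" (the actual form may differ).
coeffs = np.polyfit(np.log10(impacts), loss, deg=2)
model = np.poly1d(coeffs)

def predicted_loss(n_impacts):
    """Theoretical pile-height loss (%) after n dynamic impacts."""
    return float(model(np.log10(n_impacts)))

print(round(predicted_loss(750), 2))
```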

Keywords: Tunisian handmade carpet, loss of pile height, dynamic loads, performance

Procedia PDF Downloads 317
599 Optimal Delivery of Two Similar Products to N Ordered Customers

Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis

Abstract:

The vehicle routing problem (VRP) is a well-known problem in Operations Research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering products located at a central depot to customers who are scattered over a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles starts from the depot and visits the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have a limited carrying capacity for the goods that must be delivered. In the present work, we present a specific capacitated stochastic vehicle routing problem with realistic applications to the distribution of materials to shops, healthcare facilities, or military units. A vehicle starts its route from a depot loaded with items of two similar but not identical products, which we name product 1 and product 2. The vehicle must deliver the products to N customers according to a predefined sequence: customer 1 must be serviced first, then customer 2, then customer 3, and so on. The vehicle has a finite capacity, and after servicing all customers it returns to the depot. It is assumed that each customer prefers either product 1 or product 2 with known probabilities; the actual preference of each customer becomes known only when the vehicle visits the customer. It is also assumed that the quantity each customer demands is a random variable with known distribution, revealed upon the vehicle's arrival at the customer's site. The demand of each customer cannot exceed the vehicle capacity, and the vehicle is allowed during its route to return to the depot to restock with quantities of both products. The travel costs between consecutive customers and between the customers and the depot are known. If there is a shortage of the desired product, it is permitted to deliver the other product at a reduced price. The objective is to find the optimal routing strategy, i.e. the routing strategy that minimizes the expected total cost among all possible strategies. The optimal routing strategy can be found using a suitable stochastic dynamic programming algorithm, and it can be proved that it has a specific threshold-type structure, i.e. it is characterized by critical numbers. This structural result enables us to construct an efficient special-purpose dynamic programming algorithm that operates only over routing strategies having this structure. The findings of the present study lead us to the conclusion that the dynamic programming method can be a very useful tool for the solution of specific vehicle routing problems. A problem for future research could be the study of a similar stochastic vehicle routing problem in which the vehicle, instead of delivering products, collects them from the ordered customers.
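To make the dynamic-programming idea concrete, here is a deliberately simplified sketch. It assumes unit demands, independent Bernoulli preferences, uniform travel costs, a fixed refill split at the depot, and a lump penalty for a stockout; none of these simplifications come from the paper, and they serve only to keep the recursion short.

```python
import functools

N, CAP = 4, 4                       # customers in fixed order; vehicle capacity
P_PREFERS_1 = [0.7, 0.4, 0.6, 0.5]  # P(customer j prefers product 1)
TRAVEL, DETOUR = 1.0, 2.5           # leg cost; extra cost of a depot detour
SUBSTITUTION = 0.8                  # cost of delivering the other product

@functools.lru_cache(maxsize=None)
def expected_cost(j, s1, s2):
    """Minimum expected cost of serving customers j..N-1 with stock (s1, s2)."""
    if j == N:
        return TRAVEL                        # final leg back to the depot
    options = []
    for restock in (False, True):            # detour to the depot first?
        leg = TRAVEL + (DETOUR if restock else 0.0)
        t1, t2 = (CAP // 2, CAP - CAP // 2) if restock else (s1, s2)
        exp = 0.0
        for prefers_1 in (True, False):
            prob = P_PREFERS_1[j] if prefers_1 else 1 - P_PREFERS_1[j]
            want, other = (t1, t2) if prefers_1 else (t2, t1)
            if want > 0:                     # serve the preferred product
                pen, want = 0.0, want - 1
            elif other > 0:                  # substitute at a reduced price
                pen, other = SUBSTITUTION, other - 1
            else:                            # stockout: lump penalty
                pen = SUBSTITUTION + DETOUR
            n1, n2 = (want, other) if prefers_1 else (other, want)
            exp += prob * (pen + expected_cost(j + 1, n1, n2))
        options.append(leg + exp)
    return min(options)

print(round(expected_cost(0, 2, 2), 3))      # start fully loaded with (2, 2)
```

The full model would also enumerate each customer's demand distribution; the threshold result stated in the abstract means this brute-force recursion can be replaced by a search restricted to critical-number policies.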

Keywords: collection of similar products, dynamic programming, stochastic demands, stochastic preferences, vehicle routing problem

Procedia PDF Downloads 262