Search results for: shape prediction
3276 Precipitation Intensity: Duration Based Threshold Analysis for Initiation of Landslides in Upper Alaknanda Valley
Authors: Soumiya Bhattacharjee, P. K. Champati Ray, Shovan L. Chattoraj, Mrinmoy Dhara
Abstract:
The entire Himalayan range is globally renowned for rainfall-induced landslides. The prime focus of the study is to determine a rainfall-based threshold for the initiation of landslides that can be used as an important component of an early warning system for alerting stakeholders. This research deals with the temporal dimension of slope failures due to extreme rainfall events along National Highway-58 from Karanprayag to Badrinath in the Garhwal Himalaya, India. Post-processed 3-hourly rainfall intensity data and the corresponding duration derived from daily rainfall data available from the Tropical Rainfall Measuring Mission (TRMM) were used as the prime source of rainfall data. Landslide event records from the Border Road Organization (BRO) and some ancillary landslide inventory data for 2013 and 2014 have been used to determine an Intensity-Duration (ID) based rainfall threshold. The derived governing threshold equation, I = 4.738·D^(-0.025), has been considered for prediction of landslides in the study region. This equation was validated against the landslides of August and September 2014, with an accuracy of 70%. From the obtained results and validation, it can be inferred that this equation can be used to predict the initiation of landslides in the study area and thus work as part of an early warning system. Results can improve significantly with ground-based rainfall estimates and a better database of landslide records. Thus, the study has demonstrated a very low-cost method to get first-hand information on the possibility of an impending landslide in any region, thereby providing alerts and better preparedness for landslide disaster mitigation.
Keywords: landslide, intensity-duration, rainfall threshold, TRMM, slope, inventory, early warning system
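A minimal sketch (not from the paper) of how an intensity-duration threshold of this form could be used to flag potential landslide-triggering rainfall; the coefficients come from the abstract, while the event values, units, and helper names are illustrative assumptions.

```python
# Threshold from the abstract: I = 4.738 * D**(-0.025)
# I = rainfall intensity (assumed mm/h), D = duration (assumed h);
# the abstract does not state units explicitly, so these are assumptions.
def threshold_intensity(duration_h: float) -> float:
    """Minimum rainfall intensity expected to trigger landslides for a given duration."""
    return 4.738 * duration_h ** (-0.025)

def exceeds_threshold(intensity_mm_h: float, duration_h: float) -> bool:
    """Return True if an observed rainfall event lies above the ID threshold."""
    return intensity_mm_h >= threshold_intensity(duration_h)

# Hypothetical 3-hourly events: (intensity in mm/h, duration in h)
events = [(6.1, 3.0), (3.2, 9.0), (5.0, 24.0)]
for intensity, duration in events:
    flag = "ALERT" if exceeds_threshold(intensity, duration) else "ok"
    print(f"I={intensity:.1f} mm/h over D={duration:.0f} h -> {flag}")
```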
Procedia PDF Downloads 273
3275 Evaluation of the Analytic for Hemodynamic Instability as a Prediction Tool for Early Identification of Patient Deterioration
Authors: Bryce Benson, Sooin Lee, Ashwin Belle
Abstract:
Unrecognized or delayed identification of patient deterioration is a key cause of in-hospital adverse events. Clinicians rely on vital signs monitoring to recognize patient deterioration. However, due to ever-increasing nursing workloads and the manual effort required, vital signs tend to be measured and recorded intermittently and inconsistently, causing large gaps in patient monitoring. Additionally, during deterioration, the body’s autonomic nervous system activates compensatory mechanisms, causing the vital signs to be lagging indicators of underlying hemodynamic decline. This study analyzes the predictive efficacy of the Analytic for Hemodynamic Instability (AHI) system, an automated tool that was designed to help clinicians in early identification of deteriorating patients. The lead time analysis in this retrospective observational study assesses how far in advance AHI predicted deterioration before an episode of hemodynamic instability (HI) became evident through vital signs. Results indicate that of the 362 episodes of HI in this study, 308 episodes (85%) were correctly predicted by the AHI system with a median lead time of 57 minutes and an average of 4 hours (240.5 minutes). Of the 54 episodes not predicted, AHI detected 45 of them while the episode of HI was ongoing. Of the 9 undetected, 5 were not detected by AHI due to either missing or noisy input ECG data during the episode of HI. In total, AHI was able to either predict or detect 98.9% of all episodes of HI in this study. These results suggest that AHI could provide an additional ‘pair of eyes’ on patients, continuously filling the monitoring gaps and consequently giving the patient care team the ability to be far more proactive in patient monitoring and adverse event management.
Keywords: clinical deterioration prediction, decision support system, early warning system, hemodynamic status, physiologic monitoring
Procedia PDF Downloads 187
3274 Poly(Trimethylene Carbonate)/Poly(ε-Caprolactone) Phase-Separated Triblock Copolymers with Advanced Properties
Authors: Nikola Toshikj, Michel Ramonda, Sylvain Catrouillet, Jean-Jacques Robin, Sebastien Blanquer
Abstract:
Biodegradable and biocompatible block copolymers have risen as the gold-standard materials in both medical and environmental applications. Moreover, if their architecture is controlled, more advanced applications can be foreseen. In the meantime, organocatalytic ROP has been promoted as a more rapid and cleaner route than traditional organometallic catalysis towards the efficient synthesis of block copolymer architectures. Therefore, herein we report a novel organocatalytic pathway with a guanidine catalyst (TBD) for the ring-opening polymerization of trimethylene carbonate initiated by poly(caprolactone) as a pre-polymer. A pristine PTMC-b-PCL-b-PTMC block copolymer structure, free of residual products and with the desired block proportions, was achieved in under 1.5 hours at room temperature and verified by NMR spectroscopy and size-exclusion chromatography. Moreover, when elaborating block copolymer films, further stability and improved mechanical properties can be achieved via an additional crosslinking step of previously methacrylated block copolymers. Subsequently, stimulated by the insufficient studies on the phase-separation/crystallinity relationship in these semi-crystalline block copolymer systems, their intrinsic thermal and morphological properties were investigated by differential scanning calorimetry and atomic force microscopy. Firstly, by DSC measurements, the block copolymers with χABN values greater than 20 presented two distinct glass transition temperatures, close to those of the respective homopolymers, giving an initial indication of a phase-separated system. In the interim, the existence of the crystalline phase was supported by the presence of a melting temperature. As expected, the crystallinity-driven phase-separated morphology predominated in the AFM analysis of the block copolymers. Not even crosslinking in the melted state, and hence the creation of a dense polymer network, disturbed the crystallization phenomena. However, the latter proved sensitive to rapid liquid-nitrogen quenching directly from the melted state. Therefore, AFM analysis of liquid-nitrogen-quenched and crosslinked block copolymer films demonstrated a thermodynamically driven phase separation clearly predominating over the originally crystalline one. These films remained stable, with their morphology unchanged, even after 4 months at room temperature. However, as demonstrated by DSC analysis, once the temperature was raised above the melting temperature of the PCL block, neither the crosslinking nor the liquid-nitrogen quenching shattered the semi-crystalline network, while access to thermodynamically phase-separated structures was possible at temperatures below the poly(caprolactone) melting point. Precisely this coexistence of dual crosslinked/crystalline networks in the same copolymer structure allowed us to establish, for the first time, shape-memory properties in such materials, as verified by thermomechanical analysis. Moreover, the temperature at which the material recovers its original shape depended on the block arrangement, i.e., whether PTMC or PCL forms the end-block. Therefore, it has been possible to reach a block copolymer with a transition temperature around 40°C, thus opening potential real-life medical applications.
In conclusion, the initial study of the phase-separation/crystallinity relationship in PTMC-b-PCL-b-PTMC block copolymers led to the discovery of novel shape-memory materials with superior properties, widely demanded in modern-life applications.
Keywords: biodegradable block copolymers, organocatalytic ROP, self-assembly, shape-memory
Procedia PDF Downloads 128
3273 A Prediction of Cutting Forces Using Extended Kienzle Force Model Incorporating Tool Flank Wear Progression
Authors: Wu Peng, Anders Liljerehn, Martin Magnevall
Abstract:
In metal cutting, tool wear gradually changes the micro geometry of the cutting edge. Today there is a significant gap in understanding the impact these geometrical changes have on the cutting forces, which govern tool deflection and heat generation in the cutting zone. Accurate models and understanding of the interaction between the workpiece and cutting tool lead to improved accuracy in simulation of the cutting process. These simulations are useful in several application areas, e.g., optimization of insert geometry and machine tool monitoring. This study aims to develop an extended Kienzle force model to account for the effect that rake angle variations and tool flank wear have on the cutting forces. In this paper, the starting point is cutting force measurements from orthogonal turning tests of pre-machined flanges with well-defined width, using triangular coated inserts to assure orthogonal conditions. The cutting forces have been measured by a dynamometer for a set of three different rake angles, and wear progression has been monitored during machining by an optical measuring collaborative robot. The method utilizes the measured cutting forces together with the inserts' flank wear progression to extend the mechanistic cutting force model with flank wear as an input parameter. The adapted cutting force model is validated in a turning process with commercial cutting tools and shows significant capability to predict cutting forces while accounting for tool flank wear and inserts with different rake angles. The result of this study suggests that the nonlinear effect of tool flank wear and the interaction between the workpiece and the cutting tool can be captured by the developed cutting force model.
Keywords: cutting force, Kienzle model, predictive model, tool flank wear
Procedia PDF Downloads 108
3272 Digital Twin for Retail Store Security
Authors: Rishi Agarwal
Abstract:
Digital twins are emerging as a strong technology used to imitate and monitor physical objects digitally in real time across sectors. The technology not only deals with the digital space but also actuates responses in the physical space based on digital-space processing such as storage, modeling, learning, simulation, and prediction. This paper explores the application of digital twins for enhancing physical security in retail stores. The retail sector still relies on outdated physical security practices like manual monitoring and metal detectors, which are insufficient for modern needs. There is a lack of real-time data and system integration, leading to ineffective emergency response and preventative measures. As retail automation increases, new digital frameworks must control safety without human intervention. To address this, the paper proposes implementing an intelligent digital twin framework. The framework collects diverse data streams from in-store sensors, surveillance, external sources, and customer devices; advanced analytics and simulations then enable real-time monitoring, incident prediction, automated emergency procedures, and stakeholder coordination. Overall, the digital twin improves physical security through automation, adaptability, and comprehensive data sharing. The paper also analyzes the pros and cons of implementing this technology through an Emerging Technology Analysis Canvas, which examines different aspects of the technology through both narrow and wide lenses to support decision makers. On a broader scale, this showcases the value of digital twins in transforming legacy systems across sectors and how data sharing can create a safer world for both retail store customers and owners.
Keywords: digital twin, retail store safety, digital twin in retail, digital twin for physical safety
Procedia PDF Downloads 72
3271 Investigation of the Morphology of SiO2 Nano-Particles Using Different Synthesis Techniques
Authors: E. Gandomkar, S. Sabbaghi
Abstract:
In this paper, the effects of different synthesis methods, namely a modified sol-gel method and a precipitation method, on the morphology and size of silica nanostructures have been investigated. The resulting products have been characterized by a particle size analyzer, scanning electron microscopy (SEM), X-ray diffraction (XRD), and Fourier transform infrared (FT-IR) spectroscopy. As a result, the SiO2 obtained with the sol-gel and precipitation methods was spherical, whereas the modified sol-gel method yielded a nanolayer structure.
Keywords: modified sol-gel, precipitation, nanolayer, Na2SiO3, nanoparticle
Procedia PDF Downloads 292
3270 Artificial Neural Network Modeling of a Closed Loop Pulsating Heat Pipe
Authors: Vipul M. Patel, Hemantkumar B. Mehta
Abstract:
Technological innovations in the electronics world demand novel, compact, simple-in-design, low-cost and effective heat transfer devices. The Closed Loop Pulsating Heat Pipe (CLPHP) is a passive phase-change heat transfer device and has the potential to transfer heat quickly and efficiently from source to sink. The thermal performance of a CLPHP is governed by various parameters such as the number of U-turns, orientation, input heat, working fluids and filling ratio. The present paper is an attempt to predict the thermal performance of a CLPHP using an Artificial Neural Network (ANN). Filling ratio and heat input are considered as input parameters, while thermal resistance is set as the target parameter. The types of neural networks considered in the present paper are radial basis, generalized regression, linear layer, cascade forward back propagation, feed forward back propagation, feed forward distributed time delay, layer recurrent and Elman back propagation. Linear, logistic sigmoid, tangent sigmoid and radial basis Gaussian functions are used as transfer functions. Prediction accuracy is measured, based on the experimental data reported by researchers in the open literature, as a function of Mean Absolute Relative Deviation (MARD). The prediction of a generalized regression ANN model with a spread constant of 4.8 is found to be in agreement with the experimental data, with MARD in the range of ±1.81%.
Keywords: ANN models, CLPHP, filling ratio, generalized regression, spread constant
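A minimal sketch (an assumption, not the authors' code) of a generalized regression neural network of the kind described above, predicting thermal resistance from filling ratio and heat input; the spread constant of 4.8 follows the abstract, while the training points and function names are illustrative.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, spread=4.8):
    """Generalized regression neural network (kernel regression): each prediction
    is a Gaussian-weighted average of the training targets."""
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)        # squared distances to training points
        w = np.exp(-d2 / (2.0 * spread ** 2))           # Gaussian kernel weights
        preds.append(np.sum(w * y_train) / np.sum(w))   # weighted average of targets
    return np.array(preds)

# Hypothetical training data: [filling ratio (%), heat input (W)] -> thermal resistance (K/W)
X_train = np.array([[40.0, 20.0], [50.0, 40.0], [60.0, 60.0], [70.0, 80.0]])
y_train = np.array([1.20, 0.85, 0.62, 0.55])

X_query = np.array([[55.0, 50.0]])
print(grnn_predict(X_train, y_train, X_query))  # predicted thermal resistance (K/W)
```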
Procedia PDF Downloads 292
3269 Comparison of an Anthropomorphic PRESAGE® Dosimeter and Radiochromic Film with a Commercial Radiation Treatment Planning System for Breast IMRT: A Feasibility Study
Authors: Khalid Iqbal
Abstract:
This work presents a comparison of an anthropomorphic PRESAGE® dosimeter and radiochromic film measurements with a commercial treatment planning system to determine the feasibility of PRESAGE® for 3D dosimetry in breast IMRT. An anthropomorphic PRESAGE® phantom was created in the shape of a breast. A five-field IMRT plan was generated with a commercially available treatment planning system and delivered to the PRESAGE® phantom. The anthropomorphic PRESAGE® was scanned with the Duke midsized optical CT scanner (DMOS-RPC), and the OD distribution was converted to dose. Comparisons were performed between the dose distribution calculated with the Pinnacle3 treatment planning system, PRESAGE®, and EBT2 film measurements. DVHs, gamma maps, and line profiles were used to evaluate the agreement. Gamma map comparisons showed that Pinnacle3 agreed with PRESAGE®, as greater than 95% of comparison points for the PTV passed a ±3%/±3 mm criterion when the outer 8 mm of phantom data were excluded. Edge artifacts were observed in the optical CT reconstruction, from the surface to approximately 8 mm depth. These artifacts resulted in dose differences between Pinnacle3 and PRESAGE® of up to 5% between the surface and a depth of 8 mm, and decreased with increasing depth in the phantom. Line profile comparisons between all three independent measurements yielded a maximum difference of 2% within the central 80% of the field width. For the breast IMRT plan studied, the Pinnacle3 calculations agreed with PRESAGE® measurements to within the ±3%/±3 mm gamma criterion. This work demonstrates the feasibility of fashioning PRESAGE® into an anthropomorphic shape and establishes the accuracy of Pinnacle3 for breast IMRT. Furthermore, these data have established the groundwork for future investigations into 3D dosimetry with more complex anthropomorphic phantoms.
Keywords: 3D dosimetry, PRESAGE®, IMRT, QA, EBT2 GAFCHROMIC film
Procedia PDF Downloads 416
3268 Influence of Distribution of Body Fat on Cholesterol Non-HDL and Its Effect on Kidney Filtration
Authors: Magdalena B. Kaziuk, Waldemar Kosiba
Abstract:
Background: In the 21st century, we have to deal with an epidemic of obesity, which is an important risk factor for cardiovascular and kidney diseases. Lipoproteins are directly involved in the atherosclerotic process. Non-high-density lipoprotein (non-HDL) cholesterol came into use following widespread recognition of its superiority over LDL as a measure of vascular event risk. Non-HDL captures the residual risk that persists in patients even after the recommended LDL level has been achieved. Materials and Methods: The study covered 111 patients (52 females, 59 males, age 51.91±14 years), hospitalized in the internal medicine department. Body composition was assessed using the bioimpedance method and anthropometric measurements. Physical activity data were collected during the interview. The nutritional status and the obesity type were determined with the Waist to Height Ratio and the Waist to Hip Ratio. Kidney function was evaluated by calculating the estimated glomerular filtration rate (eGFR) using the MDRD formula. Non-HDL was calculated as the difference between total and HDL cholesterol concentrations. Results: 10% of patients were found to be underweight, 23.9% had normal body weight, 15.08% were overweight, while the remaining 51.02% were obese. People with the android body shape had higher non-HDL cholesterol than those with the gynoid shape (p=0.003). The higher the non-HDL, the lower the eGFR of the studied subjects (p < 0.001). A significant correlation was found between high non-HDL and incorrect dietary habits in patients avoiding vegetables and fruits and having low physical activity (p < 0.005). Conclusions: The android body type raises the residual risk of heart disease associated with higher levels of non-HDL. Increasing physical activity in these patients reduces the level of non-HDL. Non-HDL seems to be the best predictor among all cholesterol measures for cardiovascular events and worsening eGFR.
Keywords: obesity, non-HDL cholesterol, glomerular filtration rate, lifestyle
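A short sketch of the two derived quantities used above: non-HDL as total minus HDL cholesterol (as stated in the abstract) and eGFR from the 4-variable MDRD study equation. The MDRD coefficients shown are the commonly cited ones and are an assumption here, since the abstract does not list them; the patient values are illustrative.

```python
def non_hdl(total_chol_mg_dl: float, hdl_mg_dl: float) -> float:
    """Non-HDL cholesterol = total cholesterol - HDL cholesterol (mg/dL)."""
    return total_chol_mg_dl - hdl_mg_dl

def egfr_mdrd(creatinine_mg_dl: float, age_years: float, female: bool, black: bool = False) -> float:
    """4-variable MDRD study equation (mL/min/1.73 m^2), commonly cited coefficients."""
    egfr = 175.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Illustrative patient
print(non_hdl(total_chol_mg_dl=230.0, hdl_mg_dl=45.0))                       # -> 185.0 mg/dL
print(round(egfr_mdrd(creatinine_mg_dl=1.1, age_years=52, female=True), 1))  # eGFR estimate
```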
Procedia PDF Downloads 373
3267 Category-Base Theory of the Optimum Signal Approximation Clarifying the Importance of Parallel Worlds in the Recognition of Human and Application to Secure Signal Communication with Feedback
Authors: Takuro Kida, Yuichi Kida
Abstract:
We mathematically present the basis of a new class of algorithms that treats a historical reason for continuing discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix operator-filter bank in which the matrix operator-analysis-filter bank H and the matrix operator-sampling-filter bank S are given, firstly, we introduce the detailed algorithm to derive the optimum matrix operator-synthesis-filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator-error-signals E(ω) = F(ω) − Y(ω) between the matrix operator-input-signals F(ω) and the matrix operator-output-signals Y(ω) of the matrix operator-filter bank. Further, feedback is introduced into the above approximation theory, and it is indicated that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge for signal prediction. Secondly, the concept of category from the field of mathematics is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to a set-theoretic consideration of human recognition. Based on this discussion, it is shown naturally why the narrow perception that tends to create isolation shows an apparent advantage in the short term and, often, why such narrow thinking becomes intimate with discriminatory action in a human group. Throughout these considerations, it is presented that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception where we share the set of invisible error signals, including the words and the consciousness of both worlds.
Keywords: signal prediction, pseudo inverse matrix, artificial intelligence, conditional optimization
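The keywords mention a pseudo inverse matrix; below is a minimal numerical sketch (an assumption, not the authors' formulation) of recovering an input signal from sampled analysis outputs with a Moore-Penrose pseudo-inverse acting as the synthesis operator, so that the error E = F − Y is minimized in the least-squares sense.

```python
import numpy as np

rng = np.random.default_rng(0)

# Analysis/sampling operator H (illustrative 6x8 matrix) applied to an input signal F
H = rng.standard_normal((6, 8))
F = rng.standard_normal(8)
observed = H @ F

# Synthesis operator Z chosen as the Moore-Penrose pseudo-inverse of H:
# Y = Z @ observed is the minimum-norm least-squares reconstruction of F
Z = np.linalg.pinv(H)
Y = Z @ observed

E = F - Y  # error signal E = F - Y
print("reconstruction error norm:", np.linalg.norm(E))
```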
Procedia PDF Downloads 156
3266 On Voice in English: An Awareness Raising Attempt on Passive Voice
Authors: Meral Melek Unver
Abstract:
This paper aims to explore ways to help English as a Foreign Language (EFL) learners notice and revise voice in English and raise their awareness of when and how to use active and passive voice to convey meaning in their written and spoken work. Because passive voice is commonly preferred in certain genres such as academic essays and news reports, despite the current trends promoting active voice, it is essential for learners to be fully aware of the meaning, use and form of passive voice to better communicate. The participants in the study are 22 EFL learners taking a one-year intensive English course at a university, who will receive English medium education (EMI) in their departmental studies in the following academic year. Data from students’ written and oral work was collected over a four-week period, and the misuse or inaccurate use of passive voice was identified. The analysis of the data showed that they failed to make sensible decisions about when and how to use passive voice, partly because of the differences between their mother tongue and English and partly because they were not aware that active and passive voice are not always interchangeable. To overcome this, a Test-Teach-Test shaped lesson, as opposed to a Present-Practice-Produce shaped lesson, was designed and implemented to raise their awareness of the decisions they needed to make in choosing the voice and to help them notice the meaning and use of passive voice through concept checking questions. The results first suggested that awareness-raising activities on the meaning and use of voice in English would be beneficial in obtaining accurate and meaningful outcomes from students. Also, helping students notice and renotice passive voice through carefully designed activities would help them internalize its use and form. As a result of the study, a number of activities are suggested to revise and notice passive voice, as well as a short questionnaire to help EFL teachers self-reflect on their teaching.
Keywords: voice in English, test-teach-test, passive voice, English language teaching
Procedia PDF Downloads 221
3265 Offenders and Victims in Public Focus: Media Coverage about Crime and Its Consequences
Authors: Melanie Verhovnik
Abstract:
Media shape the image of crime and people’s beliefs, attitudes, and sometimes also behaviors. Media not only give the impression that crime is increasing; they also suggest that very violent crime is more common than it actually is. It is also no wonder that people are more afraid of being involved in a crime committed by strangers than one committed by somebody they know, because this is the picture the media construct. With the help of three case studies, the paper analyzes how media frame crime and criminals and gives valuable hints as to what better reporting could look like.
Keywords: court reporting, offenders in media, quantitative content analysis, victims in media
Procedia PDF Downloads 385
3264 Nature of Body Image Distortion in Eating Disorders
Authors: Katri K. Cornelissen, Lise Gulli Brokjob, Kristofor McCarty, Jiri Gumancik, Martin J. Tovee, Piers L. Cornelissen
Abstract:
Recent research has shown that body size estimation of healthy women is driven by independent attitudinal and perceptual components. The attitudinal component represents psychological concerns about the body, coupled with low self-esteem and a tendency towards depressive symptomatology, leading to over-estimation of body size, independent of the Body Mass Index (BMI) someone actually has. The perceptual component is a normal bias known as contraction bias, which, for bodies, is dependent on actual BMI. Women with a BMI less than the population norm tend to overestimate their size, while women with a BMI greater than the population norm tend to underestimate their size. Women whose BMI is close to the population mean are most accurate. This is indexed by a regression of estimated BMI on actual BMI with a slope less than one. It is well established that body dissatisfaction, i.e. an attitudinal distortion, leads to body size overestimation in eating disordered individuals. However, debate persists as to whether women with eating disorders may also suffer a perceptual body distortion. Therefore, the current study set out to ask whether women with eating disorders exhibit the normal contraction bias when they estimate their own body size. If they do not, this would suggest differences in the way that women with eating disorders process the perceptual aspects of body shape and size in comparison to healthy controls. 100 healthy controls and 33 women with a history of eating disorders were recruited. Critically, it was ensured that both groups of participants represented comparable and adequate ranges of actual BMI (e.g. ~18 to ~40). Of those with eating disorders, 19 had a history of anorexia nervosa, 6 bulimia nervosa, and 8 OSFED. 87.5% of the women with a history of eating disorders self-reported that they were either recovered or recovering, and 89.7% of them self-reported that they had had one or more instances of relapse. The mean time elapsed since first diagnosis was 5 years, and on average participants had experienced two relapses. Participants were asked to complete a number of psychometric measures (EDE-Q, BSQ, RSE, BDI) to establish the attitudinal component of their body image as well as their tendency to internalize socio-cultural body ideals. Additionally, participants completed a method of adjustment psychophysical task, using photorealistic avatars calibrated for BMI, in order to provide an estimate of their own body size and shape. The data from the healthy controls replicate previous findings, revealing independent contributions to body size estimation from both attitudinal and perceptual (i.e. contraction bias) body image components, as described above. For the eating disorder group, once the adequacy of their actual BMI ranges was established, a regression of estimated BMI on actual BMI had a slope greater than 1, significantly different from that of controls. This suggests that (some) eating disordered individuals process the perceptual aspects of body image differently from healthy controls. It is therefore necessary to develop interventions which are specific to the perceptual processing of body shape and size for the management of (some) individuals with eating disorders.
Keywords: body image distortion, perception, recovery, relapse, BMI, eating disorders
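A minimal sketch (illustrative data, not the study's) of the regression index described above: fitting estimated BMI on actual BMI and reading the slope, where a slope below 1 indicates the normal contraction bias and a slope above 1 indicates the opposite pattern reported for the eating disorder group.

```python
import numpy as np

def estimation_slope(actual_bmi, estimated_bmi):
    """Slope of the least-squares regression of estimated BMI on actual BMI."""
    slope, intercept = np.polyfit(actual_bmi, estimated_bmi, deg=1)
    return slope

# Hypothetical control group: over-estimate low BMIs, under-estimate high BMIs (contraction bias)
actual = np.array([18.0, 22.0, 26.0, 30.0, 34.0, 38.0])
controls_est = np.array([19.5, 22.5, 26.0, 29.0, 32.0, 35.5])
print("control slope:", round(estimation_slope(actual, controls_est), 2))   # < 1

# Hypothetical eating-disorder group: estimates diverge more steeply than actual BMI
ed_est = np.array([16.5, 21.5, 26.5, 31.5, 36.5, 41.5])
print("ED slope:", round(estimation_slope(actual, ed_est), 2))              # > 1
```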
Procedia PDF Downloads 68
3263 Effect of Volute Tongue Shape and Position on Performance of Turbo Machinery Compressor
Authors: Anuj Srivastava, Kuldeep Kumar
Abstract:
This paper proposes a numerical study of volute tongue design, which affects the centrifugal compressor operating range and pressure recovery. Increased efficiency has traditionally been the main focus of compressor design. However, an increased operating range has become important in an age of ever-increasing productivity and energy costs in the turbomachinery industry. Efficiency and overall operating range are the two most important parameters studied to evaluate the aerodynamic performance of a centrifugal compressor. The volute is one of the components that have a significant effect on these two parameters. The choice of volute tongue geometry plays a major role in compressor performance and also affects the performance map. The author evaluates the trade-off of using a pull-back tongue geometry on centrifugal compressor performance. In the present paper, three different tongue positions and shapes are discussed. These designs are compared in terms of pressure recovery coefficient, pressure loss coefficient, and stable operating range. The detailed flow structures for various volute geometries and pull-back angles near the tongue are studied extensively to explore the fluid behavior. The viscous Navier-Stokes equations are used to simulate the flow inside the volute. The numerical calculations are compared with thermodynamic 1-D calculations. The author concludes that the increase in compression ratio is accompanied by a more uniform pressure distribution for the modified tongue shape and location, with a uniform static pressure around the circumference, which builds a more uniform flow in the impeller and diffuser. Also, the blockage at the tongue of the volute was causing a circumferentially non-uniform pressure along the volute. This non-uniformity may lead the impeller and diffuser to operate unstably. However, it is not the volute that directly controls the stall.
Keywords: centrifugal compressor volute, tongue geometry, pull-back, compressor performance, flow instability
Procedia PDF Downloads 163
3262 A Framework on Data and Remote Sensing for Humanitarian Logistics
Authors: Vishnu Nagendra, Marten Van Der Veen, Stefania Giodini
Abstract:
Effective humanitarian logistics operations are a cornerstone in the success of disaster relief operations. However, for effectiveness, they need to be demand-driven and supported by adequate data for prioritization. Without this data, operations are carried out in an ad hoc manner and eventually become chaotic. The current availability of geospatial data helps in creating models for predictive damage and vulnerability assessment, which can be of great advantage to logisticians in gaining an understanding of the nature and extent of the disaster damage. This translates into actionable information on the demand for relief goods, the state of the transport infrastructure and, subsequently, the priority areas for relief delivery. However, due to the unpredictable nature of disasters, the accuracy of the models needs improvement, which can be achieved using remote sensing data from UAVs (Unmanned Aerial Vehicles) or satellite imagery, which in turn come with certain limitations. This research addresses the need for a framework to combine data from different sources to support humanitarian logistics operations and prediction models. The focus is on developing a workflow to combine data from satellites and UAVs after a disaster strikes. A three-step approach is followed: first, the data requirements for logistics activities are made explicit, which is done by carrying out semi-structured interviews with field logistics workers. Second, the limitations in current data collection tools are analyzed to develop workaround solutions by following a systems design approach. Third, the data requirements and the developed workaround solutions are fitted together into a coherent workflow. The outcome of this research will provide a new method for logisticians to obtain immediate, accurate and reliable data to support data-driven decision making.
Keywords: unmanned aerial vehicles, damage prediction models, remote sensing, data driven decision making
Procedia PDF Downloads 379
3261 Financial Fraud Prediction for Russian Non-Public Firms Using Relational Data
Authors: Natalia Feruleva
Abstract:
The goal of this paper is to develop a fraud risk assessment model based on both relational and financial data and to test the impact of the relationships between Russian non-public companies on the likelihood of committing financial fraud. Relationships here mean various linkages between companies, such as parent-subsidiary relationships and person-related relationships. These linkages may provide additional opportunities for committing fraud. Person-related relationships appear when firms share a director, or the director owns another firm. The number of companies belonging to the CEO or managed by the CEO and the number of subsidiaries were calculated to measure the relationships. Moreover, a dummy variable describing the existence of a parent company was also included in the model. Control variables such as financial leverage and return on assets were also included because they describe the motivating factors of fraud. To check the hypotheses about the influence of the chosen parameters on the likelihood of financial fraud, information about person-related relationships between companies, the existence of a parent company and subsidiaries, profitability, and the level of debt was collected. The resulting sample consists of 160 Russian non-public firms. The sample includes 80 fraudsters and 80 non-fraudsters operating in 2006-2017. The dependent variable is dichotomous, and it takes the value 1 if the firm is engaged in financial crime, otherwise 0. Employing a probit model, it was revealed that the number of companies which belong to the CEO of the firm or are managed by the CEO has a significant impact on the likelihood of financial fraud. The results obtained indicate that the more companies are affiliated with the CEO, the higher the likelihood that the company will be involved in financial crime. The forecast accuracy of the model is about 80%. Thus, the model based on both relational and financial data gives a high level of forecast accuracy.
Keywords: financial fraud, fraud prediction, non-public companies, regression analysis, relational data
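A minimal sketch (synthetic data and illustrative variable names, not the study's dataset) of a probit specification like the one described: a binary fraud indicator regressed on relational counts and financial controls using statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 160  # sample size mirroring the abstract; the generated values are synthetic

df = pd.DataFrame({
    "ceo_companies": rng.poisson(2, n),     # companies belonging to / managed by the CEO
    "n_subsidiaries": rng.poisson(1, n),    # number of subsidiaries
    "has_parent": rng.integers(0, 2, n),    # dummy: parent company exists
    "leverage": rng.uniform(0.0, 1.5, n),   # financial leverage (control)
    "roa": rng.normal(0.05, 0.1, n),        # return on assets (control)
})
# Synthetic outcome: fraud probability rises with CEO-related companies and leverage
latent = -1.0 + 0.5 * df["ceo_companies"] + 0.8 * df["leverage"] + rng.normal(0, 1, n)
df["fraud"] = (latent > 0).astype(int)

X = sm.add_constant(df[["ceo_companies", "n_subsidiaries", "has_parent", "leverage", "roa"]])
model = sm.Probit(df["fraud"], X).fit(disp=False)
print(model.summary())
print("in-sample accuracy:", ((model.predict(X) > 0.5) == df["fraud"]).mean())
```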
Procedia PDF Downloads 119
3260 Predictability of Kiremt Rainfall Variability over the Northern Highlands of Ethiopia on Dekadal and Monthly Time Scales Using Global Sea Surface Temperature
Authors: Kibrom Hadush
Abstract:
Countries like Ethiopia, whose economy mainly depends on rain-fed agriculture, are highly vulnerable to climate variability and weather extremes. Sub-seasonal (monthly) and dekadal forecasts are hence critical for crop production and water resource management. Therefore, this study was conducted to examine the predictability and variability of Kiremt rainfall over the northern half of Ethiopia on monthly and dekadal time scales in association with global Sea Surface Temperature (SST) at different lag times. Trends in rainfall have been analyzed on annual, seasonal (Kiremt), monthly, and dekadal (June-September) time scales based on rainfall records of 36 meteorological stations distributed across four homogeneous zones of the northern half of Ethiopia for the period 1992-2017. The results from the progressive Mann-Kendall trend test and Sen's slope method show that there is no significant trend in the annual, Kiremt, monthly and dekadal rainfall totals at most of the stations studied. Moreover, the rainfall in the study area varies spatially and temporally, and the rainfall amount increases from the northeast rift valley to the northwest highlands. Methods of analysis, including graphical correlation and a multiple linear regression model, are employed to investigate the association between the global SSTs and Kiremt rainfall over the homogeneous rainfall zones and to predict monthly and dekadal (June-September) rainfall using SST predictors. The results of this study show that, in general, SST in the equatorial Pacific Ocean is the main source of the predictive skill of the Kiremt rainfall variability over the northern half of Ethiopia. The regional SSTs in the Atlantic and the Indian Ocean also contribute to the Kiremt rainfall variability over the study area. Moreover, the result of the correlation analysis showed that the decline of monthly and dekadal Kiremt rainfall over most of the homogeneous zones of the study area is caused by the corresponding persistent warming of the SST in the eastern and central equatorial Pacific Ocean during the period 1992-2017. It is also found that the monthly and dekadal Kiremt rainfall over the northern and northwestern highlands and northeastern lowlands of Ethiopia is positively correlated with the SST in the western equatorial Pacific and the eastern and tropical northern Atlantic Ocean. Furthermore, the SSTs in the western equatorial Pacific and Indian Oceans are positively correlated with the Kiremt season rainfall in the northeastern highlands. Overall, the results showed that the prediction models using combined SSTs from various ocean regions (equatorial and tropical) performed reasonably well in the prediction of monthly and dekadal rainfall (with R² ranging from 30% to 65%), and it is recommended that they be used for efficient prediction of Kiremt rainfall over the study area to aid systematic and informed decision making within the agricultural sector.
Keywords: dekadal, Kiremt rainfall, monthly, Northern Ethiopia, sea surface temperature
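A minimal sketch (synthetic data; the predictor regions follow the abstract, everything else is assumed) of a multiple linear regression of monthly Kiremt rainfall on lagged SST indices, reporting the R² used above to judge predictive skill.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n_months = 26 * 4  # e.g. June-September for 1992-2017

# Hypothetical lagged SST indices (standardized anomalies) for three predictor regions
sst = np.column_stack([
    rng.normal(0, 1, n_months),   # central/eastern equatorial Pacific index
    rng.normal(0, 1, n_months),   # tropical Atlantic index
    rng.normal(0, 1, n_months),   # Indian Ocean index
])
# Synthetic rainfall anomaly: declines with Pacific warming, rises with Atlantic warming
rain = -8.0 * sst[:, 0] + 4.0 * sst[:, 1] + rng.normal(0, 10, n_months)

model = LinearRegression().fit(sst, rain)
r2 = r2_score(rain, model.predict(sst))
print("regression coefficients:", model.coef_)
print(f"R^2 = {r2:.2f}")  # the abstract reports R^2 between 0.30 and 0.65 for its models
```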
Procedia PDF Downloads 141
3259 Transformation of Hexagonal Cells into Auxetic in Core Honeycomb Furniture Panels
Authors: Jerzy Smardzewski
Abstract:
Structures with negative Poisson's ratios are called auxetic. They are characterized by better mechanical properties than conventional structures, especially shear strength, the ability to absorb energy, and increased strength during bending, especially in sandwich panels. Commonly used paper cores of cellular boards are made of hexagonal cells. With isotropic facings, these cells provide isotropic properties for the entire furniture board. Shelves made of such panels with a thickness similar to standard chipboard do not provide adequate stiffness and strength for the furniture. However, it is possible to transform the shape of hexagonal cells into polyhedral auxetic cells that improve the mechanical properties of the core. The work aimed to transform the hexagonal cells of the paper core into auxetic cells and determine their basic mechanical properties. Using numerical methods, it was decided to design the most favorable proportions of cells, distinguished by the lowest Poisson's ratio and the highest modulus of linear elasticity. Standard cores for cellular boards commonly used to produce 34 mm thick furniture boards were used for the tests. Poisson's ratios, bending strength, and linear elasticity moduli were determined for such cores and boards. Then, the cells were transformed into auxetic structures, and analogous cellular boards were made, for which the mechanical properties were determined. The results of numerical simulations, for which the variable parameters were the dimensions of the cell walls, wall inclination angles, and relative cell density, are presented later in the paper. Experimental tests and numerical simulations showed the beneficial effect of auxeticization on the mechanical quality of furniture panels. They allowed for the selection of the optimal shape of auxetic core cells.
Keywords: auxetics, honeycomb, panels, simulation, experiment
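A minimal sketch of the classical cellular-solids relation for the in-plane Poisson's ratio of a honeycomb (the Gibson and Ashby form), shown here as an assumed illustration of how changing the wall inclination angle from positive (hexagonal) to negative (re-entrant, auxetic) flips the sign; the paper's own cell geometry and parameters are not reproduced.

```python
import math

def poisson_ratio_honeycomb(theta_deg: float, h_over_l: float) -> float:
    """In-plane Poisson's ratio nu_12 = cos^2(theta) / ((h/l + sin(theta)) * sin(theta))
    for a honeycomb with inclined-wall angle theta and vertical-to-inclined wall ratio h/l
    (Gibson & Ashby cellular-solids model)."""
    t = math.radians(theta_deg)
    return math.cos(t) ** 2 / ((h_over_l + math.sin(t)) * math.sin(t))

print(poisson_ratio_honeycomb(30.0, 1.0))    # regular hexagonal cell   -> +1.0
print(poisson_ratio_honeycomb(-30.0, 2.0))   # re-entrant (auxetic) cell -> -1.0
```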
Procedia PDF Downloads 12
3258 Proinflammatory Response of Agglomerated TiO2 Nanoparticles in Human-Immune Cells
Authors: Vaiyapuri Subbarayn Periasamy, Jegan Athinarayanan, Ali A. Alshatwi
Abstract:
Titanium oxide nanoparticles (TiO2-NPs), with different physico-chemical properties (size, shape, chemical properties, agglomeration, etc.), are now widely found in many processed foods, agricultural chemicals, biomedical products, food packaging and food contact materials, personal care products, and other consumer products used in daily life. Growing evidence highlights the risk of toxicity that depends on these physico-chemical properties, with special attention to "TiO2-NPs and the human immune system". Unfortunately, agglomeration and aggregation have frequently been ignored in immuno-toxicological studies, even though agglomeration and aggregation would be expected to affect nanotoxicity, since they change the size, shape, surface area, and other properties of the TiO2-NPs. In the present investigation, we assessed the immunotoxic effect of TiO2-NPs on human immune cells, i.e., total WBC including lymphocytes (T cells (CD3+), T helper cells (CD3+, CD4+), suppressor/cytotoxic T cells (CD3+/CD8+) and NK cells (CD3-/CD16+ and CD56+)), monocytes (CD14+, CD3-) and B lymphocytes (CD19+, CD3-), in order to determine the immunological response (IL1A, IL1B, IL-2, IL-4, IL-5, IL-6, IL-10, IL-12, IL-13, IFN-γ, TGF-β, and TNF-α) and redox gene regulation (TNF, p53, BCl-2, CAT, GSTA4, TNF, CYP1A, POR, SOD1, GSTM3, GPX1, and GSR1), linking physicochemical properties with special reference to the agglomeration of TiO2-NPs. Our findings suggest that TiO2-NPs altered cytokine production, enhanced phagocytic indexing and metabolic stress through specific immune-regulatory gene expression in different WBC subsets, and may contribute to a pro-inflammatory response. Although TiO2-NPs have great advantages in personal care, biomedical, food and agricultural products, their chronic and acute immunotoxicity still needs to be assessed carefully with special reference to food and environmental safety.
Keywords: TiO2 nanoparticles, oxidative stress, cytokine, human immune cells
Procedia PDF Downloads 397
3257 Design of Sustainable Concrete Pavement by Incorporating RAP Aggregates
Authors: Selvam M., Vadthya Poornachandar, Surender Singh
Abstract:
Reclaimed Asphalt Pavement (RAP) aggregates are generally dumped in open areas after the demolition of asphalt pavements. The utilization of RAP aggregates in cement concrete pavements may provide several socio-economic-environmental benefits and could embrace the circular economy. The cross recycling of RAP aggregates in concrete pavement could reduce the consumption of virgin aggregates and save fertile land. However, the structural as well as functional properties of RAP-concrete could be significantly lower than those of conventional Pavement Quality Control (PQC) pavements. This warrants judicious selection of the RAP fraction (coarse and fine aggregates) along with the accurate proportion of the same for PQC highways. Also, the selection of the RAP fraction and its proportion should not be based solely on the mechanical properties of RAP-concrete specimens but should also be governed by the structural and functional behavior of the pavement system. In this study, an effort has been made to predict the optimum RAP fraction and its corresponding proportion for cement concrete pavements by considering low-volume and high-volume roads. Initially, the effect of RAP inclusion on the fresh and mechanical properties of concrete pavement mixes is mapped through an extensive literature survey. Almost all the studies available to date are considered for this study. Generally, Indian Roads Congress (IRC) methods are the most widely used design methods in India for the analysis of concrete pavements, and the same have been considered for this study. Subsequently, fatigue damage analysis is performed to evaluate the required safe thickness of the pavement slab for different fractions of (coarse) RAP. Consequently, the performance of RAP-concrete is predicted by employing the AASHTO-1993 model for the following distress conditions: faulting, cracking, and smoothness. The performance prediction and total cost analysis of RAP aggregates show that the optimum proportions of coarse RAP aggregates in the PQC mix are 35% and 50% for high-volume and low-volume roads, respectively.
Keywords: concrete pavement, RAP aggregate, performance prediction, pavement design
Procedia PDF Downloads 158
3256 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site condition for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively) as well as 5% damped elastic pseudospectral accelerations at different periods (PSA) are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of the linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates potential benefits from employing other machine learning techniques as the statistical method in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and, in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data is available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding damage for pre-defined limit states and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms like artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs for them. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intense numerical response-history analysis.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
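A minimal sketch (synthetic records, assumed feature set) comparing a linear regression baseline with a random forest for predicting log-PGA from magnitude, distance, and a site parameter, mirroring the kind of comparison described above; it illustrates the workflow, not the study's results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)
n = 2000

# Synthetic catalog: magnitude M, hypocentral distance R (km), Vs30-like site term
M = rng.uniform(3.0, 7.5, n)
R = rng.uniform(5.0, 200.0, n)
vs30 = rng.uniform(200.0, 800.0, n)
X = np.column_stack([M, np.log(R), np.log(vs30)])

# Synthetic ln(PGA): magnitude scaling with saturation, geometric attenuation, site effect, noise
ln_pga = (-2.0 + 1.1 * M - 0.1 * (M - 6.0) ** 2
          - 1.6 * np.log(R) - 0.4 * np.log(vs30) + rng.normal(0, 0.3, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, test_size=0.25, random_state=0)
for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: test RMSE of ln(PGA) = {rmse:.3f}")
```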
Procedia PDF Downloads 106
3255 A Computational Approach for the Prediction of Relevant Olfactory Receptors in Insects
Authors: Zaide Montes Ortiz, Jorge Alberto Molina, Alejandro Reyes
Abstract:
Insects are extremely successful organisms. A sophisticated olfactory system is in part responsible for their survival and reproduction. The detection of volatile organic compounds can positively or negatively affect many behaviors in insects. Compounds such as carbon dioxide (CO2), ammonium, indole, and lactic acid are essential for many species of mosquitoes, like Anopheles gambiae, in order to locate vertebrate hosts. For instance, in A. gambiae, the olfactory receptor AgOR2 is strongly activated by indole, which accounts for almost 30% of human sweat. On the other hand, in some insects of agricultural importance, the detection and identification of pheromone receptors (PRs) in lepidopteran species has become a promising field for integrated pest management. For example, with the disruption of the pheromone receptor BmOR1, mediated by transcription activator-like effector nucleases (TALENs), the sensitivity to bombykol was completely removed, affecting the pheromone-source searching behavior in male moths. Thus, the detection and identification of olfactory receptors in insect genomes is fundamental to improving our understanding of ecological interactions and to providing alternatives for integrated pest and vector management. Hence, the objective of this study is to propose a bioinformatic workflow to enhance the detection and identification of potential olfactory receptors in the genomes of relevant insects. Applying Hidden Markov models (HMMs) and different computational tools, potential candidates for pheromone receptors in Tuta absoluta were obtained, as well as potential carbon dioxide receptors in Rhodnius prolixus, the main vector of Chagas disease. This study showed the validity of a bioinformatic workflow with the potential to improve the identification of certain olfactory receptors in different orders of insects.
Keywords: bioinformatic workflow, insects, olfactory receptors, protein prediction
Procedia PDF Downloads 149
3254 Modified Weibull Approach for Bridge Deterioration Modelling
Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight
Abstract:
State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate functions for each condition state with age cannot be directly derived by using the Markov model for a given bridge element group, which, however, is of interest to industrial partners. This paper presents a new approach for generating homogeneous SMDM model output, namely, the Modified Weibull approach, which consists of a set of appropriate functions to describe the percentage condition prediction of bridge elements in each state. These functions are combined with a Bayesian approach and a Metropolis-Hastings Algorithm (MHA) based Markov Chain Monte Carlo (MCMC) simulation technique for quantifying the uncertainty in model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. The inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict the conditions at the network level accurately but also to capture the model uncertainties within a given confidence interval.
Keywords: bridge deterioration modelling, modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models
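A minimal sketch (synthetic inspection data, assumed Weibull-type survival form) of the Metropolis-Hastings sampling step described above: drawing posterior samples of Weibull scale and shape parameters so that uncertainty in a condition-state curve can be quantified. The likelihood, proposal widths, and priors are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "ages at which elements left condition state 1" (years), Weibull-distributed
true_scale, true_shape = 20.0, 2.5
data = true_scale * rng.weibull(true_shape, size=200)

def log_likelihood(scale, shape, x):
    """Weibull log-likelihood; returns -inf for invalid (non-positive) parameters."""
    if scale <= 0 or shape <= 0:
        return -np.inf
    z = x / scale
    return np.sum(np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape)

# Metropolis-Hastings with a Gaussian random-walk proposal and flat (improper) priors
samples, current = [], np.array([15.0, 1.5])
current_ll = log_likelihood(*current, data)
for _ in range(20000):
    proposal = current + rng.normal(0, [0.5, 0.05])
    prop_ll = log_likelihood(*proposal, data)
    if np.log(rng.uniform()) < prop_ll - current_ll:   # accept/reject step
        current, current_ll = proposal, prop_ll
    samples.append(current.copy())

burned = np.array(samples[5000:])                      # discard burn-in
print("posterior mean (scale, shape):", burned.mean(axis=0))
print("95% interval for shape:", np.percentile(burned[:, 1], [2.5, 97.5]))
```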
Procedia PDF Downloads 728
3253 Sustainable Wood Harvesting from Juniperus procera Trees Managed under a Participatory Forest Management Scheme in Ethiopia
Authors: Mindaye Teshome, Evaldo Muñoz Braz, Carlos M. M. Eleto Torres, Patricia Mattos
Abstract:
Sustainable forest management planning requires up-to-date information on the structure, standing volume, biomass, and growth rate of trees in a given forest. This kind of information is lacking in many forests in Ethiopia. The objective of this study was to quantify the population structure, diameter growth rate, and standing volume of wood from Juniperus procera trees in the Chilimo forest. A total of 163 sample plots were set up in the forest to collect the relevant vegetation data. Growth ring measurements were conducted on stem disc samples collected from 12 J. procera trees. Diameter and height measurements were recorded from a total of 1399 individual trees with dbh ≥ 2 cm. The growth rate, maximum current and mean annual increments, minimum logging diameter, and cutting cycle were estimated, and alternative cutting cycles were established. Using these data, the harvestable volume of wood was projected by combining four minimum logging diameters with five cutting cycles, following the stand table projection method. The results show that J. procera trees have an average density of 183 stems ha⁻¹, a total basal area of 12.1 m² ha⁻¹, and a standing volume of 98.9 m³ ha⁻¹. The mean annual diameter growth ranges between 0.50 and 0.65 cm year⁻¹, with an overall mean of 0.59 cm year⁻¹. The population of J. procera trees followed a reverse J-shaped diameter distribution pattern. The maximum current annual increment in volume (CAI) occurred at around 49 years, when trees reached 30 cm in diameter. Trees showed the maximum mean annual increment in volume (MAI) at around 91 years, with a diameter of 50 cm. The simulation analysis revealed that a 40 cm MLD and a 15-year cutting cycle are the best minimum logging diameter and cutting cycle. This combination showed the largest potential harvestable volume of wood, the largest volume increments, and a 35% recovery of the initially harvested volume. It is concluded that the forest is well stocked and has a large harvestable volume of wood from J. procera trees. This will enable the country to partly meet the national wood demand through domestic wood production. The use of the current population structure and diameter growth data from tree ring analysis enables accurate prediction of the harvestable volume of wood. The developed model provides an indication of the productivity of the J. procera tree population and enables policymakers to develop specific management criteria for wood harvesting.
Keywords: logging, growth model, cutting cycle, minimum logging diameter
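A minimal sketch (assumed volume-age function, not the study's fitted model) of the increment quantities used above: the current annual increment (CAI) as the derivative of the volume-age curve and the mean annual increment (MAI) as volume divided by age, each peaking at a characteristic age.

```python
import numpy as np

# Assumed Chapman-Richards style volume-age curve for a single tree (m^3); illustrative only
def volume(age_years):
    return 1.2 * (1.0 - np.exp(-0.03 * age_years)) ** 2.5

ages = np.arange(1, 151)
V = volume(ages)

cai = np.gradient(V, ages)   # current annual increment: dV/dt
mai = V / ages               # mean annual increment: V(t)/t

print("age of maximum CAI:", ages[np.argmax(cai)])
print("age of maximum MAI:", ages[np.argmax(mai)])  # MAI peaks where it crosses CAI
```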
Procedia PDF Downloads 89
3252 UV-Cured Thiol-ene Based Polymeric Phase Change Materials for Thermal Energy Storage
Authors: M. Vezir Kahraman, Emre Basturk
Abstract:
Energy storage technology offers new ways to meet the demand for efficient and reliable energy storage materials. Thermal energy storage systems provide the potential to achieve energy savings, which in turn decreases the environmental impact related to energy usage. For this purpose, phase change materials (PCMs) that work as 'latent heat storage units', which can store or release large amounts of energy, are preferred. Phase change materials are utilized to absorb, collect and discharge thermal energy during the cycle of melting and freezing, converting from one phase to another. PCMs can generally be grouped into three classes: organic materials, salt hydrates and eutectics. Many kinds of organic and inorganic PCMs and their blends have been examined as latent heat storage materials. PCMs have found different application areas such as solar energy storage and transfer, HVAC (Heating, Ventilating and Air Conditioning) systems, thermal comfort in vehicles, passive cooling, temperature-controlled distribution, industrial waste heat recovery, underfloor heating systems and modified fabrics in textiles. Ultraviolet (UV)-curing technology has many advantages, which make it applicable in many different fields. Low energy consumption, high speed, room-temperature operation, low processing costs, high chemical stability, and being environmentally friendly are some of its main benefits. One of the many advantages of UV-cured PCMs is that they prevent the interior PCM from leaking. A shape-stabilized PCM is prepared by blending the PCM with a supporting material, usually a polymer. In our study, this leakage problem is minimized by coating the fatty alcohols with a photo-cross-linked thiol-ene based polymeric system. Leakage is minimized because the photo-crosslinked polymer acts as a matrix. The aim of this study is to introduce a novel thiol-ene based shape-stabilized PCM. Photo-crosslinked thiol-ene based polymers containing fatty alcohols were prepared and characterized for use as phase change materials. Different types of fatty alcohols were used in order to investigate their properties as shape-stable PCMs. The structure of the PCMs was confirmed by ATR-FTIR techniques. The phase transition behaviors and thermal stability of the prepared photo-crosslinked PCMs were investigated by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). This work was supported by Marmara University, Commission of Scientific Research Project.
Keywords: differential scanning calorimetry (DSC), polymeric phase change material, thermal energy storage, UV-curing
Procedia PDF Downloads 228
3251 Determination of Unsaturated Soil Permeability Based on Geometric Factor Development of Constant Discharge Model
Authors: A. Rifa’i, Y. Takeshita, M. Komatsu
Abstract:
After the 2006 Yogyakarta earthquake, the main problem in the first courtyard of Prambanan Temple has been ponding after rainfall. To solve this problem, the soil must be characterized through several procedures, in particular by determining the permeability coefficient (k) in both saturated and unsaturated conditions. A more accurate and efficient field testing procedure is required to obtain permeability data that represent the field condition. One such field permeability test is the constant discharge procedure for determining the permeability coefficient. Adjustments to the constant discharge procedure are needed, especially of the geometric factor (F), to improve the resulting value of the permeability coefficient. The value of k is then correlated with the volumetric water content (θ) from the unsaturated up to the saturated condition. In principle, the constant discharge model provides a constant flow through a permeameter tube into the ground until the water level in the tube becomes constant. This constant water level is highly dependent on the tube dimensions: every tube dimension has a shape factor, called the geometric factor, that affects the result of the test. The geometric factor is defined by the shape and radius of the tube. This research modified the geometric factor parameters by using an empty material tube method, so that the geometric factor changes. The saturation level was monitored with a soil moisture sensor. The field test results were compared with laboratory test results for validation. The field and laboratory results of the empty tube material method differ on average by 3.33 × 10⁻⁴ cm/s. The test results showed that the modified geometric factor provides more accurate data, and the improved constant discharge procedure yields more relevant results. Keywords: constant discharge, geometric factor, permeability coefficient, unsaturated soils
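The abstract does not state the working equation, but constant discharge (constant head) permeameter analyses commonly use a relation of the form k = Q / (F·H). The sketch below assumes that form and a textbook open-casing approximation for F, purely for illustration; the study derives its own modified geometric factor.

```python
# Sketch of a constant-discharge working equation, k = Q / (F * H), where
# Q is the constant discharge, F the geometric (shape) factor of the tube,
# and H the steady water head in the tube. Assumed form, not the paper's.

def permeability_cm_per_s(discharge_cm3_per_s, geometric_factor_cm, head_cm):
    return discharge_cm3_per_s / (geometric_factor_cm * head_cm)

def geometric_factor_open_casing(diameter_cm):
    # One textbook approximation for a flush-bottom open casing (F = 2.75 * d);
    # the modified geometric factor in the study would replace this.
    return 2.75 * diameter_cm

# Example: 0.5 cm^3/s discharge, 5 cm tube diameter, 30 cm steady head
print(permeability_cm_per_s(0.5, geometric_factor_open_casing(5.0), 30.0))
```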
Procedia PDF Downloads 294
3250 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria
Authors: Isaac Kayode Ogunlade
Abstract:
Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ is designed using a PIC18F4550 microcontroller communicating with a personal computer (PC) through USB (Universal Serial Bus). The research applied knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device that uses an LM35 sensor to measure weather parameters, together with an artificial intelligence approach (Artificial Neural Network, ANN) and a statistical approach (Autoregressive Integrated Moving Average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) to evaluate its performance. Both devices (standard and designed) were exposed to the same atmospheric conditions for 180 days to collect data (temperature, relative humidity, and pressure). The acquired data were used to train ANN and ARIMA models in the MATLAB R2012b environment to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), the coefficient of determination (R²), and Mean Percentage Error (MPE) were used as standard evaluation metrics to assess the performance of the models in predicting precipitation. The results from the developed device show that it has an efficiency of 96% and is compatible with personal computers and laptops. The simulation results for the acquired data show that the ANN model's precipitation (rainfall) prediction for two months (May and June 2017) had an error of 1.59%, while that of ARIMA was 2.63%. The device will be useful in research, practical laboratories, and industrial environments. Keywords: data acquisition system, design device, weather development, predict precipitation and (FUTA) standard device
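The evaluation metrics named in the abstract have standard definitions; the Python sketch below implements them for illustration, with made-up observed and predicted rainfall values (these are not the study's data).

```python
# Standard definitions of RMSE, MAE, R^2, and MPE for model evaluation.
import math

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def r2(obs, pred):
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

def mpe(obs, pred):
    # Mean percentage error, skipping zero-rainfall observations to avoid division by zero
    terms = [(o - p) / o for o, p in zip(obs, pred) if o != 0]
    return 100 * sum(terms) / len(terms)

obs = [12.0, 8.5, 0.0, 20.3, 5.1]    # hypothetical observed rainfall (mm)
pred = [11.2, 9.0, 0.4, 18.7, 5.5]   # hypothetical model predictions (mm)
print(rmse(obs, pred), mae(obs, pred), r2(obs, pred), mpe(obs, pred))
```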
Procedia PDF Downloads 92
3249 A Novel Epitope Prediction for Vaccine Designing against Ebola Viral Envelope Proteins
Authors: Manju Kanu, Subrata Sinha, Surabhi Johari
Abstract:
The Ebola virus (EBOV) is among the best-studied viruses; however, no effective prevention against EBOV has been developed. Epitope-based vaccines provide a new strategy for prophylactic and therapeutic application of pathogen-specific immunity. A critical requirement of this strategy is the identification and selection of T-cell epitopes that act as vaccine targets. This study describes current methodologies for the selection process, with the Ebola virus as a model system; a great challenge in the field of Ebola virus research is to design a universal vaccine. A combination of publicly available bioinformatics algorithms and computational tools is used to screen and select antigen sequences as potential T-cell epitopes for supertype Human Leukocyte Antigen (HLA) alleles. The MUSCLE and MOTIF tools were used to find the most conserved peptide sequences of the viral proteins. Immunoinformatics tools were used to predict immunogenic peptides of viral proteins in Zaire strains of the Ebola virus. Putative epitopes for the viral proteins (VP) were predicted from their conserved peptide sequences. Three tools, NetCTL 1.2, BIMAS, and SYFPEITHI, were used to predict the Class I putative epitopes, while three tools, ProPred, IEDB-SMM-align, and NetMHCII 2.2, were used to predict the Class II putative epitopes. B-cell epitopes were predicted with BCPREDS 1.0. Immunogenic peptides were identified and selected manually from the putative epitopes predicted by the online tools, individually for both MHC classes. Finally, the predicted peptide sequences for both MHC classes were searched for a common region, which was selected as the common immunogenic peptide. The immunogenic peptides found for the viral proteins of the Ebola virus are the epitopes FLESGAVKY and SSLAKHGEY. These predicted peptides could be promising candidates to be used as targets for vaccine design. Keywords: epitope, B cell, immunogenicity, Ebola
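A minimal sketch (not the authors' pipeline) of the selection step described above: keep only peptides predicted by every tool for a given MHC class, then look for regions shared between the Class I and Class II consensus sets. Except for the two reported epitopes, the peptide lists below are hypothetical placeholders.

```python
# Consensus-and-overlap selection of common immunogenic peptides (illustrative).

def consensus(*tool_predictions):
    """Peptides predicted by every tool (set intersection)."""
    return set.intersection(*(set(p) for p in tool_predictions))

def shared_regions(class_i, class_ii):
    """Class I peptides that also appear inside some Class II peptide."""
    return {p for p in class_i if any(p in longer for longer in class_ii)}

# Class I predictions (hypothetical lists except the two reported epitopes)
netctl    = ["FLESGAVKY", "SSLAKHGEY", "ATQVEQHHR"]
bimas     = ["FLESGAVKY", "SSLAKHGEY", "LRIGNQAFL"]
syfpeithi = ["FLESGAVKY", "SSLAKHGEY"]
class_i = consensus(netctl, bimas, syfpeithi)

# Class II predictions (hypothetical 15-mers containing the Class I cores)
propred  = ["AFLESGAVKYLQLPV", "TVSSLAKHGEYAPFA"]
iedb_smm = ["AFLESGAVKYLQLPV", "TVSSLAKHGEYAPFA", "QLPDDDDLFIPPPEA"]
netmhcii = ["AFLESGAVKYLQLPV", "TVSSLAKHGEYAPFA"]
class_ii = consensus(propred, iedb_smm, netmhcii)

print(shared_regions(class_i, class_ii))  # -> {'FLESGAVKY', 'SSLAKHGEY'}
```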
Procedia PDF Downloads 314
3248 Thermo-Mechanical Analysis of Composite Structures Utilizing a Beam Finite Element Based on Global-Local Superposition
Authors: Andre S. de Lima, Alfredo R. de Faria, Jose J. R. Faria
Abstract:
Accurate prediction of thermal stresses is particularly important for laminated composite structures, as large temperature changes may occur during fabrication and field application. The transverse normal deformation plays an important role in the prediction of such stresses, especially for problems involving thick laminated plates subjected to uniform temperature loads. Bearing this in mind, the present study investigates the thermo-mechanical behavior of laminated composite structures using a new beam element based on global-local superposition, accounting for through-the-thickness effects. The element formulation is based on a global-local superposition in the thickness direction, utilizing a cubic global displacement field in combination with a linear layerwise local displacement distribution, which ensures zig-zag behavior of the stresses and displacements. By enforcing interlaminar stress (normal and shear) and displacement continuity, as well as traction-free conditions at the upper and lower surfaces, the number of degrees of freedom in the model is kept independent of the number of layers. Moreover, the proposed formulation allows the transverse shear and normal stresses to be determined directly from the constitutive equations, without the need for post-processing. Numerical results obtained with the beam element were compared to analytical solutions, as well as to results obtained with commercial finite element software, giving satisfactory results for a range of length-to-thickness ratios. The results confirm the need for an element with through-the-thickness capabilities and indicate that the present formulation is a promising alternative for such analysis. Keywords: composite beam element, global-local superposition, laminated composite structures, thermal stresses
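The abstract does not give the explicit displacement field, but a cubic-global plus linear-layerwise (zig-zag) ansatz of the kind described would typically take the following form; the symbols (u_0 to u_3, the layerwise functions φᵏ, and the quadratic transverse expansion) are illustrative assumptions, not the authors' exact formulation.

```latex
% One possible global-local displacement field, written as an illustration only.
\begin{aligned}
u(x,z) &= u_0(x) + z\,u_1(x) + z^{2}u_2(x) + z^{3}u_3(x)
          + \sum_{k=1}^{N} \phi^{k}(z)\,\bar{u}^{k}(x), \\
w(x,z) &= w_0(x) + z\,w_1(x) + z^{2}w_2(x),
\end{aligned}
```

where the φᵏ(z) are piecewise-linear layerwise functions. Enforcing interlaminar continuity of the transverse shear and normal stresses together with traction-free top and bottom surfaces eliminates the layerwise unknowns, which is how such formulations keep the number of degrees of freedom independent of the number of layers.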
Procedia PDF Downloads 156
3247 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube
Authors: Nirjhar Dhang, S. Vinay Kumar
Abstract:
Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids, and the interfacial transition zone (ITZ) around aggregates. Adopting these complex structures and material properties in numerical simulation would lead to a better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of a concrete cube consists of a series of two-dimensional slices. In total, 49 slices were obtained from a 150 mm cube, giving a slice interval of approximately 3 mm. Because CT scanning is non-destructive, the same cube can later be tested in compression in a universal testing machine (UTM) to find its strength. The image processing and extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. The digital colour image consists of red, green, and blue (RGB) pixels. The RGB image is converted to a black and white (grayscale) image, and the mesoscale constituents are identified by assigning values between 0 and 255. A pixel matrix is created for modeling the mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale according to relative strength: zero is assigned to voids, 4-6 to mortar, and 7-9 to aggregates, while values between 1 and 3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated, depending on the option selected. Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacements, stresses, and damage are evaluated by importing the input file into ABAQUS. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids, and the variation of ITZ layer thickness on load-carrying capacity, stress-strain response, and strain localization of the concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of aggregates in the concrete. This may then be compared with test results of concrete cores and used as an important tool for strength evaluation of concrete. Keywords: concrete, image processing, plane strain, interfacial transition zone
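The pixel-classification step described above can be sketched as follows (an assumed illustration, not the authors' script): convert an RGB slice to grayscale, rescale the 0-255 intensities to a 0-9 scale, and map the bands to voids (0), aggregate-mortar boundary (1-3), mortar (4-6), and aggregates (7-9).

```python
# Illustrative mesoscale pixel classification of a CT slice (assumed, simplified).
import numpy as np

def classify_slice(rgb_slice):
    gray = rgb_slice.mean(axis=2)                        # RGB -> grayscale, 0-255
    scaled = np.clip((gray / 255.0 * 9).round(), 0, 9)   # normalize to 0-9 scale
    labels = np.empty(scaled.shape, dtype="U9")
    labels[scaled == 0] = "void"
    labels[(scaled >= 1) & (scaled <= 3)] = "boundary"   # aggregate-mortar boundary
    labels[(scaled >= 4) & (scaled <= 6)] = "mortar"
    labels[scaled >= 7] = "aggregate"
    return scaled, labels

# Example with a random 4x4 "slice" standing in for a CT image
rng = np.random.default_rng(0)
demo = rng.integers(0, 256, size=(4, 4, 3))
print(classify_slice(demo)[1])
```

The labeled pixel matrix would then be meshed into triangular or quadrilateral elements and written to a finite element input file, as the abstract describes.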
Procedia PDF Downloads 241