Search results for: lead time reduction

20298 The Fight against Terrorist Radicalization: A French Perspective

Authors: Julia Burchett

Abstract:

After France became the target of an increasing number of terrorist attacks committed by people declared ‘radicalized’, the issue of radicalization became the main component of the national Action Plan for the Prevention of Terrorism, stressing the need to address the root causes of this peril. The aim of this research paper is therefore to provide a preliminary review of France’s strategy in the fight against terrorist radicalization, in order to point out the challenges posed by this phenomenon, highlight its contemporary form, and assess the results obtained. In this regard, it should not be forgotten that the process of radicalization does not always lead to a terrorist act. To this end, the French legal framework that applies to radicalization, coupled with the judicial response provided by the National Court, will be analyzed in the light of the need for a balance between the concern for security and the protection of fundamental freedoms.

Keywords: criminal law, France, fundamental freedoms, radicalization, terrorism

Procedia PDF Downloads 422
20297 A Nanoindentation Study of Thin Film Prepared by Physical Vapor Deposition

Authors: Dhiflaoui Hafedh, Khlifi Kaouther, Ben Cheikh Larbi Ahmed

Abstract:

Monolayer and multilayer coatings of CrN and AlCrN were deposited on a 100Cr6 (AISI 52100) substrate by a PVD magnetron sputtering system. The microstructures of the coatings were characterized using atomic force microscopy (AFM). The AFM analysis revealed the presence of domes and craters which are uniformly distributed over the surfaces of the various layers. Nanoindentation measurements of the CrN coating showed a maximum hardness (H) and modulus (E) of 14 GPa and 240 GPa, respectively. The measured H and E values of the AlCrN coatings were found to be 30 GPa and 382 GPa, respectively. The improved hardness in both coatings was attributed mainly to a reduction in crystallite size and a decrease in surface roughness. The incorporation of Al into the CrN coatings improved both hardness and Young’s modulus.

Keywords: CrN, AlCrN coatings, hardness, nanoindentation

Procedia PDF Downloads 551
20296 Development and Nutritional Evaluation of Sorghum Flour-Based Crackers Enriched with Bioactive Tomato Processing Residue

Authors: Liana Claudia Salanță, Anca Corina Fărcaș

Abstract:

Valorization of agro-industrial by-products offers significant economic and environmental advantages. This study investigates the transformation of tomato processing residues into value-added products, contributing to waste reduction and promoting a circular, sustainable economy. Specifically, the development of sorghum flour-based crackers enriched with tomato waste powder targets the dietary requirements of individuals with celiac disease and diabetes, evaluating their nutritional and sensory properties. Tomato residues were obtained from Roma-Spania tomatoes and processed into powder through drying and grinding. The bioactive compounds, including carotenoids, lycopene, and polyphenols, were quantified using established analytical methods. Formulation of the crackers involved optimizing the incorporation of tomato powder into sorghum flour. Subsequently, their nutritional and sensory attributes were assessed. The tomato waste powder demonstrated considerable bioactive potential, with total carotenoid content measured at 66 mg/100g, lycopene at 52.61 mg/100g, and total polyphenols at 463.60 mg GAE/100g. Additionally, the crackers with a 30% powder addition exhibited the highest concentration of polyphenols. Consequently, this sample also demonstrated a high antioxidant activity of 15.04% inhibition of DPPH radicals. Nutritionally, the crackers showed a 30% increase in fiber content and a 25% increase in protein content compared to standard gluten-free products. Sensory evaluation indicated positive consumer acceptance, with an average score of 8 out of 10 for taste and 7.5 out of 10 for color, attributed to the natural pigments from tomato waste. This innovative approach highlights the potential of tomato by-products in creating nutritionally enhanced gluten-free foods. Future research should explore the long-term stability of these bioactive compounds in finished products and evaluate the scalability of this process for industrial applications. Integrating such sustainable practices can significantly contribute to waste reduction and the development of functional foods.

Keywords: tomato waste, circular economy, bioactive compounds, sustainability, health benefits

Procedia PDF Downloads 10
20295 Studies on Affecting Factors of Wheel Slip and Odometry Error on Real-Time of Wheeled Mobile Robots: A Review

Authors: D. Vidhyaprakash, A. Elango

Abstract:

In real-time applications, wheeled mobile robots are increasingly used and operated in extreme and diverse conditions, traversing challenging surfaces such as pitted, uneven terrain; natural flat, smooth terrain; and wet and dry surfaces. In order to accomplish such tasks, it is critical that the motion control functions without wheel slip and odometry error during the navigation of the two-wheeled mobile robot (WMR). Wheel slip and odometry error disrupt overall WMR performance in the form of deviation from the desired trajectory, degraded navigation, longer travel time and energy consumption beyond budget. The wheeled mobile robot’s ability to operate at peak performance on various work surfaces without wheel slippage and odometry error is directly connected to four main parameters: the range of payload distribution, speed, wheel diameter, and wheel width. This paper analyses the effects of those parameters on overall performance and is concerned with determining the ideal range of parameters for optimum performance.
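
The link between wheel geometry and odometry error can be made concrete with the standard differential-drive odometry equations. The sketch below (plain Python/NumPy, not the authors' code) shows how a small error in the assumed wheel diameter, such as might be caused by payload-dependent tyre compression or slip, accumulates into a systematic position error over a straight run; all numbers are illustrative assumptions.

```python
import numpy as np

def odometry_update(pose, d_left, d_right, wheel_base):
    """Standard differential-drive odometry update.

    pose         : (x, y, theta) in metres / radians
    d_left/right : wheel travel measured from the encoders (metres)
    wheel_base   : distance between the two wheels (metres)
    """
    x, y, theta = pose
    d_center = 0.5 * (d_left + d_right)        # forward displacement
    d_theta = (d_right - d_left) / wheel_base  # heading change
    x += d_center * np.cos(theta + 0.5 * d_theta)
    y += d_center * np.sin(theta + 0.5 * d_theta)
    return (x, y, theta + d_theta)

# Illustration: a 2 % error in the assumed wheel diameter biases every
# encoder-derived distance and accumulates over a straight 10 m run.
true_pose = est_pose = (0.0, 0.0, 0.0)
ticks_per_step = 0.05          # true wheel travel per step (m)
diameter_error = 1.02          # estimated wheel 2 % larger than reality
for _ in range(200):           # 200 steps x 0.05 m = 10 m
    true_pose = odometry_update(true_pose, ticks_per_step, ticks_per_step, 0.3)
    d_est = ticks_per_step * diameter_error
    est_pose = odometry_update(est_pose, d_est, d_est, 0.3)
print("accumulated position error [m]:",
      np.hypot(est_pose[0] - true_pose[0], est_pose[1] - true_pose[1]))
```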

Keywords: wheeled mobile robot, terrain, wheel slippage, odometry error, trajectory

Procedia PDF Downloads 274
20294 Estimation of Noise Barriers for Arterial Roads of Delhi

Authors: Sourabh Jain, Parul Madan

Abstract:

Traffic noise pollution has become a challenging problem for all metro cities of India due to rapid urbanization, a growing population, a rising number of vehicles and transport development. In Delhi the prime source of noise pollution is vehicular traffic, and the ambient noise level (Leq) is found to exceed the standard permissible value at all locations. Noise barriers or enclosures are useful in obtaining an effective reduction of traffic noise disturbances in urbanized areas. The US Federal Highway Administration model (FHWA) and the UK Calculation of Road Traffic Noise (CORTN) were used to develop spreadsheets for noise prediction. Spreadsheets were also developed for evaluating the effectiveness of existing boundary walls abutting houses in mitigating noise and for redesigning them as noise barriers. A study was also carried out to examine the changes in noise level due to the designed noise barrier, using both the FHWA and CORTN models. During data collection it was found that receivers are located far away from the road at the Rithala and Moolchand sites; hence the extra barrier height needed to meet the prescribed limits was small, since most of the noise diminishes through the propagation effect. On the basis of the overall study and data analysis, it is concluded that the FHWA and CORTN models underestimate noise levels: the FHWA model predicted noise levels with an average percentage error of -7.33 and CORTN with an average percentage error of -8.5. At all sites, noise levels at the receivers exceeded the standard limit of 55 dB. Calculations showed that the existing walls reduce noise levels: the average noise reduction due to walls was 7.41 dB at Rithala and 7.20 dB at Panchsheel, while a lower reduction of only 5.88 dB was observed at Friends Colony. The analysis showed that the Friends Colony site needs a much greater barrier height because of the residential buildings abutting the road; heavy traffic was also observed at this site, since it is a national highway, and the reduction of noise through the propagation effect was very small there. As the FHWA and CORTN models were implemented in an Excel programme, laborious manual noise calculations are eliminated. Unlike the CORTN model, the FHWA model includes no reflection correction.
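
The extra barrier height needed at a receiver is usually tied to the path-length difference the barrier introduces between source and receiver. The sketch below is a minimal illustration, using the well-known Maekawa chart approximation rather than the full FHWA/CORTN procedures, of how barrier attenuation can be estimated from geometry; all dimensions are assumed example values.

```python
import numpy as np

def barrier_attenuation_db(src, rcv, top, frequency_hz, c=343.0):
    """Maekawa approximation of barrier insertion loss (single diffraction).

    src, rcv, top : (x, z) positions of source, receiver and barrier top (m)
    frequency_hz  : sound frequency (Hz); c is the speed of sound (m/s)
    """
    src, rcv, top = map(np.asarray, (src, rcv, top))
    # Path-length difference: path over the barrier top vs. the direct path
    delta = (np.linalg.norm(top - src) + np.linalg.norm(rcv - top)
             - np.linalg.norm(rcv - src))
    wavelength = c / frequency_hz
    fresnel_n = 2.0 * delta / wavelength
    # Maekawa chart fit, valid for N > 0; capped at ~24 dB in practice
    return min(10.0 * np.log10(3.0 + 20.0 * fresnel_n), 24.0)

# Example: a 3 m barrier between a road source (1 m in front, 0.5 m high)
# and a receiver 10 m behind it at 1.5 m height, evaluated at 500 Hz.
print(round(barrier_attenuation_db((-1.0, 0.5), (10.0, 1.5), (0.0, 3.0), 500.0), 1), "dB")
```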

Keywords: FHWA, CORTN, noise sources, noise barriers

Procedia PDF Downloads 126
20293 Dealing the Disruptive Behaviour amongst Students with Autism through Circus

Authors: K. A. Razhiyah

Abstract:

Disruptive behaviour is a problem that is usually associated with those with autism. There is a need to overcome this behavioural problem because its negative impact not only affects the social relations of the students but can also cause uneasiness to those around them. This condition becomes worse if the techniques used fail to motivate students to change the behaviour. The purpose of this study was to determine the effect of the circus games technique on disruptive behaviour amongst students with autism. The positive results of the intervention, which was carried out for three months, show a reduction in disruptive behaviour as well as improvement in turn-taking and focusing ability. The positive changes shown by the students had an encouraging effect and, in a way, are helping them in the teaching and learning process.

Keywords: autism, disruptive behaviour, circus, effect

Procedia PDF Downloads 235
20292 Lamb Wave-Based Blood Coagulation Measurement System Using Citrated Plasma

Authors: Hyunjoo Choi, Jeonghun Nam, Chae Seung Lim

Abstract:

Acoustomicrofluidics has gained much attention for clinical and biological applications due to advantages such as noninvasiveness and easy integration with other miniaturized systems. However, a limitation of acoustomicrofluidics is the complicated and costly fabrication process of electrodes. In this study, we propose a low-cost and lithography-free device using a Lamb wave for blood analysis. Using a Lamb wave, calcium-ion-removed blood plasma and coagulation reagents can be rapidly mixed for a blood coagulation test. During the coagulation process the viscosity of the sample increases, and this viscosity change can be monitored through the internal acoustic streaming of microparticles suspended in the sample droplet. The time at which the acoustic streaming of the particles is stopped by the viscosity increase is defined as the coagulation time. With the addition of calcium ions at 0-25 mM, the coagulation time was measured and compared with the conventional index for blood coagulation analysis, prothrombin time; the two were highly correlated, with a correlation coefficient of 0.94. Therefore, our simple and cost-effective Lamb wave-based blood analysis device has strong potential to be utilized in clinical settings.

Keywords: acoustomicrofluidics, blood analysis, coagulation, lamb wave

Procedia PDF Downloads 330
20291 Remarkable Difference in Neurotoxicity Between Two Phospholipases from Russell's Viper Venom: Insight Through Molecular Approach

Authors: Kalyan S. Ghosh, B. L. Dhananjaya

Abstract:

Snakebite causes fatal injuries to multiple organs, and even many deaths, due to several adverse physiological effects of the various phospholipases (PLA2s) present in snake venom. Though these PLA2s have highly homologous sequences and structures, they exhibit those pharmacological effects to different extents. In this study, we have explored the difference in the neurotoxicity of two PLA2s, namely PLA2-V and PLA2-VIIIa, present in the venom of Vipera russellii. Bioinformatics studies on the sequences of these two proteins, along with a detailed structural comparison, enable us to explore the differences unambiguously. The identification of the residues involved in neurotoxicity will further lead towards the proper design of inhibitors against such lethal effects of the venom.

Keywords: electrostatic potential, homology modeling, hydrophobicity, neurotoxicity, Phospholipase A2

Procedia PDF Downloads 424
20290 Improved Simultaneous Performance in the Time Domain and in the Frequency Domain

Authors: Azeddine Ghodbane, David Bensoussan, Maher Hammami

Abstract:

An innovative approach for controlling unstable and invertible systems has demonstrated superior performance compared to conventional controllers. It has been successfully applied to a levitation system and to drone control, and simulations have yielded satisfactory performance when applied to a satellite antenna controller. This design method, based on sensitivity analysis, has also been extended to handle multivariable unstable and invertible systems that exhibit dominant diagonal characteristics at high frequencies, enabling decentralized control. Furthermore, this control method has been expanded to the realm of adaptive control. In this study, we introduce an alternative adaptive architecture that enhances both time and frequency performance, effectively mitigating the effects of disturbances at the plant input as well as external disturbances affecting the output. To facilitate superior performance in both the time and frequency domains, we have developed user-friendly interactive design methods using the GeoGebra platform.
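
Frequency- and time-domain performance of a feedback design are commonly read off the sensitivity function S(s) = 1/(1 + L(s)) and the complementary sensitivity T(s) = L(s)/(1 + L(s)), which weight output-disturbance rejection and reference tracking respectively. The sketch below is a minimal NumPy illustration of this standard relationship on an assumed single-loop example, not the authors' design procedure.

```python
import numpy as np

# Assumed example loop transfer function L(s) = k / (s * (s + 2))
def loop_gain(s, k=10.0):
    return k / (s * (s + 2.0))

omega = np.logspace(-2, 3, 500)          # rad/s
L = loop_gain(1j * omega)
S = 1.0 / (1.0 + L)                      # sensitivity: output-disturbance rejection
T = L / (1.0 + L)                        # complementary sensitivity: tracking / noise

print("peak |S| (dB):", round(20 * np.log10(np.max(np.abs(S))), 2))
print("peak |T| (dB):", round(20 * np.log10(np.max(np.abs(T))), 2))
# A small peak of |S| keeps output disturbances attenuated across frequency,
# while |T| close to 1 at low frequency preserves reference tracking.
```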

Keywords: control theory, decentralized control, sensitivity theory, input-output stability theory, robust multivariable feedback control design

Procedia PDF Downloads 102
20289 Optimal Perturbation in an Impulsively Blocked Channel Flow

Authors: Avinash Nayak, Debopam Das

Abstract:

The current work implements the variational principle to find the optimum initial perturbation that provides maximum growth in an impulsively blocked channel flow. The conventional method for studying temporal stability has always been modal analysis. In most transient flows, this modal analysis is still followed with the quasi-steady assumption, i.e. the change in base flow is much slower than the perturbation growth rate. There are other studies where transient analysis of time-dependent flows is done by formulating the growth of the perturbation as an initial value problem, but the perturbation growth is sensitive to the initial condition. This study intends to find the initial perturbation that provides the maximum growth at a later time. Here, the expression of the base flow for the blocked channel is derived, and the formulation is based on a two-dimensional perturbation with the stream function representing the perturbation quantity. Hence, the governing equation becomes the Orr-Sommerfeld equation. In the current context, the cost functional is defined as the ratio of the disturbance energy at a terminal time T to the initial energy, i.e. G(T) = ||q(T)||²/||q(0)||², where q is the perturbation and ||.|| denotes the chosen norm. This cost functional is maximized over the initial perturbation distribution, under the constraint that the perturbation follows the basic governing equation, i.e. the Orr-Sommerfeld equation. The corresponding adjoint equation is derived and solved along with the basic governing equation in an iterative manner to provide the initial spatial shape of the perturbation that yields the maximum growth G(T). The growth is plotted against time, showing the development of the perturbation, which achieves an asymptotic shape. The effects of various parameters, e.g. the Reynolds number, are studied in the process. Thus, the study emphasizes the use of optimal perturbations and their growth to understand the stability characteristics of time-dependent flows. The assumption of quasi-steady analysis can be verified against these results for transient flows like the impulsively blocked channel flow.
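
For a linear, discretized system q' = A q, the forward-adjoint iteration described above converges to the same optimum as a direct singular-value decomposition of the state-transition operator, since G(T) equals the square of the largest singular value of exp(AT) in the chosen energy norm. The sketch below illustrates this equivalence on an assumed small matrix; it is not the Orr-Sommerfeld solver used by the authors.

```python
import numpy as np
from scipy.linalg import expm, svd

# Assumed small, stable but non-normal operator; non-normality is what
# allows transient energy growth even when all eigenvalues are damped.
A = np.array([[-0.1, 1.0],
              [ 0.0, -0.2]])
T = 5.0

Phi = expm(A * T)              # state-transition (propagator) matrix over [0, T]
U, s, Vh = svd(Phi)

G_T = s[0] ** 2                # maximum energy gain ||q(T)||^2 / ||q(0)||^2
q0_opt = Vh[0]                 # optimal initial perturbation (unit norm)
qT = Phi @ q0_opt              # corresponding state at time T

print("optimal gain G(T):", round(G_T, 3))
print("check ||q(T)||^2 / ||q(0)||^2:",
      round(np.linalg.norm(qT) ** 2 / np.linalg.norm(q0_opt) ** 2, 3))
```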

Keywords: blocked channel flow, calculus of variation, hydrodynamic stability, optimal perturbation

Procedia PDF Downloads 417
20288 Age Determination from Epiphyseal Union of Bones at Shoulder Joint in Girls of Central India

Authors: B. Tirpude, V. Surwade, P. Murkey, P. Wankhade, S. Meena

Abstract:

There are no statistical data establishing variation in epiphyseal fusion among girls in the central India population. This significant oversight can lead to the exclusion of persons of interest in a forensic investigation. Epiphyseal fusion of the proximal end of the humerus in eighty females was analyzed radiologically to assess the range of variation of epiphyseal fusion at each age. In the study, the X-ray films of the subjects were divided into three groups on the basis of the degree of fusion: firstly, those showing no epiphyseal fusion (N); secondly, those showing partial union (PC); and thirdly, those showing complete fusion (C). The observations made were compared with previous studies.

Keywords: epiphyseal union, shoulder joint, proximal end of humerus

Procedia PDF Downloads 481
20287 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics

Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere

Abstract:

Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, thereby exhibiting strong fluctuations at all time scales and requiring a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique, part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals and consists in decomposing a signal in an auto-adaptive way into a sum of oscillating components named IMFs (Intrinsic Mode Functions), thereby acting as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and provides the principal variability modes of the dynamics represented by the original time series. However, the main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled “Empirical Wavelet Transform” (EWT), which consists in building a bank of filters from the segmentation of the original signal’s Fourier spectrum. The method is based on the idea used in the construction of both Littlewood-Paley and Meyer’s wavelets. The heart of the method lies in the segmentation of the Fourier spectrum based on local maxima detection, in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore makes it possible to overcome the mode-mixing problem. On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, it does not make it possible to associate the detected frequencies with a specific mode of variability, as the EMD technique does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on the coupling of the EMD and EWT techniques, which uses the spectral content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. A comparison of the results obtained respectively by the EMD, EWT and EAWD techniques on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
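
The EWT segmentation step described above can be sketched in a few lines: detect local maxima of the magnitude spectrum and place segment boundaries between consecutive maxima. The fragment below is a minimal NumPy/SciPy illustration of that idea, using the midpoint between maxima as the boundary rule (one of several possibilities); it is not the authors' EAWD implementation.

```python
import numpy as np
from scipy.signal import find_peaks

def segment_spectrum(x, n_segments=4):
    """Split [0, pi] into non-overlapping bands from the strongest spectral peaks."""
    spectrum = np.abs(np.fft.rfft(x))
    peaks, props = find_peaks(spectrum, height=0)
    # keep the n strongest local maxima, sorted by frequency
    strongest = np.sort(peaks[np.argsort(props["peak_heights"])[-n_segments:]])
    # boundaries halfway between consecutive retained maxima
    bounds = (strongest[:-1] + strongest[1:]) / 2.0
    # express boundaries as normalized angular frequencies in [0, pi]
    return np.concatenate(([0.0], np.pi * bounds / (len(spectrum) - 1), [np.pi]))

# Example: three superposed oscillations plus noise
t = np.linspace(0.0, 10.0, 2000)
signal = (np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 5.0 * t)
          + 0.2 * np.sin(2 * np.pi * 20.0 * t) + 0.1 * np.random.randn(t.size))
print(segment_spectrum(signal, n_segments=3))
```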

Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet

Procedia PDF Downloads 126
20286 Comparison of Gait Variability in Individuals with Trans-Tibial and Trans-Femoral Lower Limb Loss: A Pilot Study

Authors: Hilal Keklicek, Fatih Erbahceci, Elif Kirdi, Ali Yalcin, Semra Topuz, Ozlem Ulger, Gul Sener

Abstract:

Objectives and Goals: The stride-to-stride fluctuation in gait, known as gait variability, is a determinant of the quality of locomotion. Gait variability is an important predictor of fall risk and is useful for monitoring the effects of therapeutic interventions and rehabilitation. The aim of the study was to compare gait variability in individuals with trans-tibial and trans-femoral lower limb loss. Methods: Ten individuals with traumatic unilateral trans-femoral limb loss (TF), 12 individuals with traumatic trans-tibial lower limb loss (TT) and 12 healthy individuals (HI) participated in the study. All participants were evaluated on a treadmill. Gait characteristics, including mean step length, step length variability, ambulation index, and time on each foot, were evaluated on the treadmill. Participants walked at their preferred speed for six minutes; data from the 4th to the 6th minute were selected for statistical analyses to eliminate the learning effect. Results: There were differences between the groups in intact limb step length variation, time on each foot, ambulation index and mean age (p < .05) according to the Kruskal-Wallis test. Pairwise analyses showed differences between TT and TF in residual limb variation (p = .041), time on the intact foot (p = .024), time on the prosthetic foot (p = .024) and ambulation index (p = .003), in favor of the TT group. There were differences between the TT and HI groups in intact limb variation (p = .002), time on the intact foot (p < .001), time on the prosthetic foot (p < .001) and ambulation index (p < .001), in favor of the HI group. There were differences between the TF and HI groups in intact limb variation (p = .001), time on the intact foot (p = .01) and ambulation index (p < .001), in favor of the HI group. There was a difference between the groups in mean age, as the HI group was younger (p < .05). The groups were similar in step length (p > .05) and in duration of prosthesis use among the individuals with lower limb loss (p > .05). Conclusions: This pilot study provides basic data about gait stability in individuals with traumatic lower limb loss. The results show that, to evaluate gait differences between different amputation levels, long-range gait analysis methods may be useful for obtaining more valuable information. On the other hand, the similarity in step length may result from effective prosthetic use or effective gait rehabilitation, since all participants with lower limb loss had already been trained. The differences between TT and HI, and between TF and HI, may result from age-related features; therefore an age-matched HI population is recommended for future studies. Increasing the number of participants and comparing age-matched groups are also recommended in order to generalize these results.
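
Step-length variability is typically summarized as a coefficient of variation per participant and then compared across groups with a non-parametric test, as in the Kruskal-Wallis analysis reported above. The sketch below shows that workflow with SciPy on made-up example values; the numbers are illustrative only, not the study data.

```python
import numpy as np
from scipy.stats import kruskal

def step_length_cv(step_lengths):
    """Coefficient of variation (%) of a participant's step-length series."""
    step_lengths = np.asarray(step_lengths, dtype=float)
    return 100.0 * step_lengths.std(ddof=1) / step_lengths.mean()

rng = np.random.default_rng(0)
# Hypothetical per-participant CV values for the three groups
cv_tf = [step_length_cv(rng.normal(0.60, 0.05, 120)) for _ in range(10)]
cv_tt = [step_length_cv(rng.normal(0.62, 0.03, 120)) for _ in range(12)]
cv_hi = [step_length_cv(rng.normal(0.65, 0.02, 120)) for _ in range(12)]

h_stat, p_value = kruskal(cv_tf, cv_tt, cv_hi)   # omnibus group comparison
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")
```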

Keywords: lower limb loss, amputee, gait variability, gait analyses

Procedia PDF Downloads 274
20285 A Physically-Based Analytical Model for Reduced Surface Field Laterally Double Diffused MOSFETs

Authors: M. Abouelatta, A. Shaker, M. El-Banna, G. T. Sayah, C. Gontrand, A. Zekry

Abstract:

In this paper, a methodology for physically modeling the intrinsic MOS part and the drift region of the n-channel Laterally Double-diffused MOSFET (LDMOS) is presented. Basic physical effects such as velocity saturation, mobility reduction, and non-uniform impurity concentration in the channel are taken into consideration. The analytical model is implemented in MATLAB. A comparison between simulations from technology computer-aided design (TCAD) and those from the proposed analytical model, at room temperature, shows satisfactory accuracy, with an error of less than 5% over the whole voltage domain.
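
As a rough illustration of how mobility reduction and velocity saturation enter such an analytical drain-current expression, the sketch below evaluates a textbook-style triode-region model with a vertical-field mobility degradation factor and a lateral-field velocity-saturation term. The equation form and every parameter value are generic assumptions, not the paper's LDMOS model (the paper was implemented in MATLAB; Python is used here for illustration).

```python
import numpy as np

def drain_current(vgs, vds, w=100e-6, l=2e-6, mu0=0.06, cox=1.7e-3,
                  vth=1.0, theta=0.2, esat=4e6):
    """Textbook triode-region current with mobility degradation and velocity saturation.

    mu0   : low-field mobility (m^2/Vs)      cox  : oxide capacitance (F/m^2)
    theta : mobility reduction factor (1/V)  esat : saturation field (V/m)
    """
    vov = max(vgs - vth, 0.0)                      # overdrive voltage
    vds_eff = min(vds, vov)                        # clamp at onset of saturation
    mu_eff = mu0 / (1.0 + theta * vov)             # vertical-field mobility reduction
    velocity_sat = 1.0 + vds_eff / (esat * l)      # lateral-field velocity saturation
    return (w / l) * mu_eff * cox * (vov * vds_eff - 0.5 * vds_eff ** 2) / velocity_sat

for vgs in (2.0, 3.0, 4.0):
    print(f"Vgs = {vgs} V, Id = {drain_current(vgs, 5.0) * 1e3:.2f} mA")
```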

Keywords: LDMOS, MATLAB, RESURF, modeling, TCAD

Procedia PDF Downloads 186
20284 Fabrication of LiNbO₃ Based Conspicuous Nanomaterials for Renewable Energy Devices

Authors: Riffat Kalsoom, Qurat-Ul-Ain Javed

Abstract:

The optical and dielectric properties of lithium niobate have made it a fascinating material for the optical industry, used in devices such as Q-switches and optical switches. The synthesis of lithium niobate was carried out by a solvothermal process, with and without temperature fluctuation, at 200 °C for 4 h, and the behavior of the properties for different durations was also examined. The prepared LiNbO₃ samples were examined for crystallographic phases using an XRD diffractometer, for morphology by scanning electron microscopy (SEM), for absorption by UV-visible spectroscopy, and for dielectric properties by an impedance analyzer. A structural change from a trigonal to a spherical shape was observed by changing the reaction time. The crystallite size decreases with temperature fluctuation and increasing reaction time. The band gap decreases, whereas the dielectric constant and dielectric loss increase with increasing reaction time. The trend of the AC conductivity is explained by Jonscher’s power law. Owing to these significant properties, the material finds applications in devices such as cells, Q-switching and optical switching for laser and gigahertz frequencies, respectively, and these applications depend on industrial demands.
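
Jonscher's universal power law describes the frequency dependence of AC conductivity as σ_ac(ω) = σ_dc + Aω^s with 0 < s < 1. The sketch below shows how such a trend can be fitted to measured conductivity data with SciPy; the data points are invented placeholders, not the reported measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def jonscher(omega, sigma_dc, a, s):
    """Jonscher's universal power law: sigma_ac = sigma_dc + A * omega**s."""
    return sigma_dc + a * np.power(omega, s)

# Placeholder "measured" AC conductivity vs. angular frequency
omega = np.logspace(3, 7, 30)                       # rad/s
sigma = 1e-6 + 2e-10 * omega**0.75                  # synthetic data
sigma *= 1.0 + 0.05 * np.random.default_rng(1).standard_normal(omega.size)

popt, _ = curve_fit(jonscher, omega, sigma, p0=(1e-6, 1e-10, 0.7))
sigma_dc, a, s = popt
print(f"sigma_dc = {sigma_dc:.2e} S/cm, A = {a:.2e}, exponent s = {s:.2f}")
```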

Keywords: lithium niobates, renewable energy devices, controlled structure, temperature fluctuations

Procedia PDF Downloads 122
20283 Assessment of the Quality of a Mixture of Vegetable Oils from Kazakhstan Origin

Authors: Almas Mukhametov, Dina Dautkanova, Moldir Yerbulekova, Gulim Tuyakova, Raziya Zhakudaeva, Makpal Seisenaly, Asemay Kazhymurat

Abstract:

The composition of samples of mixtures of vegetable oils of Kazakhstan origin, consisting of sunflower, safflower and linseed oils, has been experimentally substantiated. A blend ratio of 80:15:05 gives an approximately optimal ω-6:ω-3 fatty acid ratio in the triacylglycerols, providing the mixture with its therapeutic and prophylactic properties. The resulting mixture can be used in the development of functional products. The mixture was also evaluated by physical and chemical quality indicators, the content of vitamin E, and the concentrations of ions of copper (Cu), iron (Fe), cadmium (Cd), lead (Pb), arsenic (As), nickel (Ni) and mercury (Hg).

Keywords: vegetable oil, sunflower, safflower, linseed, mixture, fatty acid composition, heavy metals

Procedia PDF Downloads 173
20282 Pion/Muon Identification in a Nuclear Emulsion Cloud Chamber Using Neural Networks

Authors: Kais Manai

Abstract:

The main part of this work focuses on the study of pion/muon separation at low energy using a nuclear Emulsion Cloud Chamber (ECC) made of lead and nuclear emulsion films. The work consists of two parts: a particle reconstruction algorithm and a neural network that assigns to each reconstructed particle the probability of being a muon or a pion. The pion/muon separation algorithm was optimized using a detailed Monte Carlo simulation of the ECC and tested on real data. The algorithm achieves a 60% muon identification efficiency with a pion misidentification rate smaller than 3%.
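
A classifier of this kind can be prototyped by training a small multilayer perceptron on track-level features (for instance track length in the ECC, mean scattering angle, and an energy-loss-related observable) labelled from Monte Carlo. The sketch below uses scikit-learn on synthetic features as a stand-in; the feature names and data are assumptions, not the authors' reconstruction output.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 4000
# Synthetic track features: [track length, mean scattering angle, dE/dx proxy]
muons = np.column_stack([rng.normal(40, 8, n), rng.normal(5, 2, n), rng.normal(1.0, 0.1, n)])
pions = np.column_stack([rng.normal(25, 10, n), rng.normal(9, 3, n), rng.normal(1.2, 0.2, n)])
X = np.vstack([muons, pions])
y = np.concatenate([np.ones(n), np.zeros(n)])      # 1 = muon, 0 = pion

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)

p_muon = clf.predict_proba(X_te)[:, 1]             # per-track muon probability
efficiency = np.mean(p_muon[y_te == 1] > 0.5)
misid = np.mean(p_muon[y_te == 0] > 0.5)
print(f"muon efficiency: {efficiency:.2f}, pion misidentification: {misid:.2f}")
```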

Keywords: nuclear emulsion, particle identification, tracking, neural network

Procedia PDF Downloads 491
20281 Advanced Technology for Natural Gas Liquids (NGL) Recovery Using Residue Gas Split

Authors: Riddhiman Sherlekar, Umang Paladia, Rachit Desai, Yash Patel

Abstract:

The competitive scenario of the oil and gas market challenges today’s plant designers to achieve designs that meet client expectations with shrinking budgets, safety requirements, and operating flexibility. Natural Gas Liquids have three main industrial uses: they can be used as fuels, as petrochemical feedstock, or as refinery blends that can be further processed and sold as straight-run cuts, such as naphtha, kerosene and gas oil. NGL extraction is not a chemical reaction; it involves the separation of heavier hydrocarbons from the main gas stream through pressure and temperature reduction, which, depending upon the degree of NGL extraction, may involve a cryogenic process. Previous technologies, i.e. short-cycle dry desiccant adsorption, Joule-Thomson or low-temperature refrigeration, and lean oil absorption, have given ethane recoveries of only 40 to 45%, which is unsatisfactory in the current market downturn. A new technology is suggested here for boosting recoveries up to 95% for ethane+ and up to 99% for propane+ components. Cryogenic plants provide reboiling to demethanizers by using part of the inlet feed gas, or inlet feed split. If the two stream temperatures are not similar, there is lost work in the mixing operation unless the designer has access to some proprietary design. The concept introduced in this process consists of reboiling the demethanizer with the residue gas, or residue gas split. The innovation of this process is that it does not use the typical inlet gas feed split type of flow arrangement to reboil the demethanizer or deethanizer column, but instead uses an open heat pump scheme to that effect. The residue gas compressor provides the heat pump effect. The heat pump stream is then further cooled and enters the top section of the column as a cold reflux. Because of the nature of this design, the process offers the opportunity to operate at full ethane rejection or recovery. The scheme is also very adaptable to revamping existing facilities. This advancement not only enhances the results but also provides operational flexibility, optimizes heat exchange, reduces equipment cost, and opens the way for innovative designs while keeping execution costs low.

Keywords: deethanizer, demethanizer, residue gas, NGL

Procedia PDF Downloads 259
20280 Inbreeding Study Using Runs of Homozygosity in Nelore Beef Cattle

Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari

Abstract:

The best linear unbiased predictor (BLUP) is a method commonly used in genetic evaluations of breeding programs. However, this approach can lead to higher inbreeding coefficients in the population due to the intensive use of few bulls with higher genetic potential, usually presenting some degree of relatedness. High levels of inbreeding are associated with low genetic viability, fertility, and performance for some economically important traits, and should therefore be constantly monitored. Unreliable pedigree data can also lead to misleading results. Genomic information (i.e., single nucleotide polymorphisms – SNPs) is a useful tool to estimate the inbreeding coefficient. Runs of homozygosity have been used to evaluate homozygous segments inherited due to direct or collateral inbreeding and allow inferring the population’s selection history. This study aimed to evaluate runs of homozygosity (ROH) and inbreeding in a population of Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip, and quality control was carried out by excluding SNPs located in non-autosomal regions or with unknown position, SNPs with a Hardy-Weinberg equilibrium p-value lower than 10⁻⁵ or a call rate lower than 0.98, and samples with a call rate lower than 0.90. After quality control, 809 animals and 509,107 SNPs remained for analyses. For the ROH analysis, PLINK software was used, considering segments with at least 50 SNPs and a minimum length of 1 Mb in each animal. The inbreeding coefficient was calculated as the ratio between the sum of all ROH lengths and the size of the whole genome (2,548,724 kb). A total of 25,711 ROH were observed, with mean, median, minimum, and maximum lengths of 3.34 Mb, 2 Mb, 1 Mb, and 80.8 Mb, respectively. The number of SNPs present in ROH segments varied from 50 to 14,954. The largest total extent of ROH was observed in one animal, totalling 634 Mb (24.88% of the genome). Four bulls were among the 10 animals with the largest extent of ROH, presenting 11% of ROH with lengths greater than 10 Mb. Segments longer than 10 Mb indicate recent inbreeding; therefore, the results indicate an intensive use of few sires in the studied data. The distribution of ROH along the chromosomes showed that chromosomes 5 and 6 presented a large number of segments compared to the other chromosomes. The mean, median, minimum, and maximum inbreeding coefficients were 5.84%, 5.40%, 0.00%, and 24.88%, respectively. Although the mean inbreeding was considered low, the ROH indicate a recent and intensive use of few sires, which should be avoided for the genetic progress of the breed.
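
The genomic inbreeding coefficient described above (often written F_ROH) is simply the fraction of the autosomal genome covered by runs of homozygosity. The sketch below computes it from a per-animal list of ROH segment lengths, using the same 2,548,724 kb genome length quoted in the abstract; the segment values themselves are invented for illustration.

```python
GENOME_LENGTH_KB = 2_548_724  # autosomal genome length used in the study (kb)

def f_roh(roh_lengths_kb, genome_length_kb=GENOME_LENGTH_KB):
    """Genomic inbreeding coefficient: total ROH length / genome length."""
    return sum(roh_lengths_kb) / genome_length_kb

# Hypothetical ROH segments (kb) detected for one animal, e.g. from a PLINK
# run-of-homozygosity analysis with >= 50 SNPs and >= 1 Mb per segment.
animal_roh_kb = [1_250, 3_400, 2_100, 15_700, 8_900]
print(f"F_ROH = {f_roh(animal_roh_kb):.4%}")
```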

Keywords: autozygosity, Bos taurus indicus, genomic information, single nucleotide polymorphism

Procedia PDF Downloads 142
20279 Advanced Materials Based on Ethylene-Propylene-Diene Terpolymers and Organically Modified Montmorillonite

Authors: M. D. Stelescu, E. Manaila, G. Pelin, M. Georgescu, M. Sonmez

Abstract:

This paper presents studies on the development and characterization of nanocomposites based on ethylene-propylene-diene terpolymer rubber (EPDM), chlorobutyl rubber (IIR-Cl) and organically modified montmorillonite (OMMT). Mixtures containing 0, 3 and 6 phr (parts per 100 parts of rubber) of OMMT, respectively, were obtained by melt intercalation in an internal mixer (a Plasti-Corder Brabender) with suitable blending parameters, at high temperature, for 11 minutes. Curing agents were embedded on a laboratory roller at 70-100 °C, with a friction ratio of 1:1.1 and a processing time of 5 minutes. Rubber specimens were obtained by compression, using a hydraulic press at 165 °C and a pressing force of 300 kN. The curing time, determined using a Monsanto rheometer, decreases with increasing amounts of OMMT in the mixtures. At the same time, it was noticed that the mixtures containing OMMT show improvements in physical-mechanical properties. These types of nanocomposites may be used to obtain rubber seals for space applications or for other areas of application.

Keywords: chlorobutyl rubber, ethylene-propylene-diene terpolymers, montmorillonite, rubber seals, space application

Procedia PDF Downloads 166
20278 Reductions of Control Flow Graphs

Authors: Robert Gold

Abstract:

Control flow graphs are a well-known representation of the sequential control flow structure of programs, with a multitude of applications. Not only single functions but also sets of functions or complete programs can be modelled by control flow graphs. In this case the size of the graphs can grow considerably, making it difficult for software engineers to analyse the control flow. Graph reductions are helpful in this situation. In this paper we define reductions to subsets of nodes. Since executions of programs are represented by paths through the control flow graphs, paths should be preserved. Furthermore, the composition of reductions makes a stepwise analysis approach possible.
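
One natural way to realize such a reduction is to keep only the nodes of interest and connect two kept nodes whenever the original graph contains a path between them that passes only through removed nodes; paths between kept nodes are then preserved by construction. The sketch below is a minimal Python illustration of that idea on an adjacency-list graph; it is an assumed formulation, not necessarily the exact definition used in the paper.

```python
from collections import deque

def reduce_cfg(cfg, keep):
    """Reduce a control flow graph to the node subset `keep`.

    cfg : dict mapping each node to a list of successor nodes.
    Two kept nodes u, v are connected in the reduced graph iff the original
    graph has a path u -> ... -> v whose intermediate nodes are all removed.
    """
    keep = set(keep)
    reduced = {u: set() for u in keep}
    for u in keep:
        # BFS from u through removed nodes only; stop at the first kept node
        queue, seen = deque(cfg.get(u, [])), set()
        while queue:
            w = queue.popleft()
            if w in seen:
                continue
            seen.add(w)
            if w in keep:
                reduced[u].add(w)
            else:
                queue.extend(cfg.get(w, []))
    return {u: sorted(vs) for u, vs in reduced.items()}

# Example: nodes b and d are abstracted away; the path a -> b -> c survives
# as the reduced edge a -> c, and c -> d -> e as c -> e.
cfg = {"a": ["b"], "b": ["c"], "c": ["d"], "d": ["e"], "e": []}
print(reduce_cfg(cfg, keep={"a", "c", "e"}))
```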

Keywords: control flow graph, graph reduction, software engineering, software applications

Procedia PDF Downloads 538
20277 The Influence of Modernity and Globalization upon Language: The Korean Language between Confucianism and Americanization

Authors: Raluca-Ioana Antonescu

Abstract:

The field of research of this paper stands at the intersection between linguistics and sociology, and the problem addressed is the importance of language in the modernization process and in a globalized society. The research objective is to show that language is a stimulant for modernity, while it also defines the tradition and culture of a specific society. In order to examine the linguistic changes the Korean language underwent due to modernity and globalization, the paper tries to answer one main question, What changes did the Korean language undergo from a traditional version of Korean towards one influenced by modernity?, and two secondary questions, How does the specialized literature explore the relations between globalization (and modernity) and culture (focusing on language)? and What influences the Korean language? For the purpose of answering the research questions, the paper starts from the main premise that, due to modernity and globalization, the Korean language changed its discourse construction, and from two secondary hypotheses: first, that the literature does not explore much the relations between culture and modernity with a focus on language discourse construction, dealing more with identity issues and commodification problems; and second, that the Korean language is influenced by traditional values (such as Confucianism) while also receiving influence from the globalization process (especially from the English language). In terms of methodology, the paper analyzes the two main influences upon the Korean language, referring to traditionalism (defined as the influence of Confucianism) and modernism (the influence of other countries’ languages and cultures), and how the Korean language was constructed and modified by these two elements. The paper analyzes at what levels (grammatical, lexical, etc.) traditionalism helped construct the Korean language, and what changes modernism brought at each level. As for the results of this research, the influence of modernism changed the Korean language both lexically and grammatically. In 60 years the increase in English influence has been astonishing, and this paper shows the main changes the Korean language underwent, such as loanwords (Konglish), but also the reduction of speech levels and the easing of register variation use. The grammatical influence of modernity and globalization can thus be seen in the reduction of speech levels and register variation, while the lexical change comes especially with the influence of the English language, with about 10% of the Korean vocabulary considered to be loanwords. The paper also presents the interrelation between traditionalism and modernity, with the example of Konglish, but not only (one can also consider the Korean greetings that Koreans translate when they speak other languages, bringing their cultural characteristics into English discourse construction), which makes Koreans global, since they speak an international language, yet still local, since they cannot completely shed their culture.

Keywords: Confucianism, globalization, language and linguistic change, modernism, traditionalism

Procedia PDF Downloads 189
20276 Fear of Isolation, Online Efficacy, and Selective Exposure in Online Political Discourse

Authors: Kyujin Shim

Abstract:

This study explores how individual motivations in political psychology will lead to political expression and online discourse, and how those online political discourses result in individuals’ exposure to extreme/ personally-entertaining/ disinhibiting content. This study argues that a new framework beyond the conventional paradigm (e.g., selective exposure based on partisanship/ ideology) is needed for better grasp of non-ideological/ anarchic, and/or of nonpartisan yet anonymity-/ extremity-/ disinhibition-related online behaviors regarding political conversations. Further, this study proposes a new definition of ‘selective exposure,’ with special attention to online efficacy and psychological motivations/gratifications sought in the online sphere.

Keywords: selective exposure, fear of isolation, political psychology, online discourse

Procedia PDF Downloads 422
20275 Response of a Bridge Crane during an Earthquake

Authors: F. Fekak, A. Gravouil, M. Brun, B. Depale

Abstract:

During an earthquake, a bridge crane may be subjected to multiple impacts between crane wheels and rail. In order to model such phenomena, a time-history dynamic analysis with a multi-scale approach is performed. The high frequency aspect of the impacts between wheels and rails is taken into account by a Lagrange explicit event-capturing algorithm based on a velocity-impulse formulation to resolve contacts and impacts. An implicit temporal scheme is used for the rest of the structure. The numerical coupling between the implicit and the explicit schemes is achieved with a heterogeneous asynchronous time-integrator.

Keywords: bridge crane, earthquake, dynamic analysis, explicit, implicit, impact

Procedia PDF Downloads 292
20274 The Big Bang Was Not the Beginning, but a Repeating Pattern of Expansion and Contraction of the Spacetime

Authors: Amrit Ladhani

Abstract:

The cyclic universe theory is a model of cosmic evolution according to which the universe undergoes endless cycles of expansion and cooling, each beginning with a “big bang” and ending in a “big crunch”. In this paper, we propose a unique property of space-time. This particular and marvelous nature of space shows us that space can stretch, expand, and shrink; this property arises because the size of the universe changes over time, growing or shrinking. The observed accelerated expansion, which in the new theory corresponds to the stretching of shrunk space, is derived. The theory is based on three underlying notions. First, the Big Bang is not the beginning of space-time; rather, in the very first fraction of a second there was an infinite force of infinitely shrunk space in the cosmic singularity, and that force gave rise to the Big Bang, caused the rapid growth of space, transformed all other forms of energy into new matter and radiation, and began a new period of expansion and cooling. Second, there was a previous phase leading up to it, with multiple cycles of contraction and expansion that repeat indefinitely. Third, the two principal long-range forces are the gravitational force and the repulsive force generated by shrunk space; they are the two most fundamental quantities in the universe that govern cosmic evolution and may provide the clockwork mechanism that operates our eternal cyclic universe. The universe will not continue to expand forever; there is, however, no need for dark energy and dark matter. This new model of space-time and its unique properties enables us to describe a sequence of events from the Big Bang to the Big Crunch.

Keywords: dark matter, dark energy, cosmology, big bang and big crunch

Procedia PDF Downloads 67
20273 Implementation of a Method of Crater Detection Using Principal Component Analysis in FPGA

Authors: Izuru Nomura, Tatsuya Takino, Yuji Kageyama, Shin Nagata, Hiroyuki Kamata

Abstract:

We propose a method of crater detection from images of the lunar surface captured by a small space probe. We use principal component analysis (PCA) to detect craters. Nevertheless, considering the severe environment of space, it is impossible to use a generic computer in practice. Accordingly, we have to implement the method in an FPGA. This paper compares an FPGA and a generic computer in terms of the processing time of the crater detection method using principal component analysis.
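
A PCA-based detector of this kind typically learns an eigenvector basis from example crater patches and then scores candidate patches by how well they are represented in that subspace (a “strength value” in the keywords’ terminology). The NumPy sketch below illustrates the idea on synthetic patches; the patch size, number of components and scoring rule are assumptions, not the implemented FPGA design.

```python
import numpy as np

def train_pca_basis(patches, n_components=8):
    """Learn a mean patch and the top eigenvectors from flattened training patches."""
    X = patches.reshape(len(patches), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]           # rows of vt are principal directions

def crater_score(patch, mean, basis):
    """Reconstruction-based score: small residual => patch resembles a crater."""
    x = patch.ravel().astype(float) - mean
    coeffs = basis @ x                        # projection onto the crater subspace
    residual = x - basis.T @ coeffs
    return -np.linalg.norm(residual)          # higher score = more crater-like

rng = np.random.default_rng(0)
train = rng.random((50, 16, 16))              # stand-in crater patches (16x16)
mean, basis = train_pca_basis(train)
candidate = rng.random((16, 16))
print("score:", round(crater_score(candidate, mean, basis), 3))
```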

Keywords: crater, PCA, eigenvector, strength value, FPGA, processing time

Procedia PDF Downloads 541
20272 What Happens When We Try to Bridge the Science-Practice Gap? An Example from the Brazilian Native Vegetation Protection Law

Authors: Alice Brites, Gerd Sparovek, Jean Paul Metzger, Ricardo Rodrigues

Abstract:

The segregation between science and policy in the decision-making process hinders nature conservation efforts worldwide. Scientists have been criticized for not producing information that leads to effective solutions for environmental problems. In an attempt to bridge this gap between science and practice, we conducted a project aimed at supporting the implementation of the Brazilian Native Vegetation Protection Law (NVPL) in São Paulo State (SP), Brazil. To do so, we held multiple open meetings with the stakeholders involved in this discussion. Throughout this process, we collected stakeholders' demands for scientific information and brought feedback about our findings. However, our main scientific advice was not taken into account during the NVPL implementation in SP. The NVPL has a mechanism that exempts landholders who converted native vegetation without offending the legislation in place at the time of the conversion from restoration requirements. We found that there were no accurate spatialized data on native vegetation cover before the 1960s; thus, the initial benchmark for applying the mechanism should be the 1965 Brazilian Forest Act. Even so, SP kept the 1934 Brazilian Forest Act as the initial legal benchmark for the law's application. This decision implies the use of a probabilistic native vegetation map, with uncertainty and subjectivity as intrinsic characteristics, and its use can therefore lead to legal queries, corruption, and unfair application of the benefit. But why was this decision made even after the scientific advice was widely divulgated? We raise some possible explanations. First, the decision was made during a government transition, showing that circumstantial political events can overshadow scientific arguments. Second, the debate about the NVPL in SP was not settled, and powerful stakeholders could benefit from the confusion created by this decision. Finally, the native vegetation protection mechanism is a complex issue, with many technical aspects that can be hard to understand for a non-specialized courtroom, such as the one that made the final decision in SP. This example shows that scientists and decision-makers still have a long way to go to improve the way they interact, and that science needs to find its way to be heard above the political buzz.

Keywords: Brazil, forest act, science-based dialogue, science-policy interface

Procedia PDF Downloads 113
20271 VeriFy: A Solution to Implement Autonomy Safely and According to the Rules

Authors: Michael Naderhirn, Marco Pavone

Abstract:

Problem statement, motivation, and aim of work: So far, control algorithms have been developed by control engineers in such a way that the controller fits a specification through testing. When it comes to the certification of an autonomous car in highly complex scenarios, the challenge is much greater, since such a controller must mathematically guarantee that it implements the rules of the road while, on the other side, guaranteeing aspects like safety and real-time executability. What if it became possible to solve this demanding problem by combining formal verification and system theory? The aim of this work is to present a workflow to solve the above-mentioned problem. Summary of the presented results / main outcomes: We show the use of an English-like language to transform the rules of the road into a system specification for an autonomous car. The language-based specifications are used to define system functions and interfaces. Based on that, a formal model is developed which correctly formalizes the specifications. On the other side, a mathematical model describing the system dynamics is used to calculate the system's reachability set, which is further used to determine the system input boundaries. A motion planning algorithm is then applied inside the system boundaries to find an optimized trajectory, in combination with the formal specification model, while satisfying the specifications. The result is a control strategy which can be applied in real time, independent of the scenario, with a mathematical guarantee of satisfying a predefined specification. We demonstrate the applicability of the method in simulated driving scenarios and a potential certification. Originality, significance, and benefit: To the authors' best knowledge, it is the first time that it is possible to show an automated workflow which combines a specification in an English-like language and a mathematical model, in a mathematically formally verified way, to synthesize a controller for potential real-time applications like autonomous driving.
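
For a discrete-time linear model x_{k+1} = A x_k + B u_k with bounded inputs, a reachable set of the kind mentioned above can be computed by propagating a zonotope (centre plus generator matrix) step by step. The sketch below is a minimal NumPy illustration under assumed dynamics and bounds; it is not the verification toolchain described in the work.

```python
import numpy as np

def reachable_zonotope(a, b, x0, u_bound, steps):
    """Propagate a zonotope <centre, generators> through x' = A x + B u, |u| <= u_bound."""
    centre = np.asarray(x0, dtype=float)
    generators = np.zeros((len(x0), 0))              # start from a single point
    u_gen = b * u_bound                               # input box expressed as generators
    for _ in range(steps):
        centre = a @ centre
        generators = np.hstack([a @ generators, u_gen])
    return centre, generators

def interval_hull(centre, generators):
    """Axis-aligned bounding box of the zonotope (per-state interval)."""
    radius = np.abs(generators).sum(axis=1)
    return centre - radius, centre + radius

# Assumed double-integrator dynamics discretized with dt = 0.1 s
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
c, G = reachable_zonotope(A, B, x0=[0.0, 0.0], u_bound=2.0, steps=20)
lo, hi = interval_hull(c, G)
print("position range after 2 s:", (round(lo[0], 3), round(hi[0], 3)))
```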

Keywords: formal system verification, reachability, real time controller, hybrid system

Procedia PDF Downloads 233
20270 Human Par14 and Par17 Isomerases Bind Hepatitis B Virus Components Inside and Out

Authors: Umar Saeed

Abstract:

Peptidyl-prolyl cis/trans isomerases Par14 and Par17 in humans play crucial roles in diverse cellular processes, including protein folding, chromatin remodeling, DNA binding, ribosome biogenesis, and cell cycle progression. However, the effects of Par14 and Par17 on viral replication have been explored to a limited extent. We discovered for the first time their influential roles in promoting Hepatitis B virus replication. In this study, we observed that in the presence of HBx, either Par14 or Par17 could upregulate HBV replication, whereas in the absence of HBx, neither Par14 nor Par17 had any effect on replication. Their mechanism of action involves binding to specific motifs within the HBc and HBx proteins: notably, they target the conserved 133Arg-Pro134 (RP) motif of HBc and the 19RP20-28RP29 motifs of HBx. This interaction is fundamental for the stability of HBx, core particles, and HBc. Par14 and Par17 exhibit versatility by binding both outside and inside core particles, thereby facilitating core particle assembly through their participation in HBc dimer-dimer interactions. NAGE and immunoblotting analyses unveiled the binding of Par14/Par17 to core particles. Co-immunoprecipitation experiments further demonstrated the interaction of Par14/Par17 with the core particle assembly-defective and dimer-positive HBc-Y132A. It is essential to emphasize that R133 is the key residue in the HBc RP motif that governs the interaction with Par14/Par17. Chromatin immunoprecipitation conducted on HBV-infected cells elucidated the participation of residues S19 and E46/D74 in Par14 and S44 and E71/D99 in Par17 in the recruitment of 133RP134 motif-containing HBc into cccDNA. Depleting PIN4 in liver cell lines results in a significant reduction in cccDNA levels, pgRNA, sgRNAs, HBc, core particle assembly, and HBV DNA synthesis. Notably, parvulin inhibitors such as juglone and PiB have proven effective in substantially reducing HBV replication. These inhibitors weaken the interaction between HBV core particles and Par14/Par17, underscoring the dynamic nature of this interaction. It is also worth noting that specific Par14/Par17 inhibitors hold promise as potential therapeutic options for chronic hepatitis B.

Keywords: Par14/Par17, HBx, HBc, cccDNA, HBV

Procedia PDF Downloads 55
20269 Environmentally Adaptive Acoustic Echo Suppression for Barge-in Speech Recognition

Authors: Jong Han Joo, Jung Hoon Lee, Young Sun Kim, Jae Young Kang, Seung Ho Choi

Abstract:

In this study, we propose a novel technique for acoustic echo suppression (AES) during speech recognition under barge-in conditions. Conventional AES methods based on spectral subtraction apply fixed weights to the estimated echo path transfer function (EPTF) at the current signal segment and to the EPTF estimated until the previous time interval. We propose a new approach that adaptively updates weight parameters in response to abrupt changes in the acoustic environment due to background noises or double-talk. Furthermore, we devised a voice activity detector and an initial time-delay estimator for barge-in speech recognition in communication networks. The initial time delay is estimated using log-spectral distance measure, as well as cross-correlation coefficients. The experimental results show that the developed techniques can be successfully applied in barge-in speech recognition systems.
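
The initial time delay between the far-end reference and the microphone signal can be estimated from the lag that maximizes their cross-correlation. The NumPy sketch below illustrates that step on a synthetic signal; it is a simplified stand-in for the estimator described above, not the authors' implementation (which also uses a log-spectral distance measure).

```python
import numpy as np

def estimate_delay(reference, microphone, fs):
    """Return the delay (in samples and ms) that maximizes the cross-correlation."""
    corr = np.correlate(microphone, reference, mode="full")
    lag = np.argmax(corr) - (len(reference) - 1)     # positive lag: mic lags reference
    return lag, 1000.0 * lag / fs

fs = 8000
rng = np.random.default_rng(0)
reference = rng.standard_normal(fs)                  # 1 s of far-end speech stand-in
true_delay = 120                                     # samples (15 ms echo path delay)
echo = 0.6 * np.concatenate([np.zeros(true_delay), reference])[:fs]
microphone = echo + 0.05 * rng.standard_normal(fs)   # echo plus near-end noise

lag, ms = estimate_delay(reference, microphone, fs)
print(f"estimated delay: {lag} samples ({ms:.1f} ms)")
```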

Keywords: acoustic echo suppression, barge-in, speech recognition, echo path transfer function, initial delay estimator, voice activity detector

Procedia PDF Downloads 362