Search results for: downstream processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4160

350 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering

Authors: Hamza Benzerrouk, Alexander Nebylov

Abstract:

In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector. This design is suboptimal from the standpoint of preventing GNSS outliers/outages. The tightly coupled GPS/INS navigation filter mixes the GNSS pseudo-range and inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs in the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF, and HCKF). EKF and UKF are the most used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, it is proposed to move a step forward with more accurate filters and modern approaches called Cubature and High Degree Cubature Kalman Filtering methods. Building on previous results for state estimation based on INS/GNSS integration, the Cubature Kalman Filter (CKF) and the High Degree Cubature Kalman Filter (HCKF) are the references for the recently developed generalized cubature rule based Kalman Filter (GCKF). High degree cubature rules are the kernel of the new solution, giving more accurate estimation with less computational complexity compared with the Gauss-Hermite Quadrature Kalman Filter (GHQKF), which is not selected in this work because of its limited suitability for real-time implementation in high-dimensional state spaces. In the ultra-tightly (deeply) coupled GNSS/INS system, an EKF with transition matrix factorization is used together with GNSS block processing, which is well described in the paper; the intermediate frequency (IF) is assumed available, with correlator samples at a rate of 500 Hz in the presented approach. GNSS (GPS+GLONASS) measurements are assumed available, and the modern SPKF and Cubature Kalman Filter (CKF) are compared with new versions of the CKF, called high-order CKF, based on spherical-radial cubature rules developed at the fifth degree in this work. The estimation accuracy of the high degree CKF is expected to be comparable to that of the GHKF; results of state estimation are then observed and discussed for different initialization parameters. Results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach based on the High Degree Cubature Kalman Filter is applied.
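
As context for the filters compared above, the following is a minimal sketch of the third-degree spherical-radial cubature rule that underlies the CKF (the paper extends this to fifth-degree rules); the dynamics function, noise covariance, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cubature_points(m, P):
    """Third-degree spherical-radial cubature points for mean m and covariance P."""
    n = m.size
    S = np.linalg.cholesky(P)                              # matrix square root of P
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # 2n axis points, weight 1/(2n)
    return m[:, None] + S @ xi                             # shape (n, 2n)

def ckf_predict(m, P, f, Q):
    """One CKF time update: propagate the cubature points through the dynamics f."""
    X = cubature_points(m, P)
    Y = np.column_stack([f(X[:, j]) for j in range(X.shape[1])])
    m_pred = Y.mean(axis=1)                                # equal weights 1/(2n)
    d = Y - m_pred[:, None]
    return m_pred, d @ d.T / Y.shape[1] + Q

# Toy constant-velocity example (illustrative only)
dt = 1.0
f = lambda x: np.array([x[0] + dt * x[1], x[1]])
m_new, P_new = ckf_predict(np.array([0.0, 1.0]), np.eye(2), f, 0.01 * np.eye(2))
```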

Keywords: GNSS, INS, Kalman filtering, ultra tight integration

Procedia PDF Downloads 284
349 Estimation of Rock Strength from Diamond Drilling

Authors: Hing Hao Chan, Thomas Richard, Masood Mostofi

Abstract:

The mining industry relies on an estimate of rock strength at several stages of a mine life cycle: mining (excavating, blasting, tunnelling) and processing (crushing and grinding), both very energy-intensive activities. An effective comminution design that can yield significant dividends often requires a reliable estimate of the material's rock strength. Common laboratory tests, such as the rod mill, ball mill, and uniaxial compressive strength tests, share shortcomings in terms of time, sample preparation, bias in plug selection, cost, repeatability, and the sample amount needed to ensure reliable estimates. In this paper, the authors present a methodology to derive an estimate of the rock strength from drilling data recorded while coring with a diamond core head. The work presented in this paper builds on a phenomenological model of the bit-rock interface proposed by Franca et al. (2015) and is inspired by the now well-established use of the scratch test with a PDC (polycrystalline diamond compact) cutter to derive the rock uniaxial compressive strength. The first part of the paper introduces the phenomenological model of the bit-rock interface for a diamond core head, which relates the forces acting on the drill bit (torque, axial thrust) to the bit kinematic variables (rate of penetration and angular velocity), and introduces the intrinsic specific energy, i.e., the energy required to drill a unit volume of rock with an ideally sharp drilling tool (meaning ideally sharp diamonds and no contact between the bit matrix and rock debris), which is found to correlate well with the rock uniaxial compressive strength for PDC and roller cone bits. The second part describes the laboratory drill rig, the experimental procedure, tailored to minimize the effect of diamond polishing over the duration of the experiments, and the step-by-step methodology to derive the intrinsic specific energy from the recorded data. The third section presents the results and shows that the intrinsic specific energy correlates well with the uniaxial compressive strength for the 11 tested rock materials (7 sedimentary and 4 igneous rocks). The last section discusses best drilling practices and a method to estimate the rock strength from field drilling data, considering the compliance of the drill string and frictional losses along the borehole. The approach is illustrated with a case study from drilling data recorded while drilling an exploration well in Australia.
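
To make the notion of drilling specific energy concrete, here is a minimal sketch based on the classic Teale (1965) definition, which combines a thrust term and a rotary term; it is not Franca et al.'s bit-rock interface model, and all numbers are placeholders.

```python
import numpy as np

def specific_energy(thrust_N, torque_Nm, rpm, rop_m_per_h, bit_area_m2):
    """Drilling specific energy (Teale, 1965), in J/m^3: thrust term + rotary term."""
    omega_rev_s = rpm / 60.0                       # rotational speed in rev/s
    rop_m_s = rop_m_per_h / 3600.0                 # rate of penetration in m/s
    thrust_term = thrust_N / bit_area_m2
    rotary_term = 2.0 * np.pi * omega_rev_s * torque_Nm / (bit_area_m2 * rop_m_s)
    return thrust_term + rotary_term

# Placeholder coring parameters: 5 kN thrust, 40 N.m torque, 600 rpm, 2 m/h ROP
area = np.pi * (0.05**2 - 0.03**2)                 # annular core-bit face area (m^2)
print(f"{specific_energy(5e3, 40.0, 600, 2.0, area):.3e} J/m^3")
```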

Keywords: bit-rock interaction, drilling experiment, impregnated diamond drilling, uniaxial compressive strength

Procedia PDF Downloads 138
348 Expanding the Atelier: Design-Led Academic Project Using Immersive User-Generated Mobile Images and Augmented Reality

Authors: David Sinfield, Thomas Cochrane, Marcos Steagall

Abstract:

While there is much hype around the potential and development of mobile virtual reality (VR), the two key critical success factors are the ease of the user experience and the development of a simple user-generated content ecosystem. Educational technology history is littered with the debris of over-hyped revolutionary new technologies that failed to gain mainstream adoption or were quickly superseded. Examples include 3D television, interactive CD-ROMs, Second Life, and Google Glass. However, we argue that this is the result of curriculum design that substitutes new technologies into pre-existing pedagogical strategies focused upon teacher-delivered content, rather than exploring new pedagogical strategies that enable student-determined learning, or heutagogy. Visual communication design-based learning, such as graphic design, illustration, photography, and the design process, is heavily based on the traditional classroom environment, whereby student interaction takes place both at peer level and through teacher-based feedback. This makes for a healthy creative learning environment, but it does raise other issues in terms of student-to-teacher learning ratios and reduced contact time. Such issues arise when students are away from the classroom and cannot interact with their peers and teachers, and thus we see a decline in the creative work of students. Using AR and VR as a means of stimulating students to think beyond the limitations of the studio-based classroom, this paper discusses the outcomes of a student project considering the virtual classroom and the techniques involved. The atelier learning environment is especially suited to the visual communication model, as it deals with the creative processing of ideas that need to be shared in a collaborative manner. This has proven to be a successful model over the years in the traditional form of design education, but it has more recently seen a shift in thinking as we move into a more digital model of learning and away from the classical classroom structure. This study focuses on the outcomes of a student design project that employed Augmented Reality and Virtual Reality technologies in order to expand the dimensions of the classroom beyond its physical limits. Augmented Reality, when integrated into the learning experience, can improve the learning motivation and engagement of students. This paper outlines the processes used and the findings from the semester-long project that took place.

Keywords: augmented reality, blogging, design in community, enhanced learning and teaching, graphic design, new technologies, virtual reality, visual communications

Procedia PDF Downloads 240
347 Monitoring Potential Temblor Localities as a Supplemental Risk Control System

Authors: Mikhail Zimin, Svetlana Zimina, Maxim Zimin

Abstract:

Without question, the basic method of preventing human and material losses is the provision of adequate strength in constructions. At the same time, seismic load has a stochastic character, so there is always some danger of earthquake forces exceeding the selected design load. This risk is very low, but the consequences of such events may be extremely serious. Occasional mistakes in seismic zoning, soil conditions changing before temblors, and failure to take into account hazardous natural phenomena caused by earthquakes are also very dangerous. Besides, it is known that temblors detrimentally affect the environmental situation in regions where they occur, resulting in panic and worsening various disease courses. This may lead to mistakes by personnel of hazardous production facilities, such as gas and oil production and distribution, which may provoke severe accidents. In addition, gas and oil pipelines often have long mileage and, in contrast with buildings, cross many perilous zones. This situation increases the risk of heavy accidents. In such cases, complex monitoring of potential earthquake localities would be relevant. Even though the number of successful real-time earthquake forecasts is not great, it is well in excess of what would be expected under random guessing. The experimental time-lapse study and analysis consisted of searching for seismic, biological, meteorological, and light earthquake precursors; processing these data with the help of fuzzy sets; collecting weather information; utilizing a terrain database; and computing the risk of slope processes under a temblor in a given setting. The work was done in a real-time environment, and broadly acceptable results were obtained. Observations from already in-place seismic recording systems are used. Furthermore, a look-back study of precursors of known earthquakes is done. Situations before the Ashkhabad, Tashkent, and Haicheng seismic events are analyzed, and fair findings are obtained. Results of earthquake forecasts can be used for predicting dangerous natural phenomena caused by temblors, such as avalanches and mudslides. They may also be utilized for the prophylaxis of some diseases and their complications. Relevant software has been worked out too. It should be emphasized that such control does not require serious financial expenses and can be performed by a small group of professionals. Thus, complex monitoring of potential earthquake localities, including short-term earthquake forecasts and analysis of possible hazardous consequences of temblors, may further the safety of pipeline facilities.
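
As an illustration of how heterogeneous precursor observations might be combined with fuzzy sets, here is a minimal sketch; the membership function, precursor names, scores, and aggregation rule are all hypothetical and not the authors' actual system.

```python
import numpy as np

def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: rises from a to peak b, falls back to zero at c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical normalized precursor anomaly scores (0 = background, 1 = extreme)
precursors = {"seismicity": 0.7, "animal_behavior": 0.5, "atmospheric": 0.3}

# Degree of membership in the fuzzy set "anomalous" for each precursor
memberships = {k: tri_membership(v, 0.2, 0.8, 1.0) for k, v in precursors.items()}

# A cautious aggregate alarm level: fuzzy OR (max) tempered by the mean
alarm = 0.5 * max(memberships.values()) + 0.5 * np.mean(list(memberships.values()))
print(memberships, round(float(alarm), 2))
```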

Keywords: risk, earthquake, monitoring, forecast, precursor

Procedia PDF Downloads 24
346 A Collaborative Approach to Improving Mental and Physical Health-Related Outcomes for a Heart Transplant Patient Through Music and Art Therapy Treatment

Authors: Elizabeth Laguaite, Alexandria Purdy

Abstract:

Heart transplant recipients face psycho-physiological stressors, including pain, lengthy hospitalizations, delirium, and existential crises. These pose an increased risk for Post Traumatic Stress Disorder (PTSD), which can be a predictor of poorer mental and physical Health-Related Quality of Life (HRQOL) outcomes and increased mortality. There is limited research on the prevention of Post Traumatic Stress Symptoms (PTSS) in transplant patients. This case report focuses on a collaborative music and art therapy intervention used to improve outcomes for the HMH transplant recipient John (alias). John, a 58-year-old man with congestive heart failure, was admitted to HMH in February of 2021 with cardiogenic shock, cannulated with an intra-aortic balloon pump, Impella 5.5, and venoarterial extracorporeal membrane oxygenation (VA-ECMO) as a bridge to heart and kidney transplant. He was listed as status 1 for transplant. Music therapy and art therapy (MT and AT) were ordered by the physician for mood regulation, trauma processing, and anxiety management. During MT/AT sessions, John reported a history of anxiety and depression exacerbated by medical acuity, shortness of breath, and lengthy hospitalizations. He expressed difficulty sleeping, pain, and existential questions. He was initially seen individually by MT and AT; due to similar thematic content within sessions, it was determined he could benefit from a collaborative approach. A life review intervention was developed by MT/AT. The purpose was for him to creatively express, reflect on, and process his medical narrative, including the identification of positive and negative events leading up to admission at HMH, the journey to transplant, and his hope for the future. Through this intervention, he created artworks that symbolized each event and paired them with songs, two of which were composed with the MT during treatment. As of September 2023, John has not been readmitted to the hospital and expressed that this treatment is what "got him through transplant". MT and AT can provide opportunities for a patient to reminisce through creative expression, leading to a shift in the personal meaning of these experiences, promoting resolution, and ameliorating associated trauma. The closer to the trauma that it is processed, the less likely PTSD is to develop. This collaborative MT/AT approach could improve long-term outcomes by reducing mortality and readmission rates for transplant patients.

Keywords: art therapy, music therapy, critical care, PTSD, trauma, transplant

Procedia PDF Downloads 81
345 Overcoming Obstacles in UHT High-Protein Whey Beverages by Microparticulation Process: Scientific and Technological Aspects

Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh, Seyed Jalal Razavi Zahedkolaei

Abstract:

Herein, a shelf-stable (no refrigeration required), UHT-processed, aseptically packaged whey protein drink was formulated by using a new strategy in the microparticulation process. By applying thermal and two-dimensional mechanical treatments simultaneously, a modified protein (MWPC-80) was produced. The physical, thermal, and thermodynamic properties of MWPC-80 were then assessed using particle size analysis, dynamic temperature sweep (DTS), and differential scanning calorimetry (DSC) tests. Finally, using MWPC-80, a new RTD beverage was formulated, and its shelf stability was assessed for three months at ambient temperature (25 °C). A non-isothermal dynamic temperature sweep was performed, and the results were analyzed by a combination of the classic rate equation, the Arrhenius equation, and the time-temperature relationship. Generally, results showed that the temperature dependency of the modified sample was significantly (P < 0.05) less than that of the control containing WPC-80. The changes in the elastic modulus of the MWPC did not show any critical point at any of the processing stages, whereas the control sample showed two critical points, during the heating (82.5 °C) and cooling (71.10 °C) stages. The thermal properties of the samples (WPC-80 and MWPC-80) were assessed using DSC at a heating rate of 4 °C/min over a 20-90 °C range. The MWPC-80 DSC curve did not show any thermal peak, which suggests high thermal resistance. On the other hand, the WPC-80 sample showed a significant thermal peak with thermodynamic properties of ΔG = 942.52 kJ/mol, ΔH = 857.04 kJ/mol, and ΔS = -1.22 kJ/(mol·K). Dynamic light scattering was performed, and the results showed average particle sizes of 0.7 µm and 15 nm for the MWPC-80 and WPC-80 samples, respectively. Moreover, the particle size distributions of MWPC-80 and WPC-80 were Gaussian-Lorentzian and normal, respectively. After verification of the microparticulation process by the DTS, PSD, and DSC analyses, a 10% whey protein beverage (10% w/w MWPC-80, 0.6% w/w vanilla flavoring agent, 0.1% masking flavor, 0.05% stevia natural sweetener, and 0.25% citrate buffer) was formulated, and UHT treatment was performed at 137 °C for 4 s. The shelf life study did not show any gelation or precipitation of the MWPC-80-containing beverage during three months of storage at ambient temperature, whereas the WPC-80-containing beverage showed significant precipitation and gelation after thermal processing, even at 3% w/w concentration. Consumer awareness of the nutritional advantages of whey protein has increased demand for this protein in different food systems, especially RTD beverages, and these results could make a substantial difference in this industry.
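
For readers unfamiliar with how critical points are read off a dynamic temperature sweep, the sketch below locates the temperature of fastest change in the elastic modulus on synthetic data; the curve shape and values are hypothetical, not the measured WPC-80 data.

```python
import numpy as np

# Hypothetical dynamic temperature sweep: elastic modulus G' (Pa) vs temperature (deg C)
T = np.linspace(20, 90, 200)
Gp = 500 + 300 / (1 + np.exp(-(T - 82.5)))   # synthetic denaturation step near 82.5 C

dGp = np.gradient(Gp, T)                     # numerical derivative dG'/dT
crit = T[np.argmax(dGp)]                     # temperature of fastest structural change
print(f"Critical transition near {crit:.1f} deg C")
```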

Keywords: high protein whey beverage, microparticulation, two-dimensional mechanical treatments, thermodynamic properties

Procedia PDF Downloads 75
344 Olive Stone Valorization to Its Application on the Ceramic Industry

Authors: M. Martín-Morales, D. Eliche-Quesada, L. Pérez-Villarejo, M. Zamorano

Abstract:

Olive oil is a product of particular importance within the Mediterranean and Spanish agricultural food system, and more specifically in Andalusia, the world's main production area. Olive oil processing generates olive stones, which are dried and cleaned to remove pulp and fines in order to produce a biofuel characterized by high energy efficiency in combustion processes. The fine fraction of olive stones is much less appreciated as a biofuel, so it is important to study alternative solutions for its valorization. Some researchers have studied recycling different wastes to produce ceramic bricks. The main objective of this study is to investigate the effects of olive stone addition on the properties of fired clay bricks for building construction. Olive stones were substituted by volume (7.5%, 15%, and 25%) for the brick raw material in three different sizes (below 1 mm, below 2 mm, and between 1 and 2 mm). In order to obtain comparable results, a series without olive stones was also prepared. The prepared mixtures were compacted by laboratory-type extrusion under a pressure of 2.5 MPa into rectangular shapes (30 mm x 60 mm x 10 mm). Industrial drying and firing conditions were applied to obtain laboratory brick samples. The mass loss after sintering, bulk density, porosity, water absorption, and compressive strength of the fired samples were investigated and compared with a sample manufactured without biomass. The results show that olive stone addition decreased the mechanical properties due to the increase in water absorption, although the values tested satisfied the requirements of EN 772-1 on methods of test for masonry units (Part 1: Determination of compressive strength). Finally, important advantages related to the properties of the bricks, as well as environmental benefits, could be obtained by using the studied biomass to produce ceramic bricks. Increasing the percentage of incorporated olive stones decreased the bulk density and thus increased the porosity of the bricks. On the one hand, this lower density means a weight reduction for bricks to be transported and handled, as well as a lightening of the building; on the other hand, biomass in clay contributes to autothermal combustion, which involves lower fuel consumption during the firing step. Consequently, the production of porous clay bricks using olive stones could reduce atmospheric emissions and improve their life cycle assessment, producing eco-friendly clay bricks.

Keywords: clay bricks, olive stones, sustainability, valorization

Procedia PDF Downloads 153
343 Envy and Schadenfreude Domains in a Model of Neurodegeneration

Authors: Hernando Santamaría-García, Sandra Báez, Pablo Reyes, José Santamaría-García, Diana Matallana, Adolfo García, Agustín Ibañez

Abstract:

The study of moral emotions (i.e., Schadenfreude and envy) is critical to understand the ecological complexity of everyday interactions between cognitive, affective, and social cognition processes. Most previous studies in this area have used correlational imaging techniques and framed Schadenfreude and envy as monolithic domains. Here, we profit from a relevant neurodegeneration model to disentangle the brain regions engaged in three dimensions of Schadenfreude and envy: deservingness, morality, and legality. We tested 20 patients with behavioral variant frontotemporal dementia (bvFTD), 24 patients with Alzheimer’s disease (AD), as a contrastive neurodegeneration model, and 20 healthy controls on a novel task highlighting each of these dimensions in scenarios eliciting Schadenfreude and envy. Compared with the AD and control groups, bvFTD patients obtained significantly higher scores on all dimensions for both emotions. Interestingly, the legal dimension for both envy and Schadenfreude elicited higher emotional scores than the deservingness and moral dimensions. Furthermore, correlational analyses in bvFTD showed that higher envy and Schadenfreude scores were associated with greater deficits in social cognition, inhibitory control, and behavior. Brain anatomy findings (restricted to bvFTD and controls) confirmed differences in how these groups process each dimension. Schadenfreude was associated with the ventral striatum in all subjects. Also, in bvFTD patients, increased Schadenfreude across dimensions was negatively correlated with regions supporting social-value rewards, mentalizing, and social cognition (frontal pole, temporal pole, angular gyrus and precuneus). In all subjects, all dimensions of envy positively correlated with the volume of the anterior cingulate cortex, a region involved in processing unfair social comparisons. By contrast, in bvFTD patients, the intensified experience of envy across all dimensions was negatively correlated with a set of areas subserving social cognition, including the prefrontal cortex, the parahippocampus, and the amygdala. Together, the present results provide the first lesion-based evidence for the multidimensional nature of the emotional experiences of envy and Schadenfreude. Moreover, this is the first demonstration of a selective exacerbation of envy and Schadenfreude in bvFTD patients, probably triggered by atrophy to social cognition networks. Our results offer new insights into the mechanisms subserving complex emotions and moral cognition in neurodegeneration, paving the way for groundbreaking research on their interaction with other cognitive, social, and emotional processes.

Keywords: social cognition, moral emotions, neuroimaging, frontotemporal dementia

Procedia PDF Downloads 293
342 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods

Authors: Mohammad Arabi

Abstract:

The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.
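
As a concrete illustration of the feature pipeline described above, here is a minimal sketch that extracts a few temporal features (RMS, kurtosis, crest factor) and frequency-band energies from vibration windows and scores an SVM; the sampling rate, band edges, and random placeholder data are assumptions.

```python
import numpy as np
from scipy.fft import rfft, rfftfreq
from scipy.stats import kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 12_000  # assumed sampling rate (Hz)

def features(sig):
    """Temporal (RMS, kurtosis, crest factor) and frequency (band energy) features."""
    rms = np.sqrt(np.mean(sig ** 2))
    crest = np.max(np.abs(sig)) / rms
    spec = np.abs(rfft(sig)) ** 2
    freqs = rfftfreq(sig.size, 1 / FS)
    bands = [spec[(freqs >= lo) & (freqs < hi)].sum()
             for lo, hi in [(0, 500), (500, 2000), (2000, 6000)]]
    return np.array([rms, kurtosis(sig), crest, *bands])

# X_raw: (n_segments, n_samples) vibration windows; y: 0 = healthy, 1 = faulty
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(60, 2048)); y = rng.integers(0, 2, 60)  # placeholder data
X = np.array([features(s) for s in X_raw])
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```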

Keywords: electric motor, fault detection, frequency features, temporal features

Procedia PDF Downloads 52
341 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification

Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi

Abstract:

Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of the original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection has focused on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches prefer to use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithms use feature clustering as an analysis step that comes before feature ranking: after dividing the feature set into groups, these approaches perform feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step that comes before the search, bound and combined with the search, or as an alternative and replacement for the search. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set thanks to a class-correlation measure. Then, introducing a new automatic feature clustering algorithm, the feature set is divided into several feature clusters. Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features are eliminated at each step. The proposed algorithm uses feature clustering bound and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex, and USPtex) demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
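
Below is a minimal sketch of the three-step scheme described above, with stand-in choices: an ANOVA F-score as the class-correlation/separability measure and k-means as the feature clustering step (the paper's own automatic clustering algorithm and filter measure differ).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import f_classif

def cluster_sequential_select(X, y, n_clusters=10, n_keep=5, p_thresh=0.05):
    """Sketch of the three steps: relevance filter, feature clustering, per-cluster pick."""
    # 1) remove irrelevant features with a class-correlation (here: ANOVA F) measure
    F, p = f_classif(X, y)
    relevant = np.flatnonzero(p < p_thresh)
    # 2) cluster the surviving features (transpose: features become samples)
    k = min(n_clusters, relevant.size)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X[:, relevant].T)
    # 3) sequentially keep the best feature of each cluster and drop its cluster-mates
    order = np.argsort(-F[relevant])          # positions within `relevant`, best first
    chosen, used = [], set()
    for pos in order:
        if labels[pos] not in used:
            chosen.append(int(relevant[pos])); used.add(labels[pos])
        if len(chosen) == n_keep:
            break
    return chosen

# Demo on synthetic data where the first eight features carry the class signal
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40)); y = (X[:, :8].mean(axis=1) > 0).astype(int)
print(cluster_sequential_select(X, y, n_clusters=6, n_keep=4))
```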

Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic co-occurrence matrix

Procedia PDF Downloads 138
340 Shark Detection and Classification with Deep Learning

Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti

Abstract:

Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application sharkPulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged the object-detection and image classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a function of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector's accuracy, as well as facilitate the archiving of historical and novel shark observations. The base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species. All data-generation methods were processed without manual interaction. As media-based remote monitoring strives to dominate methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. The prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
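
Since the Shark Detector is described as transfer learning with CNNs, here is a minimal sketch of that general pattern in Keras; the ResNet50 backbone, layer sizes, and training call are assumptions, not the project's published architecture.

```python
import tensorflow as tf

# Transfer-learning classifier sketch; NUM_SPECIES stands in for the 45 classes reported
NUM_SPECIES = 45
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                      # freeze pretrained convolutional features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_SPECIES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # image datasets assumed
```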

Keywords: classification, data mining, Instagram, remote monitoring, sharks

Procedia PDF Downloads 122
339 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia

Authors: Jun Won Kim

Abstract:

Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing the resting-state TGC between the two groups and to evaluate its diagnostic utility. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naïve (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for the spectral analyses: delta (1-4 Hz), theta (4-8 Hz), slow alpha (8-10 Hz), fast alpha (10-13.5 Hz), beta (13.5-30 Hz), and gamma (30-80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the signal processing toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the ability of the TGC data to discriminate schizophrenia diagnoses. Results: The patients with schizophrenia showed a significant increase in the resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which contains information about neuronal interactions in the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to that in the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
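
The abstract does not spell out how TGC is computed; one common approach is the Tort et al. modulation index, sketched below for a single channel (the sampling rate, filter design, bin count, and the synthetic coupled signal are assumptions, not the study's pipeline).

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250  # assumed EEG sampling rate (Hz)

def bandpass(x, lo, hi, fs=FS, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def tgc_modulation_index(eeg, n_bins=18):
    """Theta-phase gamma-amplitude coupling via a Tort-style modulation index."""
    theta_phase = np.angle(hilbert(bandpass(eeg, 4, 8)))
    gamma_amp = np.abs(hilbert(bandpass(eeg, 30, 80)))
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    # mean gamma amplitude per theta-phase bin, normalized to a distribution
    amp = np.array([gamma_amp[(theta_phase >= bins[i]) & (theta_phase < bins[i + 1])].mean()
                    for i in range(n_bins)])
    p = amp / amp.sum()
    # Kullback-Leibler distance from the uniform distribution, scaled to [0, 1]
    return (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)

# Synthetic test: gamma amplitude modulated by theta phase should yield MI > 0
rng = np.random.default_rng(0)
t = np.arange(0, 20, 1 / FS)
theta = np.sin(2 * np.pi * 6 * t)
eeg = theta + (1 + 0.5 * theta) * 0.3 * np.sin(2 * np.pi * 45 * t) + 0.1 * rng.normal(size=t.size)
print(tgc_modulation_index(eeg))
```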

Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility

Procedia PDF Downloads 144
338 Development of Perovskite Quantum Dots Light Emitting Diode by Dual-Source Evaporation

Authors: Antoine Dumont, Weiji Hong, Zheng-Hong Lu

Abstract:

Light emitting diodes (LEDs) are steadily becoming the new standard for luminescent display devices because of their energy efficiency, relatively low cost, and the purity of the light they emit. Our research focuses on the optical properties of the lead halide perovskite CsPbBr₃ and its family, which are showing steadily improving performance in LEDs and solar cells. The objective of this work is to investigate CsPbBr₃ as an emitting layer made by physical vapor deposition, instead of the usual solution-processed perovskites, for use in LEDs. Deposition in vacuum eliminates any risk of contaminants as well as the need for chemical ligands in the synthesis of quantum dots. Initial results show the versatility of the dual-source evaporation method, which allowed us to create different phases in bulk form by altering the mole ratio or deposition rate of CsBr and PbBr₂. The distinct phases Cs₄PbBr₆, CsPbBr₃, and CsPb₂Br₅, confirmed through XPS (X-ray photoelectron spectroscopy) and X-ray diffraction analysis, have different optical properties and morphologies that can be used for specific applications in optoelectronics. We are particularly focused on the blue shift expected from quantum dots (QDs) and the stability of the perovskite in this form. We have already obtained proof of the formation of QDs through our dual-source evaporation method with electron microscope imaging and photoluminescence testing, which we understand is a first in the community. We also incorporated the QDs in an LED structure to test the electroluminescence and the effect on performance, and have already observed a significant wavelength shift. The goal is to reach 480 nm, shifting from the original 528 nm bulk emission. The hole transport layer (HTL) material onto which the CsPbBr₃ is evaporated is a critical part of this study, as the surface energy interaction dictates the behaviour of the QD growth. A thorough study to determine the optimal HTL is in progress. A strong blue shift for a typically green-emitting material like CsPbBr₃ would eliminate the need for blue-emitting Cl-based perovskite compounds and could prove to be more stable in a QD structure. The final aim is to make a perovskite QD LED with strong blue luminescence, fabricated through a dual-source evaporation technique that could be scalable to industry level, making this device a viable and cost-effective alternative to current commercial LEDs.

Keywords: material physics, perovskite, light emitting diode, quantum dots, high vacuum deposition, thin film processing

Procedia PDF Downloads 162
337 Production of Recombinant Human Serum Albumin in Escherichia coli: A Crucial Biomolecule for Biotechnological and Healthcare Applications

Authors: Ashima Sharma, Tapan K. Chaudhuri

Abstract:

Human serum albumin (HSA) is one of the most in-demand therapeutic proteins, with immense biotechnological applications. The current source of HSA is human blood plasma. Blood is a limited and unsafe source, as it carries the risk of contamination by various blood-derived pathogens. This issue has led to the exploitation of various hosts with the aim of obtaining an alternative source for the production of rHSA. But, until now, no host has proven commercially effective for rHSA production because of their respective limitations. Thus, there is an indispensable need to promote non-animal-derived rHSA production. Of all the host systems, Escherichia coli is one of the most convenient, having contributed to the production of more than 30% of the FDA-approved recombinant pharmaceuticals. E. coli grows rapidly, and its culture reaches high cell density using inexpensive and simple substrates. The fermentation batch turnaround number for E. coli culture is 300 per year, which is far greater than that of any other available host system. Therefore, E. coli-derived recombinant products have more economic potential, as fermentation processes are cheaper compared to the other available expression hosts. Despite all the mentioned advantages, E. coli had not been successfully adopted as a host for rHSA production. The major bottleneck in exploiting E. coli as a host for rHSA production was aggregation, i.e., the majority of the expressed recombinant protein formed inclusion bodies (more than 90% of the total expressed rHSA) in the E. coli cytosol. Recovery of functional rHSA from inclusion bodies is not preferred because it is tedious, time-consuming, laborious, and expensive. Because of this limitation, the E. coli host system was neglected for rHSA production for the last few decades. Considering the advantages of E. coli as a host, the present work has targeted it as an alternative host for rHSA production by resolving the major issue of inclusion body formation associated with it. In the present study, we have developed a novel and innovative method for enhanced soluble and functional production of rHSA in E. coli (~60% of the total expressed rHSA in the soluble fraction) through modulation of the cellular growth, folding, and environmental parameters, thereby leading to significantly improved and enhanced expression levels, as well as a greater functional and soluble proportion of the total expressed rHSA in the cytosolic fraction of the host. Therefore, in the present case, we have filled a gap in the literature by exploiting the well-studied, low-cost, fast-growing, scalable, and yet neglected host system Escherichia coli for the enhanced functional production of HSA, one of the most crucial biomolecules for clinical and biotechnological applications.

Keywords: enhanced functional production of rHSA in E. coli, recombinant human serum albumin, recombinant protein expression, recombinant protein processing

Procedia PDF Downloads 347
336 Comparing Deep Architectures for Selecting Optimal Machine Translation

Authors: Despoina Mouratidis, Katia Lida Kermanidis

Abstract:

Machine translation (MT) is a very important task in natural language processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular ones in automatic MT evaluation are score-based, such as the BLEU score, and others are based on lexical or syntactic similarity between the MT output and the reference, involving higher-level information like part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. This framework uses vector representations of two machine-produced translations, one from a statistical machine translation model (SMT) and one from a neural machine translation model (NMT). The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested with this scheme: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than some well-known basic approaches, such as Random Forest (RF) and Support Vector Machine (SVM). Better accuracy results are obtained when LSTM layers are used in our scheme, while more balanced results across the classes are obtained when dense layers are used, because the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis was carried out. In this context, problems were identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to find out why all the classifiers led to worse accuracy results for Italian compared to Greek, taking into account that the linguistic features employed are language independent.
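
Below is a minimal sketch of a pairwise-classification network of the kind described, with a shared LSTM encoder over embedding sequences for the two MT outputs and the reference; the embedding size, sequence length, and layer widths are assumptions, not the paper's configuration.

```python
import tensorflow as tf

EMB_DIM, SEQ_LEN = 300, 50  # assumed embedding size and sentence length

def encoder():
    """Shared LSTM encoder for one translation (or reference) embedding sequence."""
    inp = tf.keras.Input(shape=(SEQ_LEN, EMB_DIM))
    out = tf.keras.layers.LSTM(64)(inp)
    return tf.keras.Model(inp, out)

enc = encoder()
smt = tf.keras.Input(shape=(SEQ_LEN, EMB_DIM), name="smt")
nmt = tf.keras.Input(shape=(SEQ_LEN, EMB_DIM), name="nmt")
ref = tf.keras.Input(shape=(SEQ_LEN, EMB_DIM), name="reference")

# model similarity of each MT output to the reference and to each other
merged = tf.keras.layers.Concatenate()([enc(smt), enc(nmt), enc(ref)])
hidden = tf.keras.layers.Dense(64, activation="relu")(merged)
label = tf.keras.layers.Dense(1, activation="sigmoid")(hidden)  # 1 = NMT preferred

model = tf.keras.Model([smt, nmt, ref], label)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```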

Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification

Procedia PDF Downloads 133
335 Assessment of the Landscaped Biodiversity in the National Park of Tlemcen (Algeria) Using Per-Object Analysis of Landsat Imagery

Authors: Bencherif Kada

Abstract:

In forest management practice, the landscape and the Mediterranean forest are rarely posed as linked objects. But sustainable forestry requires the valorization of the forest landscape, and this aim involves assessing the spatial distribution of biodiversity by mapping forest landscape units and subunits and by monitoring environmental trends. This contribution aims to highlight, through object-oriented classifications, the landscape biodiversity of the National Park of Tlemcen (Algeria). The methodology used is based on ground data and on the basic processing units of object-oriented classification, namely segments, so-called image objects, representing relatively homogeneous units on the ground. The classification of Landsat Enhanced Thematic Mapper Plus (ETM+) imagery is performed on image objects, not on pixels. Advantages of object-oriented classification include the full use of meaningful statistics and texture calculations, of uncorrelated shape information (e.g., length-to-width ratio, direction, and area of an object), and of topological features (neighbor, super-object, etc.), as well as the close relation between real-world objects and image objects. The results show that per-object classification using the k-nearest neighbors method is more efficient than the per-pixel one. It simplifies the content of the image while preserving spectrally and spatially homogeneous types of land cover, such as Aleppo pine stands; cork oak groves; mixed groves of cork oak, holm oak, and zen oak; mixed groves of holm oak and thuja; water bodies; dense and open shrublands of oaks; vegetable crops or orchards; herbaceous plants; and bare soils. Texture attributes seem to provide no useful information, while the spatial attributes of shape and compactness perform well for all the dominant features, such as pure stands of Aleppo pine and/or cork oak and bare soils. Landscape subunits are individualized while conserving the spatial information. Stands that are continuously dominant and dense over a large area were grouped into a single class, as were dense, fragmented stands with clear stands. Low shrubland formations and high wooded shrublands are well individualized, though with some confusion with enclaves for the former. Overall, a visual evaluation shows that the classification reflects the actual spatial state of the study area at the landscape level.
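
To illustrate per-object classification with the k-nearest neighbors method, here is a minimal sketch operating on a hypothetical table of segment attributes (band means plus shape features); the attribute set, class count, and random placeholder data are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical table of image objects (segments) from ETM+ imagery:
# spectral means per band plus shape attributes (compactness, length/width ratio)
rng = np.random.default_rng(1)
n_objects = 300
X = np.column_stack([
    rng.normal(size=(n_objects, 6)),     # mean reflectance of 6 ETM+ bands per segment
    rng.uniform(0, 1, (n_objects, 2)),   # compactness, length-to-width ratio
])
y = rng.integers(0, 5, n_objects)        # land-cover class of each segment (placeholder)

knn = KNeighborsClassifier(n_neighbors=5)
print(cross_val_score(knn, X, y, cv=5).mean())  # per-object accuracy estimate
```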

Keywords: forest, oaks, remote sensing, diversity, shrublands

Procedia PDF Downloads 128
334 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in generative artificial intelligence (GenAI), in particular large language models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably the so-called "hallucinations", the generation of outputs that are not grounded in the input data, which hinders their adoption into production. Common practice to mitigate the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground LLM responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts by vector similarity between the user's query and the documents, then generates a response that is based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG system is not suitable for tabular data and subsequent data analysis tasks for multiple reasons, such as information loss, data format, and the retrieval mechanism. In this study, we have explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When deployed as a beta version on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results, as it was able to provide market insights and data visualizations with high accuracy and extensive coverage, abstracting the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding and enhancement without the need for programming skills. The implications extend beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
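
A minimal sketch of the plan, generate code, execute, respond loop described above; the llm() helper is a hypothetical stand-in for any chat-completion client, and a production system would sandbox the exec step.

```python
# Planning-and-execution with a code-generation step (sketch, not DataSense's code)
import io, contextlib
import pandas as pd

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def analyze(question: str, df: pd.DataFrame) -> str:
    # 1) planning: decompose the analytical question into ordered sub-tasks
    plan = llm(f"Break this analysis into numbered steps:\n{question}\n"
               f"Columns available: {list(df.columns)}")
    # 2) code generation: turn the plan into executable pandas code
    code = llm(f"Write Python using a DataFrame named df to execute:\n{plan}\n"
               "Print intermediate results.")
    # 3) execution: run the generated code against the real data
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {"df": df, "pd": pd})          # sandboxing omitted for brevity
    # 4) final response grounded in the executed output, not model memory
    return llm(f"Question: {question}\nExecution output:\n{buf.getvalue()}\n"
               "Answer concisely using only the output above.")
```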

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru

Procedia PDF Downloads 88
333 Impact of Microwave and Air Velocity on Drying Kinetics and Rehydration of Potato Slices

Authors: Caiyun Liu, A. Hernandez-Manas, N. Grimi, E. Vorobiev

Abstract:

Drying is one of the most used methods for food preservation; it extends the shelf life of food and makes its transportation, storage, and packaging easier and more economical. The most common method is hot air drying. However, its disadvantages are low energy efficiency and long drying times. Because of the high temperature during hot air drying, undesirable changes in pigments, vitamins, and flavoring agents occur, which degrade the quality parameters of the product. The drying process can also cause shrinkage, case hardening, dark color, browning, loss of nutrients, and more. Recently, new processes were developed in order to avoid these problems. For example, the application of a pulsed electric field provokes cell membrane permeabilization, which increases the drying kinetics and the moisture diffusion coefficient. Microwave drying technology also has several advantages over conventional hot air drying, such as higher drying rates and thermal efficiency, shorter drying time, and significantly improved product quality and nutritional value. The rehydration kinetics of a dried product is a very important characteristic. Current research has indicated that the rehydration ratio and the coefficient of rehydration depend on the processing conditions of drying. The present study compares the efficiency of two processes (1: room-temperature air drying; 2: microwave/air drying) in terms of drying rate, product quality, and rehydration ratio. In this work, potato slices (≈2.2 g) with a thickness of 2 mm and a diameter of 33 mm were placed in the microwave chamber and dried. The drying kinetics and drying rates of the different methods were determined. The process parameters studied included inlet air velocity (1 m/s, 1.5 m/s, 2 m/s) and microwave power (50 W, 100 W, 200 W, and 250 W). The evolution of temperature during microwave drying was measured. The microwave power had a strong effect on the drying rate, and microwave/air drying resulted in a 93% decrease in drying time when the air velocity was 2 m/s and the microwave power was 250 W. Based on the Lewis model, drying rate constants (kDR) were determined. An increase from kDR = 0.0002 s⁻¹ to kDR = 0.0032 s⁻¹ was observed for an air velocity of 2 m/s and microwave/air drying (at 2 m/s and 250 W), respectively. The effective moisture diffusivity was calculated by using Fick's law. The results show an increase in effective moisture diffusivity from 7.52×10⁻¹¹ to 2.64×10⁻⁹ m²/s for an air velocity of 2 m/s and microwave/air drying (at 2 m/s and 250 W), respectively. The temperature of the potato slices increased with higher microwave power but decreased with higher air velocity. The rehydration ratio, defined as the weight of the sample after rehydration divided by the weight of the dried sample, was determined at different water temperatures (25 °C, 50 °C, 75 °C). The rehydration ratio increased with the water temperature and reached its maximum at the following conditions: 200 W microwave power, 2 m/s air velocity, and 75 °C water temperature. The present study shows the interest of microwave drying for food preservation.
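
A minimal sketch of how the Lewis drying constant and the Fick effective diffusivity follow from the same log-linear fit of the moisture ratio; the drying-curve numbers are synthetic, and the two-sided-drying half-thickness is an assumption.

```python
import numpy as np

# Hypothetical drying curve: moisture ratio MR at times t (s)
t = np.array([0., 600., 1200., 1800., 2400., 3000.])
MR = np.array([1.0, 0.55, 0.30, 0.17, 0.09, 0.05])

slope = np.polyfit(t, np.log(MR), 1)[0]   # slope of ln(MR) vs t
k_DR = -slope                             # Lewis model: MR = exp(-k t)

# First term of Fick's slab solution: MR ~ (8/pi^2) exp(-pi^2 Deff t / (4 L^2)),
# so the same slope gives Deff; L = half-thickness (1 mm for a 2 mm slice
# drying from both faces, an assumption)
L = 1e-3
Deff = -slope * 4 * L**2 / np.pi**2
print(f"k_DR = {k_DR:.2e} 1/s, Deff = {Deff:.2e} m^2/s")
```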

Keywords: drying, microwave, potato, rehydration

Procedia PDF Downloads 270
332 Urban Waste Management for Health and Well-Being in Lagos, Nigeria

Authors: Bolawole F. Ogunbodede, Mokolade Johnson, Adetunji Adejumo

Abstract:

A high population growth rate, reactive infrastructure provision, and the inability of physical planning to cope with the pace of development are responsible for the wastewater crisis in the Lagos metropolis. The septic tank is still the most prevalent wastewater holding system. Unfortunately, there is a dearth of septage treatment infrastructure. Public wastewater treatment statistics relative to the 23 million people in Lagos State are worrisome. 1.85 billion cubic meters of wastewater is generated on a daily basis, and only 5% of the 26 million population is connected to the public sewerage system. This is compounded by inadequate budgetary allocation and erratic power supply over the last two decades. This paper explores a community participatory wastewater management alternative in the Oworonshoki municipality of Lagos. The study is underpinned by decentralized wastewater management systems in built-up areas. The initiative addresses the five steps of the wastewater issue, including generation, storage, collection, processing, and disposal, through participatory decision-making in two Oworonshoki Community Development Association (CDA) areas. Drone-assisted mapping highlighted building footage. Structured interviews and focus group discussions with landlord associations in the CDA areas provided a collaborative platform for decision-making. Water stagnation in primary open drainage channels and natural retention ponds in the fringing wetlands is traceable to frequent climate-change-induced tidal influences in recent decades. A rising water table, resulting in septic tank leakage and water pollution, is reported to be responsible for the increase in waterborne infirmities documented in primary health centers. This is in addition to the unhealthy dumping of solid wastes in the drainage channels. The uncontrolled disposal system renders surface waters and underground water systems unsafe for human and recreational use, destroys biotic life, and poisons the fragile sand barrier-lagoon urban ecosystems. A clustered decentralized system was conceptualized to serve 255 households. Stakeholders agreed on a public-private partnership initiative for efficient wastewater service delivery.

Keywords: health, infrastructure, management, septage, well-being

Procedia PDF Downloads 177
331 A Study on the Shear-Induced Crystallization of Aliphatic-Aromatic Copolyester

Authors: Ramin Hosseinnezhad, Iurii Vozniak, Andrzej Galeski

Abstract:

Shear-induced crystallization, originating from the orientation of chains along the flow direction, is an inevitable part of most polymer processing technologies. It plays a dominant role in determining the final product properties and is affected by many factors, such as shear rate, cooling rate, and total strain. Investigation of the shear-induced crystallization process becomes of great importance for the preparation of nanocomposites, which require crystallization of nanofibrous sheared inclusions at higher temperatures. Thus, the effects of shear time, shear rate, and cooling conditions on the crystallization of two aliphatic-aromatic copolyesters have been investigated. This was performed using a Linkam optical shearing system (CSS450) for both Ecoflex® F Blend C1200, produced by BASF, and a synthesized copolyester of butylene terephthalate and a mixture of butylene esters (adipate, succinate, and glutarate), PBASGT, containing 60% aromatic comonomer. The crystallization kinetics of these biodegradable copolyesters was studied under two different shearing conditions. First, a sample 60 µm thick was heated to 60 °C above its melting point and subsequently subjected to different shear rates (100-800 s⁻¹) while cooling at specific rates. Second, the same type of sample was cooled down after shearing at constant temperature had finished. The intensity of transmitted depolarized light, recorded by a camera attached to the optical microscope, was used as a measure to follow the crystallization. Temperature dependencies of the conversion degree of the samples during cooling were collected and used to determine the half-temperature (Th), at which a 50% conversion degree was reached. Shearing Ecoflex films for 45 seconds at a shear rate of 100 s⁻¹ resulted in a significant increase of Th from 56 °C to 70 °C. Moreover, the temperature range for the transition of molten samples to the crystallized state decreased from 42 °C to 20 °C. A comparatively small shift of 10 °C in Th towards higher temperature was observed for PBASGT films at a shear rate of 600 s⁻¹ for 45 seconds. However, insufficient melt flow strength and non-laminar flow due to Taylor vortices were a hindrance to reaching a more elevated Th at very high shear rates (600-800 s⁻¹). The shift in Th was smaller for the samples sheared at a constant temperature and subsequently cooled down. This may be attributed to the longer time gap between the cessation of shearing and the onset of crystallization. The longer this time gap, the more possibility for crystal nuclei to re-melt at temperatures above Tm and for polymer chains to recoil and relax. It is found that the crystallization temperature, crystallization induction time, and spherulite growth of aliphatic-aromatic copolyesters are dramatically influenced by both the cooling rate and the shear imposed during the process.
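
To show how the half-temperature Th can be read off a conversion curve recorded during cooling, here is a minimal sketch on a synthetic sigmoid; the curve shape and midpoint are hypothetical, not the measured Ecoflex or PBASGT data.

```python
import numpy as np

# Hypothetical conversion-degree curve recorded while cooling: depolarized-light
# intensity converted to a crystalline fraction X(T) between 0 and 1
T = np.linspace(90, 30, 121)                  # temperature ramp down (deg C)
X = 1 / (1 + np.exp((T - 70.0) / 3.0))        # synthetic sigmoid, midpoint 70 C

# Half-temperature Th: temperature at which 50% conversion is reached
Th = np.interp(0.5, X, T)                     # X increases as T decreases along the ramp
print(f"Th = {Th:.1f} deg C")
```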

Keywords: induced crystallization, shear rate, aliphatic-aromatic copolyester, ecoflex

Procedia PDF Downloads 449
330 Modelling the Antecedents of Supply Chain Enablers in Online Groceries Using Interpretive Structural Modelling and MICMAC Analysis

Authors: Rose Antony, Vivekanand B. Khanapuri, Karuna Jain

Abstract:

Online groceries have transformed the way supply chains are managed, yet they face numerous challenges: product wastage, low margins, a long break-even period, and low market penetration, to mention a few. E-grocery chains need to overcome these challenges in order to survive the competition. The purpose of this paper is to carry out a structural analysis of the enablers in e-grocery chains by applying Interpretive Structural Modelling (ISM) and MICMAC analysis in the Indian context. The research design is descriptive-explanatory in nature. The enablers were identified from the literature and through semi-structured interviews with managers having relevant experience in e-grocery supply chains; the experts were contacted through professional and social networks using a purposive snowball sampling technique. The interviews were transcribed, and manual coding was carried out using the open and axial coding method. The key enablers were categorized into themes, and the contextual relationships between these and the performance measures were sought from industry veterans. Using ISM, a hierarchical model of the enablers is developed, and MICMAC analysis identifies their driving and dependence powers. Based on driving-dependence power, the enablers are categorized into four clusters, namely independent, autonomous, dependent, and linkage, as illustrated in the sketch below. The analysis found that information technology (IT) and manpower training act as key enablers for reducing lead time and enhancing online service quality. Many of the enablers fall into the linkage cluster, viz. frequent software updating, branding, the number of delivery boys, order processing, benchmarking, product freshness, and customized applications for different stakeholders, showing these to be critical in online food and grocery supply chains. Considering the perishable nature of the products handled, the impact of the enablers on product quality is also identified. Hence, the study serves as a tool to identify and prioritize the vital enablers in the e-grocery supply chain. The work is perhaps unique in identifying the complex relationships among supply chain enablers for fresh food in e-groceries and linking them to performance measures. It contributes to the knowledge of supply chain management in general and e-retailing in particular. The approach focuses on fresh food supply chains in the Indian context and hence should be applicable in developing economies, where such supply chains are still evolving.
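
A minimal sketch of the MICMAC classification step in Python (the reachability matrix and enabler labels are hypothetical, not the study's data): driving power is the row sum of the final reachability matrix, dependence power is the column sum, and the four clusters follow from comparing both against a midpoint threshold:

    import numpy as np

    # Hypothetical final reachability matrix for five enablers
    # (entry [i, j] = 1 if enabler i influences enabler j, incl. transitively).
    enablers = ["IT", "Training", "Branding", "OrderProcessing", "Freshness"]
    R = np.array([[1, 1, 1, 1, 1],
                  [0, 1, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 0, 1, 1]])

    driving = R.sum(axis=1)     # how many enablers each one drives
    dependence = R.sum(axis=0)  # how many enablers drive it
    mid = (len(enablers) + 1) / 2

    for name, drv, dep in zip(enablers, driving, dependence):
        if drv >= mid and dep >= mid:
            cluster = "linkage"
        elif drv >= mid:
            cluster = "independent (driver)"
        elif dep >= mid:
            cluster = "dependent"
        else:
            cluster = "autonomous"
        print(f"{name:16s} driving={drv} dependence={dep} -> {cluster}")

With this toy matrix, IT comes out as an independent driver and order processing as a linkage enabler, mirroring the pattern the study reports.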

Keywords: interpretive structural modelling (ISM), India, online grocery, retail operations, supply chain management

Procedia PDF Downloads 205
329 MigrationR: An R Package for Analyzing Bird Migration Data Based on Satellite Tracking

Authors: Xinhai Li, Huidong Tian, Yumin Guo

Abstract:

Bird migration is a fascinating natural phenomenon. In recent years, the use of GPS transmitters has generated a vast amount of data, and the Movebank platform has made these data publicly accessible. What researchers need now are data analysis tools. Although there are approximately 90 R packages dedicated to animal movement analysis, the capacity for comprehensive processing of bird migration data remains limited. Hence, we introduce a novel package called migrationR. This package enables the calculation of movement speed, direction, changes in direction, flight duration, and daily and annual movement distances. Furthermore, it can pinpoint the starting and ending dates of migration, estimate nest site locations and stopovers, and visualize movement trajectories at various time scales. migrationR distinguishes individuals through NMDS (non-metric multidimensional scaling) coordinates based on movement variables such as speed, flight duration, path tortuosity, and migration timing. A distinctive aspect of the package is the development of a hetero-occurrences species distribution model that takes into account the daily rhythm of individual birds across different landcover types. Habitat use for foraging and roosting differs significantly for many waterbirds; for example, White-naped Cranes at Poyang Lake in China typically forage in croplands and roost in shallow water areas. Both of these occurrence types are of equal importance. Optimal habitats consist of a combination of croplands and shallow waters, whereas suboptimal habitats lack both, forcing birds to fly extensively. With migrationR, we conduct species distribution modeling for foraging and roosting separately and use the movement distance between croplands and shallow water areas as an index of overall habitat suitability. This approach offers a more nuanced understanding of the habitat requirements of migratory birds and enhances our ability to analyze and interpret their movement patterns effectively. The functions of migrationR are demonstrated using our own tracking data for 78 White-naped Crane individuals from 2014 to 2023, comprising over one million valid locations in total. migrationR can be installed from its GitHub repository by executing the following command: remotes::install_github("Xinhai-Li/migrationR").
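
The package itself is written in R; purely for illustration, the following Python sketch (hypothetical GPS fixes and column names) shows the kind of per-fix computation migrationR automates: great-circle step distances, speeds, and daily movement distance:

    import numpy as np
    import pandas as pd

    # Hypothetical GPS fixes for one bird: timestamp, latitude, longitude.
    fixes = pd.DataFrame({
        "time": pd.to_datetime(["2023-04-01 06:00", "2023-04-01 12:00",
                                "2023-04-02 06:00", "2023-04-02 18:00"]),
        "lat": [28.90, 29.40, 30.10, 31.05],
        "lon": [116.30, 116.10, 115.80, 115.55],
    })

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between consecutive fixes, in kilometres.
        p1, p2 = np.radians(lat1), np.radians(lat2)
        dp, dl = p2 - p1, np.radians(lon2 - lon1)
        a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
        return 2 * 6371.0 * np.arcsin(np.sqrt(a))

    step_km = haversine_km(fixes["lat"].shift(), fixes["lon"].shift(),
                           fixes["lat"], fixes["lon"])
    speed_kmh = step_km / (fixes["time"].diff().dt.total_seconds() / 3600)
    daily_km = step_km.groupby(fixes["time"].dt.date).sum()
    print(daily_km)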

Keywords: bird migration, hetero-occurrences species distribution model, migrationR, R package, satellite telemetry

Procedia PDF Downloads 65
328 Mapping Forest Biodiversity Using Remote Sensing and Field Data in the National Park of Tlemcen (Algeria)

Authors: Bencherif Kada

Abstract:

In forest management practice, the landscape and the Mediterranean forest are rarely treated as linked objects. Yet sustainable forestry requires the valorization of the forest landscape, and this aim involves assessing the spatial distribution of biodiversity by mapping forest landscape units and subunits and by monitoring environmental trends. This contribution aims to highlight, through object-oriented classifications, the landscape biodiversity of the National Park of Tlemcen (Algeria). The methodology is based on ground data and on the basic processing units of object-oriented classification, segments, so-called image objects, representing relatively homogeneous units on the ground. The classification of Landsat Enhanced Thematic Mapper Plus (ETM+) imagery is performed on image objects, not on pixels. The advantages of object-oriented classification are that it makes full use of meaningful statistics and texture calculations, uncorrelated shape information (e.g., length-to-width ratio, direction, and area of an object), and topological features (neighbour, super-object, etc.), as well as the close relation between real-world objects and image objects. The results show that per-object classification using the k-nearest neighbours method is more efficient than per-pixel classification. It simplifies the content of the image while preserving spectrally and spatially homogeneous land-cover types such as Aleppo pine stands, cork oak groves, mixed groves of cork oak, holm oak, and zeen oak, mixed groves of holm oak and thuja, water bodies, dense and open oak shrublands, vegetable crops or orchards, herbaceous plants, and bare soils. Texture attributes seem to provide no useful information, while spatial attributes such as shape and compactness perform well for all the dominant features, such as pure stands of Aleppo pine and/or cork oak and bare soils. Landscape sub-units are individualized while conserving the spatial information: continuously dominant dense stands over a large area form a single class, as do dense and fragmented stands with clearings. Low shrubland formations and tall wooded shrublands are well individualized, though with some confusion with enclaves for the former. Overall, a visual evaluation shows that the classification reflects the actual spatial state of the study area at the landscape level.
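
A toy per-object k-nearest neighbours classification in Python/scikit-learn (the object features and class labels are invented, not the study's data): each segmented image object is described by spectral and shape attributes and assigned the majority class among its nearest training objects:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical image-object features from segmentation:
    # [mean NIR reflectance, length-to-width ratio, compactness]
    X_train = np.array([[0.42, 1.2, 0.85],   # Aleppo pine stand
                        [0.45, 1.1, 0.80],   # Aleppo pine stand
                        [0.30, 1.8, 0.60],   # cork oak grove
                        [0.28, 2.0, 0.55],   # cork oak grove
                        [0.10, 1.3, 0.90],   # bare soil
                        [0.08, 1.4, 0.88]])  # bare soil
    y_train = ["pine", "pine", "cork_oak", "cork_oak", "bare", "bare"]

    clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
    new_objects = np.array([[0.40, 1.15, 0.82], [0.09, 1.35, 0.89]])
    print(clf.predict(new_objects))  # ['pine' 'bare']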

Keywords: forest, oaks, remote sensing, biodiversity, shrublands

Procedia PDF Downloads 33
327 Correlation Between Cytokine Levels and Lung Injury in the Syrian Hamster (Mesocricetus auratus) COVID-19 Model

Authors: Gleb Fomin, Kairat Tabynov, Nurkeldy Turebekov, Dinara Turegeldiyeva, Rinat Islamov

Abstract:

The levels of major cytokines in the blood of patients with COVID-19 vary greatly depending on age, gender, duration and severity of infection, and comorbidity. Two clinically significant cytokines, IL-6 and TNF-α, increase in patients with severe COVID-19. However, in the hamster model of COVID-19, TNF-α levels are unchanged or reduced, while the expression of other cytokines reflects the profile found in patients' plasma. The aim of our study was to evaluate the relationship between cytokine levels in the blood and lungs and lung damage in Syrian hamsters (Mesocricetus auratus) infected with a SARS-CoV-2 strain. The study used outbred female and male Syrian hamsters (n=36, 4 groups) weighing 80-110 g and aged 5 months (IACUC protocol #4, 09/22/2020). Animals were infected intranasally with the hCoV-19/Kazakhstan/KazNAU-NSCEDI-481/2020 strain and euthanized at 3 d.p.i. The levels of the cytokines IL-6, TNF-α, IFN-α, and IFN-γ were determined with hamster ELISA kits (MyBioSource, USA). Lung samples were subjected to histological processing, and pathological changes in the histological preparations were assessed on a 3-point scale. The work was carried out in an ABSL-3 laboratory. Data were analyzed in GraphPad Prism 6.00 (GraphPad Software, La Jolla, California, USA). The work was supported by MES RK grant AP09259865. In the blood, the level of TNF-α increased in males (p=0.0012) and IFN-γ in both males and females (p=0.0001); on the contrary, IFN-α production decreased (p=0.0006). In lung tissues, only the TNF-α level increased (p=0.0011). Correlation analysis showed a negative relationship between blood IL-6 and lung damage in males (r = -0.71, p=0.0001) and females (r = -0.57, p=0.025); on the contrary, in males the level of IL-6 in the lungs was positively correlated with the injury score (r = 0.80, p=0.01). The level of IFN-γ in the blood (r = -0.64, p=0.035) and lungs (r = -0.72, p=0.017) in males correlated negatively with lung damage. No links were found for TNF-α and IFN-α. The study thus showed a positive association between lung injury and tissue levels of IL-6 in male hamsters. It is known that in humans, high concentrations of IL-6 in the lungs are associated with suppression of cellular immunity and, as a result, with increased severity of COVID-19. TNF-α and IFN-γ play a key role in the pathogenesis of COVID-19 in hamsters, although the mechanisms of their activity require more detailed study. IFN-α plays a lesser role in direct lung injury in the Syrian hamster model. We have shown the significance of tissue IL-6 and IFN-γ as predictors of the severity of lung damage in COVID-19 in the Syrian hamster model; changes in cytokine levels in the blood may not always reflect pathological processes in the lungs in COVID-19.
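
For illustration, a minimal Python/SciPy sketch of the correlation analysis reported above, using invented paired measurements rather than the study's data:

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical paired values: blood IL-6 (pg/mL) and histological
    # lung-injury score (3-point scale) in individual hamsters.
    il6_blood = np.array([310, 280, 250, 210, 190, 160, 140, 120])
    lung_score = np.array([0, 1, 1, 1, 2, 2, 3, 3])

    r, p = pearsonr(il6_blood, lung_score)
    print(f"r = {r:.2f}, p = {p:.4f}")  # negative r, as reported for blood IL-6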

Keywords: Syrian hamster, COVID-19, cytokines, biological model

Procedia PDF Downloads 93
326 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data

Authors: S. Jurado, E. Pazmino

Abstract:

Determination of the medial axis of a porous media sample is a non-trivial problem of interest to several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, and oil extraction. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community of researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. The algorithm then identifies the layer of void voxels next to the solid boundaries, and an iterative process removes, or 'burns', void voxels layer by layer until all the void space has been characterized. Multiple strategies were tested to optimize execution time and computer memory use, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis was determined by identifying regions where burnt layers collide; a simplified sketch of this procedure follows the abstract. The final medial axis structure was refined to avoid concave-grain effects and used to determine the pore-throat size distribution. Graphical user interface software was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software accepts HRXMT data as input, calculates porosity, medial axis, and pore-throat size distribution, and provides output in tabular and graphical formats. Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution, and porosity determination for 100³, 320³, and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data in the academic community.
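
A simplified sketch of the burn procedure in Python/SciPy, run on a small synthetic domain (a toy version for intuition, not the developed software): void voxels are peeled layer by layer, each voxel records the iteration at which it burns, and a crude medial-axis proxy is taken where burn numbers reach a local maximum, i.e., where opposing burn fronts collide:

    import numpy as np
    from scipy import ndimage

    # Synthetic binary voxel domain: True = void (pore), False = solid.
    rng = np.random.default_rng(0)
    void = ndimage.binary_dilation(rng.random((64, 64, 64)) > 0.7, iterations=2)
    print(f"porosity = {void.mean():.3f}")

    # Burn algorithm: erode one layer of void voxels per iteration and
    # record, for each voxel, the layer at which it was removed.
    burn = np.zeros(void.shape, dtype=int)
    remaining = void.copy()
    layer = 0
    while remaining.any():
        layer += 1
        core = ndimage.binary_erosion(remaining)  # voxels surviving this step
        burn[remaining & ~core] = layer           # voxels burnt in this layer
        remaining = core

    # Crude medial-axis proxy: void voxels whose burn number is a local
    # maximum, i.e. where burn fronts coming from opposite walls met.
    medial = void & (burn == ndimage.maximum_filter(burn, size=3))
    print(f"medial-axis voxels: {medial.sum()}")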

Keywords: medial axis, pore-throat distribution, porosity, porous media

Procedia PDF Downloads 116
325 Progress Toward More Resilient Infrastructures

Authors: Amir Golalipour

Abstract:

In recent years, resilience has emerged as an important topic in transportation infrastructure practice, planning, and design to address the myriad future climate stressors facing the nation. Climate change has increased the frequency of extreme weather events and causes climate and weather patterns to diverge from historic trends, culminating in circumstances where transportation infrastructure and assets operate outside the scope of their design. To design and maintain transportation infrastructure that can continue meeting objectives over its design life, these systems must be made adaptable to the changing climate by incorporating resilience wherever practically and financially feasible. This study focuses on adaptation strategies and the incorporation of resilience into infrastructure construction, maintenance, rehabilitation, and preservation processes, and includes highlights from recent FHWA activities on resilience. It describes existing resilience planning and decision-making practices related to transportation infrastructure; mechanisms to identify, analyze, and prioritize adaptation options; and the strain that future climate and extreme weather pressures place on existing transportation assets, for both single- and combined-stressor scenarios. Results of two case studies from the Transportation Engineering Approaches to Climate Resiliency (TEACR) project, focused on temperature and precipitation impacts on transportation infrastructure, will be presented. These case studies examined infrastructure performance under future temperature and precipitation projections compared with traditional climate design parameters. The research team used the adaptation decision-making assessment and the Coupled Model Intercomparison Project (CMIP) processing tool to determine which solution is best to pursue; the CMIP tool provided projected climate data for temperature and precipitation, which could then be incorporated into the design procedure to estimate performance. As a result, using the future climate scenarios would change the design. These changes were noted to bring only a slight increase in costs; however, it is acknowledged that network-wide these costs could be significant. This study also draws on what we have learned from recent storms, floods, and climate-related events to help us be better prepared to ensure our communities have a resilient transportation network. Standardized mechanisms to incorporate resilience practices are required to encourage widespread implementation, mitigate the effects of climate stressors, and ensure the continuance of transportation systems and assets in an evolving climate.

Keywords: adaptation strategies, extreme events, resilience, transportation infrastructure

Procedia PDF Downloads 9
324 Occult Haemolacria Paradigm in the Study of Tears

Authors: Yuliya Huseva

Abstract:

The aim was to investigate the contents of tears to detect latent blood. Methods: Tear samples from 72 women were studied by microscopy of tears aspirated with a capillary and stained by Nocht's method, and by a chemical method using chromogen test strips. Statistical processing was carried out in Statistica 10.0 for Windows, using Pearson's chi-square test, the Yule association coefficient, and determination of sensitivity and specificity. Results: In 30.6% (22) of tear samples, erythrocytes were revealed microscopically. A correlation between the presence of erythrocytes in the tear and the phase of the menstrual cycle was discovered: in the follicular phase of the cycle, erythrocytes were found in 59.1% (13) of women, significantly more (χ²=4.2, p=0.041) than in the luteal phase (40.9%, 9 women). The predominance of erythrocytes in tears during the first seven days of the follicular phase of the menstrual cycle testifies in favour of vicarious bleeding from the mucous membranes of extragenital organs in sync with menstruation. Of the other cellular elements in tear samples with latent haemolacria, neutrophils prevailed (45.5%, 10), while lymphocytes were less common (27.3%, 6), because neutrophil exudation is accompanied by vasodilatation of the conjunctiva and the release of erythrocytes into the conjunctival cavity. The prognostic significance of the chemical method was found to be 0.53 of that of the microscopic method: in contrast to microscopy, which detected blood in tear samples from 30.6% (22) of women, blood was detected chemically in the tears of 16.7% (12). An association between latent haemolacria and endometriosis was found (k=0.75, p≤0.05). Microscopically, in the tears of patients with endometriosis, erythrocytes were detected in 70% of cases, versus 25% of cases in healthy women without endometriosis. The proportion of women with erythrocytes in tears determined by the chemical method was 41.7% among patients with endometriosis, significantly more (χ²=6.5, p=0.011) than the 11.7% among women without endometriosis. These data can be explained by the etiopathogenesis of extragenital endometriosis, which is caused by haematogenous spread of endometrial tissue into the orbit. In endometriosis, erythrocytes are found against a background of accumulations of epithelial cells; in the tear samples of 4 women with endometriosis, glandular cuboidal epithelial cells morphologically similar to endometrial cells were found, which may indicate a generalization of the disease. Conclusions: Single erythrocytes can normally be found in tears; their number depends on the phase of the menstrual cycle, increasing in the follicular phase. Erythrocytes found in tears against a background of accumulations of epitheliocytes with glandular atypia may indicate a manifestation of extragenital endometriosis. Both methods (microscopic and chemical) are informative in revealing latent haemolacria. The microscopic method is more sensitive, reveals intact erythrocytes, and provides information about other cells; the chemical method is faster and technically simpler, determines the presence of haemoglobin and its metabolic products, and can be used as a screening tool.
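
For illustration, a short Python/SciPy sketch of the 2x2 association statistics reported above; the counts are reconstructed from the reported percentages (12 of 72 chemically positive; 41.7% of 12 women with endometriosis versus 11.7% of 60 without) and are illustrative rather than the original data table:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Reconstructed 2x2 table: rows = endometriosis yes/no,
    # columns = blood in tears detected / not detected (chemical method).
    table = np.array([[5, 7],
                      [7, 53]])

    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    a, b, c, d = table.ravel()
    yule_q = (a * d - b * c) / (a * d + b * c)  # Yule's association coefficient
    print(f"chi2 = {chi2:.1f}, p = {p:.3f}, Q = {yule_q:.2f}")
    # chi2 ~ 6.5 and p ~ 0.011 for these counts, matching the reported values;
    # the published k=0.75 was presumably computed on the microscopy table.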

Keywords: tear, blood, microscopy, epitheliocytes

Procedia PDF Downloads 121
323 EverPro as the Missing Piece in the Plant Protein Portfolio to Aid the Transformation to Sustainable Food Systems

Authors: Aylin W Sahin, Alice Jaeger, Laura Nyhan, Gregory Belt, Steffen Münch, Elke K. Arendt

Abstract:

Our current food systems contribute to an increase in malnutrition, with more people overweight or obese in the Western world. Additionally, our natural resources are under enormous pressure, and greenhouse gas emissions increase yearly, contributing significantly to climate change. Hence, transforming our food systems is of the highest priority. Plant-based food products have a lower environmental impact than their animal-based counterparts, representing a more sustainable protein source. However, most plant-based protein ingredients, such as soy and pea, lack indispensable amino acids and are extremely limited in their functionality and, thus, in their food application potential. They are known to have low solubility in water and to change their properties during processing. The low solubility is the biggest challenge in the development of milk alternatives, leading to inferior protein content and protein quality in the dairy alternatives on the market. Moreover, plant-based protein ingredients often possess an off-flavour, which makes them less attractive to consumers. EverPro, a plant-protein isolate originating from brewer's spent grain, the most abundant by-product of the brewing industry, represents the missing piece in the plant protein portfolio. With a protein content of >85%, it is of high nutritional value and includes all indispensable amino acids, which allows the protein quality gap of plant proteins to be closed. Moreover, it possesses excellent techno-functional properties: it is fully soluble in water (101.7 ± 2.9%), has a high fat absorption capacity (182.4 ± 1.9%), and has a foaming capacity superior to that of soy or pea protein. This makes EverPro suitable for a vast range of food applications. Furthermore, it does not cause changes in viscosity during the heating and cooling of dispersions, such as beverages. Besides its outstanding nutritional and functional characteristics, the production of EverPro has a much lower environmental impact than dairy or other plant protein ingredients. Life cycle assessment showed that EverPro has the lowest global warming impact compared with soy protein isolate, pea protein isolate, whey protein isolate, and egg white powder. It also contributes significantly less to freshwater eutrophication, marine eutrophication, and land use than the protein sources mentioned above. EverPro is a prime example of a sustainable ingredient and the type of plant protein the food industry has been waiting for: nutritious, multi-functional, and environmentally friendly.

Keywords: plant-based protein, upcycled, brewers' spent grain, low environmental impact, highly functional ingredient

Procedia PDF Downloads 80
322 Application of the Material Point Method as a New Fast Simulation Technique for Textile Composites Forming and Material Handling

Authors: Amir Nazemi, Milad Ramezankhani, Marian Kӧrber, Abbas S. Milani

Abstract:

The excellent strength-to-weight ratio of woven fabric composites, along with their high formability, is one of the primary design parameters driving their increased use in modern manufacturing processes, including those in aerospace and automotive. However, for emerging automated preform processes under the smart manufacturing paradigm, the complex geometries of finished components continue to pose several challenges for designers coping with manufacturing defects on site. Wrinkling, for example, is a common defect occurring during the forming process and the handling of semi-finished textile composites; one of its main causes is the weak bending stiffness of fibers in the unconsolidated state, which allows excessive relative motion between them. Further challenges arise from the automated handling of large-area fiber blanks with specialized gripper systems. For fabric composite forming simulations, the finite element (FE) method is a longstanding tool used for the prediction and mitigation of manufacturing defects. Such simulations are predominantly meant not only to predict the onset, growth, and shape of wrinkles but also to determine the processing conditions that yield optimized positioning of the fibers upon forming (or upon robot handling, in the case of automated processes). However, the need for small time steps in explicit FE codes, numerical instabilities, and long computation times are among the notable drawbacks of current FE tools, hindering their extensive use as fast yet efficient digital twins in industry. This paper presents a novel woven fabric simulation technique based on the material point method (MPM), which permits much larger time steps with fewer numerical instabilities, and hence significantly faster and more efficient simulations of fabric material handling and forming processes. The method can thereby support the development of automated fiber handling and preform processes by calculating the physical interactions between the MPM fiber models and rigid tool components, enabling designers to virtually develop, test, and optimize their processes using either algorithmic or machine learning applications. As a preliminary case study, the forming of a hemispherical plain weave is shown, and the results are compared with FE simulations as well as experiments.
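
To make the contrast with explicit FE concrete, the following is a minimal one-dimensional MPM update loop in Python (a toy elastic bar under stated assumptions, not the paper's woven-fabric model): particles carry mass, velocity, and stress; each step scatters these to a background grid (P2G), advances the grid momentum with internal forces, and gathers velocities back to move the particles (G2P). Because the state lives on the particles, the grid is reset every step, which is what accommodates large deformations without remeshing:

    import numpy as np

    n_cells, h, dt = 20, 1.0, 0.01            # grid cells, spacing, time step
    E, rho = 100.0, 1.0                        # Young's modulus, density
    xp = np.linspace(2.25, 7.75, 12)           # particle positions
    vp = np.full_like(xp, 0.5)                 # initial velocities (rigid drift)
    Vp = np.full_like(xp, (7.75 - 2.25) / 12)  # particle volumes
    mp = rho * Vp                              # particle masses
    stress = np.zeros_like(xp)

    def shape(x):
        # Linear hat functions: supporting nodes, weights, weight gradients.
        i = int(x // h)
        frac = x / h - i
        return (np.array([i, i + 1]), np.array([1 - frac, frac]),
                np.array([-1 / h, 1 / h]))

    for step in range(100):
        m_g = np.zeros(n_cells + 1)            # grid mass
        p_g = np.zeros(n_cells + 1)            # grid momentum
        f_g = np.zeros(n_cells + 1)            # grid internal force
        for p in range(len(xp)):               # P2G transfer
            nodes, w, dw = shape(xp[p])
            m_g[nodes] += w * mp[p]
            p_g[nodes] += w * mp[p] * vp[p]
            f_g[nodes] -= Vp[p] * stress[p] * dw
        p_g += dt * f_g                        # grid momentum update
        v_g = np.divide(p_g, m_g, out=np.zeros_like(p_g), where=m_g > 0)
        for p in range(len(xp)):               # G2P: move particles, update stress
            nodes, w, dw = shape(xp[p])
            vp[p] = w @ v_g[nodes]             # simple PIC update (dissipative)
            xp[p] += dt * vp[p]
            stress[p] += dt * E * (dw @ v_g[nodes])  # 1D elastic update

    print(xp[:3], vp[:3])                      # the bar drifts at ~0.5 units/s

The time step here is limited only by the elastic wave speed on the grid (dt < h/sqrt(E/rho)), and the PIC transfer is the simplest, most dissipative choice; production MPM codes use FLIP/APIC blends.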

Keywords: material point method, woven fabric composites, forming, material handling

Procedia PDF Downloads 182
321 Railway Ballast Volumes Automated Estimation Based on LiDAR Data

Authors: Bahar Salavati Vie Le Sage, Ismaïl Ben Hariz, Flavien Viguier, Sirine Noura Kahil, Audrey Jacquin, Maxime Convert

Abstract:

The ballast layer plays a key role in railroad maintenance and in the geometry of the track structure; ballast also holds the track in place as trains roll over it. Track ballast is packed between the sleepers and along the sides of railway tracks. An imbalance in ballast volume on the tracks can lead to safety issues as well as rapid degradation of the overall quality of the railway segment: if there is a lack of ballast in the track bed during the summer, there is a risk that the rails will expand and buckle slightly due to high temperatures. Furthermore, knowledge of the ballast quantities that will be excavated during renewal works is important for efficient ballast management. The volume of excavated ballast per meter of track can be calculated from the excavation depth, excavation width, volume of the track skeleton (sleepers and rail), and sleeper spacing. Since 2012, SNCF has been collecting 3D point cloud data covering its entire railway network using 3D laser scanning technology (LiDAR). This vast amount of data constitutes a model of the entire railway infrastructure, allowing various simulations to be conducted for maintenance purposes. This paper presents an automated method for ballast volume estimation based on the processing of LiDAR data. Abnormal ballast volumes on the tracks are estimated by analyzing the cross-section of the track; since the amount of ballast required varies with the track configuration, knowledge of the ballast profile is also required. Prior to track rehabilitation, excess ballast is often present in the ballast shoulders. From the 3D laser scans, a Digital Terrain Model (DTM) was generated, and automatic extraction of the ballast profiles from these data was carried out. The surplus ballast is then estimated by comparing this empirically obtained profile with a geometric model of the theoretical ballast profile thresholds dictated by maintenance standards. Ideally, this excess should be removed prior to renewal works and recycled to optimize the output of the ballast renewal machine. Based on these parameters, an application has been developed to allow the automatic measurement of ballast profiles. We evaluated the method on 108 kilometers of railroad LiDAR scans, and the results show that the proposed algorithm detects ballast surplus amounting to values close to the total quantities of spoil ballast excavated.
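
A minimal sketch of the surplus computation in Python (synthetic cross-section data; illustrative of the comparison described, not SNCF's application): the measured ballast profile extracted from the DTM is compared against the theoretical maintenance profile, the positive difference is integrated across the cross-section, and the resulting area is accumulated along the track into a volume:

    import numpy as np

    # Hypothetical cross-section at one track location: lateral offset (m)
    # from the track axis vs. ballast height (m).
    offset = np.linspace(-3.0, 3.0, 61)
    measured = 0.55 - 0.08 * np.abs(offset) + 0.05 * (np.abs(offset) > 1.8)
    theoretical = np.clip(0.50 - 0.10 * np.abs(offset), 0.0, None)

    # Surplus cross-sectional area (m^2): integrate (trapezoid rule) only
    # where the measured profile exceeds the theoretical threshold.
    excess = np.clip(measured - theoretical, 0.0, None)
    area_m2 = np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(offset))

    # Surplus volume over a segment, assuming one cross-section per metre.
    print(f"surplus: {area_m2:.3f} m^2 per section, "
          f"{area_m2 * 1000:.0f} m^3 per km")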

Keywords: ballast, railroad, LiDAR, point cloud, track ballast, 3D point

Procedia PDF Downloads 112