Search results for: complex fluid flow
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10630

490 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, approximating human memorability experiments in which memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. 
The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between both the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more distinctive features, which challenge the autoencoder's compressive capacities, are inherently more memorable. The reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet is negatively correlated with memorability, which suggests that highly memorable images are harder to reconstruct, probably because their features are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
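As a rough sketch of the two image-level quantities described in this abstract, per-image reconstruction error and latent-space distinctiveness, the following NumPy snippet uses a simple MSE loss and a plain Pearson correlation; the study itself uses a VGG-based autoencoder and additional structural and perceptual losses, so this is an illustrative simplification only.

```python
import numpy as np

def reconstruction_error(images, reconstructions):
    # Per-image mean squared error between originals and reconstructions
    diff = (images - reconstructions).reshape(len(images), -1)
    return np.mean(diff ** 2, axis=1)

def distinctiveness(latents):
    # Euclidean distance from each latent vector to its nearest neighbor
    d = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude the trivial self-distance
    return d.min(axis=1)

def pearson(a, b):
    # Pearson correlation, e.g. between distinctiveness and memorability scores
    a, b = a - a.mean(), b - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Both quantities would then be correlated against the MemCat memorability scores; note that the full pairwise distance matrix is O(n²) in memory, so at MemCat's scale (10,000 images) one would batch the computation or use a nearest-neighbor index.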

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 92
489 Highly Selective Phosgene Free Synthesis of Methylphenylcarbamate from Aniline and Dimethyl Carbonate over Heterogeneous Catalyst

Authors: Nayana T. Nivangune, Vivek V. Ranade, Ashutosh A. Kelkar

Abstract:

Organic carbamates are versatile compounds widely employed as pesticides, fungicides, herbicides, dyes, pharmaceuticals, cosmetics and in the synthesis of polyurethanes. Carbamates can be easily transformed into isocyanates by thermal cracking. Isocyanates are used as precursors for manufacturing agrochemicals, adhesives and polyurethane elastomers. Manufacture of polyurethane foams is a major application of aromatic isocyanates; in 2007 the global consumption of polyurethane was about 12 million metric tons/year, with an average annual growth rate of about 5%. Presently, isocyanates/carbamates are manufactured by a phosgene-based process. However, because of the high toxicity of phosgene and the formation of waste products in large quantities, there is a need to develop an alternative and safer process for the synthesis of isocyanates/carbamates. Recently many alternative processes have been investigated, and carbamate synthesis by methoxycarbonylation of aromatic amines using dimethyl carbonate (DMC) as a green reagent has emerged as a promising alternative route. In this reaction methanol is formed as a by-product, which can be converted to DMC either by oxidative carbonylation of methanol or by reaction with urea. Thus, the route based on DMC has the potential to provide an atom-efficient and safer route for the synthesis of carbamates from DMC and amines. Much work is being carried out on the development of catalysts for this reaction, and homogeneous zinc salts were found to be good catalysts. However, catalyst/product separation is challenging with these catalysts. There are a few reports on the use of supported Zn catalysts; however, deactivation of the catalyst is the major problem with these catalysts. We wish to report here methoxycarbonylation of aniline to methylphenylcarbamate (MPC) using amino acid complexes of Zn as highly active and selective catalysts. The catalysts were characterized by XRD, IR, solid state NMR and XPS analysis. 
Methoxycarbonylation of aniline was carried out at 170 °C using 2.5 wt% of the catalyst to achieve >98% conversion of aniline with 97-99% selectivity to MPC as the product. Formation of N-methylated products in small quantities (1-2%) was also observed. Optimization of the reaction conditions was carried out using the zinc-proline complex as the catalyst. Selectivity was strongly dependent on the temperature and the aniline:DMC ratio used. At lower aniline:DMC ratios and at higher temperatures, selectivity to MPC decreased (to 85-89%) with the formation of N-methylaniline (NMA), N-methyl methylphenylcarbamate (MMPC) and N,N-dimethylaniline (NNDMA) as by-products. The best results (98% aniline conversion with 99% selectivity to MPC in 4 h) were observed at 170 °C and an aniline:DMC ratio of 1:20. Catalyst stability was verified by carrying out recycling experiments. Methoxycarbonylation proceeded smoothly with various amine derivatives, indicating the versatility of the catalyst. The catalyst is inexpensive and can be easily prepared from a zinc salt and naturally occurring amino acids. The results are important and provide an environmentally benign route for MPC synthesis with high activity and selectivity.

Keywords: aniline, heterogeneous catalyst, methoxycarbonylation, methylphenyl carbamate

Procedia PDF Downloads 276
488 Making Sense of C. G. Jung’s Red Book and Black Books: Masonic Rites and Trauma

Authors: Lynn Brunet

Abstract:

In 2019 the author published a book-length study examining Jung’s Red Book. This study consisted of a close reading of each of the chapters in Liber Novus, focussing on the fantasies themselves and Jung’s accompanying paintings. It found that the plots, settings, characters and symbolism in each of these fantasies are not entirely original but remarkably similar to those found in some of the higher degrees of Continental Freemasonry. Jung was the grandson of his namesake, C.G. Jung (1794–1864), who was a Freemason and one-time Grand Master of the Swiss Masonic Lodge. The study found that the majority of Jung’s fantasies are very similar to those of the Ancient and Accepted Scottish Rite, practiced in Switzerland during Jung’s childhood. It argues that the fantasies appear to be memories of a series of terrifying initiatory ordeals conducted using spurious versions of the Masonic rites. Spurious Freemasonry is the term Masons use for ‘irregular’ or illegitimate use of the rituals that is not sanctioned by the Order. Since the 1980s there have been multiple reports of ritual trauma amongst a wide variety of organizations, cults and religious groups, reports that psychologists, counsellors, social workers, and forensic scientists have confirmed. The abusive use of Masonic rites features frequently in these reports. This initial study allows a reading of The Red Book that makes sense of the obscure references, bizarre scenarios and intense emotional trauma described by Jung throughout Liber Novus. It suggests that Jung appears to have undergone a cruel initiatory process as a child. The author is currently examining the extra material found in Jung’s Black Books, and the results are confirming the original discoveries while demonstrating a number of aspects not covered in the first publication. 
These include the complex layering of ancient gods and belief systems in answer to Jung’s question, ‘In which underworld am I?’ It demonstrates that the majority of these ancient systems and their gods are discussed in a handbook for the Scottish Rite, Morals and Dogma by Albert Pike, but that the way they are presented by Philemon and his soul is intended to confuse him rather than clarify their purpose. This new study also examines Jung’s soul’s question, ‘I am not a human being. What am I then?’ Further themes that emerge from the Black Books include his struggle with vanity and whether he should continue creating his ‘holy book’, and a comparison between Jung’s ‘mystery plays’ and examples from the Theatre of the Absurd. Overall, it demonstrates that Jung’s experience, while inexplicable in his own time, is now known to be the secret and abusive practice of initiation of the young found in a range of cults and religious groups in many first world countries. This paper will present a brief outline of the original study and then examine the themes that have emerged from the extra material found in the Black Books.

Keywords: C. G. Jung, the red book, the black books, masonic themes, trauma and dissociation, initiation rites, secret societies

Procedia PDF Downloads 138
487 Intermodal Strategies for Redistribution of Agrifood Products in the EU: The Case of Vegetable Supply Chain from Southeast of Spain

Authors: Juan C. Pérez-Mesa, Emilio Galdeano-Gómez, Jerónimo De Burgos-Jiménez, José F. Bienvenido-Bárcena, José F. Jiménez-Guerrero

Abstract:

Environmental costs and road congestion resulting from product distribution in Europe have led to the creation of various programs and studies seeking to reduce these negative impacts. In this regard, apart from other institutions, the European Commission (EC) has designed plans in recent years promoting a more sustainable transportation model in an attempt to ultimately shift traffic from the road to the sea by using intermodality to achieve a model rebalancing. This issue proves especially relevant in supply chains from peripheral areas of the continent, where the supply of certain agrifood products is high. In such cases, the most difficult challenge is managing perishable goods. This study focuses on new approaches that strengthen the modal shift, as well as the reduction of externalities. The problem is analyzed by attempting to promote an intermodal system (truck and short sea shipping) for transport, taking as a point of reference highly perishable products (vegetables) exported from southeast Spain, which is the leading supplier to Europe. Methodologically, this paper seeks to contribute to the literature by proposing a different and complementary approach to comparing intermodal transport with the “only road” alternative. For this purpose, multicriteria decision-making is incorporated into a p-median model (P-M) adapted to the transport of perishables and to a shipping-mode selection problem, which must consider different variables: transit cost, including externalities, time, and frequency (including agile response time). This scheme avoids bias in decision-making processes. The results show that the influence of externalities as drivers of the modal shift is reduced when transit time is introduced as a decision variable. These findings confirm that general strategies, such as those of the EC, based on environmental benefits lose their capacity for implementation when applied to complex circumstances. 
In general, the different estimations reveal that, in the case of perishables, intermodality would be a secondary and viable option only for very specific destinations (for example, Hamburg and nearby locations, the area of influence of London, Paris, and the Netherlands). Based on this framework, the general outlook on this subject should be modified. Perhaps the government should promote specific business strategies based on new trends in the supply chain, not only on the reduction of externalities, and find new approaches that strengthen the modal shift. A possible option is to redefine ports, conceptualizing them as digitalized redistribution and coordination centers and not only as areas of cargo exchange.
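As a toy illustration of the p-median formulation underlying this analysis, the snippet below brute-forces the selection of p service options (e.g., ports or routes) that minimize a generalized cost over all destinations. The cost matrix, its weighting of transit cost, externalities, time and frequency, and the exhaustive search are all simplifying assumptions made here for illustration, not the paper's actual model.

```python
from itertools import combinations

def p_median(cost, p):
    """Brute-force p-median: choose p candidates (rows) so that the total
    cost of serving every destination (column) from its cheapest chosen
    candidate is minimized. cost[i][j] stands in for a generalized cost,
    e.g. a weighted sum of transit cost, externalities, time, and frequency."""
    n_candidates, n_destinations = len(cost), len(cost[0])
    best_total, best_set = float("inf"), None
    for chosen in combinations(range(n_candidates), p):
        # each destination is served by its cheapest chosen candidate
        total = sum(min(cost[i][j] for i in chosen) for j in range(n_destinations))
        if total < best_total:
            best_total, best_set = total, chosen
    return best_set, best_total
```

Exhaustive enumeration is only viable for small instances; real p-median problems of this kind are typically solved with integer programming or heuristics.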

Keywords: environmental externalities, intermodal transport, perishable food, transit time

Procedia PDF Downloads 98
486 Understanding the Lithiation/Delithiation Mechanism of Si₁₋ₓGeₓ Alloys

Authors: Laura C. Loaiza, Elodie Salager, Nicolas Louvain, Athmane Boulaoued, Antonella Iadecola, Patrik Johansson, Lorenzo Stievano, Vincent Seznec, Laure Monconduit

Abstract:

Lithium-ion batteries (LIBs) have an important place among energy storage devices due to their high capacity and good cyclability. However, advancements in portable and transportation applications have extended the research towards new horizons, and today development is hampered, e.g., by the capacity of the electrodes employed. Silicon and germanium are among the considered modern anode materials as they can undergo alloying reactions with lithium while delivering high capacities. It has been demonstrated that silicon in its highest lithiated state can deliver up to ten times more capacity than graphite (372 mAh/g): 4200 mAh/g for Li₂₂Si₅ and 3579 mAh/g for Li₁₅Si₄. On the other hand, germanium presents a capacity of 1384 mAh/g for Li₁₅Ge₄, and better electronic conductivity and Li-ion diffusivity compared to Si. Nonetheless, the commercialization potential of Ge is limited by its cost. The synergetic effect of Si₁₋ₓGeₓ alloys has been proven: the capacity is increased compared to Ge-rich electrodes and the capacity retention is improved compared to Si-rich electrodes, but the exact performance of this type of electrode will depend on factors like specific capacity, C-rates, cost, etc. There are several reports on various formulations of Si₁₋ₓGeₓ alloys with promising LIB anode performance, with most work performed on complex nanostructures resulting from costly synthesis efforts. In the present work, we studied the electrochemical mechanism of the Si₀.₅Ge₀.₅ alloy as a realistic micron-sized electrode formulation using carboxymethyl cellulose (CMC) as the binder. A combination of in situ and operando techniques was employed to investigate the structural evolution of Si₀.₅Ge₀.₅ during the lithiation and delithiation processes: powder X-ray diffraction (XRD), X-ray absorption spectroscopy (XAS), Raman spectroscopy, and ⁷Li solid-state nuclear magnetic resonance (NMR) spectroscopy. 
The results present a complete picture of the structural modifications induced by the lithiation/delithiation processes. Amorphization of Si₀.₅Ge₀.₅ was observed at the beginning of discharge. Further lithiation induces the formation of a-Liₓ(Si/Ge) intermediates and the crystallization of Li₁₅(Si₀.₅Ge₀.₅)₄ at the end of the discharge. At very low voltages, a reversible process of overlithiation and formation of Li₁₅₊δ(Si₀.₅Ge₀.₅)₄ was identified and related to a structural evolution of Li₁₅(Si₀.₅Ge₀.₅)₄. Upon charge, the c-Li₁₅(Si₀.₅Ge₀.₅)₄ was transformed into a-Liₓ(Si/Ge) intermediates. At the end of the process an amorphous phase assigned to a-SiₓGey was recovered. It was thereby demonstrated that Si and Ge are collectively active throughout cycling: upon discharge with the formation of a ternary Li₁₅(Si₀.₅Ge₀.₅)₄ phase (with an overlithiation step) and upon charge with the rebuilding of the a-Si-Ge phase. This process is undoubtedly behind the enhanced performance of Si₀.₅Ge₀.₅ compared to a physical mixture of Si and Ge.

Keywords: lithium ion battery, silicon germanium anode, in situ characterization, X-Ray diffraction

Procedia PDF Downloads 286
485 Integrated Manufacture of Polymer and Conductive Tracks for Functional Objects Fabrication

Authors: Barbara Urasinska-Wojcik, Neil Chilton, Peter Todd, Christopher Elsworthy, Gregory J. Gibbons

Abstract:

The recent increase in the application of Additive Manufacturing (AM) of products has resulted in new demands on capability. The ability to integrate both form and function within printed objects is the next frontier in 3D printing. To move beyond prototyping into low volume production, we demonstrate a UK-designed and built AM hybrid system that combines polymer-based structural deposition with digital deposition of electrically conductive elements. This hybrid manufacturing system is based on a multi-planar build approach to improve on many of the limitations associated with AM, such as poor surface finish, low geometric tolerance, and poor robustness. Specifically, the approach involves a multi-planar Material Extrusion (ME) process in which separated build stations with up to 5 axes of motion replace traditional horizontally-sliced layer modeling. The construction of multi-material architectures also involved using multiple print systems in order to combine both ME and digital deposition of conductive material. To demonstrate multi-material 3D printing, three thermoplastics, acrylonitrile butadiene styrene (ABS), polyamide 6,6/6 copolymers (CoPA) and polyamide 12 (PA), were used to print specimens, on top of which our high-viscosity Ag-particulate ink was printed in a non-contact process, during which drop characteristics such as shape, velocity, and volume were assessed using a drop watching system. Spectroscopic analysis of these 3D printed materials in the IR region helped to determine the optimum in-situ curing system for implementation into the AM system to achieve improved adhesion and surface refinement. Thermal analyses were performed to determine the printed materials' glass transition temperature (Tg), stability and degradation behavior, in order to find the optimum annealing conditions post printing. 
Electrical analysis of printed conductive tracks on polymer surfaces during mechanical testing (static tensile, 3-point bending and dynamic fatigue) was performed to assess the robustness of the electrical circuits. The tracks on CoPA, ABS, and PA exhibited low electrical resistance, and in the case of PA the resistance values of the tracks remained unchanged across hundreds of repeated tensile cycles up to 0.5% strain amplitude. Our developed AM printer has the ability to fabricate fully functional objects in one build, including complex electronics. It enables product designers and manufacturers to produce functional saleable electronic products from a small-format modular platform. It will make 3D printing better, faster and stronger.

Keywords: additive manufacturing, conductive tracks, hybrid 3D printer, integrated manufacture

Procedia PDF Downloads 168
484 Advanced Statistical Approaches for Identifying Predictors of Poor Blood Pressure Control: A Comprehensive Analysis Using Multivariable Logistic Regression and Generalized Estimating Equations (GEE)

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

Effective management of hypertension remains a critical public health challenge, particularly among racially and ethnically diverse populations. This study employs sophisticated statistical models to rigorously investigate the predictors of poor blood pressure (BP) control, with a specific focus on demographic, socioeconomic, and clinical risk factors. Leveraging a large sample of 19,253 adults drawn from the National Health and Nutrition Examination Survey (NHANES) across three distinct time periods (2013-2014, 2015-2016, and 2017-2020), we applied multivariable logistic regression and generalized estimating equations (GEE) to account for the clustered structure of the data and potential within-subject correlations. Our multivariable models identified significant associations between poor BP control and several key predictors, including race/ethnicity, age, gender, body mass index (BMI), prevalent diabetes, and chronic kidney disease (CKD). Non-Hispanic Black individuals consistently exhibited higher odds of poor BP control across all periods (OR = 1.99; 95% CI: 1.69, 2.36 for the overall sample; OR = 2.33; 95% CI: 1.79, 3.02 for 2017-2020). Younger age groups demonstrated substantially lower odds of poor BP control compared to individuals aged 75 and older (OR = 0.15; 95% CI: 0.11, 0.20 for ages 18-44). Men also had a higher likelihood of poor BP control relative to women (OR = 1.55; 95% CI: 1.31, 1.82), while BMI ≥35 kg/m² (OR = 1.76; 95% CI: 1.40, 2.20) and the presence of diabetes (OR = 2.20; 95% CI: 1.80, 2.68) were associated with increased odds of poor BP management. Further analysis using GEE models, accounting for temporal correlations and repeated measures, confirmed the robustness of these findings. Notably, individuals with chronic kidney disease displayed markedly elevated odds of poor BP control (OR = 3.72; 95% CI: 3.09, 4.48), with significant differences across the survey periods. 
Additionally, higher education levels and better self-reported diet quality were associated with improved BP control. College graduates exhibited a reduced likelihood of poor BP control (OR = 0.64; 95% CI: 0.46, 0.89), particularly in the 2015-2016 period (OR = 0.48; 95% CI: 0.28, 0.84). Similarly, excellent dietary habits were associated with significantly lower odds of poor BP control (OR = 0.64; 95% CI: 0.44, 0.94), underscoring the importance of lifestyle factors in hypertension management. In conclusion, our findings provide compelling evidence of the complex interplay between demographic, clinical, and socioeconomic factors in predicting poor BP control. The application of advanced statistical techniques such as GEE enhances the reliability of these results by addressing the correlated nature of repeated observations. This study highlights the need for targeted interventions that consider racial/ethnic disparities, clinical comorbidities, and lifestyle modifications in improving BP control outcomes.
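For readers who want to check the scale of the reported odds ratios, here is a minimal sketch of an unadjusted odds ratio with a Wald 95% confidence interval from a 2x2 table. The counts in the test below are invented for illustration; the study's own estimates come from multivariable logistic regression and GEE models, which additionally adjust for covariates and within-subject correlation, so this simple calculation will not reproduce them.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
        a = exposed with poor BP control,   b = exposed with controlled BP
        c = unexposed with poor BP control, d = unexposed with controlled BP"""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

A confidence interval excluding 1.0 (as in all the intervals reported above) indicates a statistically significant association at the 5% level.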

Keywords: hypertension, blood pressure, NHANES, generalized estimating equations

Procedia PDF Downloads 16
483 Ensemble of Misplacement, Juxtaposing Feminine Identity in Time and Space: An Analysis of Works of Modern Iranian Female Photographers

Authors: Delaram Hosseinioun

Abstract:

In their collections, Shirin Neshat, Mitra Tabrizian, Gohar Dashti and Newsha Tavakolian adopt a hybrid form of narrative to confront the restrictions imposed on women in hegemonic public and private spaces. Focusing on motives such as social marginalisation, crisis of belonging, as well as lack of agency for women, the artists depict the regression of women’s rights in their respective generations. Based on the ideas of Mikhail Bakhtin, namely his concept of polyphony or the plurality of contradictory voices, the views of Judith Butler on giving an account of oneself, and Henri Lefebvre’s theories on social space, this study illustrates the artists’ concept of identity in crisis through time and space. The research explores how the artists took their art as a novel dimension to depict and confront the hardships imposed on Iranian women. Henri Lefebvre makes a distinction between complex social structures through which individuals situate, perceive and represent themselves. By adding Bakhtin’s polyphonic view to Lefebvre’s concepts of perceived and lived spaces, the study explores the sense of social fragmentation in the works of Dashti and Tavakolian. One argument is that, as representatives of the contemporary generation of female artists who have spent their lives in Iran and faced a higher degree of restrictions, their hyperbolic and theatrical styles stand as a symbolic act of confrontation against the restrictive socio-cultural norms imposed on women. Further, the research explores the possibility of reclaiming one’s voice and sense of agency through art, corresponding with the Bakhtinian sense of polyphony and Butler’s concept of giving an account of oneself. Works of Neshat and Tabrizian, as representatives of the previous generation who faced exile and diaspora, encompass a higher degree of misplacement, violence and decay of women’s presence. In their works, the women’s body encompasses Lefebvre’s dismantled temporal and spatial setting. 
Notably, the ongoing social conviction and gender-based dogma imposed on women frame some of the concurrent motives among the selected collections of the four artists. By applying an interdisciplinary lens and integrating the conducted interviews with the artists, the study illustrates how the artists seek a transcultural account for themselves and women in their generations. Further, the selected collections manifest the urgency for an authentic and liberal voice and setting for women, resonating with the concurrent Women, Life, Freedom movement in Iran.

Keywords: persian modern female photographers, transcultural studies, shirin neshat, mitra tabrizian, gohar dashti, newsha tavakolian, butler, bakhtin, lefebvre

Procedia PDF Downloads 78
482 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data

Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora

Abstract:

Optimizing the drilling process for cost and efficiency requires optimization of the rate of penetration (ROP). ROP measures the speed at which the wellbore is created, in units of feet per hour, and is the primary indicator of drilling efficiency. Maximization of the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model beforehand, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. Next, the top-rated wells, ranked by high ROP instances, are identified. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering live adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. 
These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data is then consolidated into a heat map as a function of ROP. A more optimal ROP performance is identified through the heat map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in improved ROP efficiency by over 20%, translating to at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
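The conditioned-mean step in phase one can be sketched with a plain IDW interpolator. The feature space, distance metric, and power exponent below are assumptions made for illustration, not details given by the authors.

```python
import math

def idw_estimate(points, values, target, power=2.0):
    """Inverse Distance Weighting: estimate a drilling parameter (e.g. WOB)
    at a target point as the distance-weighted mean of offset-well values."""
    num = den = 0.0
    for p, v in zip(points, values):
        d = math.dist(p, target)
        if d == 0.0:
            return v  # target coincides with an observation
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den
```

With power=2, nearby offset wells dominate the estimate, which matches the intent of conditioning the parameter values on physical distance.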

Keywords: drilling optimization, geological formations, machine learning, rate of penetration

Procedia PDF Downloads 133
481 Comparative Review of Models for Forecasting Permanent Deformation in Unbound Granular Materials

Authors: Shamsulhaq Amin

Abstract:

Unbound granular materials (UGMs) are pivotal in ensuring long-term quality, especially in the layers beneath the surface of flexible pavements and other constructions. This study seeks to better understand the behavior of UGMs by examining popular models for predicting permanent deformation under various stress levels and load cycles. These models focus on variables such as the number of load cycles, stress levels, and material-specific features, and were evaluated on the basis of their ability to accurately predict outcomes. The study showed that these factors play a crucial role in how well the models work. Therefore, the research highlights the need to consider a wide range of stress situations to more accurately predict how much UGMs deform. The research examined important factors, such as how permanent deformation relates to the number of times a load is applied, how quickly this phenomenon happens, and the shakedown effect, in two different types of UGM: granite and limestone. A detailed study was conducted over 100,000 load cycles, which provided deep insights into how these materials behave. In this study, a number of factors, such as the level of stress applied, the number of load cycles, the density of the material, and the moisture present, were identified as the main factors affecting permanent deformation. It is vital to fully understand these elements to better design pavements that last long and handle wear and tear. A series of laboratory tests was performed to evaluate the mechanical properties of the materials and acquire model parameters. The testing included gradation tests, CBR tests, and repeated load triaxial tests. The repeated load triaxial tests were crucial for studying the significant components that affect deformation; this test involved applying various stress levels to estimate model parameters. In addition, certain model parameters were established by regression analysis, and optimization was conducted to improve the outcomes. 
Afterward, the acquired material parameters were used to construct graphs for each model. The graphs were subsequently compared to the outcomes obtained from the repeated load triaxial testing. Additionally, the models were evaluated to determine whether they captured the two inherent deformation phases of materials subjected to repeated loading: the initial post-compaction phase and the second phase of volumetric change. In this study, using log-log graphs was key to making the complex data easier to understand. This method made the analysis clearer and helped make the findings easier to interpret, adding both precision and depth to the research. This research provides important insight into selecting the right models for predicting how these materials will act under expected stress and load conditions. Moreover, it offers crucial information regarding the effect of load cycles on permanent deformation, as well as the shakedown effect, in granite and limestone UGMs.
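One widely used model of the kind compared in this study is a Sweere-type power law of permanent strain against load cycles, ε_p = A·N^B, which becomes a straight line on the log-log graphs mentioned above. The sketch below fits its parameters by least squares in log-log space; it is a generic illustration of this model class, not the authors' exact parameter-estimation procedure.

```python
import math

def fit_power_law(cycles, strains):
    """Fit eps_p = A * N**B by least squares in log-log space,
    where log(eps_p) = log(A) + B*log(N) is a straight line."""
    x = [math.log(n) for n in cycles]
    y = [math.log(e) for e in strains]
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    # ordinary least-squares slope and intercept in log-log coordinates
    B = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
         / sum((xi - x_bar) ** 2 for xi in x))
    A = math.exp(y_bar - B * x_bar)
    return A, B
```

Plotting the fitted line against the measured strains on the same log-log axes is one quick check of whether a single power law captures both deformation phases or only the post-compaction stage.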

Keywords: permanent deformation, unbound granular materials, load cycles, stress level

Procedia PDF Downloads 43
480 Cockpit Integration and Piloted Assessment of an Upset Detection and Recovery System

Authors: Hafid Smaili, Wilfred Rouwhorst, Paul Frost

Abstract:

The trend of recent accident and incident cases worldwide shows that state-of-the-art automation and operations, for current and future demanding operational environments, do not provide the desired level of operational safety under crew peak workload conditions, specifically in complex situations such as loss-of-control in-flight (LOC-I). Today, the short-term focus is on preparing crews to recognise and handle LOC-I situations through upset recovery training. This paper describes the cockpit integration aspects and piloted assessment of both a manually assisted and an automatic upset detection and recovery system that has been developed and demonstrated within the European Advanced Cockpit for Reduction Of StreSs and workload (ACROSS) programme. The proposed system is a function that continuously monitors the aircraft, intervenes when it enters an upset, and either provides manually pilot-assisted guidance or takes over full control of the aircraft to recover from the upset. In order to mitigate the high physical and psychological impact of aircraft upset events, the system provides new cockpit functionalities to support the pilot in recovering from any upset, both manually assisted and automatically. A piloted simulator assessment was made in Oct-Nov 2015 with ten pilots in a representative civil large transport fly-by-wire aircraft, evaluating the tested upset detection and recovery system configurations in terms of reduced pilot workload, increased situational awareness and safe interaction with the manually assisted or automated modes. The piloted simulator evaluation of the upset detection and recovery system showed that the functionalities of the system are able to support pilots during an upset. The experiment showed that pilots are willing to rely on the guidance provided by the system during an upset. It is thereby important for pilots to see and understand what the aircraft is doing and trying to do, especially in automatic modes.
Comparing the manually assisted and the automatic recovery modes, the pilots’ opinion was that an automatic recovery reduces workload so that they can perform a proper screening of the primary flight display. The results further show that the manually assisted recoveries, with recovery guidance cues on the cockpit primary flight display, reduced workload for severe upsets compared to today’s situation. The level of situation awareness was improved for automatic upset recoveries where the pilot could monitor what the system was trying to accomplish, compared to automatic recovery modes without any guidance. An improvement in situation awareness was also noticeable with the manually assisted upset recovery functionalities as compared to the current non-assisted recovery procedures. This study shows that automatic upset detection and recovery functionalities are likely to positively impact operational safety by means of reduced workload, improved situation awareness and crew stress reduction. It is thus believed that future developments for upset recovery guidance and loss-of-control prevention should focus on automatic recovery solutions.

Keywords: aircraft accidents, automatic flight control, loss-of-control, upset recovery

Procedia PDF Downloads 210
479 Implementation of an Online-Platform at the University of Freiburg to Help Medical Students Cope with Stress

Authors: Zoltán Höhling, Sarah-Lu Oberschelp, Niklas Gilsdorf, Michael Wirsching, Andrea Kuhnert

Abstract:

A majority of medical students at the University of Freiburg reported stress-related psychosomatic symptoms which are often associated with their studies. International research supports these findings, as medical students worldwide seem to be at special risk for mental health problems. In some countries and institutions, psychologically based interventions that assist medical students in coping with their stressors have been implemented. It has turned out that anonymity is an important aspect here: many students fear potential damage to their reputation when being associated with mental health problems, which may be due to a high level of competitiveness in classes. Therefore, we launched an online platform where medical students can anonymously seek help and exchange their experiences with fellow students and experts. Medical students of all semesters have access to it through the university’s learning management system (called “ILIAS”). The informative part of the platform consists of exemplary videos showing medical students (actors) who act out scenes that demonstrate the antecedents of stress-related psychosomatic disorders. These videos are linked to different expert comments, describing the exhibited symptoms in an understandable and normalizing way. The (inter-)active part of the platform consists of self-help tools (such as meditation exercises or general tips for stress coping) and an anonymous interactive forum where students can describe their stress-related problems and seek guidance from experts and/or share their experiences with fellow students. Besides offering immediate support to affected students, we expect that competitiveness between students might be diminished and bonding improved through mutual support between them. In the initial phase after the platform’s launch, it was accessed by a considerable number of medical students.
On a closer look, it appeared that platform sections such as general information on psychosomatic symptoms and self-treatment tools were accessed far more often than the online forum during the first months after the platform launch. Although initial acceptance of the platform was relatively high, students used our platform in a rather passive way. While user statistics showed a clear demand for information on stress-related psychosomatic symptoms and possible remedies, active engagement in the interactive online forum was rare. We are currently advertising the platform intensively and trying to point out the assured anonymity of the platform and its interactive forum. Our plan to assure students of their anonymity through the use of an e-learning facility and thereby promote active engagement in the online forum has not (yet) turned out as expected. The reasons behind this may be manifold and based on either e-learning-related issues or issues related to students’ individual needs. Students might, for example, question the assured anonymity due to a lack of trust in the technical functioning of the university’s learning management system. However, one may also conclude that reluctance to discuss stress-related psychosomatic symptoms with peer medical students may not be based solely on anonymity concerns but could be rooted in more complex issues such as general mistrust between students.

Keywords: e-tutoring, stress-coping, student support, online forum

Procedia PDF Downloads 386
478 Harnessing Emerging Creative Technology for Knowledge Discovery of Multiwavelength Datasets

Authors: Basiru Amuneni

Abstract:

Astronomy is one domain with a rise in data. Traditional tools for data management have been employed in the quest for knowledge discovery. However, these traditional tools become limited in the face of big data. One means of maximizing knowledge discovery for big data is the use of scientific visualisation. The aim of this work is to explore the possibilities offered by the emerging creative technologies of Virtual Reality (VR) systems and game engines to visualise multiwavelength datasets. Game engines are primarily used for developing video games; however, their advanced graphics can be exploited for scientific visualisation, which provides a means to graphically illustrate scientific data and ease human comprehension. Modern astronomy is now in the era of multiwavelength data, where a single galaxy, for example, is captured by telescopes several times and at different electromagnetic wavelengths to build a more comprehensive picture of its physical characteristics. Visualising this in an immersive environment would be more intuitive and natural for an observer. This work presents a standalone VR application that accesses galaxy FITS files. The application was built using the Unity Game Engine for the graphics underpinning and the OpenXR API for the VR infrastructure. The work used a methodology known as Design Science Research (DSR), which entails the act of ‘using design as a research method or technique’. The key stages of the galaxy modelling pipeline are FITS data preparation, galaxy modelling, Unity 3D visualisation and VR display. The FITS data format cannot be read by the Unity Game Engine directly. A DLL (CSHARPFITS), which provides native support for reading and writing FITS files, was used. The galaxy modeller uses an approach that integrates cleaned FITS image pixels into the graphics pipeline of the Unity 3D Game Engine.
The cleaned FITS images are then input to the galaxy modelling pipeline phase, which has a pre-processing script that extracts pixel values and galaxy world positions and colour-maps the FITS image pixels. The user can visualise image galaxies in different light bands, control the blend of the image with similar images from different sources, or fuse images for a holistic view. The framework will allow users to build tools to realise complex workflows for public outreach, and possibly scientific work, with increased scalability, near real-time interactivity and ease of access. The application is presented in an immersive environment and can run on any commercially available headset built on the OpenXR API. The user can select galaxies in the scene, teleport to a galaxy, pan, zoom in/out, and change colour gradients of the galaxy. The findings and design lessons learnt in the implementation of different use cases will contribute to the development and design of game-based visualisation tools in immersive environments by enabling informed decisions to be made.
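As a rough illustration of the pre-processing step, the pixel-to-position-and-colour mapping can be sketched in outline. The real pipeline runs inside Unity via the CSHARPFITS DLL, so everything below — the function name, the thresholding, the centring — is a hypothetical Python sketch of the idea, with a NumPy array standing in for a cleaned FITS image:

```python
import numpy as np

def pixels_to_point_cloud(image, intensity_floor=0.0, scale=1.0):
    """Map a cleaned 2-D image array to (x, y, colour) points.

    Pixels at or below intensity_floor are treated as background and
    dropped; remaining intensities are normalised to [0, 1] so a
    colour gradient can be applied in the renderer.
    """
    ys, xs = np.nonzero(image > intensity_floor)
    vals = image[ys, xs].astype(float)
    colours = (vals - vals.min()) / (np.ptp(vals) or 1.0)
    # World positions: centre the galaxy at the origin, apply a scale
    x_world = (xs - image.shape[1] / 2) * scale
    y_world = (ys - image.shape[0] / 2) * scale
    return np.column_stack([x_world, y_world, colours])

img = np.array([[0.0, 0.5],
                [1.0, 0.0]])
pts = pixels_to_point_cloud(img)
print(pts.shape)  # (2, 3): two foreground pixels, each with x, y, colour
```

In the actual application the same per-pixel triples would be handed to the game engine's graphics pipeline rather than returned as an array.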

Keywords: astronomy, visualisation, multiwavelength dataset, virtual reality

Procedia PDF Downloads 94
477 Conditionality of Aid as a Counterproductive Factor in Peacebuilding in the Afghan Context

Authors: Karimova Sitora Yuldashevna

Abstract:

The August 2021 resurgence of the Taliban as a ruling force in Afghanistan once again challenged the global community to deal with an unprecedentedly unlike-minded government. To express their disapproval of the new regime, Western governments and intergovernmental institutions have suspended their infrastructural projects and other forms of support. Moreover, the Afghan offshore reserves were frozen, and Afghanistan was disconnected from the international financial system, which impeded even independent aid agencies’ work. The already poor provision of aid was then further complicated by political conditionality. The purpose of this paper is to investigate the efficacy of conditional aid policy in Afghan peacebuilding under Taliban rule and to provide recommendations to international donors on a further course of action. Arguing that conditionality of aid is a counterproductive factor in the peacebuilding process, this paper employs scholarly literature on peacebuilding alongside reports from international non-governmental organizations (INGOs) that operate directly in Afghanistan. The existing debate on peacebuilding in Afghanistan revolves around aid as a means of building a democratic foundation for achieving peace on communal and national levels, and why the previous attempts to do so were unsuccessful. This paper focuses on how to recalibrate the approach to aid provision and peacebuilding in the new reality. In the early 2000s, amid the weak post-Cold War international will for a profound engagement in the conflict, humanitarian and development aid became the new means of achieving peace. Aid agencies provided resources directly to communities, minimizing the risk of local disputes.
Through subsidizing education, governance reforms, and infrastructural projects, international aid accelerated school enrollment, introduced peace education, funded provincial council and parliamentary elections, and helped rebuild a conflict-torn country. When the Taliban seized power, the international community called on them to build an inclusive government based on respect for human rights, particularly girls’ and women’s schooling and work, as a condition to retain the aid flow. As the Taliban clearly failed to meet the demands, development aid was withdrawn. Some key United Nations agencies also refrained from collaborating with the de facto authorities. However, contrary to the intended change in the Taliban’s behavior, such a move has only led to further deprivation of those whom the donors strove to protect. This is because concern for civilians has always been the second priority for the warring parties. This paper consists of four parts. First, it describes the scope of the humanitarian crisis that began in Afghanistan in 2001. Second, it examines the previous peacebuilding attempts undertaken by the international community and the contribution that international aid made to the peacebuilding process. Third, the paper describes the current regime and its relationships with international donors. Finally, the paper concludes with recommendations for donors, who will have to be more realistic and reconsider their priorities. While it is certainly not suggested that the Taliban regime be legitimized internationally, the crisis calls upon donors to be more flexible in collaborating with the de facto authorities for the sake of the civilians.

Keywords: Afghanistan, international aid, donors, peacebuilding

Procedia PDF Downloads 87
476 Techno-Economic Analysis (TEA) of Circular Economy Approach in the Valorisation of Pig Meat Processing Wastes

Authors: Ribeiro A., Vilarinho C., Luisa A., Carvalho J.

Abstract:

The pig meat industry generates large volumes of by- and co-products, such as blood, bones, skin, trimmings, organs, viscera, and skulls, during slaughtering and meat processing, and these must be treated and disposed of ecologically. The yield of these by-products has been reported to account for about 10% to 15% of the value of the live animal in developed countries, although animal by-products account for about two-thirds of the animal after slaughter. The principal wastes produced throughout the value chain of pig meat production were selected for further valorization: pig manure, pig bones, fats, skins, pig hair, wastewater, wastewater sludges, and other category III animal by-products. According to the potential valorization options, these wastes will be converted into biomethane, fertilizers (phosphorus and digestate), hydroxyapatite, and protein hydrolysates (keratin and collagen). This work includes comprehensive techno-economic analyses (TEA) for each valorization route or applied technology. Metrics such as Net Present Value (NPV), Internal Rate of Return (IRR), and payback period were used to evaluate economic feasibility. From this analysis, it can be concluded that, for biogas production, the scenarios using pig manure, wastewater sludges and mixed grass and leguminous wastes presented remarkably high economic feasibility, with a positive payback period, NPV, and IRR. The optimal scenario, combining pig manure with mixed grass and leguminous wastes, had a payback period of 1.2 years and produced 427,6269 m³ of biomethane annually. Regarding the chemical extraction of phosphorus and nitrogen, results proved that the process is economically unviable due to negative cash flows despite high recovery rates.
The TEA of hydrolysis and extraction of keratin hydrolysates indicates that a unit processing and valorizing 10 tons of pig hair per year for the production of keratin hydrolysate has an NPV of 907,940 €, an IRR of 13.07%, and a payback period of 5.41 years. All of these indicators suggest a highly promising project to explore in the future. By contrast, the results of hydrolysis and extraction of collagen hydrolysates showed a process that is economically unviable, with negative cash flows in all scenarios due to the high fat content of the raw materials. In fact, the valorization of 10 tons of pig skin had a negative cash flow of 453,743.88 €. TEA results for the extraction and purification of hydroxyapatite from pig bones with pyrolysis indicate that a unit processing and valorizing 10 tons of pig bones per year for the production of hydroxyapatite has an NPV of 1,274,819.00 €, an IRR of 65.43%, and a payback period of 1.5 years over a timeline of 10 years with a discount rate of 10%. These valorization routes and the circular economy and bio-refinery approach offer significant contributions to sustainable bio-based operations within the agri-food industry. This approach transforms waste into valuable resources, enhancing both environmental and economic outcomes and contributing to a more sustainable and circular bioeconomy.
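The three metrics used throughout this analysis have standard definitions that can be sketched directly. The cash-flow figures below are invented for illustration and are not the study's data; IRR is found here by simple bisection, which assumes a single sign change in the cash flows:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the year-0 outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return via bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cashflows):
    """Years until cumulative (undiscounted) cash flow turns positive."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        prev, total = total, total + cf
        if total >= 0 and t > 0:
            return t - 1 + (-prev / cf)  # interpolate within the year
    return float("inf")

# Illustrative project: 100 k€ outlay, 30 k€/year for 10 years
flows = [-100_000] + [30_000] * 10
print(round(npv(0.10, flows)))          # → 84337
print(round(irr(flows), 2))             # → 0.27
print(round(payback_period(flows), 2))  # → 3.33
```

A project is viable under these metrics when its NPV at the chosen discount rate is positive and its IRR exceeds that rate, which is how the feasible and unviable scenarios above are separated.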

Keywords: techno-economic analysis (TEA), pig meat processing wastes, circular economy, bio-refinery

Procedia PDF Downloads 17
475 Digital Skepticism in a Legal Philosophical Approach

Authors: Dr. Bendes Ákos

Abstract:

Digital skepticism, a critical stance towards digital technology and its pervasive influence on society, presents significant challenges when analyzed from a legal philosophical perspective. This abstract aims to explore the intersection of digital skepticism and legal philosophy, emphasizing the implications for justice, rights, and the rule of law in the digital age. Digital skepticism arises from concerns about privacy, security, and the ethical implications of digital technology. It questions the extent to which digital advancements enhance or undermine fundamental human values. Legal philosophy, which interrogates the foundations and purposes of law, provides a framework for examining these concerns critically. One key area where digital skepticism and legal philosophy intersect is in the realm of privacy. Digital technologies, particularly data collection and surveillance mechanisms, pose substantial threats to individual privacy. Legal philosophers must grapple with questions about the limits of state power and the protection of personal autonomy. They must consider how traditional legal principles, such as the right to privacy, can be adapted or reinterpreted in light of new technological realities. Security is another critical concern. Digital skepticism highlights vulnerabilities in cybersecurity and the potential for malicious activities, such as hacking and cybercrime, to disrupt legal systems and societal order. Legal philosophy must address how laws can evolve to protect against these new forms of threats while balancing security with civil liberties. Ethics plays a central role in this discourse. Digital technologies raise ethical dilemmas, such as the development and use of artificial intelligence and machine learning algorithms that may perpetuate biases or make decisions without human oversight. 
Legal philosophers must evaluate the moral responsibilities of those who design and implement these technologies and consider the implications for justice and fairness. Furthermore, digital skepticism prompts a reevaluation of the concept of the rule of law. In an increasingly digital world, maintaining transparency, accountability, and fairness becomes more complex. Legal philosophers must explore how legal frameworks can ensure that digital technologies serve the public good and do not entrench power imbalances or erode democratic principles. Finally, the intersection of digital skepticism and legal philosophy has practical implications for policy-making. Legal scholars and practitioners must work collaboratively to develop regulations and guidelines that address the challenges posed by digital technology. This includes crafting laws that protect individual rights, ensure security, and promote ethical standards in technology development and deployment. In conclusion, digital skepticism provides a crucial lens for examining the impact of digital technology on law and society. A legal philosophical approach offers valuable insights into how legal systems can adapt to protect fundamental values in the digital age. By addressing privacy, security, ethics, and the rule of law, legal philosophers can help shape a future where digital advancements enhance, rather than undermine, justice and human dignity.

Keywords: legal philosophy, privacy, security, ethics, digital skepticism

Procedia PDF Downloads 45
474 Fucoidan: A Potent Seaweed-Derived Polysaccharide with Immunomodulatory and Anti-inflammatory Properties

Authors: Tauseef Ahmad, Muhammad Ishaq, Mathew Eapen, Ahyoung Park, Sam Karpiniec, Vanni Caruso, Rajaraman Eri

Abstract:

Fucoidans are complex, fucose-rich sulfated polymers found in brown seaweeds. Fucoidans are popular around the world, particularly in the nutraceutical and pharmaceutical industries, due to their promising medicinal properties. Fucoidans have been shown to have a variety of biological activities, including anti-inflammatory effects. They are known to inhibit inflammatory processes through a variety of mechanisms, including enzyme inhibition and selectin blockade. Inflammation is part of the complicated biological response of living systems to damaging stimuli, and it plays a role in the pathogenesis of a variety of disorders, including arthritis, inflammatory bowel disease, cancer, and allergies. In the current investigation, various fucoidan extracts from Undaria pinnatifida, Fucus vesiculosus, Macrocystis pyrifera, Ascophyllum nodosum, and Laminaria japonica were assessed for inhibition of pro-inflammatory cytokine production (TNF-α, IL-1β, and IL-6) in the LPS-induced human macrophage cell line (THP-1) and human peripheral blood mononuclear cells (PBMCs). Furthermore, we also sought to catalogue these extracts based on their anti-inflammatory effects in the current in-vitro cell model. Materials and Methods: To assess the cytotoxicity of the fucoidan extracts, the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) cell viability assay was performed. Furthermore, a dose-response study of the fucoidan extracts was performed in LPS-induced THP-1 cells and PBMCs after pre-treatment for 24 hours, and levels of TNF-α, IL-1β, and IL-6 cytokines were measured using Enzyme-Linked Immunosorbent Assay (ELISA). Results: The MTT cell viability assay demonstrated that the fucoidan extracts exhibited no evidence of cytotoxicity in THP-1 cells or PBMCs after 48 hours of incubation. The results of the sandwich ELISA revealed that all fucoidan extracts suppressed cytokine production in LPS-stimulated PBMCs and human THP-1 cells in a dose-dependent manner.
Notably, at lower concentrations, the lower molecular weight fucoidan (5-30 kDa) extract from Macrocystis pyrifera was a highly efficient inhibitor of pro-inflammatory cytokines. Fucoidan extracts from all species, including Undaria pinnatifida, Fucus vesiculosus, Macrocystis pyrifera, Ascophyllum nodosum, and Laminaria japonica, exhibited significant anti-inflammatory effects. These findings on several fucoidan extracts provide insight into strategies for improving their efficacy against inflammation-related diseases. Conclusion: In the current research, we have successfully catalogued several fucoidan extracts based on their efficiency in downregulating the key pro-inflammatory cytokines (TNF-α, IL-1β, and IL-6) in LPS-induced macrophages and PBMCs, which are prospective targets in human inflammatory illnesses. Further research would provide more information on the mechanism of action, allowing fucoidan to be tested for therapeutic purposes as an anti-inflammatory medication.

Keywords: fucoidan, PBMCs, THP-1, TNF-α, IL-1β, IL-6, inflammation

Procedia PDF Downloads 59
473 Single Stage “Fix and Flap” Orthoplastic Approach to Severe Open Tibial Fractures: A Systematic Review of the Outcomes

Authors: Taylor Harris

Abstract:

Gustilo-Anderson grade III tibial fractures are exquisitely difficult injuries to manage, as they require extensive soft tissue repair in addition to fracture fixation. These injuries are best managed collaboratively by Orthopedic and Plastic surgeons. While utilizing an Orthoplastics approach has decreased the rates of adverse outcomes in these injuries, there is a large amount of variation in exactly how an Orthoplastics team approaches complex cases such as these. It is sometimes recommended that definitive bone fixation and soft tissue coverage be completed simultaneously in a single-stage manner, but there is a paucity of large-scale studies to provide evidence to support this recommendation. The aim of this study is to report the outcomes of a single-stage "fix-and-flap" approach through a systematic review of the available literature, in the hope of better informing an evidence-based Orthoplastics approach to managing open tibial fractures. A systematic review of the literature was performed. Medline and Google Scholar were used, and all studies published since 2000 in English were included. 103 studies were initially evaluated for inclusion. Reference lists of all included studies were also examined for potentially eligible studies. Gustilo grade III tibial shaft fractures in adults that were managed with a single-stage Orthoplastics approach were identified and evaluated with regard to outcomes of interest. Exclusion criteria included studies with patients <16 years old, case studies, systematic reviews, and meta-analyses. Primary outcomes of interest were the rates of deep infection and rates of limb salvage. Secondary outcomes of interest included time to bone union, rates of non-union, and rates of re-operation. 15 studies were eligible. 11 of these studies reported rates of deep infection as an outcome, with rates ranging from 0.98% to 20%. The pooled rate between studies was 7.34%. 7 studies reported rates of limb salvage, with a range of 96.25% to 100%.
The pooled rate across the associated studies was 97.8%. 6 reported rates of non-union, with a range of 0% to 14% and a pooled rate of 6.6%. 6 reported time to bone union, with a range of 24 to 40.3 weeks and a pooled average of 34.2 weeks, and 4 reported rates of reoperation ranging from 7% to 55%, with a pooled rate of 31.1%. The few studies that compared a single-stage and a multi-stage approach side by side unanimously favored the single-stage approach. Gustilo grade III open tibial fractures managed with an Orthoplastics approach that is specifically completed in a single stage show low rates of adverse outcomes. Large-scale studies of Orthoplastic collaboration that was not completed in strictly a single stage, or was completed in multiple stages, have not reported outcomes as favorable. We recommend not only that Orthopedic surgeons and Plastic surgeons collaborate in the management of severe open tibial fractures, but that they plan definitive fixation and coverage in a single stage for improved outcomes.
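A note on the arithmetic behind pooled rates of this kind: the simplest pooling is the sample-size-weighted proportion, i.e. total events over total participants. Whether this review used that naive pooling or a formal meta-analytic model is not stated, so the sketch below, with invented counts, shows only the naive version:

```python
def pooled_rate(events, totals):
    """Naive pooled proportion: total events over total participants."""
    return sum(events) / sum(totals)

# Hypothetical per-study deep-infection counts (not the review's data)
events = [2, 5, 1]
totals = [60, 80, 40]
print(round(pooled_rate(events, totals), 4))  # 8/180 → 0.0444
```

Unlike a simple average of per-study percentages, this weighting lets larger studies dominate, which is usually the intended behaviour when studies differ widely in size.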

Keywords: orthoplastic, gustilo grade iii, single-stage, trauma, systematic review

Procedia PDF Downloads 86
472 Uncertainty Quantification of Crack Widths and Crack Spacing in Reinforced Concrete

Authors: Marcel Meinhardt, Manfred Keuser, Thomas Braml

Abstract:

Cracking of reinforced concrete is a complex phenomenon induced by direct loads or restraints affecting reinforced concrete structures as soon as the tensile strength of the concrete is exceeded. Hence, it is important to predict where cracks will be located and how they will propagate. The bond theory and the crack formulas in the current design codes, for example DIN EN 1992-1-1, are all based on the assumption that the reinforcement bars are embedded in homogeneous concrete, without taking into account the influence of transverse reinforcement and the real stress situation. However, it can often be observed that real structures such as walls, slabs or beams show a crack spacing that is oriented to the transverse reinforcement bars or to the stirrups. In most Finite Element Analysis studies, the smeared crack approach is used for crack prediction. The disadvantage of this model is that the typical strain localization of a crack at element level cannot be seen. Crack propagation in concrete is a discontinuous process characterized by different factors, such as the initial random distribution of defects or the scatter of material properties. Such behavior presupposes the elaboration of adequate models and methods of simulation, because traditional mechanical approaches deal mainly with average material parameters. This paper is concerned with the modelling of the initiation and propagation of cracks in reinforced concrete structures, considering the influence of transverse reinforcement and the real stress distribution in reinforced concrete (R/C) beams/plates in bending action. Therefore, a parameter study was carried out to investigate: (I) the influence of the transverse reinforcement on the stress distribution in concrete in bending mode and (II) the crack initiation in dependence of the diameter and the spacing of the transverse reinforcement bars.
The numerical investigations of crack initiation and propagation were carried out on a 2D reinforced concrete structure subjected to quasi-static loading and given boundary conditions. To model the uncertainty in the tensile strength of concrete in the Finite Element Analysis, correlated normally and lognormally distributed random fields with different correlation lengths were generated. The paper also presents and discusses different methods to generate random fields, e.g. the Covariance Matrix Decomposition Method. For all computations, a plastic constitutive law with softening was used to model the crack initiation and the damage of the concrete in tension. It was found that the distributions of crack spacing and crack widths are highly dependent on the random field used. These distributions are validated against experimental studies on R/C panels which were carried out at the Laboratory for Structural Engineering at the University of the German Armed Forces in Munich. Also, a recommendation for the parameters of the random field for realistic modelling of the uncertainty of the tensile strength is given. The aim of this research was to show a method in which the localization of strains and cracks, as well as the influence of transverse reinforcement on crack initiation and propagation, can be seen in Finite Element Analysis.
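The Covariance Matrix Decomposition Method mentioned above can be sketched for a one-dimensional field. The squared-exponential covariance, the jitter term and all numerical values here are illustrative assumptions rather than the paper's exact choices; a lognormal field follows by exponentiating a Gaussian field of the log-parameters:

```python
import numpy as np

def gaussian_random_field_1d(x, mean, std, corr_length, rng=None):
    """Sample a 1-D Gaussian random field via covariance matrix decomposition.

    Builds a squared-exponential covariance C(d) = std**2 * exp(-(d/l)**2),
    factorises it with Cholesky, and returns field = mean + L @ z, z ~ N(0, I).
    """
    rng = rng or np.random.default_rng(0)
    d = np.abs(x[:, None] - x[None, :])            # pairwise point distances
    C = std ** 2 * np.exp(-((d / corr_length) ** 2))
    C += 1e-10 * np.eye(len(x))                    # jitter for numerical stability
    L = np.linalg.cholesky(C)
    return mean + L @ rng.standard_normal(len(x))

# Tensile-strength field along a 2 m member: mean 3.0 MPa, std 0.3 MPa,
# correlation length 0.4 m (all values illustrative)
x = np.linspace(0.0, 2.0, 50)
field = gaussian_random_field_1d(x, mean=3.0, std=0.3, corr_length=0.4)
print(field.shape)  # (50,)
```

Shorter correlation lengths produce rougher strength fields, which is exactly the parameter the study varies to observe its effect on the crack-spacing and crack-width distributions.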

Keywords: crack initiation, crack modelling, crack propagation, cracks, numerical simulation, random fields, reinforced concrete, stochastic

Procedia PDF Downloads 158
471 Modelling the Art Historical Canon: The Use of Dynamic Computer Models in Deconstructing the Canon

Authors: Laura M. F. Bertens

Abstract:

There is a long tradition of visually representing the art historical canon, in schematic overviews and diagrams. This is indicative of the desire for scientific, ‘objective’ knowledge of the kind (seemingly) produced in the natural sciences. These diagrams will, however, always retain an element of subjectivity and the modelling methods colour our perception of the represented information. In recent decades visualisations of art historical data, such as hand-drawn diagrams in textbooks, have been extended to include digital, computational tools. These tools significantly increase modelling strength and functionality. As such, they might be used to deconstruct and amend the very problem caused by traditional visualisations of the canon. In this paper, the use of digital tools for modelling the art historical canon is studied, in order to draw attention to the artificial nature of the static models that art historians are presented with in textbooks and lectures, as well as to explore the potential of digital, dynamic tools in creating new models. To study the way diagrams of the canon mediate the represented information, two modelling methods have been used on two case studies of existing diagrams. The tree diagram Stammbaum der neudeutschen Kunst (1823) by Ferdinand Olivier has been translated to a social network using the program Visone, and the famous flow chart Cubism and Abstract Art (1936) by Alfred Barr has been translated to an ontological model using Protégé Ontology Editor. The implications of the modelling decisions have been analysed in an art historical context. The aim of this project has been twofold. On the one hand the translation process makes explicit the design choices in the original diagrams, which reflect hidden assumptions about the Western canon. 
Ways of organizing data (for instance, ordering art according to artist) have come to feel natural and neutral, while implicit biases and the historically uneven distribution of power have resulted in the underrepresentation of groups of artists. Over the last decades, scholars from fields such as Feminist Studies, Postcolonial Studies and Gender Studies have considered this problem and tried to remedy it. The translation presented here adds to this deconstruction by defamiliarizing the traditional models and analysing the process of reconstructing new models, step by step, taking into account theoretical critiques of the canon, such as the feminist perspective discussed by Griselda Pollock, amongst others. On the other hand, the project has served as a pilot study for the use of digital modelling tools in creating dynamic visualisations of the canon for education and museum purposes. Dynamic computer models introduce functionalities that allow new ways of ordering and visualising the artworks in the canon. As such, they could form a powerful tool in the training of new art historians, introducing a broader and more diverse view of the traditional canon. Although modelling will always imply a simplification and therefore a distortion of reality, new modelling techniques can help us get a better sense of the limitations of earlier models and can provide new perspectives on already established knowledge.
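The translation of a static diagram into a network model can be sketched in miniature. The artists and influence relations below are purely illustrative placeholders, not data from Olivier's diagram (the actual translation was done in Visone); the point is that once a diagram becomes a graph, structural measures such as in-degree become computable and the diagram's hidden hierarchies become visible:

```python
# Toy translation of a tree-style influence diagram into a directed graph.
# Nodes and edges are invented placeholders, not data from the case studies.
influences = {
    ("Artist A", "Artist B"),  # read as: A influenced B
    ("Artist A", "Artist C"),
    ("Artist B", "Artist D"),
    ("Artist C", "Artist D"),
}

def in_degree(edges):
    """Count incoming influence links per artist (a simple centrality)."""
    counts = {}
    for source, target in edges:
        counts[target] = counts.get(target, 0) + 1
        counts.setdefault(source, 0)
    return counts

centrality = in_degree(influences)
# "Artist D" receives two influence links; the others receive at most one.
```

Once expressed this way, reordering the canon (e.g. by centrality rather than chronology) is a one-line change, which is precisely the kind of dynamic re-modelling the paper argues for.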

Keywords: canon, ontological modelling, Protégé Ontology Editor, social network modelling, Visone

Procedia PDF Downloads 128
470 Information and Communication Technology Skills of Finnish Students in Particular by Gender

Authors: Antero J. S. Kivinen, Suvi-Sadetta Kaarakainen

Abstract:

Digitalization touches every aspect of contemporary society, changing the way we live our everyday lives. Contemporary society is sometimes described as a knowledge society, confronting people daily with an unprecedented amount of information. The tools to manage this information flow are ICT skills, which comprise both the technical skills and the reflective skills needed to manage incoming information. Schools are therefore under constant pressure to revise their teaching. In the latest Programme for International Student Assessment (PISA), girls outperformed boys in all Organization for Economic Co-operation and Development (OECD) member countries, and the gender gap between girls and boys is widest in Finland. This paper presents results of the Comprehensive Schools in the Digital Age project of RUSE, University of Turku. The project is connected with the Finnish Government's Analysis, Assessment and Research Activities. First, this paper examines gender differences in the ICT skills of Finnish upper comprehensive school students. Second, it explores in which way these differences change when students proceed to upper secondary and vocational education. ICT skills are measured using a performance-based ICT-skill test. Data were collected in three phases: January-March 2017 (upper comprehensive schools, n=5455), September-December 2017 (upper secondary and vocational schools, n~3500) and January-March 2018 (upper comprehensive schools). Upper comprehensive school students are aged 15-16, and upper secondary and vocational school students 16-18. The test is divided into six categories: basic operations, productivity software, social networking and communication, content creation and publishing, applications, and requirements for the ICT study programs. Students also filled in a survey about their ICT usage and the study materials they use at school and at home. Cronbach's alpha was used to estimate the reliability of the ICT-skill test.
Statistical differences between genders were examined using a two-tailed independent-samples t-test. Results of the first data collection from upper comprehensive schools show no statistically significant difference between genders in total ICT-skill test scores (boys 10.24 and girls 10.64, the maximum being 36). Although there was no gender difference in total test scores, there are differences in the six categories mentioned above. Girls score better on school-related and social networking subjects, while boys perform better on more technically oriented subjects. Test scores on basic operations are quite low for both groups. Perhaps this can partly be explained by the fact that the test was taken on computers, whereas the majority of students' ICT usage involves smartphones and tablets. Against this background, it is important to analyze further the reasons for these differences. In the context of the ongoing digitalization of everyday life, and especially of working life, the key purpose of these analyses is to find out how to guarantee adequate ICT skills for all students.
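The gender comparison described above can be sketched as an independent-samples t-test. The scores below are invented placeholders, not the project's data (the reported means were 10.24 for boys and 10.64 for girls); Welch's t-statistic is computed directly from the group means and variances:

```python
from statistics import mean, variance

# Invented placeholder ICT-skill test scores (max 36), not the actual data.
boys  = [8, 12, 10, 9, 11, 10, 12, 9, 10, 11]
girls = [9, 12, 11, 10, 12, 10, 13, 9, 11, 10]

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    va, vb = variance(a), variance(b)       # sample variances
    se = (va / len(a) + vb / len(b)) ** 0.5  # standard error of mean difference
    return (mean(a) - mean(b)) / se

t = welch_t(boys, girls)
# |t| well below ~2.1 here, mirroring the reported non-significant difference.
```

A full two-tailed test would compare |t| to the t-distribution's critical value; with samples the size of the actual study (n=5455), that threshold is roughly 1.96.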

Keywords: basic education, digitalization, gender differences, ICT-skills, upper comprehensive education, upper secondary education, vocational education

Procedia PDF Downloads 135
469 Urban Stratification as a Basis for Analyzing Political Instability: Evidence from Syrian Cities

Authors: Munqeth Othman Agha

Abstract:

The historical formation of urban centres in the eastern Arab world was shaped by rapid urbanization and a sudden transformation from a pre-industrial to a post-industrial economy, coupled with uneven development, informal urban expansion, and constant surges in unemployment and poverty rates. The city was stratified accordingly, as overlapping layers of division and inequality were built on top of each other, creating complex horizontal and vertical divisions along economic, social, political, and ethno-sectarian lines. This was further exacerbated during the neoliberal era, which transformed the city into a sort of dual city inhabited by heterogeneous and often antagonistic social groups. Economic deprivation, combined with a growing sense of marginalization and inequality across the city, planted the seeds of the political instability that broke out in 2011. Unlike other popular uprisings that occupied central squares, as in Egypt and Tunisia, the Syrian uprising of 2011 took place mainly within inner streets and neighborhood squares, mobilizing more or less along the lines of stratification. This has emphasized the role of micro-urban and social settings in shaping mobilization and resistance tactics, which requires us to understand the way the city was stratified and to place that stratification at the center of the city-conflict nexus analysis. This research aims to understand to what extent pre-conflict urban stratification lines played a role in determining the different trajectories of three cities’ neighborhoods (Homs, Dara’a and Deir-ez-Zor). The main argument of the paper is that the way the Syrian city has been stratified created various social groups within the city who have enjoyed different levels of access to life chances, material resources and social status. This determines their relationship with other social groups in the city and, more importantly, their relationship with the state.
The advent of a political opportunity is perceived differently by the city's different social groups according to their perceived interests and threats, which consequently leads to either political mobilization or demobilization. Several factors, including the type of social structures, the built environment, and the state response, determine the ability of social actors to turn the repertoire of contention into collective action, and the transformation of social actors into political actors. The research uses urban stratification lines as the basis for understanding the different patterns of political upheaval in urban areas, while explaining why neighborhoods with different social and urban environment settings had different abilities and capacities to mobilize, resist state repression and then descend into military conflict. It particularly traces the transformation from social groups to social actors and political actors by applying the explaining-outcome process-tracing method to depict the causal mechanisms that led to the inclusion or exclusion of different neighborhoods in each stage of the uprising, namely mobilization (M1), response (M2), and control (M3).

Keywords: urban stratification, Syrian conflict, social movement, process tracing, divided city

Procedia PDF Downloads 73
468 Telomerase, a Biomarker in Oral Cancer Cell Proliferation and Tool for Its Prevention at Initial Stage

Authors: Shaista Suhail

Abstract:

As the cancer population increases sharply, the incidence of oral squamous cell carcinoma (OSCC) is also expected to increase. Oral carcinogenesis is a highly complex, multistep process involving the accumulation of genetic alterations that lead to the induction of proteins promoting cell growth (encoded by oncogenes) and to increased enzymatic (telomerase) activity promoting cancer cell proliferation. The global increase in frequency and mortality, as well as the poor prognosis of oral squamous cell carcinoma, has intensified current research efforts in the field of prevention and early detection of this disease. Advances in the understanding of the molecular basis of oral cancer should help in the identification of new markers. The study of the carcinogenic process of oral cancer, including continued analysis of new genetic alterations along with their temporal sequencing during initiation, promotion and progression, will allow us to identify new diagnostic and prognostic factors, providing a promising basis for the application of more rational and efficient treatments. Telomerase activity is readily found in most cancer biopsies, in premalignant lesions and in germ cells, whereas it is generally absent in normal tissues. It is known to be induced upon immortalization or malignant transformation of human cells, such as oral cancer cells. Maintenance of telomeres plays an essential role during the transformation from a precancerous to a malignant stage. Mammalian telomeres, specialized nucleoprotein structures, are composed of large concatemers of the guanine-rich sequence 5′-TTAGGG-3′. The roles of telomeres in regulating both genome stability and replicative immortality seem to contribute in essential ways to cancer initiation and progression. It is concluded that telomerase activity can be used as a biomarker for the diagnosis of malignant oral cancer and as a target for inactivation in chemotherapy or gene therapy.
Its expression may also prove to be an important diagnostic tool as well as a novel target for cancer therapy. The activation of telomerase may be an important step in tumorigenesis, which could be controlled by inactivating its activity during chemotherapy. The expression and activity of telomerase are indispensable for cancer development. There are currently no drugs that are highly effective in treating oral cancers. There is a general call for new drugs or methods that are highly effective in cancer treatment, possess low toxicity, and have a minor environmental impact. Some novel natural products also offer opportunities for innovation in drug discovery. Natural compounds isolated from medicinal plants, as rich sources of novel anticancer drugs, have been of increasing interest, some with enzyme (telomerase) blocking properties. Alarming reports of cancer cases have increased awareness among clinicians and researchers of the need to investigate newer drugs with low toxicity.

Keywords: oral carcinoma, telomere, telomerase, blockage

Procedia PDF Downloads 175
467 Activation of Apoptosis in the Midgut Epithelium of Spodoptera exigua Hübner (Lepidoptera: Noctuidae) Exposed to Various Cadmium Concentration

Authors: Magdalena Maria Rost-Roszkowska, Alina Chachulska-Żymełka, Monika Tarnawska, Maria Augustyniak, Alina Kafel, Agnieszka Babczyńska

Abstract:

The digestive system of insects is composed of three distinct regions: the fore-, mid- and hindgut. The middle region (the midgut) is treated as one of the barriers protecting the organism against stressors originating from the external environment, e.g. toxic metals. Such factors can activate cell death in epithelial cells to preserve the entire tissue/organ against degeneration. Different mechanisms involved in the maintenance of homeostasis have been described, but studies of animals under field conditions do not offer the opportunity to draw conclusions about the potential ability of subsequent generations to inherit tolerance mechanisms. This is possible only with a multigenerational strain of an animal reared under laboratory conditions and exposed to a selected toxic factor that is also present in polluted ecosystems. The main purpose of the project was to check whether changes that appear in the midgut epithelium after Cd treatment can become fixed over the following generations of insects, with special emphasis on apoptosis. As the animal for these studies we chose the 5th larval stage of the beet armyworm Spodoptera exigua Hübner (Lepidoptera: Noctuidae), which is a pest of many vegetable crops. Animals were divided into several experimental groups: K, Cd, KCd, Cd1, Cd2, Cd3. A control group (K) was fed a standard diet and was maintained for XX generations; a cadmium group (Cd) was fed a standard diet supplemented with cadmium (44 mg Cd per kg of dry weight of food) for XXX generations. A reference Cd group (KCd) was initiated: control insects were fed a Cd-supplemented diet (44 mg Cd per kg of dry weight of food). Experimental groups Cd1, Cd2 and Cd3 were developed from the control one and fed, respectively, 5, 10 and 20 mg Cd per kg of dry weight of food. We were interested in the activation of apoptosis over the following generations in all experimental groups.
Therefore, during the 1st year of the experiment, measurements were made for 6 generations in all experimental groups. The intensity and course of apoptosis were examined using transmission electron microscopy (TEM), confocal microscopy and flow cytometry. During apoptosis the cell started to shrink, extracellular spaces appeared between digestive and neighboring cells, and the nucleus assumed a lobular shape. Eventually, the apoptotic cells were discharged into the midgut lumen. A quantitative analysis revealed that the number of apoptotic cells depends significantly on the generation, the tissue and the cadmium concentration in the insect rearing medium. Over the following 6 generations, we observed that the percentage of apoptotic cells in the midguts from cadmium-exposed groups decreased gradually according to the following order of strains: Cd1, Cd2, Cd3 and KCd. At the same time, it was still higher than the percentage of apoptotic cells in the same tissues of insects from the control and multigenerational cadmium strains. The results of our studies suggest that changes caused by cadmium treatment were preserved during the 6-generational development of lepidopteran larvae. The study was financed by the National Science Centre, Poland, grant no 2016/21/B/NZ8/00831.
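The quantitative comparison described, the percentage of apoptotic cells per strain, can be sketched from raw flow-cytometry event counts. The counts below are invented placeholders whose ordering merely mimics the reported strain ranking; they are not the study's data:

```python
# Hypothetical flow-cytometry counts per strain: (apoptotic events, total events).
# Values are invented; only the ordering mimics the reported trend.
counts = {
    "control": (40, 10_000),
    "Cd1":     (90, 10_000),
    "Cd2":     (130, 10_000),
    "Cd3":     (180, 10_000),
    "KCd":     (240, 10_000),
}

def apoptotic_pct(apoptotic, total):
    """Percentage of apoptotic cells among all gated events."""
    return 100.0 * apoptotic / total

percentages = {strain: apoptotic_pct(a, t) for strain, (a, t) in counts.items()}
# e.g. percentages["Cd3"] == 1.8 (percent of events classified apoptotic)
```

In the actual study such per-group percentages would then be compared statistically across generation, tissue and concentration.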

Keywords: cadmium, cell death, digestive system, ultrastructure

Procedia PDF Downloads 215
466 Developing a Deep Understanding of the Immune Response in Hepatitis B Virus Infected Patients Using a Knowledge Driven Approach

Authors: Hanan Begali, Shahi Dost, Annett Ziegler, Markus Cornberg, Maria-Esther Vidal, Anke R. M. Kraft

Abstract:

Chronic hepatitis B virus (HBV) infection can be treated with nucleos(t)ide analogs (NA), for example, which inhibit HBV replication. However, they have hardly any influence on the functional cure of HBV, which is defined by hepatitis B surface antigen (HBsAg) loss. NA needs to be taken life-long, which is not feasible for all patients worldwide. Additionally, NA-treated patients are still at risk of developing cirrhosis, liver failure, or hepatocellular carcinoma (HCC). Although each patient has the same components of the immune system, immune responses vary between patients. Therefore, a deeper understanding of the immune response against HBV in different patients is necessary to understand the parameters leading to HBV cure and to use this knowledge to optimize HBV therapies. This requires seamless integration of an enormous amount of diverse and fine-grained data on viral markers, e.g., hepatitis B core-related antigen (HBcrAg) and hepatitis B surface antigen (HBsAg). The data integration system relies on the assumption that profiling human immune systems requires the analysis of various variables (e.g., demographic data, treatments, pre-existing conditions, immune cell response, or HLA typing) rather than only one. However, the values of these variables are collected independently. They are presented in a myriad of formats, e.g., Excel files, textual descriptions, lab book notes, and images of flow cytometry dot plots. Additionally, patients can be identified differently across these analyses. This heterogeneity complicates the integration of variables, as data management techniques are needed to create a unified view in which individual formats and identifiers are made transparent when profiling the human immune systems.
The proposed study (HBsRE) aims at integrating heterogeneous data sets of 87 chronically HBV-infected patients, e.g., clinical data, immune cell responses, and HLA typing, together with knowledge encoded in biomedical ontologies and open-source databases, into a knowledge-driven framework. This technique enables us to harmonize and standardize heterogeneous datasets within the defined data integration model, which will be evaluated as a knowledge graph (KG). KGs are data structures that represent knowledge and data as factual statements using a graph data model. Finally, an analytic data model will be applied on top of the KG in order to develop a deeper understanding of the immune profiles of various patients and to evaluate the factors playing a role in a holistic profile of patients with HBsAg loss. Additionally, our objective is to utilize this unified approach to stratify patients for new effective treatments. This study is developed in the context of the project “Transforming big data into knowledge: for deep immune profiling in vaccination, infectious diseases, and transplantation (ImProVIT)”, which brings together a multidisciplinary team of computer scientists, infection biologists, and immunologists.
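A knowledge graph in the sense described stores facts as subject-predicate-object triples. The sketch below uses fabricated patient identifiers and values purely to illustrate the triple data model and a wildcard pattern query; it is not the HBsRE framework itself:

```python
# Minimal triple store: each fact is a (subject, predicate, object) statement.
# All identifiers and values are invented examples, not real patient data.
triples = {
    ("patient:001", "hasTreatment",  "NA"),
    ("patient:001", "hasHBsAgLevel", "low"),
    ("patient:002", "hasTreatment",  "NA"),
    ("patient:002", "hasHBsAgLevel", "high"),
    ("NA",          "inhibits",      "HBV_replication"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

na_patients = query(p="hasTreatment", o="NA")
# Both placeholder patients match the NA-treatment pattern.
```

Because every data source reduces to the same triple shape, heterogeneous inputs (Excel files, lab notes, ontologies) can be merged by mapping each record into triples and unifying the patient identifiers.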

Keywords: chronic hepatitis B infection, immune response, knowledge graphs, ontology

Procedia PDF Downloads 108
465 Lignin Valorization: Techno-Economic Analysis of Three Lignin Conversion Routes

Authors: Iris Vural Gursel, Andrea Ramirez

Abstract:

Effective utilization of lignin is an important means of developing economically profitable biorefineries. Current literature suggests that large amounts of lignin will become available in second-generation biorefineries. New conversion technologies will therefore be needed to carry lignin transformation well beyond combustion for energy, towards high-value products such as chemicals and transportation fuels. In recent years, significant progress in catalysis has been made to improve the transformation of lignin, and new catalytic processes are emerging. In this work, a techno-economic assessment of two of these novel conversion routes was made, together with a comparison to the more established lignin pyrolysis route. The aim is to provide insights into the potential performance and potential hotspots in order to guide experimental research and ease commercialization by identifying cost drivers, strengths, and challenges early. The lignin conversion routes selected for detailed assessment were: (non-catalytic) lignin pyrolysis as the benchmark, direct hydrodeoxygenation (HDO) of lignin, and hydrothermal lignin depolymerisation. The products generated were mixed oxygenated aromatic monomers (MOAMON), light organics, heavy organics, and char. For the technical assessment, a base design followed by process modelling in Aspen was carried out using experimental yields. A design capacity of 200 kt/year of lignin feed was chosen, equivalent to a 1 Mt/year lignocellulosic biorefinery. The downstream equipment was modelled to achieve the separation of the product streams defined. For determining the external utility requirement, heat integration was considered, and when possible, gases were combusted to cover the heating demand. The models were used to generate the necessary data on material and energy flows. Next, an economic assessment was carried out by estimating operating and capital costs. Return on investment (ROI) and payback period (PBP) were used as indicators.
The results of the process modelling indicate that a series of separation steps is required. The downstream processing was found to be especially demanding in the hydrothermal upgrading process due to the presence of a significant amount of unconverted lignin (34%) and water. External utility requirements were also found to be high. Due to the complex separations, the hydrothermal upgrading process showed the highest capital cost (50 M€ more than the benchmark), whereas operating costs were highest for the direct HDO process (20 M€/year more than the benchmark) due to the use of hydrogen. Because of its high yields of valuable heavy organics (32%) and MOAMON (24%), the direct HDO process showed the highest ROI (12%) and the shortest PBP (5 years). This process was found feasible, with a positive net present value; however, it is very sensitive to the prices used in the calculation. The assessments at this stage are associated with large uncertainties. Nevertheless, they are useful for comparing alternatives and identifying whether a certain process should be given further consideration. Among the three processes investigated here, the direct HDO process was seen to be the most promising.
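The two economic indicators used, return on investment and payback period, reduce to simple ratios. The figures below are rounded illustrations loosely echoing the reported direct-HDO result (ROI of about 12%, PBP of about 5 years), not the study's actual cost model:

```python
def roi_pct(annual_net_profit, capital_cost):
    """Return on investment as a percentage of the capital invested."""
    return 100.0 * annual_net_profit / capital_cost

def payback_years(capital_cost, annual_cash_flow):
    """Simple (undiscounted) payback period in years."""
    return capital_cost / annual_cash_flow

# Illustrative round numbers in M EUR; invented, not the assessed values.
capital = 100.0  # total capital cost
profit  = 12.0   # annual net profit
cash    = 20.0   # annual cash flow

roi = roi_pct(profit, capital)       # 12.0 (%)
pbp = payback_years(capital, cash)   # 5.0 (years)
```

Note that the simple PBP ignores discounting; a net-present-value calculation, as used for the feasibility conclusion, additionally discounts each year's cash flow.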

Keywords: biorefinery, economic assessment, lignin conversion, process design

Procedia PDF Downloads 262
464 Investigation of Processing Conditions on Rheological Features of Emulsion Gels and Oleogels Stabilized by Biopolymers

Authors: M. Sarraf, J. E. Moros, M. C. Sánchez

Abstract:

Oleogels are self-standing systems able to trap edible liquid oil in a three-dimensional network, helping to reduce fat usage by structuring the oil with crystallizing oleogelators. There are different ways to achieve oleogelation and oil structuring, including direct dispersion, structured biphasic systems, oil sorption, and the indirect (emulsion-template) method. The selection of processing conditions, as well as the composition of the oleogel, is essential to obtain a stable oleogel with characteristics suitable for its purpose. In this sense, polysaccharides are among the ingredients widely used in food products to produce oleogels and emulsions. Basil seed gum (BSG), obtained from Ocimum basilicum, is a new native polysaccharide which, owing to its high molecular weight, exhibits high viscosity and pseudoplastic behavior in food applications. Also, proteins can stabilize oil-in-water emulsions due to the presence of amino and carboxyl moieties that confer surface activity. Whey proteins are widely used in the food industry because they are available, cheap ingredients with nutritional and functional characteristics, acting as emulsifiers and gelling agents with thickening and water-binding capacity. In general, the interaction of proteins and polysaccharides has a significant effect on food structures and their stability, such as the texture of dairy products, by controlling the interactions in macromolecular systems. Using edible oleogels as oil-structuring agents helps in the targeted delivery of components trapped in the structural network. Therefore, the development of an efficient oleogel is important for the food industry. A complete understanding of the key factors, such as the oil-phase ratio, the processing conditions, and the concentrations of biopolymers that affect the formation and stability of the emulsion, can yield crucial information for the production of a suitable oleogel.
In this research, the effects of the oil concentration and of the pressure used in the manufacture of the emulsion prior to obtaining the oleogel have been evaluated through analysis of the droplet size and rheological properties of the resulting emulsions and oleogels. The results show that emulsions prepared in the high-pressure homogenizer (HPH) at higher pressures have smaller droplet sizes and greater uniformity in the size distribution curve. Regarding the rheological characteristics of the emulsions and oleogels obtained, the predominantly elastic character of the systems must be noted, as they present storage modulus values higher than loss modulus values, also showing an important plateau zone typical of structured systems. Likewise, steady-state viscous flow tests on both emulsions and oleogels confirm that the pressure used in the homogenizer is an important factor in obtaining emulsions with adequate droplet size and, subsequently, the oleogel. Thus, various routes for trapping oil inside a biopolymer matrix with adjustable mechanical properties could be applied to create the three-dimensional network needed to absorb the oil and form the oleogel.
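The viscoelastic claim above, storage modulus G′ exceeding loss modulus G″ across a plateau zone, can be sketched numerically. The moduli below are invented placeholder values for a small-amplitude oscillatory frequency sweep, not measured data; a loss tangent tan δ = G″/G′ below 1 at every point indicates a predominantly elastic, structured system:

```python
# Hypothetical oscillatory shear moduli (Pa) across a frequency sweep.
g_storage = [950.0, 1000.0, 1040.0, 1080.0]  # G' (elastic), placeholder values
g_loss    = [180.0, 190.0, 200.0, 210.0]     # G'' (viscous), placeholder values

# Loss tangent tan(delta) = G'' / G' at each frequency point.
tan_delta = [gpp / gp for gp, gpp in zip(g_storage, g_loss)]

elastic_dominated = all(t < 1.0 for t in tan_delta)
# True here: G' > G'' throughout, i.e. a predominantly elastic plateau.
```

The nearly constant G′ across the sweep is what the abstract's "plateau zone" refers to; a weakly structured emulsion would instead show G′ dropping toward (or below) G″ at low frequencies.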

Keywords: basil seed gum, particle size, viscoelastic properties, whey protein

Procedia PDF Downloads 66
463 The Effect of Post Spinal Hypotension on Cerebral Oxygenation Using Near-Infrared Spectroscopy and Neonatal Outcomes in Full Term Parturient Undergoing Lower Segment Caesarean Section: A Prospective Observational Study

Authors: Shailendra Kumar, Lokesh Kashyap, Puneet Khanna, Nishant Patel, Rakesh Kumar, Arshad Ayub, Kelika Prakash, Yudhyavir Singh, Krithikabrindha V.

Abstract:

Introduction: Spinal anesthesia is considered the standard anesthesia technique for caesarean delivery. The incidence of spinal hypotension during caesarean delivery is 70-80%. Spinal hypotension may cause cerebral hypoperfusion in the mother, but cerebral autoregulatory mechanisms physiologically prevent cerebral hypoxia: cerebral blood flow remains constant over the 50-150 mmHg range of cerebral perfusion pressure (CPP). Near-infrared spectroscopy (NIRS) is a non-invasive technology used to detect cerebral desaturation events (CDEs) immediately, compared with other conventional intraoperative monitoring techniques. Objective: The primary aim of the study is to correlate the change in cerebral oxygen saturation measured by NIRS with the fall in mean blood pressure after spinal anaesthesia, and to determine the effects of spinal hypotension on the neonatal APGAR score, neonatal acid-base variations, and the presence of postoperative delirium (POD). Methodology: NIRS sensors were attached to the forehead of all patients, and baseline readings of cerebral oxygenation in the right and left frontal regions and of mean blood pressure were noted. The subarachnoid block was given with hyperbaric 0.5% bupivacaine plus fentanyl, the dose being determined by the individual anaesthesiologist. Co-loading with IV crystalloid solutions was given to the patient. Blood pressure readings and cerebral saturation were recorded every minute for 30 minutes. Hypotension was defined as a fall in MAP of more than 20% from baseline values. Patients developing hypotension were treated with an IV bolus of phenylephrine/ephedrine. Umbilical cord blood samples were taken for blood gas analysis, and the neonatal APGAR score was noted by a neonatologist. Study design: A prospective observational study conducted in a population of thirty ASA 2 and 3 parturients scheduled for lower segment caesarean section (LSCS).
Results: The mean fall in regional cerebral saturation was 28.48 ± 14.7%, against a mean fall in blood pressure of 38.92 ± 8.44 mm Hg. The correlation coefficient between the fall in saturation and the fall in mean blood pressure after the subarachnoid block was 0.057 (p = 0.7). The fall in regional cerebral saturation occurred 2 ± 1 min before the fall in mean blood pressure. Twenty-nine out of thirty patients required vasopressors during hypotension; the first vasopressor dose was needed 6.02 ± 2 min after the block. The mean APGAR scores were 7.86 and 9.74 at 1 and 5 min after birth, respectively, with a mean umbilical arterial pH of 7.3 ± 0.1. According to the DRS-98 (Delirium Rating Scale), the mean delirium rating scores on postoperative days 1 and 2 were 0.1 and 0.7, respectively. Discussion: There was a fall in regional cerebral oxygen saturation that began before the significant fall in mean blood pressure readings, but the association was not statistically significant. The maximal fall in blood pressure requiring vasopressors occurred within 10 min of the SAB. Neonatal APGAR scores and acid-base variations were in the normal range despite maternal hypotension, and there was no incidence of postoperative delirium in patients with post-spinal hypotension.
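The reported association, r = 0.057 between the fall in cerebral saturation and the fall in MAP, is a Pearson correlation. The paired values below are invented placeholders for illustration, not patient data; the coefficient is computed directly from its covariance definition:

```python
from statistics import mean

# Invented paired observations (not patient data):
# fall in regional cerebral saturation (%) vs fall in MAP (mm Hg).
sat_fall = [25.0, 30.0, 28.0, 35.0, 22.0]
map_fall = [38.0, 35.0, 42.0, 39.0, 41.0]

def pearson_r(x, y):
    """Pearson correlation coefficient from its covariance definition."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(sat_fall, map_fall)
# r always lies in [-1, 1]; a value near zero, like the reported 0.057,
# would indicate essentially no linear association between the two falls.
```

With the study's n = 30, an r of 0.057 corresponds to a p-value far above 0.05, consistent with the reported p = 0.7.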

Keywords: cerebral oxygenation, LSCS, NIRS, spinal hypotension

Procedia PDF Downloads 69
462 Supply Chain Improvement of the Halal Goat Industry in the Autonomous Region in Muslim Mindanao

Authors: Josephine R. Migalbin

Abstract:

Halal is an Arabic word meaning "lawful" or "permitted". When it comes to food and consumables, halal is the dietary standard of Muslims. The Autonomous Region in Muslim Mindanao (ARMM) has a comparative advantage in the halal industry because it is the only Muslim region in the Philippines and the natural starting point for the establishment of a halal industry in the country. The region has identified goat production not only for domestic consumption but also for the export market. Goat production is one of its strengths due to cultural compatibility, and there is high demand for goats during Ramadhan and Eid ul-Adha. The study aimed to provide an overview of the ARMM halal goat industry; to map out the specific supply chain of halal goat; and to analyze the performance of the halal goat supply chain in terms of efficiency, flexibility, and overall responsiveness. It also aimed to identify behavioural, institutional, and process-related areas for improvement in the supply chain, in order to provide recommendations for more efficient and effective production and marketing of halal goats and thereby improve the situation of the actors in the supply chain. Generally, the raising of goats is characterized by backyard production (92.02%). Four interrelated factors significantly affect goat production: breeding prolificacy, prevalence of diseases, feed abundance and pre-weaning mortality rate. The institutional buyers are mostly traders, restaurants/eateries, supermarkets, and meat shops, among others. The municipalities of Midsayap and Pikit, in a neighbouring region, and Parang, among other ARMM municipalities, are the major sources of goats. In addition to these major supply centers, Siquijor, an island province in the Visayas, is becoming a key source of goats. Goats are usually gathered by traders/middlemen and brought to the public markets.
Meat vendors purchase them directly from raisers; the goats are slaughtered and sold fresh in wet markets. It was observed that demand is increasing at 2% per year and that supply is not enough to meet it. The farm gate price is 2.04 USD to 2.11 USD/kg liveweight. Industry information is shared by three key participants: raisers, traders and buyers. All respondents reported that information exchange is personal, built upon past experience, and that there is no full disclosure of information among the key participants in the chain. The information flow in the industry is fragmented, such that no total industry picture exists. In the last five years, numerous local and foreign agencies have undertaken several initiatives for the development of the halal goat industry in ARMM. The major issues include productivity, which is the greatest challenge, difficulties in accessing technical support channels, and the lack of market linkage and consolidation. To address the various issues and concerns of the industry players, there is a need to intensify appropriate technology transfer through extension activities, improve marketing channels by grouping producers, strengthen veterinary services, and provide capital windows to improve facilities and reduce logistics and transaction costs in the entire supply chain.

Keywords: autonomous region in Muslim Mindanao, halal, halal goat industry, supply chain improvement

Procedia PDF Downloads 335
461 Assessment of Environmental Mercury Contamination from an Old Mercury Processing Plant 'Thor Chemicals' in Cato Ridge, KwaZulu-Natal, South Africa

Authors: Yohana Fessehazion

Abstract:

Mercury is a prominent example of a heavy metal contaminant in the environment, and it has been extensively investigated for its potential health risk to humans and other organisms. In South Africa, massive mercury contamination occurred in the 1980s when an England-based mercury reclamation processing plant relocated to Cato Ridge, KwaZulu-Natal Province, and discharged mercury waste into the Mngceweni River. This discharge resulted in mercury concentrations that exceeded acceptable levels in the Mngceweni River, the Umgeni River, and the hair of nearby villagers. This environmental issue raised the alarm, and over the years several environmental assessments reported the dire environmental crisis caused by Thor Chemicals (now known as Metallica Chemicals) and urged the immediate removal of the roughly 3,000 tons of mercury waste stored in the factory's storage facility for over two decades. The recent theft of containers holding the toxic substance from the Thor Chemicals warehouse, and the subsequent fire that ravaged the facility, put the factory further under scrutiny and escalated the urgency of removing the deadly mercury waste left behind. This project aims to investigate the mercury contamination leaking from the old Thor Chemicals mercury processing plant. The focus will be on sediments, water, terrestrial plants, and aquatic weeds, such as the prominent water hyacinth, in the nearby water systems of the Mngceweni River, Umgeni River, and Inanda Dam, used as bio-indicators and phytoremediators of mercury pollution. Samples will be collected in spring, around October, when conditions are favourable for microbial activity to methylate mercury incorporated in sediments and when some aquatic weeds, particularly water hyacinth, are in their blooming season. Samples of soil, sediment, water, terrestrial plants, and aquatic weeds will be collected at each sample site, from the point source (Thor Chemicals), the Mngceweni River, the Umgeni River, and the Inanda Dam.
One-way analysis of variance (ANOVA) tests will be conducted to determine any significant differences in Hg concentration among the sampling sites, followed by a Least Significant Difference post hoc test to determine whether mercury contamination varies with distance from the source point of pollution. Flow injection atomic spectrometry (FIAS) analysis will also be used to compare mercury sequestration between different plant tissues (roots and stems). Principal component analysis is also envisaged to determine the relationship between the source of mercury pollution and each of the sampling points (the Umgeni and Mngceweni Rivers and the Inanda Dam). All Hg values will be expressed in µg/L or µg/g in order to compare the results with previous studies and regulatory standards. Sediments are expected to show relatively higher levels of Hg than soils, and aquatic macrophytes, particularly water hyacinth, are expected to accumulate higher concentrations of mercury than terrestrial plants and crops.
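The ANOVA and LSD post hoc comparisons described above can be sketched in Python. This is a minimal illustration only: the site names and Hg concentrations below are hypothetical assumptions, not study data. It uses `scipy.stats.f_oneway` for the one-way ANOVA and implements Fisher's LSD as pairwise t-tests on the pooled within-group variance.

```python
# Minimal sketch of the planned statistical workflow.
# NOTE: all Hg values below are hypothetical, for illustration only.
import itertools

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical sediment Hg concentrations (µg/g), n = 10 per sampling site,
# assumed to decline with distance from the point source.
sites = {
    "Thor_source": rng.normal(5.0, 0.8, 10),
    "Mngceweni_River": rng.normal(3.0, 0.8, 10),
    "Umgeni_River": rng.normal(1.5, 0.8, 10),
    "Inanda_Dam": rng.normal(0.8, 0.8, 10),
}

# One-way ANOVA: do mean Hg concentrations differ among the sampling sites?
f_stat, p_value = stats.f_oneway(*sites.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.2e}")

# Fisher's LSD post hoc: pairwise t-tests using the pooled within-group
# variance (MSE) from the ANOVA, run only when the ANOVA is significant.
n_total = sum(len(v) for v in sites.values())
k = len(sites)
mse = sum(((v - v.mean()) ** 2).sum() for v in sites.values()) / (n_total - k)
if p_value < 0.05:
    for (name_a, a), (name_b, b) in itertools.combinations(sites.items(), 2):
        t = (a.mean() - b.mean()) / np.sqrt(mse * (1 / len(a) + 1 / len(b)))
        p = 2 * stats.t.sf(abs(t), df=n_total - k)
        print(f"{name_a} vs {name_b}: p = {p:.4f}")
```

In the same spirit, `sklearn.decomposition.PCA` could be applied to the site-by-measurement matrix for the envisaged principal component analysis.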

Keywords: mercury, phytoremediation, Thor chemicals, water hyacinth

Procedia PDF Downloads 223