Search results for: food frequency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7403

353 Superparamagnetic Sensor with Lateral Flow Immunoassays as Platforms for Biomarker Quantification

Authors: M. Salvador, J. C. Martinez-Garcia, A. Moyano, M. C. Blanco-Lopez, M. Rivas

Abstract:

Biosensors play a crucial role in the detection of molecules nowadays due to their user-friendliness, high selectivity, real-time analysis, and suitability for in-situ applications. Among them, lateral flow immunoassays (LFIAs) stand out among point-of-care bioassay technologies for their affordability, portability, and low cost. They have been widely used for the detection of a vast range of biomarkers, which include not only proteins but also nucleic acids and even whole cells. Although the LFIA has traditionally been a positive/negative test, tremendous efforts are being made to add quantification capability to the method, based on the combination of suitable labels and a proper sensor. One of the most successful approaches involves the use of magnetic sensors for the detection of magnetic labels. Bringing together the required characteristics mentioned above, our research group has developed a biosensor to detect biomolecules, in which superparamagnetic nanoparticles (SPNPs) together with LFIAs play the fundamental roles. SPNPs are detected through their interaction with a high-frequency current flowing on a printed micro track: the presence of the SPNPs causes an instantaneous and proportional variation of the track impedance, from which a rapid, quantitative measurement of the number of particles can be obtained. This mode of detection requires no external magnetic field, which reduces device complexity. On the other hand, the major limitation of LFIAs is that they are only qualitative or semi-quantitative when traditional gold or latex nanoparticles are used as color labels. Moreover, the need for constant ambient conditions to obtain reproducible results, the detection of nanoparticles only on the surface of the membrane, and the short durability of the signal are drawbacks that can be advantageously overcome with the design of magnetically labeled LFIAs. The approach followed was to coat the SPNPs, via chemical bonds, with a specific monoclonal antibody that targets the protein under consideration. Then, a sandwich-type immunoassay was prepared by printing onto the nitrocellulose membrane strip a second antibody against a different epitope of the protein (test line) and an IgG antibody (control line). When the sample flows along the strip, the SPNP-labeled proteins are immobilized at the test line, which provides a magnetic signal as described above. Preliminary results using this practical combination for the detection and quantification of Prostate-Specific Antigen (PSA) show the validity and consistency of the technique in the clinical range, where a PSA level of 4.0 ng/mL is the established upper normal limit. Moreover, a limit of detection (LOD) of 0.25 ng/mL was calculated with a confidence factor of 3, according to the IUPAC Gold Book definition. The versatility of the approach has also been proven with the detection of other biomolecules such as troponin I (a cardiac injury biomarker) and histamine.
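
The abstract cites the IUPAC Gold Book definition without giving the formula; a minimal sketch follows, with made-up calibration numbers rather than the study's data, showing how the LOD follows from the blank noise and the calibration slope:

```python
import numpy as np

# Hypothetical calibration data: PSA concentration (ng/mL) vs. sensor
# impedance change (arbitrary units); values are illustrative only.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
signal = np.array([12.1, 23.8, 47.5, 96.2, 190.4])

# Least-squares calibration line: signal = slope * conc + intercept
slope, intercept = np.polyfit(conc, signal, 1)

# Standard deviation of repeated blank measurements (illustrative)
blank_sd = np.std([0.9, 1.3, 1.1, 0.8, 1.0], ddof=1)

# IUPAC definition: LOD = k * s_blank / slope, with confidence factor k = 3
lod = 3 * blank_sd / slope
print(f"slope = {slope:.2f} AU per ng/mL, LOD = {lod:.3f} ng/mL")
```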

Keywords: biosensor, lateral flow immunoassays, point-of-care devices, superparamagnetic nanoparticles

Procedia PDF Downloads 229
352 Ibrutinib and the Potential Risk of Cardiac Failure: A Review of Pharmacovigilance Data

Authors: Abdulaziz Alakeel, Roaa Alamri, Abdulrahman Alomair, Mohammed Fouda

Abstract:

Introduction: Ibrutinib is a selective, potent, and irreversible small-molecule inhibitor of Bruton's tyrosine kinase (BTK). It forms a covalent bond with a cysteine residue (CYS-481) at the active site of BTK, leading to inhibition of BTK enzymatic activity. The drug is indicated to treat certain types of cancer, such as mantle cell lymphoma (MCL), chronic lymphocytic leukaemia, and Waldenström's macroglobulinaemia (WM). Cardiac failure is a condition in which the heart muscle is unable to pump adequate blood to the body's organs. There are multiple types of cardiac failure, including left- and right-sided heart failure and systolic and diastolic heart failure. The aim of this review is to evaluate the risk of cardiac failure associated with the use of ibrutinib and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the National Pharmacovigilance Center (NPC) of the Saudi Food and Drug Authority (SFDA) performed a comprehensive signal review using its national database as well as the World Health Organization (WHO) database (VigiBase) to retrieve information for assessing the causality between cardiac failure and ibrutinib. We used the WHO-Uppsala Monitoring Centre (UMC) criteria as the standard for assessing the causality of the reported cases. Results: Case review: 212 global individual case safety reports (ICSRs) were retrieved for the drug/adverse drug reaction combination as of July 2020. The reviewers selected and assessed causality for the well-documented ICSRs with completeness scores of 0.9 and above (35 ICSRs), where 1.0 represents the highest score for the best-documented ICSRs. Among the reviewed cases, more than half provide a supportive association (four probable and 15 possible cases). Data mining: The disproportionality between the observed and expected reporting rates for the drug/adverse drug reaction pair is estimated using the information component (IC), a measure developed by WHO-UMC. A positive IC reflects a higher statistical association, while negative values indicate a lower one, with the null value equal to zero. The result (IC = 1.5) revealed a positive statistical association for the drug/ADR combination, meaning that 'ibrutinib' with 'cardiac failure' has been reported more often than expected when compared to other medications in the WHO database. Conclusion: Health regulators and healthcare professionals must be aware of the potential risk of cardiac failure associated with ibrutinib, and monitoring for any signs or symptoms in treated patients is essential. The weighted cumulative evidence identified from the causality assessment of the reported cases and from data mining is sufficient to support a causal association between ibrutinib and cardiac failure.
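
The abstract reports the IC value without its formula; a minimal sketch of the standard WHO-UMC shrinkage form is given below, with illustrative counts (not the actual VigiBase figures):

```python
import math

def information_component(n_obs, n_drug, n_reaction, n_total):
    """IC = log2((N_observed + 0.5) / (N_expected + 0.5)),
    where N_expected = N_drug * N_reaction / N_total.
    The +0.5 shrinkage stabilizes the estimate for rare pairs."""
    n_exp = n_drug * n_reaction / n_total
    return math.log2((n_obs + 0.5) / (n_exp + 0.5))

# Illustrative counts only: 212 ibrutinib/cardiac-failure reports against
# hypothetical totals chosen so the example lands near the reported IC.
ic = information_component(n_obs=212, n_drug=40_000,
                           n_reaction=150_000, n_total=80_000_000)
print(f"IC = {ic:.2f}")  # positive => reported more often than expected
```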

Keywords: cardiac failure, drug safety, ibrutinib, pharmacovigilance, signal detection

Procedia PDF Downloads 123
351 Quantitative Comparisons of Different Approaches for Rotor Identification

Authors: Elizabeth M. Annoni, Elena G. Tolkacheva

Abstract:

Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia and a known prognostic marker for stroke, heart failure, and death. Reentrant mechanisms of rotor formation, which are stable electrical sources of cardiac excitation, are believed to cause AF. No existing commercial mapping system has been demonstrated to consistently and accurately predict rotor locations outside of the pulmonary veins in patients with persistent AF. There is a clear need for robust spatio-temporal techniques that can consistently identify rotors using unique characteristics of the electrical recordings at the pivot point and that can be applied to clinical intracardiac mapping. Recently, we have developed four new signal analysis approaches – Shannon entropy (SE), kurtosis (Kt), multi-scale frequency (MSF), and multi-scale entropy (MSE) – to identify the pivot points of rotors. These proposed techniques utilize cardiac signal characteristics other than local activation to uncover the intrinsic complexity of the electrical activity in the rotors, which is not taken into account in current mapping methods. We validated these techniques using high-resolution optical mapping experiments in which direct visualization and identification of rotors in ex-vivo Langendorff-perfused hearts were possible. Episodes of ventricular tachycardia (VT) were induced using burst pacing, and two examples of rotors were used, showing 3-sec episodes of a single stationary rotor and of figure-8 reentry with one rotor stationary and one meandering. Movies were captured at a rate of 600 frames per second for 3 sec at 64x64 pixel resolution. These optical mapping movies were used to evaluate the performance and robustness of the SE, Kt, MSF, and MSE techniques with respect to the following clinical limitations: different durations of recordings, different spatial resolutions, and the presence of meandering rotors. To quantitatively compare the results, the SE, Kt, MSF, and MSE techniques were compared to the 'true' rotor(s) identified using the phase map. Accuracy was calculated for each approach as the duration of the time series and the spatial resolution were reduced. The time series duration was decreased from its original length of 3 sec down to 2, 1, and 0.5 sec. The spatial resolution of the original VT episodes was decreased from 64x64 pixels to 32x32, 16x16, and 8x8 pixels by uniformly removing pixels from the optical mapping video. Our results demonstrate that Kt, MSF, and MSE were able to accurately identify the pivot point of the rotor under all three clinical limitations. The MSE approach demonstrated the best overall performance, but Kt was the best at identifying the pivot point of the meandering rotor. Artifacts mildly affected the performance of the Kt, MSF, and MSE techniques but had a strong negative impact on the performance of SE. The results of our study motivate further validation of the SE, Kt, MSF, and MSE techniques using intra-atrial electrograms from paroxysmal and persistent AF patients to see whether these approaches can identify pivot points in a clinical setting. More accurate rotor localization could significantly increase the efficacy of catheter ablation to treat AF, resulting in a higher success rate for single procedures.
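
The abstract names the four measures without defining them; below is a minimal sketch of the simplest one, a per-pixel Shannon entropy map over an optical-mapping movie, run on synthetic data (the quadratic pixel loop is kept for clarity rather than speed):

```python
import numpy as np

def shannon_entropy_map(movie, n_bins=32):
    """Shannon entropy of each pixel's time series in an optical-mapping
    movie of shape (frames, height, width). Higher entropy is expected
    near a rotor's pivot point, where the signal is most complex."""
    frames, h, w = movie.shape
    entropy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            hist, _ = np.histogram(movie[:, i, j], bins=n_bins)
            p = hist / hist.sum()
            p = p[p > 0]
            entropy[i, j] = -np.sum(p * np.log2(p))
    return entropy

# Example: 3 s at 600 fps, 64x64 pixels of synthetic data
movie = np.random.rand(1800, 64, 64)
se_map = shannon_entropy_map(movie)
pivot = np.unravel_index(np.argmax(se_map), se_map.shape)
```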

Keywords: atrial fibrillation, optical mapping, signal processing, rotors

Procedia PDF Downloads 321
350 Radiofrequency and Near-Infrared Responsive Core-Shell Multifunctional Nanostructures Using Lipid Templates for Cancer Theranostics

Authors: Animesh Pan, Geoffrey D. Bothun

Abstract:

With the development of nanotechnology, research in multifunctional delivery systems has gained new pace and dimension. An emerging challenge is to design an all-in-one delivery system that can be used for multiple purposes, including tumor-targeted therapy, radio-frequency (RF)-, near-infrared (NIR)-, light-, or pH-induced controlled release, photothermal therapy (PTT), photodynamic therapy (PDT), and medical diagnosis. In this regard, various inorganic nanoparticles (NPs) are known to show great potential as the 'functional components' because of their fascinating and tunable physicochemical properties and the possibility of multiple theranostic modalities from individual NPs. Magnetic, luminescent, and plasmonic properties are the three most extensively studied and, more importantly, biomedically exploitable properties of inorganic NPs. Although successful attempts to combine any two of the above-mentioned functionalities have been made, integrating them in one system has remained a challenge. With this in mind, the controlled design of complex colloidal nanoparticle systems is one of the most significant challenges in nanoscience and nanotechnology, and systematic and planned studies providing better insight are needed. We report a multifunctional liposome-based delivery platform loaded with a drug and iron-oxide magnetic nanoparticles (MNPs) and carrying a gold shell on the liposome surface, synthesized using a lipid-with-polyelectrolyte (layersome) templating technique. The MNPs and the anti-cancer drug doxorubicin (DOX) were co-encapsulated inside liposomes composed of zwitterionic phosphatidylcholine and anionic phosphatidylglycerol using the reverse-phase evaporation (REV) method. The liposomes were coated with a positively charged polyelectrolyte (poly-L-lysine) to enrich the interface with gold anions, exposed to a reducing agent to form a gold nanoshell, and then capped with thiol-terminated polyethylene glycol (SH-PEG2000). The core-shell nanostructures were characterized by techniques such as UV-Vis/NIR scanning spectrophotometry, dynamic light scattering (DLS), and transmission electron microscopy (TEM). This multifunctional system achieves a variety of functions, such as radiofrequency (RF)-triggered release, chemo-hyperthermia, and NIR laser-triggered photothermal therapy. Herein, we highlight some of the remaining major design challenges, in combination with preliminary studies assessing therapeutic objectives. We demonstrate an efficient loading and delivery system that induces significant cell death in human cancer cells (A549), with therapeutic capabilities. Coupling RF and NIR excitation with the doxorubicin-loaded core-shell nanostructure helped secure targeted and controlled drug release to the cancer cells. The present core-shell multifunctional system, with its multimodal imaging and therapeutic capabilities, is an excellent candidate for cancer theranostics.

Keywords: cancer theranostics, multifunctional nanostructure, photothermal therapy, radiofrequency targeting

Procedia PDF Downloads 123
349 An Interoperability Concept for Detect and Avoid and Collision Avoidance Systems: Results from a Human-In-The-Loop Simulation

Authors: Robert Rorie, Lisa Fern

Abstract:

The integration of Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS) poses a variety of technical challenges to UAS developers and aviation regulators. In response to growing demand for access to civil airspace in the United States, the Federal Aviation Administration (FAA) has produced a roadmap identifying key areas requiring further research and development. One such technical challenge is the development of a ‘detect and avoid’ system (DAA; previously referred to as ‘sense and avoid’) to replace the ‘see and avoid’ requirement in manned aviation. The purpose of the DAA system is to support the pilot, situated at a ground control station (GCS) rather than in the cockpit of the aircraft, in maintaining ‘well clear’ of nearby aircraft through the use of GCS displays and alerts. In addition to its primary function of aiding the pilot in maintaining well clear, the DAA system must also safely interoperate with existing NAS systems and operations, such as the airspace management procedures of air traffic controllers (ATC) and collision avoidance (CA) systems currently in use by manned aircraft, namely the Traffic alert and Collision Avoidance System (TCAS) II. It is anticipated that many UAS architectures will integrate both a DAA system and a TCAS II. It is therefore necessary to explicitly study the integration of DAA and TCAS II alerting structures and maneuver guidance formats to ensure that pilots understand the appropriate type and urgency of their response to the various alerts. This paper presents a concept of interoperability for the two systems. The concept was developed with the goal of avoiding any negative impact on the performance level of TCAS II (understanding that TCAS II must largely be left as-is) while retaining a DAA system that still effectively enables pilots to maintain well clear, and, as a result, successfully reduces the frequency of collision hazards. The interoperability concept described in the paper focuses primarily on facilitating the transition from a late-stage DAA encounter (where a loss of well clear is imminent) to a TCAS II corrective Resolution Advisory (RA), which requires pilot compliance with the directive RA guidance (e.g., climb, descend) within five seconds of its issuance. The interoperability concept was presented to 10 participants (6 active UAS pilots and 4 active commercial pilots) in a medium-fidelity, human-in-the-loop simulation designed to stress different aspects of the DAA and TCAS II systems. Pilot response times, compliance rates and subjective assessments were recorded. Results indicated that pilots exhibited comprehension of, and appropriate prioritization within, the DAA-TCAS II combined alert structure. Pilots demonstrated a high rate of compliance with TCAS II RAs and were also seen to respond to corrective RAs within the five second requirement established for manned aircraft. The DAA system presented under test was also shown to be effective in supporting pilots’ ability to maintain well clear in the overwhelming majority of cases in which pilots had sufficient time to respond. The paper ends with a discussion of next steps for research on integrating UAS into civil airspace.

Keywords: detect and avoid, interoperability, traffic alert and collision avoidance system (TCAS II), unmanned aircraft systems

Procedia PDF Downloads 267
348 Determinants of Domestic Violence among Married Women Aged 15-49 Years in Sierra Leone by an Intimate Partner: A Cross-Sectional Study

Authors: Tesfaldet Mekonnen Estifanos, Chen Hui, Afewerki Weldezgi

Abstract:

Background: Intimate partner violence (hereafter IPV) is a major global public health challenge that tortures and disables women in the place where they ought to be most secure: within their own families. Because the family unit is commonly viewed as a private sphere, violent acts towards women remain underreported. There is limited research and knowledge about the factors linked to IPV in Sierra Leone. This study, therefore, estimates the prevalence rate and the predictors associated with IPV. Methods: Data were taken from the Sierra Leone Demographic and Health Survey (SDHS, 2013), the first of its kind to incorporate information on domestic violence. A multistage cluster sampling design was used, and information was gathered by a standard questionnaire. A total of 5,185 selected respondents were interviewed, of whom 870 had never been in a union and were thus excluded. To analyze the two dependent variables, experience of IPV 'ever' and 'in the 12 months prior to the survey', 4,315 women (currently or formerly married) and 4,029 women (currently in union) were included, respectively. These dependent variables were constructed from the three forms of violence, namely physical, emotional, and sexual. Data analysis was performed using SPSS version 23 in a three-step process. First, descriptive statistics were used to show the frequency distribution of both the outcome and explanatory variables. Second, bivariate analysis using the chi-square test was applied to assess the individual relationship between the outcome and explanatory variables. Third, multivariate logistic regression analysis was undertaken using a hierarchical modeling strategy to identify the influence of the explanatory variables on the outcome variables. Odds ratios (OR) and 95% confidence intervals (CI) were used to examine the associations, with p-values less than 0.05 considered statistically significant. Results: The prevalence of lifetime IPV among ever-married women was 48.4%, while 39.8% of those currently married experienced IPV in the year preceding the survey. Women with one to four, or five or more, children ever born were more likely to experience lifetime IPV. However, women who own property, and those who cited three to five reasons for which wife-beating is acceptable, were less likely to experience lifetime IPV. Witnessing parental violence, a partner's controlling marital behavior, and being afraid of one's partner were associated with both the experience of IPV 'ever' and 'in the year prior to the survey'. Respondents who agree that wife-beating is justified in certain situations, and those in occupations under the professional category, had lower odds of reporting IPV in the year prior to data collection. Conclusion: This study indicated that the factors significantly correlated with IPV in Sierra Leone are mostly husband-related, specifically marital controlling behaviors. Addressing IPV in Sierra Leone requires joint efforts that target men, raise awareness of controlling behavior, and promote women's security within relationships.
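
The abstract specifies the model (multivariate logistic regression reporting ORs and 95% CIs) but no code; a minimal sketch of that third analysis step follows, assuming a cleaned SDHS extract with hypothetical column names (the real DHS recode variables differ):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns for a cleaned SDHS extract: ipv_ever = 1 if any
# physical, emotional, or sexual partner violence was ever reported.
df = pd.read_csv("sdhs_2013_women.csv")

model = smf.logit(
    "ipv_ever ~ C(owns_property) + C(parental_violence)"
    " + C(partner_controlling) + C(afraid_of_partner) + children_born",
    data=df,
).fit()

# Odds ratios and 95% CIs, the effect measures reported in the study
ors = np.exp(model.params)
ci = np.exp(model.conf_int().rename(columns={0: "2.5%", 1: "97.5%"}))
print(pd.concat([ors.rename("OR"), ci], axis=1))
```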

Keywords: husband behavior, married women, partner violence, Sierra Leone

Procedia PDF Downloads 129
347 A User-Side Analysis of the Public-Private Partnership: The Case of the New Bundang Subway Line in South Korea

Authors: Saiful Islam, Deuk Jong Bae

Abstract:

The purpose of this study is to examine citizen satisfaction and the competitiveness of a public-private partnership (PPP) project. The study focuses on PPP in the transport sector and investigates the New Bundang Subway Line (NBL) in South Korea as the object of a case study. Most PPP studies are dominated by the study of public- and private-sector interests, which are classified into three major areas: policy, finance, and management. This study explores the user perspective by assessing customer satisfaction with NBL cost and service quality, as well as the competitiveness of the NBL compared to other transport modes serving the Jeongja – Gangnam trip or vice versa. The regular Bundang Subway Line, the New Bundang Subway Line, bus, and private vehicle were selected as the alternative transport modes. The study analysed customer satisfaction with the NBL and citizens' preferences among alternative transport modes based on a survey in Bundang district, South Korea. Respondents were residents and employees who live or work in Bundang city and were divided into the following areas: Pangyo, Jeongjae – Sunae, Migeun – Ori – Jukjeon, and Imae – Yatap – Songnam. The survey was conducted in January 2015 over two weeks, and 753 responses were gathered. By applying the hedonic utility approach, the factors affecting the frequency of using the NBL were found to be overall customer satisfaction, convenience of access, and the socio-economic and demographic characteristics of the individual. In addition, by applying the Analytic Hierarchy Process (AHP) method, the criteria influencing the decision to select alternative transport modes were identified. Those factors, together with the authors' judgment of the alternative transport modes and their associated criteria and sub-criteria, produced a priority list of user preferences regarding the alternative transport mode options. The study found that, overall, the regular Bundang Subway Line (BL), which was built and operated under a conventional procurement method, was selected as the most preferred transport mode due to its cost competitiveness. However, at the sub-criteria level, the NBL is competitive on service quality, particularly on journey time. A sensitivity analysis showed that the NBL can become the first choice of transport by increasing the weight associated with its cost by 0.05. This means the NBL would need to reduce its fare or transfer fee, or a combination of the two, to bring the current total cost down by 25%. In addition, the competitiveness of the NBL could also be improved by increasing its convenience, for example by constructing an additional station or providing more access modes. Although these convenience improvements would add a few extra minutes of journey time, users found this acceptable. The findings and policy suggestions can contribute to the next phase of NBL development, showing that consideration should be given to the citizens' voice. The case study results also contribute to the literature on PPP projects, specifically from a user-side perspective.
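
The abstract applies AHP without showing its mechanics; a minimal sketch of the priority-weight step follows, using an illustrative pairwise-comparison matrix rather than the survey's actual judgments:

```python
import numpy as np

# Illustrative AHP pairwise-comparison matrix for four transport modes
# (BL, NBL, bus, private vehicle) on one criterion, e.g. cost; the
# actual judgments collected in the survey are not given in the abstract.
A = np.array([
    [1.0, 3.0, 2.0, 5.0],
    [1/3, 1.0, 1/2, 2.0],
    [1/2, 2.0, 1.0, 3.0],
    [1/5, 1/2, 1/3, 1.0],
])

# Priority weights = principal eigenvector, normalized to sum to 1
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Consistency ratio (CR < 0.1 is conventionally acceptable);
# RI = 0.90 is Saaty's random index for a 4x4 matrix.
lam = np.max(np.real(eigvals))
ci = (lam - 4) / (4 - 1)
cr = ci / 0.90
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```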

Keywords: public private partnership, customer satisfaction, public transport, new Bundang subway line

Procedia PDF Downloads 348
346 Consumers Attitude toward the Latest Trends in Decreasing Energy Consumption of Washing Machine

Authors: Farnaz Alborzi, Angelika Schmitz, Rainer Stamminger

Abstract:

Reducing water temperatures in the wash phase of a washing programme and increasing overall cycle durations are the latest trends in decreasing the energy consumption of washing programmes. Since the implementation of the new energy efficiency classes in 2010, manufacturers seem to apply this washing strategy of lower temperatures combined with longer programme durations extensively, to realise the energy savings needed to meet the requirements of the highest possible energy efficiency class. A semi-representative online survey in eleven European countries (Czech Republic, Finland, France, Germany, Hungary, Italy, Poland, Romania, Spain, Sweden and the United Kingdom) was conducted by Bonn University in 2015 to shed light on consumer opinion and behaviour regarding the effects of lower washing temperatures and longer cycle durations on consumers' acceptance of the programme. The risk of the long wash cycle is that consumers might not use the energy-efficient standard programmes, finding this option inconvenient, and therefore switch to shorter but more energy-consuming programmes. Furthermore, washing at a lower temperature may lead to the problem of cross-contamination. The washing behaviour of over 5,000 households was studied in this survey to provide support and guidance for manufacturers and policy designers. Qualified households were chosen following a predefined quota: involvement in laundry washing: substantial; gender distribution: more than 50% female; selected age groups: 20–39 years, 40–59 years, and 60–74 years; household size: 1, 2, 3, 4, and more than 4 people. Furthermore, Eurostat data for each country were used to calculate the population distribution in the respective age classes and household sizes as quotas for the survey distribution in each country. Before starting the analyses, the validity of each dataset was checked with the aid of control questions; after excluding outliers, the panel diminished from 5,100 to 4,843 respondents. The primary outcome of the study is that European consumers are willing to save water and energy in laundry washing but are reluctant to use long programme cycles, since they do not believe that long cycles can be energy-saving. However, the results of our survey do not confirm a relation between the frequency of using standard cotton (eco) or energy-saving programmes and the duration of the programmes. This might be explained by the fact that the majority of washing programmes used by consumers do not take very long; perhaps consumers simply choose an additional time-reduction option when selecting those programmes, and this finding might change if the energy-saving programmes took longer. Therefore, it may be assumed that introducing programme duration as a new measure on a revised energy label would strongly influence the consumer at the point of sale. Furthermore, the results of the survey confirm that consumers are more willing to use lower-temperature programmes to save energy than to accept longer programme cycles, and the majority of them accept deviations from the nominal temperature of the programme as long as the results are good.

Keywords: duration, energy-saving, standard programmes, washing temperature

Procedia PDF Downloads 220
345 ‘Only Amharic or Leave Quick!’: Linguistic Genocide in the Western Tigray Region of Ethiopia

Authors: Merih Welay Welesilassie

Abstract:

Language is a potent instrument that not only serves the purpose of communication but also plays a pivotal role in shaping our cultural practices and identities. The right to choose one's language is a fundamental human right that helps to safeguard the integrity of both personal and communal identities. Language holds immense significance in Ethiopia, a nation with a diverse linguistic landscape, where language extends beyond mere communication to delineate administrative boundaries. Consequently, depriving Ethiopians of their linguistic rights represents a multifaceted punishment, more complex than food embargoes. In the aftermath of the civil war that shook Ethiopia in November 2020, displacing millions and resulting in the loss of hundreds of thousands of lives, concerns have been raised about the preservation of the indigenous Tigrayan language and culture, particularly following the annexation of western Tigray into the Amhara region and the implementation of an Amharic-only language and culture education policy. This scholarly inquiry explores the intricacies surrounding the Amhara regional state's prohibition of Tigrayans' indigenous language and culture and the subsequent imposition of a monolingual and monocultural Amhara language and culture in western Tigray. The study adopts the linguistic genocide conceptual framework as an analytical tool to gain deeper insight into the factors that contributed to and facilitated this significant linguistic and cultural shift. The research was conducted by interviewing ten teachers selected through snowball sampling. Additionally, document analysis was performed to support the findings. The findings revealed that the push for linguistic and cultural assimilation was driven by various political and economic factors and by the desire to promote a single language and culture policy. This process, often referred to as 'Amharanization,' aimed to homogenize the culture and language of the society. The Amhara authorities have enacted several measures in pursuit of their objectives, including outlawing the Tigrigna language, punishing those who speak Tigrigna, imposing the Amhara language and culture, mandating relocation, and even committing heinous acts that have inflicted immense physical and emotional suffering upon members of the Tigrayan community. Upon conducting a comprehensive analysis of the contextual factors, actions, intentions, and consequences, it is posited that instances of linguistic genocide may be taking place in the western Tigray region. The present study sheds light on the severe consequences that can arise from implementing monolingual and monocultural policies in multilingual areas. By thoroughly scrutinizing the implications of such policies, this study provides insightful recommendations and directions for future research in this critical area.

Keywords: linguistic genocide, linguistic human right, mother tongue, Western Tigray

Procedia PDF Downloads 59
344 Spatial Variability of Soil Metal Contamination to Detect Cancer Risk Zones in Coimbatore Region of India

Authors: Aarthi Mariappan, Janani Selvaraj, P. B. Harathi, M. Prashanthi Devi

Abstract:

Anthropogenic modification of the urban environment has increased greatly in recent years in order to sustain the growing human population. Intense industrial activity, permanently heavy road traffic, a developed subterranean infrastructure network, and distinctive land-use patterns are just some of its characteristics. Every day, the urban environment is polluted by more or less toxic emissions and by organic or metal wastes discharged from industrial, commercial, and municipal activities. When these eventually deposit into the soil, the physical and chemical properties of the surrounding soil are changed, transforming it into an indicator of human exposure. Metals are non-degradable and accumulate in soil through regular deposition resulting from continuous human activity. Because of this, metals that persist over a long period of time contaminate the soil and pose a possible danger to inhabitants' health upon prolonged exposure. Metals accumulated in contaminated soil may be transferred to humans directly, by inhaling dust raised from topsoil, by ingestion, or by dermal contact, and indirectly, through plants and animals grown on contaminated soil and used for food. Some metals, like Cu, Mn, and Zn, are beneficial for human health and represent a danger only if their concentration is above permissible levels, but other metals, like Pb, As, Cd, and Hg, are toxic even at trace levels, causing gastrointestinal and lung cancers. In urban areas, metals can be emitted from a wide variety of industrial, residential, and commercial sources. Our study interrogates the spatial distribution of heavy metals in soil in relation to their permissible levels and their association with the health risk to the urban population in Coimbatore, India. The Coimbatore region is a high cancer-risk zone; case records of gastrointestinal and respiratory cancer patients were collected from hospitals and geocoded in ArcGIS 10.1. Records of patients within the urban limits were retained and checked for disease history based on diagnosis and treatment. A disease map of cancer was prepared to show the disease distribution. It was observed that in our study area Cr, Pb, As, Fe, and Mg exceeded their permissible levels in the soil. Using spatial overlay analysis, a relationship between environmental exposure to these potentially toxic elements in soil and cancer distribution in Coimbatore district was established to show areas of cancer risk. Through this, our study sheds light on the impact of prolonged exposure to soil contamination in urban zones, thereby exploring the possibility of detecting cancer-risk zones and creating awareness of cancer risk among the exposed groups.
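
The study performed the overlay in ArcGIS 10.1; as a minimal open-source sketch of the same point-in-polygon logic (the file and column names here are hypothetical), the spatial join could look like this:

```python
import geopandas as gpd

# Hypothetical layers: geocoded cancer cases (points) and soil zones
# (polygons) carrying metal measurements and an 'exceeds_limit' flag.
cases = gpd.read_file("geocoded_cancer_cases.shp")
metal_zones = gpd.read_file("soil_metal_zones.shp")

# Attach each case record to the soil zone it falls within
joined = gpd.sjoin(cases, metal_zones, how="inner", predicate="within")

# Case counts inside zones where Cr/Pb/As/Fe/Mg exceed permissible levels
risk = joined[joined["exceeds_limit"] == 1]
print(risk.groupby("zone_id").size().sort_values(ascending=False))
```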

Keywords: soil contamination, cancer risk, spatial analysis, India

Procedia PDF Downloads 400
343 Studies of the Reaction Products Resulted from Glycerol Electrochemical Conversion under Galvanostatic Mode

Authors: Ching Shya Lee, Mohamed Kheireddine Aroua, Wan Mohd Ashri Wan Daud, Patrick Cognet, Yolande Peres, Mohammed Ajeel

Abstract:

In recent years, with the decreasing supply of fossil fuels, demand for renewable energy has increased significantly. Biodiesel, well known as vegetable-oil-based fatty acid methyl ester, is an alternative fuel for diesel. It can be produced from the transesterification of vegetable oils, such as palm oil, sunflower oil, rapeseed oil, etc., with methanol. During the transesterification process, crude glycerol is formed as a by-product, amounting to 10 wt% of the total biodiesel production. To date, due to the fast growth of biodiesel production worldwide, the crude glycerol supply has also increased rapidly, resulting in a significant price drop for glycerol. Therefore, extensive research has been carried out on using glycerol as a feedstock to produce various added-value chemicals, such as tartronic acid, mesoxalic acid, glycolic acid, glyceric acid, propanediol, acrolein, etc. The industrial processes usually involved are selective oxidation, biofermentation, esterification, and hydrolysis. However, the conversion of glycerol into added-value compounds by an electrochemical approach is rarely discussed. Current work mainly focuses on the electro-oxidation of glycerol under potentiostatic mode for cogenerating energy with other chemicals; electro-organic synthesis from glycerol under galvanostatic mode is seldom reviewed. In this study, glycerol was converted into various added-value compounds by an electrochemical method under galvanostatic mode. This work aimed to study the possible compounds produced from glycerol by an electrochemical technique in a one-pot electrolysis cell. The electro-organic synthesis study was carried out in a single-compartment reactor for 8 hours, over platinum cathode and anode electrodes under acidic conditions. Various parameters, such as electric current (1.0 A to 3.0 A) and reaction temperature (27 °C to 80 °C), were evaluated. The products obtained were characterized using gas chromatography-mass spectrometry equipped with an aqueous-stable polyethylene glycol stationary-phase column. Under the optimized reaction conditions, the glycerol conversion reached as high as 95%. The glycerol was successfully converted into various added-value chemicals such as ethylene glycol, glycolic acid, glyceric acid, acetaldehyde, formic acid, and glyceraldehyde, with yields of 1%, 45%, 27%, 4%, 0.7%, and 5%, respectively. Based on the products obtained, a reaction mechanism for this process is proposed. In conclusion, this study has successfully converted glycerol into a wide variety of added-value compounds. These chemicals have high market value and can be used in the pharmaceutical, food, and cosmetic industries. This study effectively opens a new approach for the electrochemical conversion of glycerol. For further enhancement of product selectivity, the electrode material is an important parameter to be considered.

Keywords: biodiesel, glycerol, electrochemical conversion, galvanostatic mode

Procedia PDF Downloads 191
342 Clinical Application of Measurement of Eyeball Movement for Diagnose of Autism

Authors: Ippei Torii, Kaoruko Ohtani, Takahito Niwa, Naohiro Ishii

Abstract:

This paper presents the development of an objective index for diagnosing autism based on the measurement of subtle eyeball movements. Assessments of developmental disabilities vary, and diagnoses depend on the subjective judgment of professionals. Therefore, a supplementary inspection method that enables anyone to obtain the same quantitative judgment is needed. Conventional autism studies base the diagnosis on a comparison of the time spent gazing at an object, but the results are inconsistent. First, we divided the pupil into four parts from the center using measurements of subtle eyeball movement and compared the number of pixels in the overlapping parts based on an afterimage. Then we developed an objective evaluation indicator that distinguishes non-autistic from autistic people more clearly than conventional methods by analyzing the differences in subtle eyeball movements between the right and left eyes. Even when a person gazes at one point and his or her eyeballs stay fixed on that point, the eyes perform subtle fixational movements (i.e., tremors, drifts, microsaccades) to keep the retinal image clear. In particular, microsaccades are linked to the nervous system and reflect the mechanism by which the brain processes sight. We converted the differences between these movements into numbers. The process of the conversion is as follows: 1) Select the pixels indicating the subject's pupil from the images of the captured frames. 2) Set up a reference image, known as an afterimage, from the pixels indicating the subject's pupil. 3) Divide the pupil of the subject into four parts from the center in the acquired frame image. 4) Select the pixels in each divided part and count the number of pixels overlapping the present pixels, based on the afterimage. 5) Process camera images at 24-30 fps and convert the amount of pixel change due to the subtle movements of the right and left eyeballs into numbers. The change in area is obtained by measuring the difference between the afterimage from consecutive frames and the present frame; we take this amount of change as the quantity of subtle eyeball movement. This method made it possible to express changes in eyeball vibration as numerical values. By comparing these values between the right and left eyes, we found a difference in how much they move. We compared this difference between non-autistic and autistic people and analyzed the result. Our research subjects consist of 8 children and 10 adults with autism, and 6 children and 18 adults with no disability. We measured the values during pursuit movements and fixations. We converted the difference in subtle movements between the right and left eyes into a graph and defined it as a multidimensional measure. We then set the identification boundary using the density function of the distribution, the cumulative frequency function, and an ROC curve. With this, we established an objective index to classify autism and non-autism, including false positives and false negatives.
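
A minimal sketch of steps 2) to 5) follows, assuming pupil segmentation has already produced a binary mask per frame; the authors' exact pixel-counting rule is not given, so XOR-based non-overlap counting per quadrant is used purely as an illustration:

```python
import numpy as np

def eyeball_movement_series(pupil_frames):
    """Quantify subtle eyeball movement from a sequence of binary pupil
    masks of shape (frames, h, w), following the afterimage idea in the
    abstract: compare each frame's pupil pixels against a reference
    (afterimage) and count the pixels that no longer overlap, per
    quadrant around the pupil center."""
    reference = pupil_frames[0].astype(bool)  # initial afterimage
    h, w = reference.shape
    cy, cx = h // 2, w // 2
    changes = []
    for frame in pupil_frames[1:]:
        cur = frame.astype(bool)
        per_quadrant = [
            np.sum(cur[:cy, :cx] ^ reference[:cy, :cx]),
            np.sum(cur[:cy, cx:] ^ reference[:cy, cx:]),
            np.sum(cur[cy:, :cx] ^ reference[cy:, :cx]),
            np.sum(cur[cy:, cx:] ^ reference[cy:, cx:]),
        ]
        changes.append(per_quadrant)
        reference = cur  # afterimage follows the previous frame
    return np.array(changes)  # (frames - 1, 4) movement quantities

# The left/right asymmetry used as the index would then be, e.g.:
# asym = np.abs(series_left.sum(axis=1) - series_right.sum(axis=1))
```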

Keywords: subtle eyeball movement, autism, microsaccade, pursuit eye movements, ROC curve

Procedia PDF Downloads 275
341 Changing the Landscape of Fungal Genomics: New Trends

Authors: Igor V. Grigoriev

Abstract:

Understanding the biological processes encoded in fungi is instrumental in addressing the future food, feed, and energy demands of the growing human population. Genomics is a powerful and quickly evolving tool for understanding these processes. The Fungal Genomics Program of the US Department of Energy Joint Genome Institute (JGI) partners with researchers around the world to explore fungi in several large-scale genomics projects that are changing the fungal genomics landscape. The key trends of these changes include: (i) the rapidly increasing scale of sequencing and analysis, (ii) developing approaches to go beyond culturable fungi and explore fungal 'dark matter,' or unculturables, and (iii) functional genomics and multi-omics data integration. The power of comparative genomics has recently been demonstrated in several JGI projects targeting mycorrhizae, plant pathogens, wood decay fungi, and sugar-fermenting yeasts. The largest JGI project, '1000 Fungal Genomes,' aims at exploring the diversity across the Fungal Tree of Life in order to better understand fungal evolution and to build a catalogue of genes, enzymes, and pathways for biotechnological applications. At this point, at least 65% of the over 700 known families have one or more reference genomes sequenced, enabling metagenomics studies of microbial communities and their interactions with plants. For many of the remaining families, no representative species are available from culture collections. To sequence the genomes of unculturable fungi, two approaches have been developed: (a) sequencing DNA from fruiting bodies of 'macro' fungi and (b) single-cell genomics using fungal spores. The latter has been tested using zoospores from the early diverging fungi and has resulted in several near-complete genomes from underexplored branches of the Fungal Tree, including the first genomes of Zoopagomycotina. A genome sequence serves as a reference for transcriptomics studies, the first step towards functional genomics. In the JGI fungal mini-ENCODE project, transcriptomes of the model fungus Neurospora crassa grown on a spectrum of carbon sources have been collected to build regulatory gene networks. Epigenomics is another tool for understanding gene regulation, and recently introduced single-molecule sequencing platforms not only provide better genome assemblies but can also detect DNA modifications. For example, the 6mC methylome was surveyed across many diverse fungi, and the highest levels of 6mC methylation among Eukaryota were reported. Finally, data production at such a scale requires data integration to enable efficient analysis. Over 700 fungal genomes and other -omes have been integrated in the JGI MycoCosm portal and equipped with comparative genomics tools, enabling researchers to address a broad spectrum of biological questions and applications for bioenergy and biotechnology.

Keywords: fungal genomics, single cell genomics, DNA methylation, comparative genomics

Procedia PDF Downloads 202
340 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a process for managing energy consumption with a view to energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes its state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature; LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy-consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the operation of the selected appliance falls, along with a time vector of the values delimiting the appliance's state transitions. After this, appliance signatures are formed from the extracted power, geometrical, and statistical features. Those signatures are then used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both simulated data from LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics: accuracy, precision, recall, and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (variance sliding window and cumulative sum).
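
The abstract names DTW without showing it; below is a minimal sketch of the classic DTW distance applied to 1/60 Hz appliance traces, with made-up power values:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D power
    sequences; here it matches an observed appliance activation against
    a stored signature despite differences in duration."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Illustrative 1/60 Hz power traces (watts): a fridge-like signature vs.
# an observed event stretched in time; values are invented.
signature = np.array([0, 120, 125, 123, 120, 0], dtype=float)
observed = np.array([0, 118, 124, 126, 124, 122, 119, 0], dtype=float)
print(f"DTW distance: {dtw_distance(signature, observed):.1f}")
```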

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 73
339 Co-Movement between Financial Assets: An Empirical Study on Effects of the Depreciation of Yen on Asia Markets

Authors: Yih-Wenn Laih

Abstract:

In recent times, the dependence and co-movement among international financial markets have become stronger than in the past, as evidenced by commentaries in the news media and the financial sections of newspapers. Studying the co-movement between returns in financial markets is an important issue for portfolio management and risk management. Understanding co-movement helps investors identify opportunities for international portfolio management in terms of asset allocation and pricing. Since the election of the new Prime Minister, Shinzo Abe, in November 2012, the yen has weakened against the US dollar from the 80 level to the 120 level. The policies, known as 'Abenomics,' are intended to encourage private investment through a more aggressive mix of monetary and fiscal policy. Given the close economic relations and competition among Asian markets, it is interesting to discover the co-movement relations, as affected by the depreciation of the yen, between the stock market of Japan and five major Asian stock markets: China, Hong Kong, Korea, Singapore, and Taiwan. Specifically, we measure the co-movement of the stock markets between Japan and each of the five Asian stock markets in terms of rank correlation coefficients. To compute the coefficients, the return series of each stock market is first fitted by a skewed-t GARCH (generalized autoregressive conditional heteroscedasticity) model. Secondly, to measure the dependence structure between matched stock markets, we employ the symmetrized Joe-Clayton (SJC) copula to calculate the probability density function of the paired skewed-t distributions. The joint probability density function is then utilized as the scoring scheme to optimize the sequence alignment by a dynamic programming method. Finally, we compute the rank correlation coefficients (Kendall's τ and Spearman's ρ) between matched stock markets based on their aligned sequences. We collected empirical data for six stock indexes from the Taiwan Economic Journal, sampled at a daily frequency covering the period from January 1, 2013 to July 31, 2015. The empirical return distributions show fatter tails than the normal distribution; therefore, the skewed-t distribution and SJC copula are appropriate for characterizing the data. According to the computed Kendall's τ, Korea has the strongest co-movement relation with Japan, followed by Taiwan, China, and Singapore; the weakest is Hong Kong. On the other hand, Spearman's ρ reveals that the strength of co-movement with Japan, in decreasing order, is Korea, China, Taiwan, Singapore, and Hong Kong. In summary, we explored the effects of 'Abenomics' on Asian stock markets by measuring the co-movement relation between Japan and five major Asian stock markets in terms of rank correlation coefficients, with the matched markets aligned by a hybrid method consisting of GARCH, copula, and sequence alignment. Empirical experiments indicate that Korea has the strongest co-movement relation with Japan, the strengths of China and Taiwan exceed that of Singapore, and the Hong Kong market has the weakest co-movement relation with Japan.
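
The final step of the pipeline reduces to rank correlations on aligned return series; a minimal sketch with synthetic returns follows (the skewed-t GARCH filtering and SJC-copula-guided alignment are outside this sketch's scope):

```python
import numpy as np
from scipy import stats

# Synthetic daily returns standing in for two aligned index series;
# the dependence strength here is invented for illustration.
rng = np.random.default_rng(0)
japan = rng.standard_t(df=5, size=630) * 0.01
korea = 0.6 * japan + 0.4 * rng.standard_t(df=5, size=630) * 0.01

tau, p_tau = stats.kendalltau(japan, korea)
rho, p_rho = stats.spearmanr(japan, korea)
print(f"Kendall's tau = {tau:.3f} (p={p_tau:.1e}), "
      f"Spearman's rho = {rho:.3f} (p={p_rho:.1e})")
```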

Keywords: co-movement, depreciation of Yen, rank correlation, stock market

Procedia PDF Downloads 228
338 Freight Time and Cost Optimization in Complex Logistics Networks, Using a Dimensional Reduction Method and K-Means Algorithm

Authors: Egemen Sert, Leila Hedayatifar, Rachel A. Rigg, Amir Akhavan, Olha Buchel, Dominic Elias Saadi, Aabir Abubaker Kar, Alfredo J. Morales, Yaneer Bar-Yam

Abstract:

The complexity of providing timely and cost-effective distribution of finished goods from industrial facilities to customers makes effective operational coordination difficult, yet effectiveness is crucial for maintaining customer service levels and sustaining a business. Logistics planning becomes increasingly complex with growing numbers of customers, varied geographical locations, the uncertainty of future orders, and sometimes extreme competitive pressure to reduce inventory costs. Linear optimization methods become cumbersome or intractable due to the large number of variables and nonlinear dependencies involved. Here we develop a complex-systems approach to optimizing logistics networks based upon dimensional reduction methods and apply our approach to a case study of a manufacturing company. In order to characterize the complexity of customer behavior, we define a 'customer space' in which individual customer behavior is described by only the two most relevant dimensions: the distance to production facilities over current transportation routes and the customer's demand frequency. These dimensions provide essential insight into the domain of effective strategies for customers: direct and indirect strategies. In the direct strategy, goods are sent to the customer directly from a production facility using box or bulk trucks. In the indirect strategy, in advance of an order by the customer, goods are shipped to an external warehouse near the customer using trains and then 'last-mile' shipped by trucks when orders are placed. Each strategy applies to an area of the customer space, with an indeterminate boundary between them whose location is generally determined by specific company policies. We then identify the optimal delivery strategy for each customer by constructing a detailed model of the costs of transportation and temporary storage in a set of specified external warehouses. Customer spaces give an aggregate view of customer behaviors and characteristics; they allow policymakers to compare customers and develop strategies based on the aggregate behavior of the system as a whole. In addition to optimization over existing facilities, we propose additional warehouse locations using customer logistics and the k-means algorithm. We apply these methods to a medium-sized American manufacturing company with a particular logistics network consisting of multiple production facilities, external warehouses, and customers, along with three types of shipment methods (box truck, bulk truck, and train). For the case study, our method forecasts 10.5% savings on yearly transportation costs and an additional 4.6% savings with three new warehouses.
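
A minimal sketch of clustering in the two-dimensional customer space to propose warehouse locations, using synthetic customers (the paper's cost model and real data are not reproduced here):

```python
import numpy as np
from sklearn.cluster import KMeans

# Customer space sketch: each row is (distance to nearest production
# facility over current routes, in km; order frequency per month).
rng = np.random.default_rng(1)
customers = np.column_stack([
    rng.uniform(50, 1500, size=300),   # route distance
    rng.uniform(0.5, 20, size=300),    # demand frequency
])

# Standardize so neither dimension dominates the Euclidean distance
z = (customers - customers.mean(axis=0)) / customers.std(axis=0)

# Three clusters as candidate service regions; the cluster centers,
# mapped back to original units, suggest where new warehouses could go.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(z)
centers = km.cluster_centers_ * customers.std(axis=0) + customers.mean(axis=0)
print(np.round(centers, 1))
```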

Keywords: logistics network optimization, direct and indirect strategies, K-means algorithm, dimensional reduction

Procedia PDF Downloads 137
337 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems

Authors: Georgi Y. Georgiev, Matthew Brouillet

Abstract:

This research applies agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory: the self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP), which suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that, guided by the least-action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that system information will increase, as more information is required to describe the developing structure. To test this, an agent-based model is developed simulating an ant colony's formation of a path between a food source and its nest. Utilizing the NetLogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and distribution of the ants within the simulated environment. External entropy production is also evaluated for information increase and efficiency improvements in the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants was observed. Notably, no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point. Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between the internal entropy decrease rate and the external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors such as changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system, providing a foundation for further in-depth exploration of the complex behaviors of these systems and contributing to the development of more efficient self-organizing systems across various scientific fields.
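
The entropy measure is described only verbally; below is a minimal sketch of one plausible reading, Shannon entropy over a gridded agent distribution, with synthetic positions (the authors' exact state definition may differ):

```python
import numpy as np

def spatial_entropy(positions, grid_size=50):
    """Shannon entropy of a colony's spatial distribution: bin agent
    positions (x, y in [0, 1)) onto a grid and compute the entropy of
    the occupancy distribution. A uniform scatter gives high entropy;
    a tight trail between nest and food gives a much lower value."""
    hist, _, _ = np.histogram2d(
        positions[:, 0], positions[:, 1],
        bins=grid_size, range=[[0, 1], [0, 1]],
    )
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(42)
scattered = rng.random((1000, 2))                   # disordered start
on_path = np.column_stack([rng.random(1000),        # ordered trail
                           0.5 + 0.01 * rng.standard_normal(1000)])
print(spatial_entropy(scattered), ">", spatial_entropy(on_path))
```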

Keywords: complexity, self-organization, agent based modelling, efficiency

Procedia PDF Downloads 62
336 Global Digital Peer-to-Peer (P2P) Lending Platform Empowering Rural India: Determinants of Funding

Authors: Ankur Mehra, M. V. Shivaani

Abstract:

With increasing digitization, the world is coming closer, not only in terms of information flows but also in terms of capital flows. Micro-finance institutions (MFIs) have leveraged this digital world through innovative social peer-to-peer (P2P) lending platforms such as Kiva. These digital P2P platforms bring together micro-borrowers and lenders from across the world. The main objective of this study is to understand the funding preferences of social investors, primarily from developed countries (such as the US, UK, and Australia), lending money to borrowers from rural India at zero interest rates through Kiva. A further objective is to increase awareness of such platforms among MFIs engaged in providing micro-loans to those in need. The sample comprises India-based micro-loan applications posted by various MFIs on the Kiva lending platform over the period September 2012 to March 2016. Out of 7,359 loans, 256 failed to get funded by social investors. On average, a micro-loan with 30 days to expiry gets fully funded in 7,593 minutes, or 5.27 days. 62% of the loans raised on Kiva are related to livelihood, 32.5% fund basic necessities, and the remaining 5.5% fund education. 47% of the loan applications have more than one borrower, while currency-exchange risk falls on the social lenders for 45% of the loans. Controlling for the loan amount and loan tenure, the analyses suggest that loan applications with more than one borrower have a lower chance of getting funded than applications made by a sole borrower; such group applications also take more time to get funded. Further, a loan application by a solo woman not only has a higher chance of getting funded but also gets funded faster. The results also suggest that loan applications supported by an MFI with a religious affiliation not only have a lower chance of getting funded but also take longer to get funded than loan applications posted by secular MFIs. The results do not support cross-border currency risk as a factor explaining loan funding. Finally, the analyses suggest that loans raised for the purposes of earning a livelihood and education have a higher chance of getting funded, and get funded faster, than loans applied for to meet basic necessities such as clothing, housing, food, health, and personal use. The results are robust to controls for an 'MFI dummy' and a 'year dummy'. The key implication of this study is that global social investors tend to develop an emotional connection with solo woman borrowers, who consequently get funded faster. Hence, MFIs should look for alternative ways of funding loans whose purpose is to meet basic needs, while more loans related to livelihood and education should be raised via digital platforms.
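
The abstract does not name its estimation method; as one plausible reading of the funding-chance analysis, here is a hedged logistic-regression sketch with hypothetical column names (the dataset file and variables are assumptions, not the authors' code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns for the Kiva sample: funded (1/0), group_loan
# (more than one borrower), solo_woman, religious_mfi, currency_risk,
# purpose in {livelihood, education, basic_needs}; loan amount and
# tenure are the controls stated in the abstract.
loans = pd.read_csv("kiva_india_2012_2016.csv")

m = smf.logit(
    "funded ~ group_loan + solo_woman + religious_mfi + currency_risk"
    " + C(purpose, Treatment(reference='basic_needs'))"
    " + np.log(amount) + tenure",
    data=loans,
).fit()
print(np.exp(m.params))  # odds ratios for the chance of getting funded
```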

Keywords: P2P lending, social investing, fintech, financial inclusion

Procedia PDF Downloads 137
335 Maritime English Communication Training for Japanese VTS Operators in the Congested Area Including the Narrow Channel of Akashi Strait

Authors: Kenji Tanaka, Kazumi Sugita, Yuto Mizushima

Abstract:

This paper introduces a noteworthy form of English communication training for the officers and operators of the Osaka-Bay Marine Traffic Information Service (Osaka MARTIS) of the Japan Coast Guard, who work in the congested area at the Akashi Strait in Hyogo Prefecture, Japan. The authors of this paper, Marine Technical College's (MTC) English language instructors, have held about forty lectures and exercises in basic and normal Maritime English (ME) for several groups of MARTIS personnel at Osaka MARTIS annually since they started the training in 2005. Trainees are expected to be qualified Maritime Third-Class Radio Operators, who are responsible for providing safety information to a daily average of seven to eight hundred vessels that pass through the Akashi Strait, one of Japan's narrowest channels. As of 2022, the instructors are conducting 55 remote lessons at MARTIS; one lesson is 90 minutes long. All 26 trainees are given oral and written assessments. The trainees need to pass the examination to become qualified operators every year, requiring them to train and maintain their linguistic levels even during the Coronavirus Disease 2019 (COVID-19) pandemic. The vessel traffic information provided by Osaka MARTIS in the Maritime English language is essential to the work involving the use of very high frequency (VHF) communication between MARTIS and vessels in the area. ME is the common language mainly used on board merchant, fishing, and recreational vessels, normally at sea. ME was edited and recommended by the International Maritime Organization in the 1970s, was revised in 2002, and has undergone continual revision. A vessel's circumstances are much more serious at the strait than in the open sea, so these vessels need ME to receive guidance from the center when passing through the narrow strait. The imminent and challenging situations at the strait necessitate that the textbook's contents include the basics of the phrase book for seafarers as well as specific and additional navigational information, pronunciation exercises, notes on keywords and phrases, explanations of collocations, sample sentences, and explanations of the differences between synonyms, especially those focusing on terminology necessary for passing through the strait. Additionally, short Japanese-English translation quizzes on these topics, as well as prescribed readings about the maritime sector, are included in the textbook. All of these exercises have been conducted in the remote education system since the outbreak of COVID-19. According to the guidelines for ME edited in 2009, the lowest level necessary for seafarers is B1 (the lower level of independent users) of the Common European Framework of Reference for Languages: Learning, Teaching, Assessment (CEFR). Therefore, this vocational ME language training at Osaka MARTIS aims for its trainees to communicate at levels higher than B1. A noteworthy proof of improvement from this training is that most of the trainees have become qualified marine radio communication officers.

Keywords: Akashi Strait, B1 of CEFR, Maritime English communication training, Osaka MARTIS

Procedia PDF Downloads 120
334 Effect of Phenolic Acids on Human Saliva: Evaluation by Diffusion and Precipitation Assays on Cellulose Membranes

Authors: E. Obreque-Slier, F. Orellana-Rodríguez, R. López-Solís

Abstract:

Phenolic compounds are secondary metabolites present in some foods, such as wine. Polyphenols comprise two main groups: flavonoids (anthocyanins, flavanols, and flavonols) and non-flavonoids (stilbenes and phenolic acids). Phenolic acids are low molecular weight non-flavonoid compounds that are usually grouped into benzoic acids (gallic, vanillic, and protocatechuic acids) and cinnamic acids (ferulic, p-coumaric, and caffeic acids). Likewise, tannic acid is an important polyphenol constituted mainly of gallic acid. Phenolic compounds are responsible for important properties of foods and drinks, such as color, aroma, bitterness, and astringency. Astringency is a drying, roughing, and sometimes puckering sensation experienced on the various oral surfaces during or immediately after tasting foods. Astringency perception has been associated with interactions between flavanols present in some foods and salivary proteins. Despite the quantitative relevance of phenolic acids in food and beverages, there is no information about their effect on salivary proteins and, consequently, on the sensation of astringency. The objective of this study was to assess the interaction of several phenolic acids (gallic, vanillic, protocatechuic, ferulic, p-coumaric, and caffeic acids) with saliva; tannic acid was used as a control. Solutions of each phenolic acid (5 mg/mL) were mixed with human saliva (1:1 v/v). After incubation for 5 min at room temperature, 15-μL aliquots of the mixtures were dotted on a cellulose membrane and allowed to diffuse. The dry membrane was fixed in 50 g/L trichloroacetic acid, rinsed in 800 mL/L ethanol, stained for protein with Coomassie blue for 20 min, destained with several rinses of 73 g/L acetic acid, and dried under a heat lamp. Both the diffusion area and the stain intensity of the protein spots served as semiquantitative estimates of protein-tannin interaction (diffusion test). The remainder of each whole saliva-phenol solution mixture from the diffusion assay was centrifuged, and 15-μL aliquots of each supernatant were dotted on a cellulose membrane, allowed to diffuse, and processed for protein staining, as indicated above. In this latter assay, reduced protein staining was taken as indicative of protein precipitation (precipitation test). The diffusion of the salivary protein was restricted by the presence of each phenolic acid (anti-diffusive effect), while tannic acid did not alter the diffusion of the salivary protein. By contrast, the phenolic acids did not provoke precipitation of the salivary protein, while tannic acid produced precipitation of salivary proteins. In addition, binary mixtures (mixtures of two components) of various phenolic acids with gallic acid provoked a restriction of salivary protein diffusion similar to that observed with the corresponding individual phenolic acids. Conversely, binary mixtures of phenolic acids with tannic acid, as well as tannic acid alone, did not affect the diffusion of the saliva but provoked an evident precipitation. In summary, the phenolic acids showed a relevant interaction with the salivary proteins, suggesting that these wine compounds can also contribute to the sensation of astringency.
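
The diffusion and precipitation readouts above are scored from stained spots on the membrane. The sketch below shows one plausible way to quantify spot area and stain intensity from a scanned image; the thresholding choice and file handling are assumptions, not the authors' protocol.

```python
import numpy as np
from skimage import io, filters

def spot_metrics(path):
    """Area and mean intensity of a Coomassie-stained spot.

    Assumes a grayscale scan in which stained protein is darker than
    the membrane background; stained pixels are segmented with Otsu's
    threshold.
    """
    img = io.imread(path, as_gray=True)        # float image in [0, 1]
    mask = img < filters.threshold_otsu(img)   # stained pixels
    area = mask.sum()                          # diffusion area (pixels)
    intensity = (1 - img[mask]).mean()         # mean stain intensity (0-1)
    return area, intensity

# A smaller spot area suggests restricted diffusion; a fainter
# supernatant spot after centrifugation suggests precipitation.
```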

Keywords: astringency, polyphenols, tannins, tannin-protein interaction

Procedia PDF Downloads 241
333 The Technique of Mobilization of the Colon for Pull-Through Procedure in Hirschsprung's Disease

Authors: Medet K. Khamitov, Marat M. Ospanov, Vasiliy M. Lozovoy, Zhenis N. Sakuov, Dastan Z. Rustemov

Abstract:

In children with Hirschsprung's disease and a high rectosigmoid transitional zone, the superior rectal, sigmoid, and left colic arteries are ligated during the pull-through of the descending part of the colon. As a result, the inferior mesenteric artery ceases to participate in the blood supply to the descending colon, which is then supplied only by the middle colic artery originating from the superior mesenteric artery. Insufficient blood supply to the pulled-through colon is the cause of chronic hypoxia of the intestinal wall or necrosis of the pulled-through descending colon. Some surgeons prefer to preserve the left colic artery; however, this can stretch the mesentery, which can lead to bowel retraction, anastomotic leaks, and stenosis. Chronic hypoxia of the pulled-through colon, in turn, is the cause of acquired (secondary) aganglionosis. The highest frequency of anastomotic leaks is observed in children older than five years. The purpose is to reduce the risk of complications in the pull-through procedure of the descending part of the colon in patients with Hirschsprung's disease by ensuring its sufficient mobility and maintaining the blood supply from the inferior mesenteric artery. Methodology: two children aged 5 and 7 years with Hirschsprung's disease were operated on at the hospital in Nur-Sultan. The diagnosis was made using an x-ray contrast enema and histological examination. Operational technique: after revision of the left part of the colon and assessment of the architectonics of its blood vessels, parietal mobilization of the affected sigmoid and rectum was performed via laparotomy access, while preserving the arterial and venous terminal arcades of the sigmoid vessels. Then, the descending branch of the left colic artery was crossed (if the length of the pulled-through intestine is insufficient, the left colic artery itself may also be crossed). This manipulation provides additional mobility of the pulled-through descending part of the colon. The resulting "windows" in the mesentery of the pulled-through intestine were sutured to prevent the development of an internal hernia. A well-perfused, sufficiently long graft, formed from the transverse loop of the splenic angle and the descending part of the colon with blood supply from both the superior and inferior mesenteric arteries, was brought down freely, without tension, to the rectal zone, with the coloanal anastomosis 1.5 cm above the dentate line. Results: the postoperative period was uneventful, and the patients were discharged on the 7th day. The observation was carried out for six months. In no case was there bowel retraction, anastomotic leak, anastomotic stenosis, or other complications. Conclusion: the presented technique of mobilization of the colon for the pull-through procedure in a high rectosigmoid transitional zone of Hirschsprung's disease allows normal blood supply to the distal part of the colon to be maintained and tension on the colon to be avoided. The technique reduces the risk of anastomotic leak, bowel necrosis, and chronic ischemia, and excludes colon retraction and anastomotic stenosis.

Keywords: blood supply, children, colon mobilization, Hirschsprung's disease, pull-through

Procedia PDF Downloads 145
332 Comparative Assessment of Heavy Metals Influence on Growth of Silver Catfish (Chrysichthys nigrodigitatus) and Tilapia Fish (Oreochromis niloticus) Collected from Brackish and Freshwater, South-West, Nigeria

Authors: Atilola O. Abidemi-Iromini, Oluayo A. Bello-Olusoji, Immanuel A. Adebayo

Abstract:

Ecological studies were carried out in the Asejire Reservoir (AR) and the Lagos Lagoon (LL), Southwest Nigeria, from January 2012 to December 2013 to determine the health status of Chrysichthys nigrodigitatus (CN) and Oreochromis niloticus (ON). Fish samples were collected every month and separated into sexes, and growth parameters {length (cm); weight (g); isometric index; condition factor} were measured. Heavy metal concentrations (lead (Pb), iron (Fe), zinc (Zn), copper (Cu), and chromium (Cr), in ppm) were also determined, while bacterial occurrence (load and prevalence) on fish skins, gills, and intestines in the two ecological zones was determined. The sex ratio of the fish collected was within the normal aquatic male:female range (approximately 1:1). Growth assessment revealed no significant difference in length and weight in O. niloticus between locations, but a significant difference in weight occurred in C. nigrodigitatus between locations, with a higher weight (196.06 ±0.16 g) from the Lagos Lagoon. The highest condition factor (5.25) was recorded in Asejire Reservoir O. niloticus (ARON), and the lowest condition factor (1.64) was observed in Asejire Reservoir C. nigrodigitatus (ARCN); this indicated a negative allometric value, which is normal in Bagridae species because they increase more in length than in weight, in contrast to the growth status of the Cichlidae. A normal growth rate (K > 1) occurred between sexes, with male fish having higher K-factors than female fish within locations, between locations, between species, and within species, except for female C. nigrodigitatus, which had a higher condition factor (K = 1.75) than male C. nigrodigitatus (K = 1.54) in the Asejire Reservoir. The highest isometric value (3.05) was recorded in Asejire Reservoir O. niloticus and the lowest in Lagos Lagoon C. nigrodigitatus. Male O. niloticus from the Asejire Reservoir had the highest isometric value, and O. niloticus had values ranging between isometric (b ≤ 3) and positive allometric (b > 3), denoting the robustness of this fish, which grows more in weight than in length; C. nigrodigitatus showed negative allometry (b < 3), indicating that this fish adds more length than weight during growth. The condition factors and isometric values obtained are species-specific, and environmental influence, food availability, or reproduction may also be contributing factors. The concentrations of heavy metals in fish flesh revealed that Zn (6.52 ±0.82) was the highest and Cr (0.01±0.00) the lowest for O. niloticus in the Asejire Reservoir. In the Lagos Lagoon, O. niloticus flesh was highest in Zn (4.71±0.25) and lowest in Pb (0.01±0.00). Lagos Lagoon C. nigrodigitatus was highest in Zn (9.56±0.96) and lowest in Cr (0.06±0.01), and Asejire Reservoir C. nigrodigitatus was highest in Zn (8.26 ±0.74) and lowest in Cr (0.08±0.00). Overall, Zn was the top-ranked metal across species.
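
The isometric exponent b and condition factor K reported above come from the standard length-weight relationship W = aL^b and Fulton's K = 100W/L^3. The sketch below shows the usual log-log fit; the sample numbers are illustrative, not the study's measurements.

```python
import numpy as np

def growth_indices(length_cm, weight_g):
    """Length-weight exponent b and Fulton's condition factor K.

    Fits log10(W) = log10(a) + b * log10(L); b = 3 is isometric,
    b > 3 positive allometry, b < 3 negative allometry.
    K = 100 * W / L^3 for each fish.
    """
    L, W = np.asarray(length_cm), np.asarray(weight_g)
    b, log_a = np.polyfit(np.log10(L), np.log10(W), 1)
    K = 100.0 * W / L**3
    return b, 10**log_a, K.mean()

# Illustrative numbers only, not the study's data.
b, a, K = growth_indices([18.2, 21.5, 24.0, 26.3], [95.0, 160.0, 210.0, 300.0])
print(f"b = {b:.2f}, a = {a:.4f}, mean K = {K:.2f}")
```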

Keywords: Oreochromis niloticus, growth status, Chrysichthys nigrodigitatus, environments, heavy metals

Procedia PDF Downloads 112
331 Validation and Fit of a Biomechanical Bipedal Walking Model for Simulation of Loads Induced by Pedestrians on Footbridges

Authors: Dianelys Vega, Carlos Magluta, Ney Roitman

Abstract:

The simulation of loads induced by walking people on civil engineering structures is still challenging. It has been the focus of considerable research worldwide in recent decades due to the increasing number of reported vibration problems in pedestrian structures. One of the most important issues in the design of slender structures is Human-Structure Interaction (HSI). How moving people interact with structures, and the effect this has on the structures' dynamic responses, is still not well understood. Relying on calibrated pedestrian models that accurately estimate the structural response therefore becomes extremely important. However, because of the complexity of the pedestrian mechanisms, there are still gaps in knowledge, and more reliable models need to be investigated. On this topic, several authors have proposed biodynamic models to represent the pedestrian; whether these models provide a consistent approximation to physical reality still needs to be studied. This work therefore contributes to a better understanding of this phenomenon by bringing an experimental validation of a pedestrian walking model and a Human-Structure Interaction model. In this study, a bi-dimensional bipedal walking model was used to represent the pedestrians, along with an interaction model, which was applied to a prototype footbridge. The numerical models were implemented in MATLAB. In parallel, experimental tests were conducted in the Structures Laboratory of COPPE (LabEst), at the Federal University of Rio de Janeiro. Different test subjects were asked to walk at different walking speeds over instrumented force platforms to measure the walking force, and an accelerometer was placed at the waist of each subject to measure the acceleration of the center of mass at the same time. By fitting the step force and the center-of-mass acceleration through successive numerical simulations, the model parameters are estimated. In addition, experimental data from a pedestrian walking on a flexible structure were used to validate the interaction model, through the comparison of the measured and simulated structural responses at mid-span. It was found that the pedestrian model was able to adequately reproduce the ground reaction force and the center-of-mass acceleration for normal and slow walking speeds, being less efficient for faster speeds. Numerical simulations showed that biomechanical parameters such as leg stiffness and damping affect the ground reaction force, and that the higher the walking speed, the greater the leg length of the model. Besides, the interaction model was also capable of estimating the structural response with good approximation, remaining in the same order of magnitude as the measured response. Some differences in the frequency spectra were observed, which are presumed to be due to the perfectly periodic loading representation, which neglects intra-subject variability. In conclusion, this work showed that the bipedal walking model can be used to represent walking pedestrians, since it was efficient in reproducing the center-of-mass movement and the ground reaction forces produced by humans. Furthermore, although more experimental validations are required, the interaction model also seems to be a useful framework to estimate the dynamic response of structures under loads induced by walking pedestrians.
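
By way of illustration of the fitting step described above, the sketch below adjusts the parameters of a perfectly periodic walking-force representation (a Fourier-series load, the same idealization the authors note when discussing the spectra) to a measured ground reaction force record. The pacing rate, weight, and number of harmonics are assumptions, and the authors' actual model is a bipedal mechanism with leg stiffness and damping rather than this load series.

```python
import numpy as np
from scipy.optimize import least_squares

def walking_force(t, G, f, alphas, phis):
    """Periodic vertical walking force as a Fourier series:
    F(t) = G * (1 + sum_i alpha_i * sin(2*pi*i*f*t - phi_i))."""
    F = np.ones_like(t, dtype=float)
    for i, (a, p) in enumerate(zip(alphas, phis), start=1):
        F += a * np.sin(2 * np.pi * i * f * t - p)
    return G * F

def fit_force(t, measured, G=735.0, f=1.9, n_harm=3):
    """Least-squares fit of the dynamic load factors alpha_i and phases
    phi_i to a measured ground reaction force record (G in N, f in Hz)."""
    def resid(p):
        return walking_force(t, G, f, p[:n_harm], p[n_harm:]) - measured
    p0 = np.concatenate([0.1 * np.ones(n_harm), np.zeros(n_harm)])
    return least_squares(resid, p0).x
```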

Keywords: biodynamic models, bipedal walking models, human induced loads, human structure interaction

Procedia PDF Downloads 127
330 A Strategic Approach in Utilising Limited Resources to Achieve High Organisational Performance

Authors: Collen Tebogo Masilo, Erik Schmikl

Abstract:

The demand for the DataMiner product by customers has presented a great challenge for its vendor, Skyline Communications, in deploying its limited resources, in the form of human resources, financial resources, and office space, to achieve high organisational performance in all its international operations. The rapid growth of the organisation has left it unable to efficiently support its existing customers across the globe and provide services to new customers, due to the limited number of approximately one hundred employees in its employ. Combined descriptive and explanatory case study research methods were selected as the research design, making use of a survey questionnaire which was distributed to a sample of 100 respondents; a return of 89 respondents was achieved. The sampling method employed was non-probability sampling, using the convenience sampling method. Frequency analysis and correlation between the subscales (the four themes) were used for statistical analysis to interpret the data. The investigation examined mechanisms that can be deployed to balance the high demand for products and the limited production capacity of the company's Belgian operations across four aspects: demand management strategies, capacity management strategies, communication methods that can be used to align the sales management department with operations, and reward systems in use to improve employee performance. The conclusions derived from the theme 'demand management strategies' are that the company is fully aware of the future market demand for its products; however, there seems to be no evidence that proper demand forecasting is conducted within the organisation. The conclusions derived from the theme 'capacity management strategies' are that employees always have a lot of work to complete during office hours and often need help from colleagues with urgent tasks, indicating that employees often work on unplanned tasks and multiple projects. The conclusions derived from the theme 'communication methods used to align the sales management department with operations' are that communication is not good throughout the organisation: information often stays with management and does not reach non-management employees, and there is a lack of the expected synergy and of good communication between the sales department and the projects office. This has a direct impact on the delivery of projects to customers by the operations department. The conclusions derived from the theme 'employee reward systems' are that employees are motivated and feel that they add value in their current functions; however, there are currently no measures in place to identify unhappy employees, and no proper reward systems linked to a performance management system. The research contributes to the body of research by exploring the impact of the four sub-variables, and their interaction, on the challenges of organisational productivity, in particular where an organisation experiences a capacity problem during its growth stage under tough economic conditions. Recommendations were made which, if implemented by management, could further enhance the organisation's sustained competitive operations.
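
A minimal sketch of the frequency and subscale-correlation analysis mentioned above, assuming responses are coded into one score per theme; the file and column names are illustrative assumptions, not the authors' instrument.

```python
import pandas as pd

# Hypothetical coded survey responses, one aggregated score per theme.
df = pd.read_csv("survey_responses.csv")
themes = ["demand_mgmt", "capacity_mgmt", "communication", "rewards"]

print(df[themes].describe())               # frequency/summary analysis
print(df[themes].corr(method="pearson"))   # correlation between subscales
# Spearman ("spearman") may be preferable for ordinal Likert scores.
```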

Keywords: high demand for products, high organisational performance, limited production capacity, limited resources

Procedia PDF Downloads 141
329 Structure Modification of Leonurine to Improve Its Potency as Aphrodisiac

Authors: Ruslin, R. E. Kartasasmita, M. S. Wibowo, S. Ibrahim

Abstract:

An aphrodisiac is a substance, contained in food or drugs, that can arouse the sexual instinct and increase pleasure; such substances are derived from plants, animals, and minerals. Consuming substances with aphrodisiac activity can improve sexual instinct and duration, and natural aphrodisiac effects can be obtained from plants, animals, and minerals. The compound leonurine has aphrodisiac activity; it can be isolated from plants of Leonurus sp., known among the Sundanese people as deundereman. This plant empirically has aphrodisiac activity, and isolation of active compounds from the plant shows that it contains leonurine, so the compound is expected to have aphrodisiac activity. Leonurine can be isolated from plants or synthesized chemically from syringic acid as the starting material; it can also be obtained commercially, and its derivatives can be synthesized in an effort to increase its activity. This study aims to obtain leonurine derivatives with better aphrodisiac activity than the parent compound, by modifying the guanidino butyl ester group of leonurine with butylamine and with bromoethanol. The ArgusLab program, version 4.0.1, was used to determine the binding energy, hydrogen bonds, and amino acids involved in the interaction of the compounds with the PDE5 receptor. The in vivo test of leonurine and its derivatives as aphrodisiac ingredients, and of testosterone hormone levels, used 27 male Wistar rats and 9 female rats of the same strain, aged about 12 weeks and weighing approximately 200 g each. The test animals were divided into 9 groups according to the type of compound and the dose given. Each treatment group was orally administered 2 mL per day for 5 days. On the sixth day, male rat sexual behavior was observed, and blood was taken from the heart to measure testosterone levels using the ELISA technique. The statistical analysis performed in this study was ANOVA with the Least Significant Difference (LSD) test, using the Statistical Product and Service Solutions (SPSS) program. The aphrodisiac efficacy of leonurine and its derivatives was demonstrated by in silico and in vivo tests: in silico, the leonurine derivatives had lower binding energies than leonurine, indicating better activity than the parent compound, and in vivo testing in Wistar rats confirmed that the derivatives performed better, showing that the in silico studies parallel the in vivo tests. Modification of the guanidino butyl ester group with butylamine and bromoethanol increased the aphrodisiac activity compared with leonurine, and testosterone levels with the leonurine derivatives showed a significant improvement, especially for compound 1-RD at doses of 100 and 150 mg/kg body weight. The results showed that leonurine and its derivatives have aphrodisiac activity and increase the amount of testosterone in the blood. The test compounds used in this study act as steroid precursors, resulting in increased testosterone.
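
A sketch of the ANOVA-plus-LSD comparison described above, implemented in Python rather than SPSS; the file, column, and group names are illustrative assumptions. Fisher's LSD reuses the pooled residual variance from the ANOVA for each pairwise t-test.

```python
import numpy as np
import pandas as pd
import scipy.stats as st
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: one testosterone measurement per rat, labeled by group.
df = pd.read_csv("testosterone.csv")     # columns: group, testosterone

model = smf.ols("testosterone ~ C(group)", data=df).fit()
print(anova_lm(model))                   # overall ANOVA table

# Fisher's LSD: pairwise t-tests using the ANOVA residual mean square.
mse, df_resid = model.mse_resid, model.df_resid
groups = df.groupby("group")["testosterone"]
means, ns = groups.mean(), groups.size()
for g1, g2 in [("control", "leonurine"), ("leonurine", "1-RD")]:
    se = np.sqrt(mse * (1 / ns[g1] + 1 / ns[g2]))
    t = (means[g1] - means[g2]) / se
    p = 2 * st.t.sf(abs(t), df_resid)
    print(g1, g2, round(t, 2), round(p, 4))
```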

Keywords: aphrodisiac, erectile dysfunction, leonurine, 1-RD, 2-RD

Procedia PDF Downloads 274
328 Foodborne Outbreak Calendar: Application of Time Series Analysis

Authors: Ryan B. Simpson, Margaret A. Waskow, Aishwarya Venkat, Elena N. Naumova

Abstract:

The Centers for Disease Control and Prevention (CDC) estimate that 31 known foodborne pathogens cause 9.4 million cases of foodborne illness annually in the US. Over 90% of these illnesses are associated with exposure to Campylobacter, Cryptosporidium, Cyclospora, Listeria, Salmonella, Shigella, Shiga toxin-producing E. coli (STEC), Vibrio, and Yersinia. Contaminated products carry pathogens typically causing an intestinal illness manifested by diarrhea, stomach cramping, nausea, weight loss, and fatigue, and may result in deaths in fragile populations. Since 1998, the National Outbreak Reporting System (NORS) has allowed for routine collection of suspected and laboratory-confirmed cases of food poisoning. While retrospective analyses have revealed common pathogen-specific seasonal patterns, little is known concerning the stability of those patterns over time and whether they can be used for preventative forecasting. The objective of this study is to construct a calendar of foodborne outbreaks of nine infections based on the peak timing of outbreak incidence in the US from 1996 to 2017. Reported cases were abstracted from FoodNet for Salmonella (135,115), Campylobacter (121,099), Shigella (48,520), Cryptosporidium (21,701), STEC (18,022), Yersinia (3,602), Vibrio (3,000), Listeria (2,543), and Cyclospora (758). Monthly counts were compiled for each agent, seasonal peak timing and peak intensity were estimated, and the stability of seasonal peaks and the synchronization of infections were examined. Negative Binomial harmonic regression models with the delta method were applied to derive confidence intervals for the peak timing for each year and for the overall study-period estimates. Preliminary results indicate that five infections continue to lead as major causes of outbreaks, exhibiting steady upward trends with annual increases in cases ranging from 2.71% (95%CI: [2.38, 3.05]) for Campylobacter, 4.78% (95%CI: [4.14, 5.41]) for Salmonella, 7.09% (95%CI: [6.38, 7.82]) for E. coli, 7.71% (95%CI: [6.94, 8.49]) for Cryptosporidium, and 8.67% (95%CI: [7.55, 9.80]) for Vibrio. Strong synchronization of summer outbreaks was observed for Campylobacter, Vibrio, E. coli, and Salmonella, peaking at 7.57 ± 0.33, 7.84 ± 0.47, 7.85 ± 0.37, and 7.82 ± 0.14 calendar months, respectively, with serial cross-correlations ranging from 0.81 to 0.88 (p < 0.001). Over the 21 years, Listeria and Cryptosporidium peaks (8.43 ± 0.77 and 8.52 ± 0.45 months, respectively) have tended to arrive 1-2 weeks earlier, while Vibrio peaks (7.8 ± 0.47) have been delayed by 2-3 weeks. These findings will be incorporated into forecast models to predict common paths of spread, long-term trends, and the synchronization of outbreaks across etiological agents. Predictive modeling of foodborne outbreaks should consider long-term changes in seasonal timing, spatiotemporal trends, and sources of contamination.
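
A sketch of the harmonic regression described above: a Negative Binomial GLM with annual sine/cosine terms whose phase gives the seasonal peak month, and whose linear term gives the annual trend. The data layout is an assumption, and the delta-method confidence intervals are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def peak_timing(monthly_counts):
    """Negative Binomial harmonic regression for seasonal peak timing.

    Model: log E[Y_t] = b0 + b1*t + b2*sin(2*pi*t/12) + b3*cos(2*pi*t/12).
    The peak month follows from the phase atan2(b2, b3); the annual
    percent increase follows from exp(12*b1) - 1.
    """
    t = np.arange(len(monthly_counts), dtype=float)
    df = pd.DataFrame({
        "y": monthly_counts, "t": t,
        "s": np.sin(2 * np.pi * t / 12), "c": np.cos(2 * np.pi * t / 12)})
    fit = smf.glm("y ~ t + s + c", data=df,
                  family=sm.families.NegativeBinomial()).fit()
    phase = np.arctan2(fit.params["s"], fit.params["c"])   # radians
    peak_month = (phase % (2 * np.pi)) * 12 / (2 * np.pi)
    trend_pct = (np.exp(12 * fit.params["t"]) - 1) * 100
    return peak_month, trend_pct, fit
```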

Keywords: foodborne outbreak, national outbreak reporting system, predictive modeling, seasonality

Procedia PDF Downloads 126
327 Surface Acoustic Waves Nebulisation of Liposomes Manufactured in situ for Pulmonary Drug Delivery

Authors: X. King, E. Nazarzadeh, J. Reboud, J. Cooper

Abstract:

Pulmonary diseases, such as asthma, are generally treated by the inhalation of aerosols, which has the advantage of reducing the off-target (e.g., toxicity) effects associated with systemic delivery in blood. Effective respiratory drug delivery requires a droplet size distribution between 1 and 5 µm. Inhalation of aerosols with a wide droplet size distribution outside this range results in deposition of the drug in non-targeted areas of the respiratory tract, introducing undesired side effects for the patient. In order to deliver the drug solely to the lower branches of the lungs and release it in a targeted manner, a mechanism to control the production of the aerosolized droplets is required. To regulate drug release and to facilitate uptake by cells, drugs are often encapsulated into protective liposomes. However, a multistep process is required for their formation, often performed at the formulation step, thereby limiting the range of available drugs or their shelf life. Using surface acoustic waves (SAWs), a pulmonary drug delivery platform was produced which enables the formation of aerosols of defined size and the formation of liposomes in situ. SAWs are mechanical waves propagating along the surface of a piezoelectric substrate. They were generated using an interdigital transducer on lithium niobate with an excitation frequency of 9.6 MHz at a power of 1 W. Disposable silicon superstrates were etched using photolithography and dry etch processes to create an array of cylindrical through-holes with different diameters and pitches. The superstrates were coupled to the SAW substrate through a water-based gel. As the SAW propagates on the superstrate, it enables nebulisation of a lipid solution deposited onto it. The cylindrical cavities restricted the formation of large drops in the aerosol, while at the same time unilamellar liposomes were created. SAW-formed liposomes showed higher monodispersity than the control sample, as well as a faster production rate. To test the aerosol's size, dynamic light scattering and laser diffraction methods were used, both demonstrating control of the size of the aerosolised particles. The use of a silicon superstrate with a cavity size of 100-200 µm produced an aerosol with a mean droplet size within the optimum range for pulmonary drug delivery, containing the liposomes in which the medicine could be loaded. Additionally, analysis of the liposomes with cryo-TEM showed the formation of vesicles with a narrow size distribution between 80 and 100 nm and a morphology suitable for drug delivery. Encapsulation of nucleic acids in liposomes through the developed SAW platform was also investigated: in vitro delivery of siRNA and of luciferase DNA was achieved using the A549 cell line, a human lung carcinoma line. In conclusion, a SAW pulmonary drug delivery platform was engineered to combine multiple time-consuming steps (formation of liposomes, drug loading, nebulisation) into a single platform, with the aim of delivering the medicament specifically to a targeted area and reducing the drug's side effects.
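
Given the 1-5 µm target window above, a simple check on a measured size distribution is the volume-weighted fraction of droplets inside that window. The sketch below illustrates this; the lognormal sample stands in for laser diffraction or DLS data and is not from the study.

```python
import numpy as np

def respirable_fraction(diameters_um, lo=1.0, hi=5.0):
    """Volume-weighted fraction of aerosol droplets within the 1-5 um
    window targeted for deep-lung deposition (droplet volume ~ d**3)."""
    d = np.asarray(diameters_um, dtype=float)
    vol = d**3
    in_range = (d >= lo) & (d <= hi)
    return vol[in_range].sum() / vol.sum()

# Illustrative sizes drawn from a lognormal with ~2.7 um median diameter.
sizes = np.random.default_rng(1).lognormal(mean=1.0, sigma=0.35, size=5000)
print(f"fraction of volume in 1-5 um: {respirable_fraction(sizes):.2f}")
```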

Keywords: acoustics, drug delivery, liposomes, surface acoustic waves

Procedia PDF Downloads 120
326 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus using U-Net Trained with Finite-Difference Time-Domain Simulation

Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang

Abstract:

Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring a tissue's displacement in response to mechanical waves. The estimated metrics of tissue elasticity or stiffness have been shown to be valuable for monitoring the physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive, due either to the inverse-problem nature or to noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most of the groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches the feature characteristics of the training data but inevitably increases computation time and results in outcomes with patched patterns. In this study, simulated wave images generated by finite-difference time-domain (FDTD) simulation are used for network training, and a U-Net is used to extract features from each training image without cutting it into patches. The use of simulated data for model training has the flexibility of customizing training datasets to match specific applications. The proposed method aims to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (with a range of 1 kPa to 15 kPa) containing randomly positioned objects was simulated, and their corresponding wave images were generated. The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method against DI and LFE was compared by the relative errors (root mean square error, or RMSE, divided by the averaged shear modulus) between the true shear modulus map and the estimated ones. The results showed that the shear modulus estimated by the proposed method achieved a relative error of 4.91%±0.66%, substantially lower than the 78.20%±1.11% achieved by LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes of shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method's performance on phantoms and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method shows much promise in quantifying tissue shear modulus from MRE with high robustness and efficiency.
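
The comparison metric used above is straightforward to reproduce. The sketch below computes the relative error (RMSE divided by the mean true shear modulus) between a ground-truth modulus map and an estimate, as one would apply to U-Net, LFE, or DI outputs; the array shapes and values are illustrative assumptions.

```python
import numpy as np

def relative_error(mu_true, mu_est):
    """RMSE between true and estimated shear-modulus maps,
    normalized by the mean true modulus, as a percentage."""
    mu_true, mu_est = np.asarray(mu_true), np.asarray(mu_est)
    rmse = np.sqrt(np.mean((mu_est - mu_true) ** 2))
    return 100.0 * rmse / mu_true.mean()

# Illustrative 2D maps in kPa (e.g., 128x128 simulated phantoms).
rng = np.random.default_rng(0)
truth = np.full((128, 128), 5.0)
estimate = truth + 0.25 * rng.standard_normal(truth.shape)
print(f"relative error: {relative_error(truth, estimate):.2f}%")
```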

Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation

Procedia PDF Downloads 58
325 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry

Authors: C. A. Barros, Ana P. Barroso

Abstract:

Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility, and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of the mandatory tools. Frequently, the measurement systems in companies are not connected to the equipment and do not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to assure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all the statistical tests proposed in the MSA-4 reference manual from AIAG as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures the traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations in the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented. Firstly, a benchmarking analysis of the current competitors and commercial solutions linked to MSA, with respect to the Industry 4.0 paradigm, was performed. Next, an analysis of the size of the target market for the MSA tool was done. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API in the Python language was developed to perform the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the 'big data'. The main results of this R&D project are: the MSA tool, web and cloud-based; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry has triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical, and food industries are already validating it.
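
As an illustration of the repeatability-and-reproducibility calculations such a tool automates, the sketch below runs an ANOVA-based crossed gauge R&R study of the kind described in AIAG's MSA manual; the file and column names are assumptions, not the project's API.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Crossed gauge R&R study: each operator measures each part n times.
df = pd.read_csv("gage_study.csv")       # columns: part, operator, value

m = smf.ols("value ~ C(part) * C(operator)", data=df).fit()
tbl = anova_lm(m)
p, o = df["part"].nunique(), df["operator"].nunique()
n = len(df) / (p * o)                    # replicates per part-operator cell

ms_p = tbl.loc["C(part)", "mean_sq"]
ms_o = tbl.loc["C(operator)", "mean_sq"]
ms_po = tbl.loc["C(part):C(operator)", "mean_sq"]
ms_e = tbl.loc["Residual", "mean_sq"]

repeatability = ms_e                                    # equipment variation
interaction = max((ms_po - ms_e) / n, 0.0)
reproducibility = max((ms_o - ms_po) / (p * n), 0.0) + interaction
part_var = max((ms_p - ms_po) / (o * n), 0.0)
grr = repeatability + reproducibility
print("%GRR =", round(100 * (grr / (grr + part_var)) ** 0.5, 1))
```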

Keywords: automotive industry, Industry 4.0, Internet of Things, IATF 16949:2016, measurement system analysis

Procedia PDF Downloads 212
324 A Comparison of Biosorption of the Radionuclide Tl-201 on Different Biosorbents and Their Empirical Modelling

Authors: Sinan Yapici, Hayrettin Eroglu

Abstract:

The discharge of aqueous radionuclide wastes used for the diagnosis of diseases and the treatment of patients in nuclear medicine can cause fatal health problems when the radionuclides and their stable daughter components mix with underground water. Tl-201, one of the radionuclides commonly used in nuclear medicine, is a toxic substance and is converted to its stable daughter component Hg-201, which is also a poisonous heavy metal: Tl201 → Hg201 + Gamma Ray [135-167 keV (12%)] + X Ray [69-83 keV (88%)]; t1/2 = 73.1 h. The purpose of the present work was to remove Tl-201 from aqueous solution by biosorption on solid bio-wastes of the food and cosmetic industries, namely prina from an olive oil plant, rose residue from a rose oil plant, and tea residue from a tea plant, and to compare their biosorption efficiencies. The effects of the biosorption temperature, the initial pH of the aqueous solution, the biosorbent dose, the particle size, and the stirring speed on the biosorption yield were investigated in a batch process. It was observed that the biosorption is a rapid process, with an equilibrium time of less than 10 minutes for all the biosorbents. The efficiencies were found to be close to each other; the measured maximum efficiencies were 93.30 percent for rose residue, 94.1 for prina, and 98.4 for tea residue. In the temperature range of 283 to 313 K, the adsorption decreased with increasing temperature in a broadly similar way for all biosorbents. In a pH range of 2-10, increasing pH enhanced the biosorption efficiency up to pH = 7, beyond which the efficiency remained constant along a similar path for all the biosorbents. Increasing the stirring speed from 360 to 720 rpm slightly enhanced the biosorption efficiency, at almost the same ratio for all biosorbents. Increasing particle size decreased the efficiency for all biosorbents; the most negatively affected biosorbent was prina, whose biosorption efficiency dropped from about 84 percent to 40 as the nominal particle size increased from 0.181 mm to 1.05 mm, while the least affected one, tea residue, went down from about 97 percent to 87.5. The biosorption efficiencies of all the biosorbents increased with increasing biosorbent dose in the range of 1.5 to 15.0 g/L in a similar manner. The fit of the experimental results to the adsorption isotherms showed that the biosorption process for all the biosorbents is best represented by the Freundlich model. The kinetic analysis showed that all the processes fit very well to a pseudo-second-order rate model. The thermodynamic calculations gave ∆G values between -8636 J mol-1 and -5378 for tea residue, -5313 and -3343 for rose residue, and -5701 and -3642 for prina, with ∆H values of -39516 J mol-1, -23660, and -26190, and ∆S values of -108.8 J mol-1 K-1, -64.0, and -72.0, respectively, showing the spontaneous and exothermic character of the processes. An empirical biosorption model was derived for each biosorbent as a function of the parameters and time, taking into account the form of the kinetic model, with regression coefficients over 0.9990, where At is the biosorption efficiency at any time, Ae is the equilibrium efficiency, t is the adsorption period in s, ko a constant, pH the initial acidity of the biosorption medium, w the stirring speed in s-1, S the biosorbent dose in g/L, D the particle size in m, a, b, c, and e the powers of the parameters, respectively, E a constant containing the activation energy, and T the temperature in K.
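
A sketch of the standard Freundlich and pseudo-second-order fits named above, via their usual linearizations, together with the ∆G relation; this is a generic illustration, not the authors' regression code, and the empirical model's exact functional form is not reproduced here.

```python
import numpy as np

def fit_freundlich(C_e, q_e):
    """Freundlich isotherm q_e = K_f * C_e**(1/n), fitted from the
    linearized form log q_e = log K_f + (1/n) log C_e."""
    slope, intercept = np.polyfit(np.log10(C_e), np.log10(q_e), 1)
    return 10**intercept, 1.0 / slope          # K_f, n

def fit_pso(t, q_t):
    """Pseudo-second-order kinetics, fitted from the linearized form
    t/q_t = 1/(k2 * q_e**2) + t/q_e."""
    slope, intercept = np.polyfit(t, np.asarray(t) / np.asarray(q_t), 1)
    q_e = 1.0 / slope
    k2 = 1.0 / (intercept * q_e**2)
    return q_e, k2

# Gibbs free energy from an equilibrium constant K at temperature T (K):
R = 8.314                                      # J mol^-1 K^-1
delta_G = lambda K, T: -R * T * np.log(K)      # dG = -R*T*ln(K)
```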

Keywords: radiation, biosorption, thallium, empirical modelling

Procedia PDF Downloads 261