Search results for: combined heat
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5504

1214 Blue Whale Body Condition from Photographs Taken over a 14-Year Period in the North East Pacific: Annual Variations and Connection to Measures of Ocean Productivity

Authors: Rachel Wachtendonk, John Calambokidis, Kiirsten Flynn

Abstract:

Large marine mammals can serve as an indicator of the overall state of the environment due to their long lifespan and apex position in marine food webs. Reductions in prey, driven by changes in environmental conditions, can have resounding impacts on the trophic system as a whole; this can manifest in reduced fat stores that are visible on large whales. Poor health can lead to reduced survivorship and fitness, both of which can be detrimental to a recovering population. A non-invasive technique was used to monitor blue whale health and to assess whether it changes with ocean conditions. Digital photographs of blue whales taken in the NE Pacific by Cascadia Research and collaborators from 2005-2018 (n=3,545) were scored for overall body condition based on visible vertebrae and body shape on a scale of 0-3, where a score of 0 indicated the best body condition and a score of 3 the poorest. The data were analyzed to determine whether there were patterns in the health of whales across years and whether overall poor health was related to oceanographic conditions and predictors of prey abundance on the California coast. Year was a highly significant factor in body condition (Chi-Square, p<0.001). The proportion of whales showing poor body condition (scores 2 & 3) was 33% overall but varied widely by year, from a low of 18% (2008) to a high of 55% (2015). The only two years in which >50% of animals had poor body condition were 2015 and 2017 (no other year was above 45%). The 2015 maximum in the proportion of whales in poor body condition coincided with the marine heat wave that affected the NE Pacific in 2014-16 and impacted other whale populations. This indicates that the scoring method was an effective way to evaluate blue whale health and how whales respond to a changing ocean.
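
A minimal sketch of the statistical test named above (a chi-square test of association between year and body-condition category). The counts below are hypothetical placeholders, not the 3,545 scored photographs of the study.

```python
from scipy.stats import chi2_contingency

# Rows: years; columns: photographs scored good (0-1) vs poor (2-3). Hypothetical counts.
counts = [
    [410, 90],   # a 2008-like year, ~18% poor
    [225, 275],  # a 2015-like year, ~55% poor
    [300, 200],
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.4f}")
```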

Keywords: blue whale, body condition, environmental variability, photo-identification

Procedia PDF Downloads 208
1213 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data

Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple

Abstract:

In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers; the outputs were then combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency and OpenStreetMap, and against a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. From both raster-based and vector-based accuracy assessments, it was found that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied to modelling the dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
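
An illustrative sketch of the thinning step only, assuming a binary persistent water mask has already been produced from the classified Sentinel 1 scenes; it is not the authors' processing chain, and the function name and synthetic mask are assumptions.

```python
import numpy as np
from skimage.morphology import skeletonize

def river_centrelines(water_mask: np.ndarray) -> np.ndarray:
    """Thin a binary water mask (True = water) to one-pixel-wide centrelines."""
    return skeletonize(water_mask.astype(bool))

# Example with a synthetic 'river' two pixels wide
mask = np.zeros((50, 50), dtype=bool)
mask[:, 24:26] = True
centreline = river_centrelines(mask)
print(centreline.sum(), "centreline pixels")
```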

Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network

Procedia PDF Downloads 143
1212 Community-Based Reference Interval of Selected Clinical Chemistry Parameters Among Apparently Healthy Adolescents in Mekelle City, Tigrai, Northern Ethiopia

Authors: Getachew Belay Kassahun

Abstract:

Background: Locally established clinical laboratory reference intervals (RIs) are required to interpret laboratory test results for screening, diagnosis, and prognosis. The objective of this study was to establish reference intervals of clinical chemistry parameters among apparently healthy adolescents aged between 12 and 17 years in Mekelle, Tigrai, in the northern part of Ethiopia. Methods: A community-based cross-sectional study was conducted from December 2018 to March 2019 in Mekelle City among 172 males and 172 females selected using a multi-stage sampling technique. Blood samples were tested for fasting blood sugar (FBS), alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP), creatinine, urea, total protein, albumin (ALB), and direct and total bilirubin (BIL.D and BIL.T) using a 25 Bio System clinical chemistry analyzer. Results were analyzed using SPSS version 23 software, based on the Clinical Laboratory Standards Institute (CLSI)/International Federation of Clinical Chemistry (IFCC) C28-A3 guideline, which defines the reference interval as the central 95% range between the 2.5th and 97.5th percentiles. Mann-Whitney U tests, descriptive statistics and box-and-whisker plots were the statistical tools used for analysis. Results: This study observed statistically significant differences between males and females in the ALP, ALT, AST, urea and creatinine reference intervals. The established reference intervals for males and females, respectively, were: ALP (U/L) 79.48-492.12 versus 63.56-253.34, ALT (U/L) 4.54-23.69 versus 5.1-20.03, AST 15.7-39.1 versus 13.3-28.5, urea (mg/dL) 9.33-24.99 versus 7.43-23.11, and creatinine (mg/dL) 0.393-0.957 versus 0.301-0.846. The combined RIs were: total protein (g/dL) 6.08-7.85, ALB (g/dL) 4.42-5.46, FBS (mg/dL) 65-110, BIL.D (mg/dL) 0.033-0.532, and BIL.T (mg/dL) 0.106-0.812. Conclusions: The results showed marked differences between sexes and between locally established and company-derived values for selected clinical chemistry parameters. Thus, the use of age- and sex-specific, locally established reference intervals for clinical chemistry parameters is recommended.
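	
A minimal sketch of the CLSI C28-A3 style computation described above: the reference interval is the central 95% (2.5th-97.5th percentiles), with a Mann-Whitney U test for sex differences. The values below are synthetic, not the study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
alt_male = rng.normal(12, 4, 172)    # hypothetical ALT (U/L), males
alt_female = rng.normal(11, 3, 172)  # hypothetical ALT (U/L), females

def reference_interval(values):
    """Central 95% range: 2.5th and 97.5th percentiles."""
    return np.percentile(values, [2.5, 97.5])

print("Male RI:", reference_interval(alt_male))
print("Female RI:", reference_interval(alt_female))
u_stat, p = mannwhitneyu(alt_male, alt_female)
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p:.3f}")
```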

Keywords: reference interval, adolescent, clinical chemistry, Ethiopia

Procedia PDF Downloads 85
1211 Fusion of Finger Inner Knuckle Print and Hand Geometry Features to Enhance the Performance of Biometric Verification System

Authors: M. L. Anitha, K. A. Radhakrishna Rao

Abstract:

With the advent of modern computing technology, there is an increased demand for developing recognition systems that have the capability of verifying the identity of individuals. Recognition systems are required by several civilian and commercial applications for providing access to secured resources. Traditional recognition systems based on physical identities are not sufficiently reliable to satisfy security requirements, due to increasingly advanced forgery and identity impersonation methods. Recognizing individuals based on their unique physiological characteristics, known as biometric traits, is a reliable technique, since these traits are not transferable and cannot be stolen or lost. Since the performance of a biometric-based recognition system depends on the particular trait that is utilized, the present work proposes a fusion approach that combines the inner knuckle print (IKP) trait of the middle, ring and index fingers with the geometrical features of the hand. The hand image captured with a digital camera is preprocessed to find the finger IKP as the region of interest (ROI) and the hand geometry features. Geometrical features are represented as the distances between different key points, and IKP features are extracted by applying a local binary pattern descriptor to the IKP ROI. Decision-level AND fusion was adopted, which improved the performance of the combined scheme. The proposed approach was tested on a database collected at our institute. The approach is of significance since both hand geometry and IKP features can be extracted from the palm region of the hand. The fusion of these features yields a false acceptance rate of 0.75% and a false rejection rate of 0.86% for the verification tests conducted, which is lower than the results obtained using the individual traits. The results obtained confirm the usefulness of the proposed approach and the suitability of the selected features for developing a biometric recognition system based on features from the palmar region of the hand.
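
A conceptual sketch of the fusion described above: LBP texture features from an inner-knuckle-print ROI, hand-geometry distances between key points, and a decision-level AND rule. The thresholds and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def ikp_features(roi_gray: np.ndarray) -> np.ndarray:
    """Histogram of uniform LBP codes over the IKP region of interest."""
    lbp = local_binary_pattern(roi_gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
    return hist

def geometry_features(key_points: np.ndarray) -> np.ndarray:
    """Pairwise distances between hand key points (N x 2 array of coordinates)."""
    diffs = key_points[:, None, :] - key_points[None, :, :]
    return np.linalg.norm(diffs, axis=-1)[np.triu_indices(len(key_points), 1)]

def verify(probe: dict, enrolled: dict, ikp_thr: float = 0.25, geo_thr: float = 10.0) -> bool:
    """Decision-level AND fusion: accept only if both matchers accept."""
    ikp_ok = np.linalg.norm(probe["ikp"] - enrolled["ikp"]) < ikp_thr
    geo_ok = np.linalg.norm(probe["geo"] - enrolled["geo"]) < geo_thr
    return ikp_ok and geo_ok
```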

Keywords: biometrics, hand geometry features, inner knuckle print, recognition

Procedia PDF Downloads 224
1210 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types for a given site for improved subsurface imaging. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. The currently available software applications are generally based on one-dimensional assumptions and designed for a desktop personal computer. Hence, they are usually incapable of imaging three-dimensional (3D) processes and variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitations of personal computers. High-performance, integrated software that enables real-time integration of multiple geophysical methods is therefore needed. E4D-MP enables the integration and inversion of time-lapse, large-scale survey data from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 161
1209 Liquefaction Potential Assessment Using Screw Driving Testing and Microtremor Data: A Case Study in the Philippines

Authors: Arturo Daag

Abstract:

The Philippine Institute of Volcanology and Seismology (PHIVOLCS) is enhancing its liquefaction hazard map towards a detailed probabilistic approach using SDS and geophysical data. Target sites for liquefaction assessment are public schools in Metro Manila. Since the target sites are in a highly urbanized setting, the objective of the project is to combine non-destructive geotechnical studies using Screw Driving Testing (SDS) with geophysical data such as refraction microtremor arrays (ReMi), three-component microtremor horizontal-to-vertical spectral ratio (HVSR), and ground penetrating radar (GPR). Initial test data were collected in areas affected by liquefaction during the Mw 6.1 earthquake that struck Central Luzon, Province of Pampanga, on April 22, 2019. Numerous liquefaction events were documented in areas underlain by Quaternary alluvium and mostly covered by recent lahar deposits. SDS-estimated values showed a good correlation with actual SPT values obtained from available borehole data, confirming that SDS can be an alternative tool for liquefaction assessment that is more efficient in terms of cost and time compared to SPT and CPT; borehole drilling also has limited access in highly urbanized areas. To extend or extrapolate the SPT borehole data, non-destructive geophysical equipment was used. A three-component microtremor survey yields a one-dimensional seismic shear-wave velocity model of the upper 30 meters of the profile (Vs30). For the ReMi surveys, 12-geophone arrays with 6 to 8-meter spacing were used. Microtremor data were used to compute the factor of safety, which is the quotient of the cyclic resistance ratio (CRR) and the cyclic stress ratio (CSR). Complementary GPR was used to infer subsurface structures and groundwater conditions.
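
A simple illustration of the screening quantity mentioned above: the factor of safety is the ratio of the cyclic resistance ratio (CRR) to the cyclic stress ratio (CSR), with values below 1 indicating susceptibility to liquefaction. The input values are hypothetical.

```python
def factor_of_safety(crr: float, csr: float) -> float:
    """Liquefaction factor of safety FS = CRR / CSR."""
    return crr / csr

crr, csr = 0.22, 0.28  # hypothetical values for one depth interval
fs = factor_of_safety(crr, csr)
print(f"FS = {fs:.2f} ->", "liquefiable" if fs < 1.0 else "non-liquefiable")
```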

Keywords: screw drive testing, microtremor, ground penetrating RADAR, liquefaction

Procedia PDF Downloads 206
1208 Microclimate Impacts on Solar Panel Power Generation in Midlands Area, UK

Authors: Stamatis Zoras, Boris Ceranic, Ashley Redfern

Abstract:

Greenhouse gas (GHG) emissions from domestic properties currently account for a substantial part of the UK's total carbon emissions and are a priority area for the UK to reach zero carbon emissions. However, the GHG emissions of urban complexes depend on the building, road and other structural surfaces that form the urban microclimate. This, in turn, may further influence the power generation of renewable energy systems that depend on solar or wind potential. Moreover, urban climatic conditions are also influenced by the installation of those power generation systems, which may impact their own power generation efficiency. Increased air temperature is attributed to densely installed roof-based solar panels, which consequently impacts their production efficiency. The installation of roof-based solar panels requires adequate guidance to enable housing businesses, councils and organisations to implement sufficient measures for improved power generation in relation to the local urban microclimate: how the microclimate is affected and how, in return, it affects solar power productivity. Derby Council and Derby Homes have been collecting solar panel power generation data for a large number of properties. The different building areas and system operation performance will be studied against microclimate conditions through time. It is envisaged that the outcomes of the study will support a working strategy for the city of Derby to ensure that owned homes can access information and data on the potential of solar photovoltaic (PV) and solar thermal panels on social housing, helping residents on low incomes create their own green energy to power their homes and heat their homes' hot water.

Keywords: microclimate, solar power, urban climatology, urban morphology

Procedia PDF Downloads 74
1207 University of Sciences and Technology of Oran Mohamed Boudiaf (USTO-MB)

Authors: Patricia Mikchaela D. L. Feliciano, Ciela Kadeshka A. Fuentes, Bea Trixia B. Gales, Ethel Princess A. Gepulango, Martin R. Hernandez, Elina Andrea S. Lantion, Jhoe Cynder P. Legaspi, Peter F. Quilala, Gina C. Castro

Abstract:

Propolis is a resin-like material used by bees to fill gaps and holes in the beehive. It has been found to possess anti-inflammatory properties and to stimulate hair growth in rats by inducing hair keratinocyte proliferation, promoting water retention, and preventing damage caused by heat, ultraviolet rays, and microorganisms, without abnormalities in hair follicles. The present study aimed to formulate 10% and 30% propolis hair creams for use in enhancing hair properties. The raw propolis sample was tested for heavy metals using atomic absorption spectroscopy; zinc and chromium were found to be present. Propolis was then extracted in a percolator using 70% ethanol and concentrated under vacuum using a rotary evaporator. The propolis extract was analyzed for total flavonoid content. The compatibility of the propolis extract with the excipients was evaluated using differential scanning calorimetry (DSC). No significant changes in the organoleptic properties, pH and viscosity of the formulated creams were noted after four weeks of storage at 2-8°C, 30°C, and 40°C. The formulated creams were found to be non-irritating based on the Modified Draize Rabbit Test. In vivo efficacy was evaluated based on the thickness and tensile strength of hair grown on previously shaved rat skin. The results show that the formulated 30% propolis-based cream had greater hair-enhancing properties than the 10% propolis cream, which had an effect comparable to that of minoxidil.

Keywords: atomic absorption spectroscopy, differential scanning calorimetry (DSC), modified draize rabbit test, propolis

Procedia PDF Downloads 351
1197 Improvement of Resistance Features of Anti-MIC Polyaspartic Coating (DTM) Using Nano Silver Particles by Preventing Biofilm Formation

Authors: Arezoo Assarian, Reza Javaherdashti

Abstract:

Microbiologically influenced corrosion (MIC) is an electrochemical process that can affect both metals and non-metals. The cost of MIC can amount to 40% of the cost of corrosion. MIC is enhanced by factors such as, but not limited to, the presence of certain bacteria and archaea, as well as by mechanisms such as external electron transfer. There are five methods by which electrochemical corrosion, including MIC, can be prevented; of these, coatings are an effective method because they isolate the anode, cathode and electrolyte from each other. Conventional coatings may themselves become nutrient sources for the bacteria and therefore show low efficiency in dealing with MIC. Recently, our work on polyaspartic (DTM) coatings has shown promising results, nominating DTM as an appropriate coating material to manage both MIC and general electrochemical corrosion efficiently. Nanosilver particles are known for their antimicrobial properties, which give them desirable destructive effects on microorganisms. This coating will be formulated based on nanosilver phosphate and copper(II) oxide in the resin network and co-reactant. The nanoparticles are light- and heat-sensitive agents. The method used to keep the nanoparticles in the coating film is encapsulation of the active ingredients, which prevents incompatibility between different particles. For producing the microcapsules, the interfacial cross-linking method will be used; this is achieved by adding an active ingredient to an aqueous solution of the cross-linkable polymer. In this paper, we first explain the role of coating materials in controlling and preventing electrochemical corrosion. We then explain MIC and some of its fundamental principles, such as bacterial establishment (biofilm) and the role bacteria play in enhancing corrosion via mechanisms such as the establishment of differential aeration cells. Finally, we explain the features of DTM coatings that strongly contribute to preventing biofilm formation and thus microbial corrosion.

Keywords: biofilm, corrosion, microbiologically influenced corrosion(MIC), nanosilver particles, polyaspartic coating (DTM)

Procedia PDF Downloads 171
1205 Housing Precarity and Pathways: Lived Experiences Among Bangladeshi Migrants in Dublin

Authors: Mohammad Altaf Hossain

Abstract:

A growing body of literature in urban studies has shown that urban precarity is a lived experience for low-income groups in the cities of the Global South. This does not mean that cities in the Global North, where advanced capitalist economies exist, have avoided the adverse realities of urban precarity. As a multifaceted condition, it creates other associated forms of precariousness in people's lives, for example economic deprivation, mental stress, and housing precarity. The interrelations between urbanity and precarity are ubiquitous in both developed and developing countries. People, mainly manual labourers with low incomes, go through uncertainties in every aspect of life. By analysing qualitative data and embracing structure-agency interaction, this paper presents how Bangladeshi migrants experience housing precarity in Dublin. Continued population growth and political economy factors such as labour market inequality, the financialisation of the private rental sector, and the impact of cuts to government funding for social housing provision combine to produce a crisis of housing supply, affordability, and access in the city. As a result, low-income people practice informality in securing jobs and housing. The macro-structural components of this analysis include Irish housing policy, the European labour market, immigration policy, and the financialised housing market. The micro-structural components of South Asian communities' experiences include social networks and social class. Access to social networks and practices of informality play a significant role in enabling them to negotiate urban precarity, including housing crises and income insecurity. In some cases, the collective agency of ethnic diaspora communities plays a vital role in negotiating with structural constraints.

Keywords: housing precarity, housing pathways, migration, agency, Dublin

Procedia PDF Downloads 31
1204 Seashore Debris Detection System Using Deep Learning and Histogram of Gradients-Extractor Based Instance Segmentation Model

Authors: Anshika Kankane, Dongshik Kang

Abstract:

Marine debris has a significant influence on coastal environments, damaging biodiversity and causing loss and damage to the marine and ocean sectors. A functional, cost-effective and automatic approach has been adopted to address this problem. Computer vision combined with a deep learning-based model is proposed to identify and categorize marine debris of seven kinds at different beach locations in Japan. This research compares state-of-the-art deep learning models with a suggested model architecture that is utilized as a feature extractor for debris categorization. The model is proposed to detect seven categories of litter using a manually constructed debris dataset, with the help of Mask R-CNN for instance segmentation and a shape matching network called HOGShape; the detected debris can then be cleaned up in time by clean-up organizations using the system's warning notifications. The manually constructed dataset for this system was created by annotating images taken by a fixed KaKaXi camera using the CVAT annotation tool with seven category labels. A HOG feature extractor pre-trained with LIBSVM is used, along with multiple template matching between HOG maps of images and HOG maps of templates, to improve the predicted mask images obtained via Mask R-CNN training. This system intends to alert clean-up organizations in a timely manner with warning notifications based on live recorded beach debris data. The suggested network improves the misclassified debris masks of objects with different illuminations, shapes, viewpoints, and occlusions that have vague visibility.
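
A sketch of the HOG-based shape comparison described above (not the authors' HOGShape network): HOG descriptors are computed for a predicted debris mask and a class template and compared by correlation. The inputs and the similarity function name are synthetic assumptions.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def hog_similarity(mask: np.ndarray, template: np.ndarray) -> float:
    """Correlation between HOG descriptors of a predicted mask and a template."""
    template = resize(template, mask.shape, anti_aliasing=True)
    h1 = hog(mask, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    h2 = hog(template, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return float(np.corrcoef(h1, h2)[0, 1])

mask = np.zeros((64, 64)); mask[16:48, 16:48] = 1.0          # predicted debris mask
template = np.zeros((64, 64)); template[20:44, 20:44] = 1.0  # class shape template
print(f"HOG similarity: {hog_similarity(mask, template):.2f}")
```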

Keywords: computer vision, debris, deep learning, fixed live camera images, histogram of gradients feature extractor, instance segmentation, manually annotated dataset, multiple template matching

Procedia PDF Downloads 110
1203 Ultrasound Therapy: Amplitude Modulation Technique for Tissue Ablation by Acoustic Cavitation

Authors: Fares A. Mayia, Mahmoud A. Yamany, Mushabbab A. Asiri

Abstract:

In recent years, non-invasive Focused Ultrasound (FU) has been utilized for generating bubbles (cavities) to ablate target tissue by mechanical fractionation. Intensities >10 kW/cm² are required to generate the inertial cavities. The generation, rapid growth, and collapse of these inertial cavities cause tissue fractionation and the process is called Histotripsy. The ability to fractionate tissue from outside the body has many clinical applications including the destruction of the tumor mass. The process of tissue fractionation leaves a void at the treated site, where all the affected tissue is liquefied to particles at sub-micron size. The liquefied tissue will eventually be absorbed by the body. Histotripsy is a promising non-invasive treatment modality. This paper presents a technique for generating inertial cavities at lower intensities (< 1 kW/cm²). The technique (patent pending) is based on amplitude modulation (AM), whereby a low frequency signal modulates the amplitude of a higher frequency FU wave. Cavitation threshold is lower at low frequencies; the intensity required to generate cavitation in water at 10 kHz is two orders of magnitude lower than the intensity at 1 MHz. The Amplitude Modulation technique can operate in both continuous wave (CW) and pulse wave (PW) modes, and the percentage modulation (modulation index) can be varied from 0 % (thermal effect) to 100 % (cavitation effect), thus allowing a range of ablating effects from Hyperthermia to Histotripsy. Furthermore, changing the frequency of the modulating signal allows controlling the size of the generated cavities. Results from in vitro work demonstrate the efficacy of the new technique in fractionating soft tissue and solid calcium carbonate (Chalk) material. The technique, when combined with MR or Ultrasound imaging, will present a precise treatment modality for ablating diseased tissue without affecting the surrounding healthy tissue.
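
An illustrative waveform for the amplitude-modulation technique described above: a low-frequency signal modulates a higher-frequency focused-ultrasound carrier, s(t) = [1 + m·cos(2πf_m·t)]·sin(2πf_c·t). The frequencies and sampling rate below are chosen for illustration only.

```python
import numpy as np

fc = 1.0e6   # carrier (focused ultrasound) frequency, Hz (illustrative)
fm = 10.0e3  # modulating frequency, Hz (illustrative)
m = 1.0      # modulation index: 0 -> thermal regime, 1 -> full modulation
fs = 20.0e6  # sampling rate, Hz
t = np.arange(0, 1e-3, 1 / fs)

# Amplitude-modulated focused-ultrasound drive signal
s = (1 + m * np.cos(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)
print("peak amplitude:", s.max())  # approaches (1 + m) at full modulation
```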

Keywords: focused ultrasound therapy, histotripsy, inertial cavitation, mechanical tissue ablation

Procedia PDF Downloads 324
1202 Gap Formation into Bulk InSb Crystals Grown by the VDS Technique Revealing Enhancement in the Transport Properties

Authors: Dattatray Gadkari, Dilip Maske, Manisha Joshi, Rashmi Choudhari, Brij Mohan Arora

Abstract:

The vertical directional solidification (VDS) technique has been applied to the growth of bulk InSb crystals. The concept of practical stability is applied to the case of detached bulk crystal growth on earth in a simplified design. By optimization of the setup and growth parameters, 32 ingots of 65-75 mm in length and 10-22 mm in diameter have been grown. The results indicate that the wetting angle of the melt on the ampoule wall and the pressure difference across the interface are the crucial factors affecting the meniscus shape and stability. Taking into account both heat transfer and capillarity, it is demonstrated that the process is stable in the case of convex menisci (seen from the melt), provided that pressure fluctuations remain within a stable range. During the crystal growth process, it is necessary to maintain a relationship between the controlled pressure difference and the solidification rate in order to maintain the width of the gas gap. It is concluded that practical stability gives valuable knowledge of the dynamics and could be usefully applied to other crystal growth processes, especially those involving capillary shaping. Optoelectronic properties were investigated in relation to the type of solidification (attached and detached ingot growth). Room-temperature physical properties of these samples, such as Hall mobility, FTIR and Raman spectra, and microhardness, are among the highest values achieved to date for antimonide samples grown by the VDS technique. These results reveal that these crystals can be used to produce InSb with high mobility for device applications.

Keywords: alloys, electronic materials, semiconductors, crystal growth, solidification, etching, optical microscopy, crystal structure, defects, Hall effect

Procedia PDF Downloads 422
1201 Fabrication and Characterization of Ceramic Matrix Composite

Authors: Yahya Asanoglu, Celaletdin Ergun

Abstract:

Ceramic-matrix composites (CMCs) have significant prominence in various engineering applications because of their heat resistance combined with an ability to resist the brittle type of catastrophic failure. In this study, specific raw materials have been chosen so as to obtain a CMC material suitable for high-temperature dielectric applications. The CMC material will be manufactured through the polymer infiltration and pyrolysis (PIP) method. During the manufacturing process, vacuum infiltration and autoclaving will be applied so as to decrease porosity and obtain higher mechanical properties, although this advantage leads to a decrease in the electrical performance of the material. Adjusting the time and temperature of the pyrolysis parameters produces significant differences in the properties of the resulting material. The mechanical and thermal properties will be investigated in addition to measurements of the dielectric constant and tangent loss values within the Ku-band (12 to 18 GHz). Also, XRD and TGA/DTA analyses will be employed to prove the transition of the precursor to ceramic phases and to detect critical transition temperatures. Additionally, SEM analysis of the fracture surfaces will be performed to identify failure mechanisms such as fiber pull-out and crack deflection, which lead to ductility and toughness in the material. In this research, the cost-effectiveness and applicability of the PIP method will be demonstrated in the manufacture of CMC materials, while the pyrolysis time, temperature and number of cycles are optimized experimentally for specific materials. Also, several resins will be shown to be potential raw materials for CMC radome and antenna applications. This research is distinguished from previous related papers in that combinations of different precursors and fabrics are tested to specify the unique pros and cons of each combination. In this way, it provides an experimental synthesis of previous works with unique PIP parameters and a guide to the manufacture of CMC radomes and antennas.

Keywords: CMC, PIP, precursor, quartz

Procedia PDF Downloads 163
1200 Sources and Potential Ecological Risks of Heavy Metals in the Sediment Samples From Coastal Area in Ondo, Southwest Nigeria

Authors: Ogundele Lasun Tunde, Ayeku Oluwagbemiga Patrick

Abstract:

Heavy metals are released into sediments in aquatic environments from both natural and anthropogenic sources and are considered a worldwide issue due to their deleterious ecological risks and disruption of the food chain. In this study, sediment samples were collected at three major sites (Awoye, Abereke and Ayetoro) along the Ondo coastal area using a Van Veen grab sampler. The concentrations of As, Cd, Cr, Cu, Fe, Mn, Ni, Pb, V and Zn were determined by employing atomic absorption spectroscopy (AAS). The combined concentration data were subjected to the Positive Matrix Factorization (PMF) receptor approach for source identification and apportionment. The probable risks that might be posed by heavy metals in the sediments were estimated by potential and integrated ecological risk indices. Among the measured heavy metals, Fe had average concentrations of 20.38 ± 2.86, 23.56 ± 4.16 and 25.32 ± 4.83 µg/g at the Abereke, Awoye and Ayetoro sites, respectively. The PMF resulted in the identification of four sources of heavy metals in the sediments. The resolved sources and their percentage contributions were oil exploration (39%), industrial waste/sludge (35%), detrital processes (18%) and Mn sources (8%). Oil exploration activities and industrial wastes are the major sources contributing heavy metals to the coastal sediments. The major pollutants posing ecological risks to the local aquatic ecosystem are As, Pb, Cr and Cd (40 ≤ Ei ≤ 80), classifying the sites as moderate risk. The integrated risk values of Awoye, Abereke and Ayetoro are 231.2, 234.0 and 236.4, respectively, suggesting that the study areas pose a moderate ecological risk. The study showed the suitability of the PMF receptor model for source identification of heavy metals in sediments. Also, intensive anthropogenic activities and natural sources could largely discharge heavy metals into the study area, which may increase the heavy metal content of the sediments and further contribute to the associated ecological risk, thus affecting the local aquatic ecosystem.
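
A minimal sketch of the potential ecological risk calculation referenced above (a Hakanson-style index): Ei = Tr × (Ci / Cn) per metal and RI = ΣEi for the site. The toxic-response factors, background concentrations and measured values below are placeholders, not the study's values.

```python
# Assumed toxic-response factors Tr and background concentrations Cn (ug/g)
toxic_response = {"As": 10, "Cd": 30, "Cr": 2, "Pb": 5}
background = {"As": 15.0, "Cd": 0.3, "Cr": 90.0, "Pb": 20.0}
measured = {"As": 70.0, "Cd": 0.5, "Cr": 200.0, "Pb": 110.0}  # hypothetical Ci (ug/g)

ei = {m: toxic_response[m] * measured[m] / background[m] for m in measured}
ri = sum(ei.values())  # integrated ecological risk for the site
for metal, value in ei.items():
    grade = "moderate" if 40 <= value <= 80 else ("low" if value < 40 else "higher")
    print(f"{metal}: Ei = {value:.1f} ({grade})")
print(f"Integrated risk RI = {ri:.1f}")
```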

Keywords: positive matrix factorization, sediments, heavy metals, sources, ecological risks

Procedia PDF Downloads 28
1199 Comparison of Virtual Non-Contrast to True Non-Contrast Images Using Dual Layer Spectral Computed Tomography

Authors: O’Day Luke

Abstract:

Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative to the acquisition of a dedicated true non-contrast dataset during multiphase contrast studies. Material and methods: Thirty-three patients underwent a routine multiphase clinical CT examination, using dual-layer spectral CT, from March to August 2021. True non-contrast (TNC) and virtual non-contrast (VNC) datasets, generated from both portal venous and arterial phase imaging, were evaluated. For every patient in both the true and virtual non-contrast datasets, a region of interest (ROI) was defined in the aorta, liver, fluid (i.e., gallbladder, urinary bladder), kidney, muscle, fat and spongious bone, resulting in 693 ROIs. Differences in attenuation between VNC and TNC images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phases was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU, and >90% of all comparisons showed a difference of less than 15 HU. For all tissues but spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and the portal venous phases showed a good correlation in most tissue types. The aortic attenuation was, however, somewhat dependent on which dataset was used for reconstruction. Bone evaluation with VNC datasets continues to be a problem, as spectral CT algorithms are currently poor at differentiating bone and iodine. Conclusion: Given the increasing availability of DL-CT and the proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies. This study shows the utility of virtual non-contrast scans as an alternative to true non-contrast studies during multiphase CT, with potential for dose reduction without loss of diagnostic information.
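
A simple sketch of the attenuation comparison described above: paired HU values from matched TNC and VNC ROIs, with the mean difference and the fraction within 15 HU. The numbers are synthetic, not the study data.

```python
import numpy as np

rng = np.random.default_rng(1)
tnc_hu = rng.normal(45, 20, 200)              # hypothetical TNC ROI mean attenuations (HU)
vnc_hu = tnc_hu + rng.normal(5.5, 9.1, 200)   # hypothetical matched VNC ROI values (HU)

diff = vnc_hu - tnc_hu
print(f"mean difference: {diff.mean():.1f} +/- {diff.std(ddof=1):.1f} HU")
print(f"within 15 HU: {np.mean(np.abs(diff) < 15) * 100:.0f}%")
```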

Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison

Procedia PDF Downloads 145
1198 A New Binder Mineral for Cement Stabilized Road Pavements Soils

Authors: Aydın Kavak, Özkan Coruk, Adnan Aydıner

Abstract:

The long-term performance of pavement structures is significantly impacted by the stability of the underlying soils. In situ subgrades often do not provide the support required to achieve acceptable performance under traffic loading and environmental demands. NovoCrete® is a powder binder mineral for cement-stabilized road pavement soils. NovoCrete®, combined with Portland cement at optimum water content, increases crystalline formation during the hydration process, resulting in higher strength, neutralized pH levels, and water impermeability. These changes in soil properties may transform existing unsuitable in-situ materials into suitable fill materials. The main features of NovoCrete® are that it is applicable to all types of soil, reduces premature cracking, and improves soil properties, creating base and subbase course layers with high bearing capacity while reducing hazardous materials. It can also be used for the stabilization of recyclable aggregates, old asphalt pavement aggregates, etc. There are many applications in Germany, Turkey, India, etc. In this paper, a few field applications in Turkey are discussed. In road construction works, this binder material is used for cement stabilization. In these applications, 120-180 kg of cement is used per 1 m³ of soil, with 2% of the NovoCrete® binder, for stabilization. The results of a plate loading test at a road construction site show a deformation of only 1 mm under a 7 kg/cm² load. The modulus of subgrade reaction increased from 611 MN/m³ to 3673 MN/m³. The soaked CBR values for the stabilized soils increased from 10-20% to 150-200%. According to these data, weak subgrade soil can be used as a base or subbase after the modification. The potential reduction in the need for quarried materials will help conserve natural resources. The use of on-site or nearby materials in fills will significantly reduce transportation costs and provide both economic and environmental benefits.

Keywords: soil, stabilization, cement, binder, Novocrete, additive

Procedia PDF Downloads 226
1196 Explanatory Analysis of the Effect of Urban Form and Monsoon on the Cooling Effect of Blue-Green Spaces: A Case Study in Singapore

Authors: Yangyang Zhou

Abstract:

Rapid urbanization has caused the urban heat island effect, which threatens the physical and mental health of urban dwellers; blue-green spaces can mitigate the thermal environment effectively. In this study, we calculated the average LST of Singapore from 2013 to 2022 for the whole year, the Northeast monsoon and the Southwest monsoon, and compared the differences in the cooling effect of the four blue-green spaces. Then, spatial correlation analysis and spatial autoregression models were applied between cooling distance intensity (CDI) and 11 independent variables. The results reveal that (1) the highest mean land surface temperature (LST) for all years, the Northeast monsoon and the Southwest monsoon reached 42.8 ℃, 41.6 ℃, and 42.9 ℃, respectively; (2) the temperature-changing tendencies in the three time periods are similar to each other, while the overall LST trend of the Southwest monsoon is lower than those of the whole year and the Northeast monsoon; (3) the cooling distance of the sea can reach 1200 m, and CDI is highly positively correlated with NDBI and BuildD and highly negatively correlated with SVF, NDVI and TreeH. LISA maps showed that the zones that passed the significance test between CDI and NDBI and between CDI and BuildD were nearly the same locations; the same phenomenon also occurred between CDI and SVF, NDVI and TreeH. (4) SLM produced better regression results than SEM in all regions; only 3 independent variables passed the significance test in region 1, while most independent variables passed the significance test in the other regions. The variables DIST and NDBI significantly affected the CDI in all regions. For the whole region, all the variables passed the significance test, and NDBI (1.61), SVF (0.95) and NDVI (0.5) had the strongest influence on CDI.
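
A simplified sketch of the statistical step described above: correlating cooling distance intensity (CDI) with predictors such as NDBI, SVF and NDVI. The study uses spatial lag/error models (SLM/SEM); plain Pearson correlation is shown here only as a non-spatial baseline, and the data are synthetic.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
ndbi = rng.uniform(-0.3, 0.5, 300)   # hypothetical built-up index per sample point
svf = rng.uniform(0.2, 1.0, 300)     # hypothetical sky view factor
ndvi = rng.uniform(0.0, 0.8, 300)    # hypothetical vegetation index
cdi = 1.6 * ndbi - 0.9 * svf - 0.5 * ndvi + rng.normal(0, 0.1, 300)

for name, x in [("NDBI", ndbi), ("SVF", svf), ("NDVI", ndvi)]:
    r, p = pearsonr(cdi, x)
    print(f"CDI vs {name}: r = {r:.2f}, p = {p:.3g}")
```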

Keywords: cooling effect, land surface temperature, thermal environment mitigation, spatial autoregression model

Procedia PDF Downloads 30
1196 The Integrated Methodological Development of Reliability, Risk and Condition-Based Maintenance in the Improvement of the Thermal Power Plant Availability

Authors: Henry Pariaman, Iwa Garniwa, Isti Surjandari, Bambang Sugiarto

Abstract:

The availability of a complex system such as a thermal power plant is strongly influenced by the reliability of spare parts and by maintenance management policies. Reliability-centered maintenance (RCM) is an established method of analysis and the main reference for maintenance planning. This method considers the consequences of failure in its implementation but does not deal with the further risks associated with failure, such as downtime, loss of production or high maintenance costs. The risk-based maintenance (RBM) technique provides support strategies to minimize the risks posed by failure and to obtain cost-effective maintenance tasks. Meanwhile, condition-based maintenance (CBM) focuses on condition monitoring, which allows maintenance or other actions to be planned and scheduled so as to avoid the risk of failure prior to time-based maintenance. Implementation of RCM, RBM or CBM alone, or of RCM combined with RBM or RCM combined with CBM, are maintenance techniques used in thermal power plants. Implementing these three techniques in an integrated maintenance approach will increase the availability of thermal power plants compared with using the techniques individually or in combinations of two. This study uses reliability-, risk- and condition-based maintenance in an integrated manner to increase the availability of thermal power plants. The method generates a Priority Maintenance Index (MPI), obtained by multiplying the Risk Priority Number (RPN) by the Risk Index (RI), and a Failure Defense Task (FDT), which can generate condition monitoring and assessment tasks in addition to maintenance tasks. Both the MPI and FDT, obtained from the development of a functional tree, failure mode and effects analysis, fault-tree analysis, and risk analysis (risk assessment and risk evaluation), were then used to develop and implement maintenance, monitoring and condition-assessment plans and schedules, and ultimately to perform an availability analysis. The results of this study indicate that reliability-, risk- and condition-based maintenance methods, applied in an integrated manner, can increase the availability of thermal power plants.
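
A minimal sketch of the prioritisation quantity described above: the Priority Maintenance Index is the Risk Priority Number multiplied by a Risk Index, MPI = RPN × RI. The RPN here is assumed to follow the usual FMEA product of severity, occurrence and detection ratings, and all component names and numbers are hypothetical.

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """FMEA-style Risk Priority Number (assumed definition)."""
    return severity * occurrence * detection

def mpi(rpn_value: int, risk_index: float) -> float:
    """Priority Maintenance Index: RPN multiplied by the Risk Index."""
    return rpn_value * risk_index

# Hypothetical components: (severity, occurrence, detection, risk index)
components = {
    "boiler feed pump": (8, 4, 3, 1.5),
    "condenser tube":   (6, 5, 4, 1.2),
}
ranked = sorted(
    ((name, mpi(rpn(s, o, d), ri)) for name, (s, o, d, ri) in components.items()),
    key=lambda item: item[1], reverse=True,
)
for name, value in ranked:
    print(f"{name}: MPI = {value:.1f}")
```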

Keywords: integrated maintenance techniques, availability, thermal power plant, MPI, FDT

Procedia PDF Downloads 799
1195 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics

Authors: Titus A. Beu

Abstract:

Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out by a high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints for efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of structural and dynamical properties of PEI with experimental data. In a second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In a third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects that are crucial for the realistic modeling of DNA-PEI polyplexes, such as options for treating electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500,000 beads) shed light on the mechanism and provide time scales for DNA polyplex formation as a function of PEI chain size and protonation pattern. The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles rather than on changes in DNA-strand curvature. The gained insights are expected to be of significant help for designing effective gene-delivery applications.
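
A worked sketch of the Boltzmann inversion step mentioned above: an effective coarse-grained potential is obtained from a bead-bead distribution as V(r) = -kB·T·ln g(r). The radial distribution function below is synthetic, for illustration only.

```python
import numpy as np

kB = 0.0083145  # kJ/(mol*K), units commonly used in CG force fields
T = 300.0       # temperature, K

r = np.linspace(0.3, 1.5, 200)                      # bead-bead distance, nm
g_r = 1.0 + 0.8 * np.exp(-((r - 0.5) / 0.08) ** 2)  # hypothetical RDF with one peak

# Boltzmann inversion: effective (potential of mean force) interaction, kJ/mol
V = -kB * T * np.log(np.clip(g_r, 1e-12, None))
print(f"well depth ~ {V.min():.2f} kJ/mol at r = {r[np.argmin(V)]:.2f} nm")
```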

Keywords: DNA condensation, gene-delivery, polyethylene-imine, molecular dynamics

Procedia PDF Downloads 122
1194 Sustainable Use of Agricultural Waste to Enhance Food Security and Conserve the Environment

Authors: M. M. Tawfik, Ezzat M. Abd El Lateef, B. B. Mekki, Amany A. Bahr, Magda H. Mohamed, Gehan S. Bakhoom

Abstract:

The rapid increase in the world’s population, coupled with the decrease in arable land per capita, has resulted in an increased demand for food, which has in turn led to the production of large amounts of agricultural wastes at the farm, municipality and city levels. Agricultural wastes can be a valuable resource for improving food security. Unfortunately, agricultural wastes are likely to cause pollution of the environment or even harm to human health if they are not used in a sustainable manner. This calls for increased public awareness of the benefits and potential hazards of agricultural wastes, especially in developing countries. Agricultural wastes (residual stalks, straw, leaves, roots, husks, shells, etcetera) and animal wastes (manures) are widely available, renewable and virtually free; hence, they can be an important resource. They can be converted into heat, steam, charcoal, methanol, ethanol and biodiesel, as well as into raw materials (animal feed, compost, energy and biogas construction, etcetera). Organic wastes could be considered an important source of biofertilizer for enhancing food security in smallholder farming communities that cannot afford expensive inorganic fertilizers. Moreover, these organic wastes contain high levels of nitrogen, phosphorus, potassium, and organic matter important for improving the nutrient status of soils in urban agriculture. Organic compost leads to improved crop yields and nutritional values as compared with inorganic fertilization. This paper briefly reviews how agricultural wastes can be used to enhance food security and conserve the environment.

Keywords: agricultural waste, organic compost, environment, valuable resources

Procedia PDF Downloads 525
1193 Agricultural Organized Areas Approach for Resilience to Droughts, Nutrient Cycle and Rural and Wild Fires

Authors: Diogo Pereira, Maria Moura, Joana Campos, João Nunes

Abstract:

As the Ukraine war highlights the European Economic Area's vulnerability and external dependence for feed and food, agriculture gains significant importance. Transformative change is necessary to reach a sustainable and resilient agricultural sector. Agriculture is an important driver of the bioeconomy, of the equilibrium and survival of society, and of resilience to rural fires. The pressures of (1) water stress, (2) the nutrient cycle, and (3) a socio-demographic evolution towards 70% of the population living in urban systems and the aging of the rural population, combined with climate change, exacerbate the problem and paradigm of rural and wildfires, especially in Portugal. The Portuguese territory is characterized by (1) 28% marginal land, (2) soil quality that is not appropriate for agricultural activity over 70% of the territory, (3) micro smallholdings, with less than 1 ha per proprietor and mainly familiar and traditional agriculture in the North and Centre regions, and (4) the most vulnerable areas for rural fires being in these same regions. The most important difference between the South and the North and Centre of Portugal with respect to rural and wildfires is agricultural activity, which is higher in the South. In Portugal, rural and wildfires represent an average annual economic loss of around 800 to 1000 million euros. The WinBio model is an agri-environmental metabolism design with the capacity to create a new agri-food metabolism through Agricultural Organized Areas, a private-public partnership. This partnership seeks to grow agricultural activity in regions with (1) abandoned territory, (2) micro smallholdings, (3) water and nutrient management needs, and (4) low agri-food literacy. It also aims to support the planning and monitoring of resource-use efficiency and the sustainability of territories, using agriculture as a barrier to rural and wildfires in order to protect the rural population.

Keywords: agricultural organized areas, residues, climate change, drought, nutrients, rural and wild fires

Procedia PDF Downloads 83
1192 Combined Tarsal Coalition Resection and Arthroereisis in Treatment of Symptomatic Rigid Flat Foot in Pediatric Population

Authors: Michael Zaidman, Naum Simanovsky

Abstract:

Introduction. Symptomatic tarsal coalition with rigid flat foot often demands an operative solution. An isolated coalition resection does not guarantee pain relief; correction of a co-existing foot deformity may be required. The objective of the study was to analyze the results of combining tarsal coalition resection and arthroereisis. Patients and methods. We retrospectively reviewed the medical records and radiographs of children operatively treated in our institution for symptomatic calcaneonavicular or talocalcaneal coalition between 2019 and 2022. Eight patients (twelve feet), 4 boys and 4 girls with a mean age of 11.2 years, were included in the study. In six patients (10 feet), a calcaneonavicular coalition was diagnosed; two patients (two feet) had a talocalcaneal coalition. To quantify the degree of foot deformity, we used the calcaneal pitch angle, the lateral talar-first metatarsal (Meary's) angle, and the talonavicular coverage angle. The clinical results were assessed using the American Orthopaedic Foot and Ankle Society (AOFAS) Ankle Hindfoot Score. Results. The mean follow-up was 28 months. The mean talonavicular coverage angle improved from 17.75º preoperatively to 5.4º postoperatively. The calcaneal pitch angle improved from a mean of 6.8º to 16.4º. The mean preoperative Meary's angle of -11.3º improved to a mean of 2.8º. The mean AOFAS score improved from 54.7 preoperatively to 93.1 points postoperatively. In nine of twelve feet, the overall clinical outcome judged by the AOFAS scale was excellent (90-100 points); in three feet it was good (80-90 points). Six patients (ten feet) clearly improved their subtalar range of motion. Conclusion. For symptomatic stiff or rigid flat feet associated with tarsal coalition, the combination of coalition resection and arthroereisis leads to normalization of radiographic parameters and to clinical and functional improvement with good patient satisfaction, and is likely to be more effective than the isolated procedures.

Keywords: rigid flat foot, tarsal coalition resection, arthroereisis, outcome

Procedia PDF Downloads 66
1191 Investigating the Motion of a Viscous Droplet in Natural Convection Using the Level Set Method

Authors: Isadora Bugarin, Taygoara F. de Oliveira

Abstract:

Binary fluids and emulsions, in general, are present in a vast range of industrial, medical, and scientific applications, showing complex behaviors responsible for defining the flow dynamics and the system operation. However, the literature describing these fluids in non-isothermal models is still limited. The present work brings a detailed investigation of droplet migration due to natural convection in a square enclosure, aiming to clarify the effects of drop viscosity on the flow dynamics by showing how distinct viscosity ratios (droplet/ambient fluid) influence the drop motion and the final movement pattern reached in the stationary regime. The analysis was carried out by observing distinct combinations of Rayleigh number, drop initial position, and viscosity ratio. The Navier-Stokes and energy equations were solved considering the Boussinesq approximation in a laminar flow, using the finite differences method combined with the level set method for the binary flow solution. Previous results collected by the authors showed that the Rayleigh number and the drop initial position drastically affect the motion pattern of the droplet. For Ra ≥ 10⁴, two very marked behaviors were observed according to the initial position: the drop travels either a helical path towards the center or a cyclic circular path resulting in a closed cycle in the stationary regime. The variation of the viscosity ratio showed a significant alteration of this pattern, exposing a large influence on the droplet path, capable of modifying the flow's behavior. Analyses of viscosity effects on the flow's unsteady Nusselt number were also performed. Among the relevant contributions of this work is the potential use of the flow's initial conditions as a mechanism to control droplet migration inside the enclosure.
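
A minimal one-dimensional sketch of the level set idea used in such solvers: the interface is the zero contour of a signed-distance function φ, transported by ∂φ/∂t + u ∂φ/∂x = 0 with a first-order upwind scheme. This is an illustration under simplified assumptions (prescribed constant velocity, 1-D), not the authors' solver.

```python
import numpy as np

nx, L, u, dt, steps = 200, 1.0, 0.5, 0.002, 100  # grid, domain, velocity, time step
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
phi = x - 0.3  # signed distance function; zero level set (interface) at x = 0.3

for _ in range(steps):
    dphi = np.empty_like(phi)
    dphi[1:] = (phi[1:] - phi[:-1]) / dx  # backward (upwind) difference for u > 0
    dphi[0] = dphi[1]                     # simple inflow extrapolation at the boundary
    phi -= u * dt * dphi

interface = x[np.argmin(np.abs(phi))]
print(f"interface moved to x ~ {interface:.2f} (expected ~ {0.3 + u * dt * steps:.2f})")
```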

Keywords: binary fluids, droplet motion, level set method, natural convection, viscosity

Procedia PDF Downloads 124
1190 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.Air pollution is a serious danger to international wellbeing and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. 
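To illustrate the ensemble idea described above (train several models, then average the predictions of the top performers and rank predictors by mean decrease in accuracy), the following scikit-learn sketch is a minimal stand-in rather than the authors' implementation; the file name, feature columns, and use of permutation importance as the mean-decrease-in-accuracy measure are assumptions made for the example.

```python
# Minimal sketch of multi-model averaging for air quality prediction.
# Not the authors' code; the CSV name and feature columns are hypothetical,
# and all columns are assumed to be numeric (e.g. season encoded 0-3).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("la_air_quality.csv")  # hypothetical file
X = df[["season", "is_weekend", "temp_forecast", "wind_forecast", "pm25_lag1", "o3_lag1"]]
y = df["unhealthy"]                     # 1 = poor air quality expected

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "logistic": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "mlp": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)),
}
for model in models.values():
    model.fit(X_train, y_train)

# Average the predicted probabilities of the individual models.
avg_prob = np.mean([m.predict_proba(X_test)[:, 1] for m in models.values()], axis=0)
ensemble_pred = (avg_prob >= 0.5).astype(int)
print("ensemble accuracy:", (ensemble_pred == y_test.to_numpy()).mean())

# Rank predictors by mean decrease in accuracy (permutation importance).
imp = permutation_importance(models["forest"], X_test, y_test,
                             scoring="accuracy", n_repeats=20, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```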

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 135
1189 A Cooperative Signaling Scheme for Global Navigation Satellite Systems

Authors: Keunhong Chae, Seokho Yoon

Abstract:

Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide higher location accuracy, calling for a more efficient signaling scheme among the satellites in the overall GNSS network. Spatial diversity is one efficient signaling scheme in that it improves network throughput; however, it requires multiple antennas, which would significantly increase the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, in which virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmitting satellite of interest, with the neighboring satellites modeled as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate asynchronously, so the overall performance of the GNSS network can degrade severely. To tackle the problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement because they require signal decoding at the relay nodes. Although the relay nodes can be made somewhat simpler by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient to implement those operations at the source node, which has more resources than the relay nodes. In this paper, we therefore propose a novel cooperative signaling scheme in which the data signals are combined in a unique way at the source node, obviating the need for complex operations such as signal decoding, time-reversal, and conjugation at the relay nodes. The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while significantly reducing the complexity at the relay nodes. Acknowledgment: This work was supported by the National GNSS Research Center program of Defense Acquisition Program Administration and Agency for Defense Development.
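To make the cooperative diversity baseline being compared above concrete, the numpy sketch below simulates a generic two-branch virtual MIMO link with distributed Alamouti-style combining over flat Rayleigh fading. It illustrates the diversity gain and the role of conjugation at the combiner, but it is not the proposed source-side combining scheme; the modulation, channel model, and SNR value are assumptions for the example.

```python
# Generic two-branch virtual-MIMO (distributed Alamouti) BER sketch.
# Illustration of cooperative diversity only, not the paper's scheme.
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 200_000                                 # Alamouti symbol pairs
snr_db = 10.0
noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))    # per real dimension, unit symbol energy

bits = rng.integers(0, 2, size=(n_pairs, 2))
s = 2 * bits - 1                                  # BPSK symbols s1, s2

# Independent flat Rayleigh channels from the two (virtual) transmit branches.
h = (rng.standard_normal((n_pairs, 2)) + 1j * rng.standard_normal((n_pairs, 2))) / np.sqrt(2)
n1 = noise_std * (rng.standard_normal(n_pairs) + 1j * rng.standard_normal(n_pairs))
n2 = noise_std * (rng.standard_normal(n_pairs) + 1j * rng.standard_normal(n_pairs))

# Alamouti transmission: slot 1 sends (s1, s2), slot 2 sends (-s2*, s1*).
r1 = h[:, 0] * s[:, 0] + h[:, 1] * s[:, 1] + n1
r2 = -h[:, 0] * np.conj(s[:, 1]) + h[:, 1] * np.conj(s[:, 0]) + n2

# Linear combining with channel conjugates recovers each symbol with diversity order 2:
# s1_hat = (|h1|^2 + |h2|^2) * s1 + noise, and likewise for s2_hat.
s1_hat = np.conj(h[:, 0]) * r1 + h[:, 1] * np.conj(r2)
s2_hat = np.conj(h[:, 1]) * r1 - h[:, 0] * np.conj(r2)

detected = np.stack([s1_hat.real > 0, s2_hat.real > 0], axis=1).astype(int)
print("BER at %.1f dB: %.5f" % (snr_db, np.mean(detected != bits)))
```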

Keywords: global navigation satellite network, cooperative signaling, data combining, nodes

Procedia PDF Downloads 284
1188 Reconstruction Post-mastectomy: A Literature Review on Its Indications and Techniques

Authors: Layaly Ayoub, Mariana Ribeiro

Abstract:

Introduction: Breast cancer is currently considered the leading cause of cancer-related deaths among women in Brazil. Mastectomy, an essential part of this treatment, often necessitates subsequent breast reconstruction to restore physical appearance and to aid the emotional and psychological recovery of patients. The choice between immediate and delayed reconstruction is influenced by factors such as the type and stage of cancer, as well as the patient's overall health. The decision between autologous breast reconstruction and implant-based reconstruction requires a detailed analysis of individual conditions and needs. Objectives: This study analyzes the techniques and indications used in post-mastectomy breast reconstruction. Methodology: Literature review conducted in the PubMed and SciELO databases, focusing on articles that matched the chosen descriptors and met the inclusion and exclusion criteria. Results: Breast reconstruction is commonly performed after mastectomy, and the technique must be determined case by case according to the specific characteristics of each patient. The tissue-expander technique is indicated for patients with sufficient skin and tissue post-mastectomy, who do not require additional radiotherapy, and who opt for a less complex surgery with a shorter recovery time. This procedure promotes the gradual expansion of the soft tissues where the definitive implant will be placed. Both temporary and permanent expanders offer flexibility, allowing the expander size to be adjusted until the desired volume is reached and enabling the skin and tissues to adapt to the breast implant area. Conversely, autologous reconstruction is indicated for patients who will undergo radiotherapy, have insufficient tissue, and prefer a more natural solution. This technique uses the transverse rectus abdominis muscle (TRAM) flap, the latissimus dorsi muscle flap, the gluteal flap, and local muscle flaps to shape a new breast, potentially combined with a breast implant. Conclusion: In this context, a thorough evaluation of which technique to apply is essential, as both have their benefits and challenges.

Keywords: indications, post-mastectomy, breast reconstruction, techniques

Procedia PDF Downloads 32
1187 Development of a Two-Step 'Green' Process for (-) Ambrafuran Production

Authors: Lucia Steenkamp, Chris V. D. Westhuyzen, Kgama Mathiba

Abstract:

Ambergris, and more specifically its oxidation product (–)-ambrafuran, is a scarce, valuable, and sought-after perfumery ingredient. The material is used as a fixative agent to stabilise perfumes in formulations by reducing the evaporation rate of volatile substances. Ambergris is a metabolic product of the sperm whale (Physeter macrocephalus L.), resulting from intestinal irritation. Chemically, (–)-ambrafuran is produced from the natural product sclareol in eight synthetic steps, a route that relies on harsh and often toxic chemicals. An overall yield of no more than 76% can be achieved in some routes, but it is generally lower. A new 'green' route has been developed in our laboratory in which sclareol, extracted from the Clary sage plant, is converted to (–)-ambrafuran in two steps with an overall yield in excess of 80%. The first step uses a microorganism, Hyphozyma roseoniger, to bioconvert sclareol to an intermediate diol at substrate concentrations of up to 50 g/L. The yield varies between 67% and 90%, depending on the substrate concentration used. The diol product is obtained at 95% purity and is used without further purification in the next step. The intermediate diol is then cyclodehydrated to the final product, (–)-ambrafuran, using a zeolite, which is not harmful to the environment and is readily recycled. The yield of this step is 96%, and following a single recrystallization, the purity of the product is >99.5%. A preliminary LC-MS study of the bioconversion identified several intermediates produced in the fermentation broth under oxygen-restricted conditions. Initially, a short-lived ketone is produced in equilibrium with a more stable pyranol, a key intermediate in the process. The latter is oxidised under Norrish type I cleavage conditions to yield an acetate, which is hydrolysed either chemically or by lipase action to afford the primary fermentation product, the intermediate diol. All the intermediates identified point to CYP450 action as the likely key enzyme(s) in the mechanism. This invention is an exceptional example of how the power of biocatalysis, combined with a mild, benign chemical step, can be deployed to replace the total chemical synthesis of a specific chiral antipode of a commercially relevant material.
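As a quick consistency check on the reported figures (a worked calculation, assuming the upper bioconversion yield of 90% applies), the two step yields multiply to give the claimed overall yield:

\[
Y_{\text{overall}} = Y_{\text{bioconversion}} \times Y_{\text{cyclodehydration}} = 0.90 \times 0.96 \approx 0.86 > 0.80
\]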

Keywords: ambrafuran, biocatalysis, fragrance, microorganism

Procedia PDF Downloads 237
1186 Effects of Branched-Chain Amino Acid Supplementation on Sarcopenic Patients with Liver Cirrhosis

Authors: Deepak Nathiya, Ramesh Roop Rai, Pratima Singh, Preeti Raj, Supriya Suman, Balvir Singh Tomar

Abstract:

Background: Sarcopenia is a catabolic state in liver cirrhosis (LC), accelerated by the breakdown of skeletal muscle to release amino acids, which adversely affects survival, health-related quality of life, and the response to any underlying disease. The primary objective of the study was to investigate the long-term effect of branched-chain amino acid (BCAA) supplementation on parameters associated with improved prognosis in sarcopenic patients with LC, as well as to evaluate its impact on cirrhosis-related events. Methods: We carried out a 24-week, single-center, randomized, open-label, controlled, two-cohort parallel-group intervention trial comparing the efficacy of BCAA against lactoalbumin (L-ALB) in 106 sarcopenic patients with liver cirrhosis. The BCAA (intervention) group received 7.2 g of BCAA, whereas the L-ALB group received 6.3 g of lactoalbumin. The primary outcome was the impact of BCAA on the parameters of sarcopenia: muscle mass, muscle strength, and physical performance. The secondary outcomes were combined event-free survival and maintenance of liver function, as well as changes in laboratory and clinical markers over the six-month period. Results: Treatment with BCAA led to significant improvements in the sarcopenic parameters: muscle strength, muscle function, and muscle mass. Cirrhosis-related complications occurred less frequently, and cumulative event-free survival was higher, in the BCAA group than in the L-ALB group. Prognostic markers also improved significantly during the study. Conclusion: This clinical trial demonstrated that long-term BCAA supplementation improved sarcopenia and prognostic markers in patients with advanced liver cirrhosis.

Keywords: sarcopenia, liver cirrhosis, BCAA, quality of life

Procedia PDF Downloads 144
1185 Development of an Optimised, Automated Multidimensional Model for Supply Chains

Authors: Safaa H. Sindi, Michael Roe

Abstract:

This project divides supply chain (SC) models into seven Eras, according to the evolution of the market's needs over time. The five earliest Eras describe the emergence of supply chains, while the last two Eras are yet to be created. Research objectives: The aim is to generate the two latest Eras, with their respective models, focusing on consumable goods. Era Six contains the Optimal Multidimensional Matrix (OMM), which incorporates most characteristics of the SC and allocates them into four quarters (Agile, Lean, Leagile, and Basic SC). This will help companies, especially small and medium-sized enterprises (SMEs), plan their optimal SC route. Era Seven creates an Automated Multidimensional Model (AMM), which upgrades the matrix of Era Six by accounting for all supply chain factors (e.g., offshoring, sourcing, risk) in an interactive system with heuristic learning that helps larger companies and industries select the best SC model for their market. Methodologies: Data collection is based on a Fuzzy-Delphi study that analyses statements using fuzzy logic. The first round of the Delphi study contains statements (fuzzy rules) about the matrix of Era Six; the second round contains the feedback from the first round, and so on. Preliminary findings: Both models are applicable. The matrix of Era Six reduces the complexity of choosing the best SC model for SMEs by helping them identify the strategy, among Basic SC, Lean, Agile, and Leagile SC, that is tailored to their needs. The interactive heuristic learning in the AMM of Era Seven will help mitigate error and aid large companies in identifying and re-strategizing toward the best SC model and distribution system for their market and commodity, hence increasing efficiency. Potential contributions to the literature: The problematic issue facing many companies is deciding which SC model or strategy to adopt, given the many models and definitions developed over the years. This research simplifies the decision by putting most definitions into a template and most models into the matrix of Era Six. The research is original in that the division of SC into Eras, the matrix of Era Six (OMM) with Fuzzy-Delphi, and the heuristic learning in the AMM of Era Seven provide a synergy of tools that have not been combined before in the area of SC. Additionally, the OMM of Era Six is unique in that it combines most characteristics of the SC, which is an original concept in itself.
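The abstract does not spell out the dimensions or decision rules of the OMM, so the Python sketch below is an illustration only: it maps a supply chain profile onto the four quarters named above using two assumed dimensions (demand uncertainty and responsiveness pressure) and an assumed threshold. The class, function, quarter placement, and example profiles are hypothetical stand-ins for the Fuzzy-Delphi-derived rules described in the paper.

```python
# Illustrative sketch only: classifying a supply chain into the four OMM
# quarters (Basic SC, Lean, Agile, Leagile) from two assumed dimensions.
# The real OMM uses many more SC characteristics and a Fuzzy-Delphi process;
# everything below is a simplified, hypothetical stand-in.
from dataclasses import dataclass

@dataclass
class SupplyChainProfile:
    demand_uncertainty: float   # 0 = fully predictable demand, 1 = highly volatile
    responsiveness_pressure: float  # 0 = relaxed delivery windows, 1 = must respond fast

def omm_quarter(profile: SupplyChainProfile, threshold: float = 0.5) -> str:
    """Map a profile onto one quarter of the assumed 2x2 matrix."""
    volatile = profile.demand_uncertainty >= threshold
    urgent = profile.responsiveness_pressure >= threshold
    if volatile and urgent:
        return "Agile"      # respond quickly to unpredictable demand
    if volatile:
        return "Leagile"    # lean upstream, agile downstream (postponement)
    if urgent:
        return "Lean"       # stable demand; remove waste and compress cycle time
    return "Basic SC"       # stable demand, low responsiveness pressure

if __name__ == "__main__":
    staple_goods = SupplyChainProfile(demand_uncertainty=0.2, responsiveness_pressure=0.7)
    fashion_goods = SupplyChainProfile(demand_uncertainty=0.9, responsiveness_pressure=0.8)
    print("Staple goods:", omm_quarter(staple_goods))    # -> Lean
    print("Fashion goods:", omm_quarter(fashion_goods))  # -> Agile
```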

Keywords: Leagile, automation, heuristic learning, supply chain models

Procedia PDF Downloads 396