Search results for: combination of aluminum honeycomb panel
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4525


85 Exploring Empathy Through Patients’ Eyes: A Thematic Narrative Analysis of Patient Narratives in the UK

Authors: Qudsiya Baig

Abstract:

Empathy yields an unparalleled therapeutic value within patient-physician interactions. Medical research is inundated with evidence that a physician’s ability to empathise with patients leads to a greater willingness to report symptoms, an improvement in diagnostic accuracy and safety, and better adherence and satisfaction with treatment plans. Furthermore, the Institute of Medicine states that empathy leads to more patient-centred care, which is one of the six main goals of a 21st-century health system. However, there is a paradox between the theoretical significance of empathy and its presence, or lack thereof, in clinical practice. Recent studies have reported that empathy declines amongst students and physicians over time. The three most impactful contributors to this decline are: (1) disagreement over the definition of empathy, which makes it difficult to implement in practice; (2) poor consideration or regulation of empathy, leading to burnout and thus abandonment altogether; and (3) the lack of diversity in the curriculum and the influence of medical culture, which prioritises science over patient experience and deters some physicians from using ‘too much’ empathy for fear of losing clinical objectivity. These issues were investigated by conducting a fully inductive thematic narrative analysis of patient narratives in the UK to evaluate the behaviours and attitudes that patients associate with empathy. The principal enquiries underpinning this study were to uncover the factors that affect the experience of empathy within provider-patient interactions and to analyse their effects on patient care. This research contributes uniquely to this discourse by examining the phenomenon of empathy directly from patients’ experiences, which were systematically extracted from a repository of online patient narratives of care titled ‘CareOpinion UK’. 
Narrative analysis was specifically chosen as the methodology to examine narratives through a phenomenological lens, focusing on the particularity and context of each story. By enquiring beyond the superficial who-what-where, the study of narratives ascribed meaning to illness by highlighting the everyday reality of patients who face the exigent life circumstances created by suffering, disability, and the threat to life. The following themes were found to be the most impactful in influencing the experience of empathy: dismissive behaviours, judgmental attitudes, undermining of patients’ pain or concerns, holistic care, and failures and successes of communication or language. For each theme there were overarching themes relating to either a failure to understand the patient’s perspective or a success in taking a person-centred approach. An in-depth analysis revealed that a lack of empathy was greatly associated with an emotive-cognitive imbalance, which disengaged physicians from their patients’ emotions. This study concludes that competent providers require a combination of knowledge, skills, and, more importantly, empathic attitudes to help create a context for effective care. The crucial elements of that context involve (a) identifying empathy clues within interactions to engage with patients’ situations, (b) attributing a perspective to the patient through perspective-taking, and (c) adapting behaviour and communication according to each patient’s individual needs. Empathy underpins that context, as does an appreciation of narrative, and the two are interrelated.

Keywords: empathy, narratives, person-centred, perspective, perspective-taking

Procedia PDF Downloads 103
84 Satellite Connectivity for Sustainable Mobility

Authors: Roberta Mugellesi Dow

Abstract:

As the climate crisis becomes unignorable, it is imperative that new services are developed to address not only the needs of customers but also their impact on the environment. The Telecommunication and Integrated Application (TIA) Directorate of ESA is supporting the green transition, with particular attention to sustainable mobility. “Accelerating the shift to sustainable and smart mobility” is at the core of the European Green Deal strategy, which seeks a 90% reduction in transport-related emissions by 2050. Transforming the way that people and goods move is essential to increasing mobility while decreasing environmental impact, and transport must be considered holistically to produce a shared vision of green intermodal mobility. The use of space technologies, integrated with terrestrial technologies, is an enabler of smarter traffic management and increased transport efficiency for automated and connected multimodal mobility. Satellite connectivity, including future 5G networks, and digital technologies such as Digital Twin, AI, Machine Learning, and cloud-based applications are key enablers of sustainable mobility. SatCom is essential to ensure that connectivity is ubiquitously available, even in remote and rural areas or in case of a failure, through the convergence of terrestrial and SatCom connectivity networks. This is especially crucial when there are risks of network failures or cyber-attacks targeting terrestrial communication; SatCom ensures communication network robustness and resilience. The combination of terrestrial and satellite communication networks is making possible intelligent and ubiquitous V2X systems and PNT services with significantly enhanced reliability and security, hyper-fast wireless access, and more seamless communication coverage. SatNav is essential in providing accurate tracking and tracing capabilities for automated vehicles and in guiding them to target locations. 
SatNav can also enable location-based services such as car-sharing applications, parking assistance, and fare payment. In addition to GNSS receivers, wireless connections, radar, lidar, and other installed sensors can enable automated vehicles to monitor their surroundings, to ‘talk to each other’ and with infrastructure in real time, and to respond to changes instantaneously. SatEO can be used to provide the maps required for traffic management, as well as to evaluate conditions on the ground, assess changes, and provide key data for monitoring and forecasting air pollution and other important parameters. Earth Observation derived data are used to provide meteorological information, such as wind speed and direction and humidity, that must be fed into models contributing to traffic management services. The paper will provide examples of services and applications developed to identify innovative solutions and new business models enabled by new digital technologies that engage the space and non-space ecosystems together to deliver value and provide innovative, greener solutions in the mobility sector. Examples include Connected Autonomous Vehicles, electric vehicles, green logistics, and others. The relevant technologies include hybrid SatCom and 5G providing ubiquitous coverage, IoT integration with non-space technologies, and navigation and PNT technology, as well as other space data.

Keywords: sustainability, connectivity, mobility, satellites

Procedia PDF Downloads 102
83 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of a text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as a necessity given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audios and their respective transcriptions. The DNN was initialized with a single hidden layer, and the number of hidden layers was increased to five during training. A refinement of the weight matrices and bias terms, together with Stochastic Gradient Descent (SGD) training, was also performed. The objective function was the cross-entropy criterion. 
(b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three lattice-based confidence scores (based on the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system was measured by means of the Word Error Rate (WER). The test dataset was renewed in order to exclude the new transcriptions added to the training dataset. Several experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised DNN-based ASR model outperformed the GMM model, in terms of WER, in all tested cases. The best result achieved a relative WER improvement of 6%. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
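The selection step (c) can be sketched as follows. This is a minimal, hypothetical illustration of confidence-based pseudo-label selection, not the authors' implementation: the function names, the blending weight `alpha`, and the toy costs are all assumptions; only the idea of combining graph and acoustic lattice costs into one score comes from the abstract.

```python
# Hypothetical sketch: decode unlabeled audio with a seed model, score each
# hypothesis with a combined graph/acoustic confidence, and keep only
# high-confidence transcriptions for retraining.

def combined_confidence(graph_cost, acoustic_cost, alpha=0.5):
    """Blend graph and acoustic lattice costs into one score.
    Lower costs mean higher confidence, so we negate the weighted sum."""
    return -(alpha * graph_cost + (1.0 - alpha) * acoustic_cost)

def select_pseudo_labels(hypotheses, threshold):
    """Keep decoded utterances whose combined confidence passes a threshold.
    `hypotheses` is a list of (utterance_id, text, graph_cost, acoustic_cost)."""
    selected = []
    for utt_id, text, g_cost, a_cost in hypotheses:
        if combined_confidence(g_cost, a_cost) >= threshold:
            selected.append((utt_id, text))
    return selected

# Toy example: two confident decodes and one uncertain one (made-up costs).
hyps = [
    ("u1", "hola mundo", 1.0, 2.0),
    ("u2", "buenos dias", 0.5, 0.5),
    ("u3", "???", 9.0, 8.0),
]
kept = select_pseudo_labels(hyps, threshold=-2.0)  # keeps u1 and u2 only
```

The retained pairs would then be appended to the labeled training set before the seed DNN is retrained.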

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 320
82 Temperature Distribution Inside Hybrid Photovoltaic-Thermoelectric Generator Systems and Their Dependency on Exposure Angles

Authors: Slawomir Wnuk

Abstract:

Due to the widespread implementation of renewable energy development programs, solar energy use is increasing constantly across the world. According to REN21, in 2020 the installed capacity of on-grid and off-grid solar photovoltaic systems reached 760 GWDC, an increase of 139 GWDC over the previous year. However, the photovoltaic solar cells used for primary conversion of solar energy into electrical energy exhibit significant drawbacks. The fundamental downside is unstable and low conversion efficiency, which is negatively affected by a range of factors. To neutralise or minimise the impact of the factors causing energy losses, researchers have proposed varied ideas. One promising technological solution is the PV-MTEG multilayer hybrid system, which combines the advantages of both photovoltaic cells and thermoelectric generators. A series of experiments was performed in the Glasgow Caledonian University laboratory to investigate such a system in operation. In the experiments, a Sol3A-series solar simulator was employed as a stable solar irradiation source, and multichannel voltage and temperature data loggers were utilised for measurements. A two-layer simulation model of the proposed hybrid system was built and tested for its energy conversion capability under a variety of exposure angles to the solar irradiation, with concurrent examination of the temperature distribution inside the proposed PV-MTEG structure. The same series of laboratory tests was carried out for a range of loads, with the temperature and generated voltage measured and recorded for each exposure angle and load combination. It was found that increasing the exposure angle of the PV-MTEG structure to the irradiation source decreases the temperature gradient ΔT between the system layers and reduces overall system heating. The reduction of the temperature gradient negatively influences the voltage generation process. 
The experiments showed that for exposure angles in the range from 0° to 45°, the ‘generated voltage – exposure angle’ dependence closely follows a linear characteristic. It was also found that the voltage generated by MTEG structures operating at the determined optimal load drops by approximately 0.82% per 1° increase of the exposure angle. This voltage drop also occurs at higher applied loads, becoming steeper as the load increases beyond the optimal value; however, the difference is not significant. Despite the linear character of the MTEG voltage-angle dependence, the temperature reduction between the system's structural layers and at tested points on its surface was not linear. In conclusion, the PV-MTEG exposure angle appears to be an important parameter affecting the efficiency of energy generation by the thermo-electrical generators incorporated inside those hybrid structures. The research revealed the great potential of the proposed hybrid system. The experiments indicated interesting behaviour of the tested structures, and the results appear to provide a valuable contribution to the development and technological design process for large energy conversion systems utilising similar structural solutions.
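The reported linear trend can be turned into a small worked example. This is a sketch of the 0.82%-per-degree relation for the 0°-45° range at the optimal load; the reference voltage `v0` is a made-up normalized placeholder, not a measured value from the study.

```python
# Linear model of generated voltage versus exposure angle, using the
# ~0.82% per degree drop reported for MTEG structures at the optimal load.
# Valid only for the 0-45 degree range where the linear fit was observed.

def mteg_voltage(v0, angle_deg, drop_per_deg=0.0082):
    """Voltage at a given exposure angle, relative to normal incidence v0."""
    if not 0 <= angle_deg <= 45:
        raise ValueError("linear fit reported only for 0-45 degrees")
    return v0 * (1.0 - drop_per_deg * angle_deg)

v0 = 1.0                          # normalized voltage at 0 degrees (assumed)
v30 = mteg_voltage(v0, 30)        # 1.0 * (1 - 0.0082 * 30) = 0.754
```

So tilting the structure by 30° from normal incidence would, under this linear model, cost roughly a quarter of the generated voltage.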

Keywords: photovoltaic solar systems, hybrid systems, thermo-electrical generators, renewable energy

Procedia PDF Downloads 63
81 Post-bladder Catheter Infection

Authors: Mahla Azimi

Abstract:

Introduction: Post-bladder catheter infection is a common and significant healthcare-associated infection that affects individuals with indwelling urinary catheters. These infections can lead to various complications, including urinary tract infections (UTIs), bacteremia, sepsis, and increased morbidity and mortality rates. This article aims to provide a comprehensive review of post-bladder catheter infections, including their causes, risk factors, clinical presentation, diagnosis, treatment options, and preventive measures. Causes and Risk Factors: Post-bladder catheter infections primarily occur due to the colonization of microorganisms on the surface of the urinary catheter. The most common pathogens involved are Escherichia coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, and Enterococcus species. Several risk factors contribute to the development of these infections, such as prolonged catheterization duration, improper insertion technique, poor hygiene practices during catheter care, and compromised immune system function in patients with underlying conditions or on immunosuppressive therapy. Clinical Presentation: Patients with post-bladder catheter infections may present with symptoms such as fever, chills, malaise, suprapubic pain or tenderness, and cloudy or foul-smelling urine. In severe cases, or when left untreated for an extended period, patients may develop more serious symptoms such as hematuria or signs of systemic infection. Diagnosis: The diagnosis of post-bladder catheter infection involves a combination of clinical evaluation and laboratory investigations. Urinalysis is crucial in identifying pyuria (presence of white blood cells) and bacteriuria (presence of bacteria). A urine culture is performed to identify the causative organism(s) and determine its antibiotic susceptibility profile. Treatment Options: Prompt initiation of appropriate antibiotic therapy is essential in managing post-bladder catheter infections. 
Empirical treatment should cover common pathogens until culture results are available. The choice of antibiotics should be guided by local antibiogram data to ensure optimal therapy. In some cases, catheter removal may be necessary, especially if the infection is recurrent or associated with severe complications. Preventive Measures: Prevention plays a vital role in reducing the incidence of post-bladder catheter infections. Strategies include proper hand hygiene, aseptic technique during catheter insertion and care, regular catheter maintenance, and timely removal of unnecessary catheters. Healthcare professionals should also promote patient education regarding self-care practices and signs of infection. Conclusion: Post-bladder catheter infections are a significant healthcare concern that can lead to severe complications and increased healthcare costs. Early recognition, appropriate diagnosis, and prompt treatment are crucial in managing these infections effectively. Implementing preventive measures can significantly reduce the incidence of post-bladder catheter infections and improve patient outcomes. Further research is needed to explore novel strategies for prevention and management in this field.

Keywords: post-bladder catheter infection, urinary tract infection, bacteriuria, indwelling urinary catheters, prevention

Procedia PDF Downloads 53
80 Temporal and Spatial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods

Authors: Dario Milani, Guido Morgenthal

Abstract:

Fluid dynamic computation of wind-induced forces on bluff bodies, e.g., light, flexible civil structures or airplane wings at high incidence approaching the ground, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the usage of small-scale devices such as guide vanes in bridge design to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One of the solution methods for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion, compact discretization as the vorticity is strongly localized, implicit accounting for the free-space boundary conditions typical for this class of FSI problems, and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the achievable accuracy. In the classical VPM method, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails or fairings. 
For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization might become prohibitively expensive to compute even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution without substantially increasing the global computational cost by computing a correction of the particle-particle interaction in some regions of interest. In this paper different strategies are presented in order to extend the conventional VPM method to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal sub-stepping to increase the accuracy of the particle convection in certain regions, as well as dynamically re-discretizing the particle map to control the global and local numbers of particles. Finally, these methods are applied to a test case, and the improvements in the efficiency as well as the accuracy of the proposed extensions to the method are presented. The important benefits in terms of accuracy and computational cost of the combination of these methods are thus presented, along with their relevant applications.
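The O(Np²) particle-particle interaction mentioned above can be made concrete with a minimal sketch: the velocity each 2D point vortex induces on every other particle (the Biot-Savart law for point vortices). This is an illustrative toy, not the paper's solver; the regularization `eps` is an assumption added to avoid the singularity at zero separation.

```python
import math

# Naive O(n^2) evaluation of mutually induced velocities for 2D point
# vortices: the double loop is exactly the quadratic cost the adaptive
# strategies (sub-stepping, re-discretization) aim to reduce.

def induced_velocities(xs, ys, gammas, eps=1e-6):
    """Return (u, v) velocity components at each particle from all others."""
    n = len(xs)
    us, vs = [0.0] * n, [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx, dy = xs[i] - xs[j], ys[i] - ys[j]
            r2 = dx * dx + dy * dy + eps   # regularized squared distance
            coef = gammas[j] / (2.0 * math.pi * r2)
            us[i] += -coef * dy            # induced velocity is perpendicular
            vs[i] += coef * dx             # to the separation vector
    return us, vs

# A counter-rotating vortex pair translates as a unit: both particles
# receive the same induced velocity, here 1/(2*pi) in the y direction.
us, vs = induced_velocities([0.0, 1.0], [0.0, 0.0], [1.0, -1.0])
```

Replacing this double loop with locally corrected, sub-stepped interactions in regions of interest is the kind of refinement the paper's strategies target.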

Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method

Procedia PDF Downloads 240
79 Deep Learning for SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and reduces the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. 
Although the improvements achieved by the newly investigated and tested reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. The experiments show that the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
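A loss combining several terms tied to scattering properties, as described above, can be sketched generically. This is a hedged illustration only: the abstract does not specify the actual terms or weights, so the per-channel reconstruction error and the total-power term below, and the weights `w_rec` and `w_pow`, are stand-in assumptions.

```python
# Hypothetical multi-term loss: a weighted sum of a per-channel
# reconstruction error and an error on a derived scattering quantity
# (here total power per pixel, used as a stand-in characteristic feature).
# Images are represented as lists of channels, each a flat list of pixels.

def total_power(channels):
    """Span-like quantity: sum of squared channel amplitudes per pixel."""
    return [sum(c[i] ** 2 for c in channels) for i in range(len(channels[0]))]

def combined_loss(pred, target, w_rec=1.0, w_pow=0.1):
    """Weighted combination of channel-wise MSE and total-power MSE."""
    n = len(pred[0])
    rec = sum(
        (p[i] - t[i]) ** 2 for p, t in zip(pred, target) for i in range(n)
    ) / (len(pred) * n)
    pred_power, target_power = total_power(pred), total_power(target)
    pow_err = sum((a - b) ** 2 for a, b in zip(pred_power, target_power)) / n
    return w_rec * rec + w_pow * pow_err

# Two-channel, two-pixel toy images: identical prediction gives zero loss.
pred = [[1.0, 0.0], [0.0, 1.0]]
target = [[1.0, 0.0], [0.0, 1.0]]
loss = combined_loss(pred, target)
```

In an actual training setup each term would be differentiable (e.g., expressed in a deep learning framework) so the CNN can be optimized against the weighted combination.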

Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network

Procedia PDF Downloads 45
78 Factors Influencing Consumer Adoption of Digital Banking Apps in the UK

Authors: Sevelina Ndlovu

Abstract:

Financial Technology (fintech) advancement is recognised as one of the most transformational innovations in the financial industry. Fintech has given rise to internet-only digital banking, a novel financial technology advancement and innovation that allows banking services through internet applications with no need for physical branches. This technology is becoming a new banking normal among consumers for its ubiquitous and real-time access advantages. There is evident switching and migration from traditional banking towards these fintech facilities, which could possibly pose a systemic risk if not properly understood and monitored. Fintech advancement has also brought about the emergence and escalation of financial technology consumption themes such as trust, security, perceived risk, and sustainability within the banking industry, themes scarcely covered in the existing theoretical literature. To that end, the objective of this research is to investigate factors that determine fintech adoption and propose an integrated adoption model. This study aims to establish what the significant drivers of adoption are and to develop a conceptual model that integrates technological, behavioral, and environmental constructs by extending the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2). It proposes integrating constructs that influence financial consumption themes such as trust, perceived risk, security, financial incentives, micro-investing opportunities, and environmental consciousness to determine the impact of these factors on the adoption and intention to use digital banking apps. The main advantage of this conceptual model is the consolidation of a greater number of predictor variables that can provide a fuller explanation of consumers' adoption of digital banking apps. Moderating variables of age, gender, and income are incorporated. 
To the best of the author’s knowledge, this study is the first to extend the UTAUT2 model with this combination of constructs to investigate users’ intention to adopt internet-only digital banking apps in the UK context. By investigating factors that are not included in the existing theories but are highly pertinent to the adoption of internet-only banking services, this research adds to existing knowledge and extends the generalisability of the UTAUT2 in a financial services adoption context. This fills a gap in knowledge, as the need for further research on UTAUT2 was highlighted when the theory was reviewed in 2016, following its original 2003 version. To achieve the objectives of this study, this research assumes a quantitative research approach to empirically test the hypotheses derived from existing literature and pilot studies, giving statistical support to generalise the research findings for further possible applications in theory and practice. This research is explanatory or causal in nature and uses cross-sectional primary data collected through a survey method. Convenience and purposive sampling using structured self-administered online questionnaires is used for data collection. The proposed model is tested using Structural Equation Modelling (SEM), and the analysis of primary data collected through an online survey is processed using SmartPLS software with a sample size of 386 digital bank users. The results are expected to establish whether there are significant relationships between the dependent and independent variables and what the most influential factors are.

Keywords: banking applications, digital banking, financial technology, technology adoption, UTAUT2

Procedia PDF Downloads 36
77 Sandstone-Hosted Copper Mineralization in Oligo-Miocene Red-Bed Strata, Chalpo, North East of Iran: Constraints from Lithostratigraphy, Lithogeochemistry, Mineralogy, Mass Change Technique, and REE Distribution

Authors: Mostafa Feiz, Hossein Hadizadeh, Mohammad Safari

Abstract:

The Chalpo copper area is located in northeastern Iran, within the structural zone of central Iran and the Sabzevar back-arc basin. This sedimentary basin, filled with Oligo-Miocene detrital sediments, is named the Nasr-Chalpo-Sangerd (NCS) basin. The sedimentary layers in this basin originated mainly from Upper Cretaceous ophiolitic rocks and intermediate to mafic post-ophiolitic volcanic rocks, deposited above a nonconformity. The mineralized sandstone layers in the Chalpo area include leached zones (with a thickness of 5 to 8 meters) and mineralized lenses with a thickness of 0.5 to 0.7 meters. Ore minerals include primary sulfide minerals, such as chalcocite, chalcopyrite, and pyrite, as well as secondary minerals, such as covellite, digenite, malachite, and azurite, formed in three stages: primary, simultaneous, and supergene. The main agents controlling the mineralization in this area include the permeability of the host rocks, the presence of fault zones as conduits for copper-bearing oxidized solutions, and significant amounts of plant fossils, which create a reducing environment for the deposition of the mineralized layers. Statistical studies on the copper layers indicate that Ag, Cd, Mo, and S have the maximum positive correlation with Cu, whereas TiO₂, Fe₂O₃, Al₂O₃, Sc, Tm, Sn, and the REEs have a negative correlation. Mass-change calculations on the copper-bearing layers and primary sandstone layers indicate that Pb, As, Cd, Te, and Mo are enriched in the mineralized zones, whereas SiO₂, TiO₂, Fe₂O₃, V, Sr, and Ba are depleted. The combination of geological, stratigraphic, and geochemical studies suggests that the source of the copper may have been the underlying red strata, which contained hornblende, plagioclase, biotite, alkali feldspar, and labile minerals. 
Dehydration and hydrolysis of these minerals during the diagenetic process caused the leaching of copper and associated elements by circulating fluids, which formed an oxidized hydrothermal solution. Copper and silver in this oxidized solution might have moved upwards through the basin fault zones and been deposited in the reducing environments of the sandstone layers, which contained abundant organic matter. Copper in these solutions was probably carried by chloride complexes. The collision of oxidized and reduced solutions caused the deposition of Cu and Ag, whereas some elements that are stable in oxidizing environments (e.g., Fe₂O₃, TiO₂, SiO₂, REEs) became unstable in the reduced conditions. Therefore, the copper-bearing sandstones in the study area are depleted in these elements as a result of the leaching process. The results indicate that during the mineralization stage, LREEs and MREEs were depleted, but Cu, Ag, and S were enriched. Based on field evidence, it seems that the circulation of connate fluids in the red-bed strata, produced by diagenetic processes, encountering reduced facies that had formed earlier through abundant fossil-plant debris in the sandstones, is the best model for the precipitation of the sulfide copper minerals.

Keywords: Chalpo, Oligo-Miocene red beds, sandstone-hosted copper mineralization, mass change, LREEs, MREEs

Procedia PDF Downloads 40
76 Diabetic Screening in Rural Lesotho, Southern Africa

Authors: Marie-Helena Docherty, Sion Edryd Williams

Abstract:

The prevalence of diabetes mellitus is increasing worldwide. In Sub-Saharan Africa, type 2 diabetes represents over 90% of all types of diabetes, with the number of diabetic patients expected to rise. This represents a huge economic burden in an area already contending with high rates of other significant diseases, including the highest worldwide prevalence of HIV. Diabetic complications considerably impact morbidity and mortality. The epidemiological data for the region report high rates of retinopathy (7-63%), neuropathy (27-66%) and microalbuminuria (10-83%). It is therefore imperative that diabetic screening programmes are established. It is recognised that in many parts of the developing world the implementation and management of such programmes is limited by a lack of available resources. The International Diabetes Federation produced guidelines in 2012 taking these limitations into account, suggesting that all diabetic patients should have access to basic screening. These guidelines are consistent with the national diabetic guidelines produced by the Lesotho Medical Council. However, diabetic care in Lesotho is delivered at the local level, with variable levels of quality. A cross-sectional study was performed in the outpatient department of Maluti Hospital in Mapoteng, Lesotho, a busy rural hospital in the Berea district. Demographic data on gender, age and modality of treatment were collected over a six-week period. Information regarding 3 basic screening parameters was obtained. These parameters included eye screening (defined as a documented ophthalmology review within the last 12 months), foot screening (defined as a documented foot health assessment by any health care professional within the last 12 months) and secondary prevention (defined as documented blood pressure and lipid profile readings within the last 12 months). These parameters were selected on the basis of the absolute minimum level of resources available in Maluti Hospital.
Renal screening was excluded, as the hospital does not have access to reliable renal profile checks or urinalysis. There is, however, a fully functioning on-site ophthalmology department run by a senior ophthalmologist, with the ability to provide retinal photography, retinal surgery and photocoagulation therapy. Data were collected on 183 patients with type 2 diabetes. 112 patients were male and 71 were female. The average age was 43 years. 4 patients were diet controlled, 140 patients were on oral hypoglycaemic agents (metformin and/or glibenclamide), and 39 patients were on a combination of insulin and oral hypoglycaemics. In the preceding 12 months, 5 patients had undergone eye screening (3%), 24 patients had undergone foot screening (13%), and 31 patients had lipid profile testing (17%). All patients had a documented blood pressure reading (100%). Our results show that screening is poorly performed across the basic indicators suggested by the IDF and the Lesotho Medical Council. On the basis of these results, a screening programme was developed using the mnemonic SaFE: secondary prevention, foot and eye care. This is simple, memorable and transferable between healthcare professionals. In the future, the expectation would be to expand upon this programme to include renal screening, and to further develop screening pertaining to secondary prevention.
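The coverage figures quoted above are simple proportions of the 183 audited patients; a quick arithmetic check of the rounding (a sketch, not part of the study):

```python
# Verifies the reported screening-coverage percentages for the 183-patient
# audit: 5 eye screenings, 24 foot screenings, 31 lipid profiles, 183 BP readings.

def coverage_percent(screened, total=183):
    """Screening coverage as a whole-number percentage."""
    return round(100 * screened / total)

eye = coverage_percent(5)             # documented ophthalmology review
foot = coverage_percent(24)           # documented foot health assessment
lipids = coverage_percent(31)         # lipid profile testing
blood_pressure = coverage_percent(183)  # documented blood pressure reading
```

The rounded values (3%, 13%, 17%, 100%) match those reported in the abstract.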

Keywords: Africa, complications, rural, screening

Procedia PDF Downloads 264
75 Deep Learning Based Polarimetric SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature.
Although the improvements achieved by recently investigated reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known, standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
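The idea of a loss function built from several characteristic terms of the polarimetric data can be sketched as follows. This is an illustrative example, not the authors' implementation: it combines a per-channel fidelity term with a term on the total backscattered power (span), with assumed array shapes and weights.

```python
import numpy as np

# Sketch of a composite loss for pseudo full-pol reconstruction:
# a per-channel error term plus a span (total power) fidelity term.
# Shapes, weights and data are assumptions for the example.

def composite_loss(pred, target, w_channel=1.0, w_span=0.5):
    """pred, target: complex arrays of shape (channels, height, width)."""
    channel_term = np.mean(np.abs(pred - target) ** 2)   # per-channel error
    span_pred = np.sum(np.abs(pred) ** 2, axis=0)        # total power map
    span_true = np.sum(np.abs(target) ** 2, axis=0)
    span_term = np.mean((span_pred - span_true) ** 2)    # power fidelity
    return w_channel * channel_term + w_span * span_term

rng = np.random.default_rng(0)
target = rng.standard_normal((4, 8, 8)) + 1j * rng.standard_normal((4, 8, 8))
reconstruction = target + 0.1 * rng.standard_normal((4, 8, 8))
loss = composite_loss(reconstruction, target)
```

In an actual training loop, each weighted term steers the network toward a different scattering property of the reconstructed channels, which is the role the abstract assigns to the combined cost function.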

Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry

Procedia PDF Downloads 54
74 Addressing the Biocide Residue Issue in Museum Collections Already in the Planning Phase: An Investigation Into the Decontamination of Biocide Polluted Museum Collections Using the Temperature and Humidity Controlled Integrated Contamination Management Method

Authors: Nikolaus Wilke, Boaz Paz

Abstract:

Museum staff, conservators, restorers, curators, registrars, art handlers, but potentially also museum visitors, are often exposed to the harmful effects of biocides, which were applied to collections in the past for the protection and preservation of cultural heritage. Due to stable light, moisture, and temperature conditions, the biocidal active ingredients were preserved for much longer than originally assumed by chemists, pest controllers, and museum scientists. Given the requirements to minimize the use and handling of toxic substances and the obligations of employers regarding safe working environments for their employees, but also for visitors, the museum sector worldwide needs adequate decontamination solutions. Today there are millions of contaminated objects in museums. This paper introduces the results of a systematic investigation into the reduction rate of biocide contamination in various organic materials treated with the humidity- and temperature-controlled ICM (Integrated Contamination Management) method. In the past, collections were treated with a wide range of toxins, at times even in combination, either preventively or to eliminate active insect or fungal infestations. It was only later that most of those toxins were recognized as CMR (carcinogenic, mutagenic, reprotoxic) substances. Among them were numerous chemical substances that are banned today because of their toxicity. While the biocidal effect of inorganic salts such as arsenic (arsenic(III) oxide), sublimate (mercury(II) chloride), copper oxychloride (basic copper chloride) and zinc chloride was known very early on, organic tar distillates such as paradichlorobenzene, carbolineum, creosote and naphthalene were increasingly used from the 19th century onwards, especially as wood preservatives.
With the rapid development of organic synthesis chemistry in the 20th century and the development of highly effective warfare agents, pesticides and fungicides, these substances were replaced by organochlorine compounds (e.g., γ-hexachlorocyclohexane (lindane), dichlorodiphenyltrichloroethane (DDT), pentachlorophenol (PCP)), hormone-like derivatives such as synthetic pyrethroids (e.g., permethrin, deltamethrin, cyfluthrin) and phosphoric acid esters (e.g., dichlorvos, chlorpyrifos). Today we know that textile artifacts (costumes, uniforms, carpets, tapestries), wooden objects, herbaria, libraries, archives and historical wall decorations made of fabric, paper and leather were also widely treated with toxic inorganic and organic substances. The migration (emission) of pollutants from the contaminated objects leads to continuous (secondary) contamination and accumulation in indoor air and dust. It is important to note that many of the mentioned toxic substances are also material-damaging; they cause discoloration and corrosion. Some, such as DDT, form crystals, which in turn can cause micro-tectonic, destructive shifting, for example, in paint layers. Museums must integrate sustainable solutions to residual biocide problems already in the planning phase. Gas- and dust-phase measurements and analysis must become standard, as must methods of decontamination.

Keywords: biocides, decontamination, museum collections, toxic substances in museums

Procedia PDF Downloads 87
73 Construction Engineering and Cocoa Agriculture: A Synergistic Approach for Improved Livelihoods of Farmers

Authors: Felix Darko-Amoah, Daniel Acquah

Abstract:

In developing countries like Ghana, the need to explore innovative solutions for sustainable farmer livelihoods is more important than ever. With Ghana’s population growing steadily and the demand for food, fiber and shelter increasing, it is imperative that the construction industry and agriculture come together to address the challenges faced by farmers in the country. In order to enhance the livelihoods of cocoa farmers in Ghana, this paper provides an innovative strategy that integrates the areas of construction engineering and cash crop agriculture. This study focuses on cocoa cultivation in poorer nations, where farmers confront a variety of difficulties, including restricted access to financing, subpar infrastructure, and insufficient support services. By combining the fields of construction engineering and cocoa production, we seek to improve farmers' access to financing, improve infrastructure, and provide support services that are essential to their success. The findings of the study are beneficial to cocoa producers, community extension agents, and construction engineers. In order to accomplish our objectives, we conducted 307 field investigations in selected cocoa-growing communities in the Western Region of Ghana. Several studies have shown that a lack of adequate infrastructure and financing leads to low yields, subpar beans, and low farmer profitability in developing nations like Ghana. Based on data gathered from the field investigations, the results show that the employment of appropriate technology and methods for developing structures, roads, and other infrastructure in rural regions is one of the essential components of this strategy.
For instance, we find that using affordable, environmentally friendly materials like bamboo, rammed earth, and mud bricks can help cut expenditures while also protecting the environment. By applying simple relational techniques to the data gathered, the results also show that construction engineers are crucial in planning and building infrastructure that is appropriate for the local environment and circumstances and resilient to natural disasters such as floods. The convergence of construction engineering and cash crop cultivation is thus another crucial component of this agriculture-construction interplay. For instance, farmers can receive financial assistance to buy essential inputs, such as seeds, fertilizer, and tools, as well as training in proper farming methods. Moreover, extension services can be offered to assist farmers in marketing their crops and enhancing their livelihoods and revenue. Our analysis of responses from the 307 participants shows that the combination of construction engineering and cash crop agriculture offers an innovative approach to improving farmers' livelihoods in cocoa farming communities in Ghana. In conclusion, by incorporating the findings of this study into core decision-making, policymakers can help farmers build sustainable and profitable livelihoods by addressing challenges such as limited access to financing, poor infrastructure, and inadequate support services.

Keywords: cocoa agriculture, construction engineering, farm buildings and equipment, improved livelihoods of farmers

Procedia PDF Downloads 67
72 From Avatars to Humans: A Hybrid World Theory and Human Computer Interaction Experimentations with Virtual Reality Technologies

Authors: Juan Pablo Bertuzzi, Mauro Chiarella

Abstract:

Employing a communication studies perspective and a socio-technological approach, this paper introduces a theoretical framework for understanding the concept of the hybrid world, the avatarization phenomenon, and the communicational archetype of co-hybridization. This analysis intends to contribute to the future design of experimental virtual reality applications. Ultimately, this paper presents an ongoing research project that proposes the study of human-avatar interactions in digital educational environments, as well as an innovative reflection on inner digital communication. The aforementioned project analyzes human-avatar interactions through the development of an interactive experience in virtual reality. The goal is to generate an innovative communicational dimension that could reinforce the hypotheses presented throughout this paper. Being designed for initial application in educational environments, the analysis and results of this research depend on, and have been prepared with regard to, meticulous planning of: the conception of a 3D digital platform; the interactive game objects; the AI or computer avatars; the human representation as hybrid avatars; and, lastly, the potential for immersion, ergonomics and control diversity that the chosen virtual reality system and game engine can provide. The project is divided into two main axes. The first axis is structural, as it is mandatory for the construction of an original prototype. The 3D model is inspired by the physical space of an academic institution. The incorporation of smart objects, avatars, game mechanics, game objects, and a dialogue system will be part of the prototype. These elements all serve to gamify the educational environment. To generate continuous participation and a large number of interactions, the digital world will be navigable both on a conventional device and in a virtual reality system.
This decision is made, practically, to facilitate communication between students and teachers, and, strategically, because it will help populate the digital environment faster. The second axis concentrates on content production and further data analysis. The challenge is to offer a diversity of scenarios that compels users to interact and to question their digital embodiment. The multipath narrative content being applied is focused on the subjects covered in this paper. Furthermore, the experience with virtual reality devices invites users to experiment in a mixture of a seemingly infinite digital world and a small physical area of movement. This combination will lead the narrative content, and it will be crucial in order to restrict users' interactions. The main point is to stimulate and grow in users the need for their hybrid avatar's help. By building an inner communication between the user's physicality and the user's digital extension, the interactions will serve as a self-guide through the gameworld. This is the first attempt to make the avatarization phenomenon explicit and to further analyze the communicational archetype of co-hybridization. The challenge of the upcoming years will be to take advantage of these forms of generalized avatarization in order to create awareness and establish innovative forms of hybridization.

Keywords: avatar, hybrid worlds, socio-technology, virtual reality

Procedia PDF Downloads 115
71 Development of One-Pot Sequential Cyclizations and Photocatalyzed Decarboxylative Radical Cyclization: Application Towards Aspidospermatan Alkaloids

Authors: Guillaume Bélanger, Jean-Philippe Fontaine, Clémence Hauduc

Abstract:

There is an undeniable thirst, from organic chemists and from the pharmaceutical industry, to access complex alkaloids with short syntheses. While medicinal chemists are interested in the fascinating wide range of biological properties of alkaloids, synthetic chemists are rather interested in finding new routes to access these challenging natural products, often of low availability from nature. To synthesize the complex polycyclic cores of natural products, reaction cascades or sequences performed in one pot offer a neat advantage over classical methods: a rapid increase in molecular complexity in a single operation. In counterpart, reaction cascades need to be run on substrates bearing all the functional groups required for the key cyclizations. Chemoselectivity is thus a major issue associated with such a strategy, in addition to diastereocontrol and regiocontrol of the overall transformation. In the pursuit of synthetic efficiency, our research group developed an innovative one-pot transformation of linear substrates into bi- and tricyclic adducts, applied to the construction of Aspidospermatan-type alkaloids. The latter constitute a rich class of indole alkaloids bearing a unique bridged azatricyclic core. Despite many efforts toward the synthesis of members of this family, efficient and versatile synthetic routes are still coveted. Indeed, very short, non-racemic approaches are rather scarce: for example, in the cases of aspidospermidine and aspidospermine, syntheses are all fifteen steps and over. We envisaged a unified approach to access several members of the Aspidospermatan alkaloid family. The key sequence features a highly chemoselective formamide activation that triggers a Vilsmeier-Haack cyclization, followed by azomethine ylide generation and intramolecular cycloaddition.
Despite the high density and variety of functional groups on the substrates (electron-rich and electron-poor alkenes, nitrile, amide, ester, enol ether), the sequence generated three new carbon-carbon bonds and three rings in a single operation with good yield and high chemoselectivity. A detailed study of amide, nucleophile, and dipolarophile variations to finally get to the successful combination required for the key transformation will be presented. To complete the indoline fragment of the natural products, we developed an original approach. Indeed, all reported routes to Aspidospermatan alkaloids introduce the indoline or indole early in the synthesis. In our work, the indoline needs to be installed on the azatricyclic core after the key cyclization sequence. As a result, typical Fischer indolization is not suited since this reaction is known to fail on such substrates. We thus envisaged a unique photocatalyzed decarboxylative radical cyclization. The development of this reaction as well as the scope and limitations of the methodology, will also be presented. The original Vilsmeier-Haack and azomethine ylide cyclization sequence as well as the new photocatalyzed decarboxylative radical cyclization will undoubtedly open access to new routes toward polycyclic indole alkaloids and derivatives of pharmaceutical interest in general.

Keywords: Aspidospermatan alkaloids, azomethine ylide cycloaddition, decarboxylative radical cyclization, indole and indoline synthesis, one-pot sequential cyclizations, photocatalysis, Vilsmeier-Haack Cyclization

Procedia PDF Downloads 57
70 Pre-conditioning and Hot Water Sanitization of Reverse Osmosis Membrane for Medical Water Production

Authors: Supriyo Das, Elbir Jove, Ajay Singh, Sophie Corbet, Noel Carr, Martin Deetz

Abstract:

Water is a critical commodity in the healthcare and medical field. The utility of medical-grade water spans from washing surgical equipment and preparing drugs to key elements of life-saving therapy such as hydrotherapy and hemodialysis. Properly treated medical water reduces the bioburden load and mitigates the risk of infection, ensuring patient safety. However, any compromised condition during the production of medical-grade water can create a favorable environment for microbial growth, putting patient safety at high risk. Therefore, proper upstream treatment of medical water is essential before its application in the healthcare, pharma and medical space. Reverse osmosis (RO) is one of the most preferred treatments within the healthcare industries and is recommended by all international pharmacopeias to achieve the quality level demanded by global regulatory bodies. The RO process can remove up to 99.5% of constituents from feed water sources, eliminating bacteria, proteins and particles of 100 Daltons and above. The combination of RO with other downstream water treatment technologies, such as electrodeionization and ultrafiltration, meets the quality requirements of various pharmacopeia monographs for producing highly purified water or water for injection for medical use. In the reverse osmosis process, water from a liquid with a high concentration of dissolved solids is forced through a specially engineered semi-permeable membrane to the low-concentration side, resulting in high-quality water. However, these specially engineered RO membranes need to be sanitized, either chemically or at high temperature, at regular intervals to keep the bioburden at the minimum required level. In this paper, we discuss DuPont's FilmTec heat-sanitizable reverse osmosis (HSRO) membrane for the production of medical-grade water.
An HSRO element must be pre-conditioned prior to initial use by exposure to hot water (80°C-85°C) for stable performance and to meet the manufacturer’s specifications. Without pre-conditioning, the membrane will show variations in feed pressure operation and salt rejection. The paper will discuss the critical variables of the pre-conditioning steps that can affect the overall performance of the HSRO membrane and present data supporting the need for pre-conditioning of HSRO elements. Our preliminary data suggest that there can be up to a 35% reduction in flow due to the initial heat treatment, accompanied by an increase in salt rejection. The paper will go into detail about the fundamental understanding of the performance change of HSRO after the pre-conditioning step and its effect on the quality of the medical water produced. The paper will also discuss another critical point: regular hot water sanitization of these HSRO membranes. Regular hot water sanitization (at 80°C-85°C) is necessary to keep the membrane free of bioburden; however, it can negatively impact the performance of the membrane over time. We will present several data points on hot water sanitization using FilmTec HSRO elements and challenge their robustness to produce quality medical water. The last part of this paper will discuss the construction details of the FilmTec HSRO membrane and the features that make it suitable for pre-conditioning and sanitization at high temperatures.
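The performance quantities discussed above reduce to simple ratios; a minimal sketch with hypothetical feed/permeate values (the 99.5% and 35% figures below simply echo the numbers quoted in the abstract, not measured data):

```python
# Illustrative membrane performance ratios: salt rejection and the
# fractional permeate-flow loss after initial hot-water pre-conditioning.
# All input values are hypothetical.

def salt_rejection(feed_conc, permeate_conc):
    """Fraction of dissolved solids rejected by the membrane."""
    return 1.0 - permeate_conc / feed_conc

def flow_reduction(flow_before, flow_after):
    """Fractional permeate-flow loss, e.g. after initial hot-water exposure."""
    return (flow_before - flow_after) / flow_before

rejection = salt_rejection(feed_conc=2000.0, permeate_conc=10.0)  # 0.995
reduction = flow_reduction(flow_before=1.0, flow_after=0.65)      # 0.35
```

Expressed this way, pre-conditioning trades permeate flow (lower) for rejection (higher), which is the trend the preliminary data describe.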

Keywords: heat sanitizable reverse osmosis, HSRO, medical water, hemodialysis water, water for Injection, pre-conditioning, heat sanitization

Procedia PDF Downloads 184
69 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks

Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi

Abstract:

Brain-computer interfaces are a growing research field producing many implementations that find use in different fields for research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, radical improvement of the channel bandwidth and, thus, decoding accuracy is only possible with invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, effective analysis of which requires machine learning methods that are able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that learn representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out in which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. The multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained, using 1-second segments of ECoG data from the training dataset as input.
To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of hyperparameters and training, the deep learning model allowed reasonably accurate causal decoding of finger movement, with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only r = 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that the combination of a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows decoding of motion with high accuracy. Such a setup provides means for the control of devices with a large number of degrees of freedom, as well as for exploratory studies of the complex neural processes underlying movement execution.
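The accuracy metric used above, the Pearson correlation r between decoded output and accelerometer trace, can be sketched as follows; the signals are synthetic stand-ins, not the study's data:

```python
import numpy as np

# Minimal sketch of the decoding-accuracy metric: Pearson correlation r
# between a decoded trajectory and the measured accelerometer signal.
# The sine/noise signals below are illustrative only.

def decoding_accuracy(decoded, measured):
    """Pearson correlation coefficient between two 1-D signals."""
    decoded = np.asarray(decoded, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return np.corrcoef(decoded, measured)[0, 1]

t = np.linspace(0, 10, 500)
true_trace = np.sin(t)                        # stand-in accelerometer trace
rng = np.random.default_rng(42)
decoded_trace = true_trace + 0.3 * rng.standard_normal(t.size)  # noisy "decoder" output
r = decoding_accuracy(decoded_trace, true_trace)
```

A perfect decoder would give r = 1; the reported values (0.8 for the CNN versus 0.56 for the causal Wiener-filter-like baseline) are computed on exactly this scale.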

Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex

Procedia PDF Downloads 142
68 Predicting and Obtaining New Solvates of Curcumin, Demethoxycurcumin and Bisdemethoxycurcumin Based on the CCDC Statistical Tools and Hansen Solubility Parameters

Authors: J. Ticona Chambi, E. A. De Almeida, C. A. Andrade Raymundo Gaiotto, A. M. Do Espírito Santo, L. Infantes, S. L. Cuffini

Abstract:

The solubility of active pharmaceutical ingredients (APIs) is challenging for the pharmaceutical industry. New multicomponent crystalline forms, such as cocrystals and solvates, present an opportunity to improve the solubility of APIs. Commonly, the procedure to obtain multicomponent crystalline forms of a drug starts by screening the drug molecule with different coformers/solvents. However, it is necessary to develop methods to obtain multicomponent forms efficiently and with the least possible environmental impact. The Hansen Solubility Parameters (HSPs) are considered a tool for obtaining theoretical knowledge of the solubility of the target compound in a chosen solvent. H-Bond Propensity (HBP), Molecular Complementarity (MC), and Coordination Values (CV) are tools for the statistical prediction of cocrystals developed by the Cambridge Crystallographic Data Centre (CCDC). The HSPs and the CCDC tools are based on inter- and intramolecular interactions. Curcumin (Cur), the target molecule, is commonly used as an anti-inflammatory. Demethoxycurcumin (Demcur) and bisdemethoxycurcumin (Biscur) are natural analogues of Cur from turmeric. These target molecules differ in their solubilities. The work therefore aimed to analyze and compare different tools for predicting multicomponent forms (solvates) of Cur, Demcur and Biscur. The HSP values were calculated for Cur, Demcur, and Biscur using chemical group contribution methods and statistical optimization from experimental data, with the HSPmol software. From the HSPs of the target molecules and fifty solvents (listed in the HSP books), the relative energy difference (RED) was determined. The probability of the target molecules interacting with each solvent molecule was determined using the CCDC tools. A dataset of fifty different organic solvents was ranked by each prediction method and by a consensus ranking of different combinations of the HSP, CV, HBP and MC values.
Based on the prediction, 15 solvents were selected, including dimethyl sulfoxide (DMSO), tetrahydrofuran (THF), acetonitrile (ACN) and 1,4-dioxane (DOX). In an initial analysis, the slow evaporation technique, from 50°C to room temperature and 4°C, was used to obtain solvates. Single crystals were collected using a Bruker D8 Venture diffractometer with a Photon100 detector. Data processing and crystal structure determination were performed using APEX3 and Olex2-1.5 software. According to the results, the HSPs (theoretical and optimized) and the Hansen solubility spheres for Cur, Demcur and Biscur were obtained. With respect to the prediction analyses, one way to evaluate each predicting method was through the ranking and consensus-ranking positions of solvates already reported in the literature. It was observed that the combination HSP-CV obtained the best results when compared to the other methods. Furthermore, from the selected solvents, six new solvates (Cur-DOX, Cur-DMSO, Biscur-DOX, Biscur-THF, Demcur-DOX, Demcur-ACN) and a new Biscur hydrate were obtained. Crystal structures were determined for Cur-DOX, Biscur-DOX, Demcur-DOX and Biscur-water. Moreover, unit-cell parameters were obtained for Cur-DMSO, Biscur-THF and Demcur-ACN. These preliminary results show that the prediction method is a promising strategy for evaluating the possibility of forming multicomponent crystals. Work is currently ongoing to obtain further multicomponent single crystals.
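The HSP-based part of the ranking rests on the Hansen distance Ra and the relative energy difference RED = Ra/R0; a minimal sketch, where the curcuminoid parameters and the interaction radius R0 are hypothetical placeholders (the DMSO values follow the style of standard HSP tables):

```python
import math

# Sketch of the Hansen distance Ra and relative energy difference RED
# used to rank candidate solvents. Solute HSPs and R0 are hypothetical.

def hansen_distance(solute, solvent):
    """Ra^2 = 4(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2, units MPa^0.5."""
    d1, p1, h1 = solute
    d2, p2, h2 = solvent
    return math.sqrt(4 * (d1 - d2) ** 2 + (p1 - p2) ** 2 + (h1 - h2) ** 2)

def red(ra, r0):
    """RED < 1 suggests the solvent lies inside the solubility sphere."""
    return ra / r0

solute = (18.0, 10.0, 12.0)   # hypothetical (dD, dP, dH) for a curcuminoid
dmso = (18.4, 16.4, 10.2)     # table-style values for DMSO, for illustration
ra = hansen_distance(solute, dmso)
score = red(ra, r0=10.0)      # hypothetical interaction radius R0
```

Ranking the fifty candidate solvents by this RED score (lowest first) is the HSP component that the consensus ranking then combines with the CCDC-derived CV, HBP and MC scores.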

Keywords: curcumin, HSPs, prediction, solvates, solubility

Procedia PDF Downloads 39
67 Differential Expression Analysis of Busseola fusca Larval Transcriptome in Response to Cry1Ab Toxin Challenge

Authors: Bianca Peterson, Tomasz J. Sańko, Carlos C. Bezuidenhout, Johnnie Van Den Berg

Abstract:

Busseola fusca (Fuller) (Lepidoptera: Noctuidae), the maize stem borer, is a major pest in sub-Saharan Africa. It causes economic damage to maize and sorghum crops and has evolved non-recessive resistance to genetically modified (GM) maize expressing the Cry1Ab insecticidal toxin. Since B. fusca is a non-model organism, very little genomic information is publicly available, limited to some cytochrome c oxidase I, cytochrome b, and microsatellite data. The biology of B. fusca is well described but still poorly understood. This, in combination with its larval-specific behavior, may pose problems for limiting the spread of current resistant B. fusca populations or preventing resistance evolution in other susceptible populations. As part of ongoing research into resistance evolution, B. fusca larvae were collected from Bt and non-Bt maize in South Africa, followed by RNA isolation (15 specimens) and sequencing on the Illumina HiSeq 2500 platform. Read quality was assessed with FastQC, after which Trimmomatic was used to trim adapters and remove low-quality, short reads. Trinity was used for the de novo assembly, and TransRate for assembly quality assessment. Transcript identification employed BLAST (BLASTn, BLASTp, and tBLASTx comparisons), for which two libraries (nucleotide and protein) were created from 3.27 million lepidopteran sequences. Several transcripts that have previously been implicated in Cry toxin resistance were identified for B. fusca. These included aminopeptidase N, cadherin, alkaline phosphatase, ATP-binding cassette transporter proteins, and mitogen-activated protein kinase. MEGA7 was used to align these transcripts to reference sequences from Lepidoptera to detect mutations that might be contributing to Cry toxin resistance in this pest. RSEM and Bioconductor were used to perform differential gene expression analysis on groups of B. fusca larvae challenged and unchallenged with the Cry1Ab toxin.
Pairwise expression comparisons of transcripts that were at least 16-fold differentially expressed at a false-discovery-corrected significance of p ≤ 0.001 were extracted and visualized in a hierarchically clustered heatmap using R. A total of 329,194 transcripts with an N50 of 1,019 bp were generated from over 167.5 million high-quality paired-end reads. Furthermore, 110 transcripts were over 10 kbp long, the largest being 29,395 bp. BLAST comparisons resulted in the identification of 157,099 (47.72%) transcripts, among which only 3,718 (2.37%) were identified as Cry toxin receptors from lepidopteran insects. Based on their expression profiles, transcripts were grouped into three subclusters of similar expression patterns. Several immune-related transcripts (pathogen recognition receptors, antimicrobial peptides, and inhibitors) were up-regulated in the larvae feeding on Bt maize, indicating an enhanced immune status in response to toxin exposure. Above all, extremely up-regulated arylphorin genes suggest that enhanced epithelial healing is one of the resistance mechanisms employed by B. fusca larvae against the Cry1Ab toxin. This study is the first to provide a resource base and some insights into a potential mechanism of Cry1Ab toxin resistance in B. fusca. The transcriptomic data generated in this study allow the identification of genes that can be targeted by biotechnological improvements of GM crops.
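The extraction cutoff described above (at least 16-fold expression at an FDR-corrected p ≤ 0.001) amounts to a simple filter, since a 16-fold change corresponds to |log2FC| ≥ 4. A minimal sketch over invented transcript records, not the study's data:

```python
# Hypothetical differential expression results: (transcript_id, log2FC, FDR p-value).
results = [
    ("t1",  4.2, 1e-4),
    ("t2", -4.5, 5e-4),
    ("t3",  3.0, 1e-5),
    ("t4",  5.1, 1e-2),
]

# The abstract's cutoff: at least 16-fold change (|log2FC| >= 4) at FDR p <= 0.001.
hits = [t for t, log2fc, p in results if abs(log2fc) >= 4 and p <= 0.001]

print(hits)  # prints ['t1', 't2']
```

In the study itself this filtering was done with RSEM/Bioconductor output before heatmap visualization in R; the sketch only illustrates the threshold logic.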

Keywords: epithelial healing, Lepidoptera, resistance, transcriptome

Procedia PDF Downloads 171
66 Environmentally Sustainable Transparent Wood: A Fully Green Approach from Bleaching to Impregnation for Energy-Efficient Engineered Wood Components

Authors: Francesca Gullo, Paola Palmero, Massimo Messori

Abstract:

Transparent wood is considered a promising structural material for the development of environmentally friendly, energy-efficient engineered components. To obtain transparent wood from natural wood, two approaches can be used: i) bottom-up and ii) top-down. In the top-down method, the color of natural wood samples is lightened through a chemical bleaching process that acts on the chromophore groups of lignin, such as the benzene ring, quinonoid, vinyl, phenolic, and carbonyl groups. These chromophoric units form complex conjugated systems responsible for the brown color of wood. There are two strategies to remove color and increase the whiteness of wood: i) lignin removal and ii) lignin bleaching. In the lignin removal strategy, strong chlorine-containing chemicals (chlorine, hypochlorite, and chlorine dioxide) and oxidizers (oxygen, ozone, and peroxide) are used to completely destroy and dissolve the lignin. In lignin bleaching methods, a moderate reducing agent (hydrosulfite) or oxidizing agent (hydrogen peroxide) is commonly used to alter or remove the chromophore groups and systems of lignin, selectively discoloring the lignin while keeping the macrostructure intact. It is therefore essential to manipulate nanostructured wood by precisely controlling the nanopores in the cell walls, monitoring both the chemical treatments and the process conditions, for instance the treatment time, the concentration of the chemical solutions, the pH value, and the temperature. The elimination of light scattering is the second step in the fabrication of transparent wood materials, which can be achieved through two approaches: i) the polymer impregnation method and ii) the densification method.
In the polymer impregnation method, the wood scaffold is treated under vacuum with polymers of matching refractive index (e.g., PMMA and epoxy resins) to obtain the transparent composite material, which can finally be pressed to align the cellulose fibers and reduce interfacial defects in order to obtain a finished product with high transmittance (>90%) and excellent light-guiding. However, both the solution-based bleaching and the impregnation processes used to produce transparent wood generally consume large amounts of energy and chemicals, including some toxic or polluting agents, and are difficult to scale up industrially. Here, we report a method to produce optically transparent wood by modifying the lignin structure with a chemical reaction at room temperature using small amounts of hydrogen peroxide in an alkaline environment. This method preserves the lignin, which is merely deconjugated and acts as a binder, providing both a strong wood scaffold and suitable porosity for the infiltration of bio-based polymers, while reducing chemical consumption, reagent toxicity, polluting waste, petroleum by-products, energy, and processing time. The resulting transparent wood demonstrates high transmittance and low thermal conductivity. Through the combination of process efficiency and scalability, the obtained materials are promising candidates for application in the construction of modern energy-efficient buildings.

Keywords: bleached wood, energy-efficient components, hydrogen peroxide, transparent wood, wood composites

Procedia PDF Downloads 24
65 Environmental Life Cycle Assessment of Circular, Bio-Based and Industrialized Building Envelope Systems

Authors: N. Cihan Kayaçetin, Stijn Verdoodt, Alexis Versele

Abstract:

The construction industry accounts for one-third of all waste generated in the European Union (EU) countries. The Circular Economy Action Plan of the EU aims to tackle this issue and aspires to enhance the sustainability of the construction industry by adopting more circular principles and bio-based material use. The Interreg Circular Bio-Based Construction Industry (CBCI) project was conceived to research how this adoption can be facilitated. For this purpose, an approach was developed that integrates technical, legal, and social aspects and provides business models for circular design and building with bio-based materials. Within the scope of the project, the research outputs are to be displayed in a real-life setting by constructing a demo terraced single-family house, the living lab (LL), located in Ghent (Belgium). The realization of the LL is conducted in a stepwise approach that includes iterative processes for the design, description, criteria definition, and multi-criteria assessment of building components. The essence of the research lies in the exploratory approach to state-of-the-art building envelope and technical system options for achieving an optimal combination for circular and bio-based construction. For this purpose, nine preliminary designs (PDs) for the building envelope were generated, consisting of three basic construction methods (masonry, lightweight steel construction, and wood framing construction) supplemented with bio-based construction methods such as cross-laminated timber (CLT) and massive wood framing. A comparative analysis of the PDs was conducted utilizing several complementary tools to assess circularity. This paper focuses on the life cycle assessment (LCA) approach for evaluating the environmental impact of the LL Ghent. The adoption of an LCA methodology was considered critical for providing a comprehensive set of environmental indicators.
The PDs were developed at the component level, in particular for the (i) inclined roof, (ii-iii) front and side façades, (iv) internal walls, and (v-vi) floors. The assessment was conducted on two levels: component and building. The options for each component were compared in a first iteration, and the PDs, as assemblies of components, were then further analyzed. The LCA was based on a functional unit of one square meter of each component, and CEN indicators were utilized for impact assessment over a reference study period of 60 years. A total of 54 building components composed of 31 distinct materials were evaluated in the study. The results indicate that wood framing construction supplemented with bio-based construction methods performs environmentally better than the masonry or steel-construction options. An analysis of the correlation between the total weight of components and their environmental impact was also conducted. Masonry structures display high environmental impact and weight, steel structures display low weight but relatively high environmental impact, and wood framing construction displays both low weight and low environmental impact. The study provided valuable outputs on two levels: (i) several improvement options at the component level through substitution of materials with critical weight and/or impact per unit, and (ii) feedback on environmental performance for the decision-making process during the design phase of a circular single-family house.
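The two-level assessment described above can be sketched as a simple roll-up: each component's impact per functional unit (1 m² over the 60-year reference study period) is scaled by its area and summed to the building level. Component names and figures below are illustrative, not the study's inventory data:

```python
# Illustrative component inventory: name -> (area_m2, kgCO2e_per_m2).
# Values are made up; a real LCA would carry the full set of CEN indicators.
components = {
    "inclined roof":  (80.0,  55.0),
    "front facade":   (45.0,  40.0),
    "internal walls": (120.0, 25.0),
    "ground floor":   (60.0,  70.0),
}

def building_impact(comps):
    """Sum component impacts (per-m2 value x area) to the building level."""
    return sum(area * per_m2 for area, per_m2 in comps.values())

print(building_impact(components))  # prints 13400.0
```

Comparing the nine PDs then reduces to repeating this roll-up per design and ranking the totals alongside the component-level results.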

Keywords: circular and bio-based materials, comparative analysis, life cycle assessment (LCA), living lab

Procedia PDF Downloads 156
64 Explanation of Sentinel-1 Sigma 0 by Sentinel-2 Products in Terms of Crop Water Stress Monitoring

Authors: Katerina Krizova, Inigo Molina

Abstract:

The ongoing climate change affects various natural processes, resulting in significant changes in human life. Since the planet's still-growing human population has more or less limited resources, agricultural production has become an issue, and a satisfactory amount of food has to be ensured. To achieve this, agriculture is being studied in a very wide context. The main aim is to increase primary production per spatial unit while consuming as few resources as possible. In Europe, the key issue nowadays is the significantly changing spatial and temporal distribution of precipitation. Recent growing seasons have been considerably affected by long drought periods that have led to quantitative as well as qualitative yield losses. To cope with such conditions, new techniques and technologies are being implemented in current practice. However, behind the right management decisions there is always a set of necessary information about plot properties that needs to be acquired. Remotely sensed data have gained attention in recent decades, since they provide spatial information about the studied surface based on its spectral behavior. A number of space platforms carrying various types of sensors have been launched. Spectral indices based on reflectance in the visible and NIR bands are nowadays quite commonly used to describe crop status. However, this kind of data still has a major limitation: cloudiness. The relatively frequent revisits of modern satellites cannot be fully utilized, since the information is hidden under the clouds. Therefore, microwave remote sensing, which can penetrate the atmosphere, is on the rise today. The scientific literature describes the potential of radar data to estimate key soil (roughness, moisture) and vegetation (LAI, biomass, height) properties.
Although all of these are highly demanded in agricultural monitoring, crop moisture content is the most important parameter for agricultural drought monitoring. The idea behind this study was to exploit the combination of SAR (Sentinel-1) and optical (Sentinel-2) data from one provider (ESA) to describe potential crop water stress during the dry cropping season of 2019 at six winter wheat plots in the central Czech Republic. For the period of January to August, Sentinel-1 and Sentinel-2 images were obtained and processed. Sentinel-1 imagery carries information about C-band backscatter in two polarisations (VV, VH). Sentinel-2 was used to derive vegetation properties (LAI, FCV, NDWI, and SAVI) in support of the Sentinel-1 results. For each term and plot, summary statistics were computed, including precipitation data and soil moisture content obtained through data loggers. Results were presented as summary layouts of the VV and VH polarisations and related plots describing the other properties. All plots behaved in accordance with the basic SAR backscatter equation. Considering the needs of practical applications, vegetation moisture content may be assessed using SAR data to predict the impact of drought on final product quality and yields, independently of cloud cover over the studied scene.
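Sentinel-1 sigma-naught is commonly summarized in decibels, so the per-plot summary statistics mentioned above typically involve converting linear backscatter to dB. A minimal sketch with invented sample values (not data from the six wheat plots):

```python
import math

def to_db(sigma0_linear):
    """Convert linear backscatter (sigma-naught) to decibels: 10*log10(x)."""
    return 10 * math.log10(sigma0_linear)

# Hypothetical per-plot VV backscatter samples (linear power units):
plot_vv = [0.05, 0.04, 0.06, 0.05]

# Average in the linear domain first, then convert the mean to dB.
mean_db = to_db(sum(plot_vv) / len(plot_vv))
print(round(mean_db, 2))  # prints -13.01
```

Averaging in the linear domain before converting (rather than averaging dB values) is the usual convention for SAR statistics.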

Keywords: precision agriculture, remote sensing, Sentinel-1, SAR, water content

Procedia PDF Downloads 97
63 Coil-Over Shock Absorbers Compared to Inherent Material Damping

Authors: Carina Emminger, Umut D. Cakmak, Evrim Burkut, Rene Preuer, Ingrid Graz, Zoltan Major

Abstract:

Damping accompanies us daily in everyday life and is used to protect (e.g., in shoes) and to make life more comfortable (damping of unwanted motion) and calm (noise reduction). In general, damping is the absorption of energy, which is either stored in the material (vibration isolation systems) or converted into heat (vibration absorbers). In the latter case, the damping mechanism can be split into active, passive, and semi-active (a combination of active and passive). Active damping is required to enable almost perfect damping over the whole application range and is used, for instance, in sports cars. In contrast, passive damping is a response of the material to external loading. Consequently, the material composition has a huge influence on the damping behavior. For elastomers, the material behavior is inherently viscoelastic and both temperature- and frequency-dependent. However, passive damping is not adjustable during application. Therefore, it is important to understand the fundamental viscoelastic behavior and the dissipation capability under external loading. The objective of this work is to assess the limitations and applicability of viscoelastic material damping for applications in which coil-over shock absorbers are currently utilized. Coil-over shock absorbers are usually made of various mechanical parts and incorporate fluids within the damper. These shock absorbers are well known and studied in industry and, when needed, can be easily adjusted during their product lifetime. In contrast, dampers made of, ideally, a single material are more resource-efficient, easier to service, and easier to manufacture. However, they lack adaptability and adjustability in service. Therefore, a case study with a remote-controlled sports car was conducted. The original shock absorbers were redesigned, and the spring-dashpot system was replaced by an elastomer and a thermoplastic elastomer, respectively.
Five different elastomer formulations were used, including a pure and an iron-particle-filled thermoplastic poly(urethane) (TPU) and blends of two different poly(dimethyl siloxane)s (PDMS). In addition, the TPUs were investigated as full and hollow dampers to examine the difference between solid and structured material. To obtain comparative results, each material formulation was comprehensively characterized by monotonic uniaxial compression tests, dynamic thermomechanical analysis (DTMA), and rebound resilience. Moreover, the new material-based shock absorbers were compared with spring-dashpot shock absorbers. The shock absorbers were analyzed under monotonic and cyclic loading. In addition, an impact load was applied to the remote-controlled car to measure the damping properties in operation. A servo-hydraulic high-speed linear actuator was utilized to apply the loads. The acceleration of the car and the displacement of specific measurement points were recorded during testing by a sensor and a high-speed camera, respectively. The results prove that elastomers are suitable for damping applications, but they are temperature- and frequency-dependent; this limits the applicability of viscoelastic material dampers. Feasible fields of application may be micromobility, e.g., bicycles, e-scooters, and e-skateboards. Furthermore, viscoelastic material damping could be used to increase the inherent damping of a whole structure, e.g., in bicycle frames.
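Inherent material damping of the kind characterized here by DTMA is usually summarized by the loss factor tan δ = E″/E′, the ratio of loss to storage modulus at a given temperature and frequency. A minimal sketch with invented modulus values, not the paper's measurements:

```python
def loss_factor(e_storage, e_loss):
    """tan(delta) = E''/E': higher values mean more energy dissipated per cycle."""
    return e_loss / e_storage

# Invented DTMA-style values (MPa) for a hypothetical TPU at one
# temperature/frequency point:
tan_delta = loss_factor(25.0, 5.0)
print(tan_delta)  # prints 0.2
```

Because both E′ and E″ shift with temperature and frequency, tan δ does too, which is exactly the limitation on passive material damping the abstract points out.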

Keywords: damper structures, material damping, PDMS, TPU

Procedia PDF Downloads 94
62 The 5-HT1A Receptor Biased Agonists, NLX-101 and NLX-204, Elicit Rapid-Acting Antidepressant Activity in Rat Similar to Ketamine and via GABAergic Mechanisms

Authors: A. Newman-Tancredi, R. Depoortère, P. Gruca, E. Litwa, M. Lason, M. Papp

Abstract:

The N-methyl-D-aspartic acid (NMDA) receptor antagonist ketamine can elicit rapid-acting antidepressant (RAAD) effects in treatment-resistant patients, but it requires parenteral co-administration with a classical antidepressant under medical supervision. In addition, ketamine can produce serious side effects that limit its long-term use, and there is much interest in identifying RAADs based on ketamine's mechanism of action but with safer profiles. Ketamine elicits GABAergic interneuron inhibition, glutamatergic neuron stimulation, and, notably, activation of serotonin 5-HT1A receptors in the prefrontal cortex (PFC). Direct activation of the latter receptor subpopulation with selective 'biased agonists' may therefore be a promising strategy for identifying novel RAADs; consistent with this hypothesis, the prototypical cortical biased agonist NLX-101 exhibited robust RAAD-like activity in the chronic mild stress (CMS) model of depression. The present study compared the effects of a novel, selective 5-HT1A receptor biased agonist, NLX-204, with those of ketamine and NLX-101. Materials and methods: the CMS procedure was conducted on Wistar rats; drugs were administered either intraperitoneally (i.p.) or by bilateral intracortical microinjection. Ketamine: 10 mg/kg i.p. or 10 µg/side in PFC; NLX-204 and NLX-101: 0.08 and 0.16 mg/kg i.p. or 16 µg/side in PFC. In addition, interaction studies were carried out with systemic NLX-204 or NLX-101 (each at 0.16 mg/kg i.p.) in combination with intracortical WAY-100635 (selective 5-HT1A receptor antagonist; 2 µg/side) or muscimol (GABA-A receptor agonist; 12.5 ng/side). Anhedonia was assessed by the CMS-induced decrease in sucrose solution consumption; anxiety-like behavior was assessed using the Elevated Plus Maze (EPM), and cognitive impairment by the Novel Object Recognition (NOR) test.
Results: A single administration of NLX-204 was sufficient to reverse the CMS-induced deficit in sucrose consumption, similarly to ketamine and NLX-101. NLX-204 also reduced CMS-induced anxiety in the EPM and abolished CMS-induced NOR deficits. These effects were maintained (EPM and NOR) or enhanced (sucrose consumption) over a subsequent 2-week period of treatment. The anti-anhedonic response to the drugs was also maintained for several weeks following treatment discontinuation, suggesting that they had sustained effects on neuronal networks. A single PFC administration of NLX-204 reversed the deficient sucrose consumption, similarly to ketamine and NLX-101. Moreover, the anti-anhedonic activities of systemic NLX-204 and NLX-101 were abolished by co-administration with intracortical WAY-100635 or muscimol. Conclusions: (i) The antidepressant-like activity of NLX-204 in the rat CMS model was as rapid as that of ketamine or NLX-101, supporting the targeting of cortical 5-HT1A receptors with selective biased agonists to achieve RAAD effects. (ii) The anti-anhedonic activity of systemic NLX-204 was mimicked by local administration of the compound in the PFC, confirming the involvement of cortical circuits in its RAAD-like effects. (iii) Notably, the effects of systemic NLX-204 and NLX-101 were abolished by PFC administration of muscimol, indicating that they act by (indirectly) eliciting a reduction in cortical GABAergic neurotransmission. This is consistent with ketamine's mechanism of action and suggests converging NMDA and 5-HT1A receptor signaling cascades in the PFC underlying the RAAD-like activities of ketamine and NLX-204. Acknowledgements: The study was financially supported by NCN grant no. 2019/35/B/NZ7/00787.

Keywords: depression, ketamine, serotonin, 5-HT1A receptor, chronic mild stress

Procedia PDF Downloads 79
61 Gas-Phase Noncovalent Functionalization of Pristine Single-Walled Carbon Nanotubes with 3D Metal(II) Phthalocyanines

Authors: Vladimir A. Basiuk, Laura J. Flores-Sanchez, Victor Meza-Laguna, Jose O. Flores-Flores, Lauro Bucio-Galindo, Elena V. Basiuk

Abstract:

Noncovalent nanohybrid materials combining carbon nanotubes (CNTs) with phthalocyanines (Pcs) are a subject of increasing research effort, with a particular emphasis on the design of new heterogeneous catalysts, efficient organic photovoltaic cells, lithium batteries, gas sensors, and field-effect transistors, among other possible applications. The possibility of using unsubstituted Pcs for CNT functionalization is very attractive due to their moderate cost and easy commercial availability. Unfortunately, however, the deposition of unsubstituted Pcs onto nanotube sidewalls through traditional liquid-phase protocols turns out to be very problematic due to the extremely poor solubility of Pcs. On the other hand, the unsubstituted free-base phthalocyanine ligand H₂Pc, as well as many of its transition metal complexes, exhibits very high thermal stability and considerable volatility under reduced pressure, which opens the possibility of their physical vapor deposition onto solid surfaces, including nanotube sidewalls. In the present work, we show the possibility of simple, fast, and efficient noncovalent functionalization of single-walled carbon nanotubes (SWNTs) with a series of 3d metal(II) phthalocyanines Me(II)Pc, where Me = Co, Ni, Cu, and Zn. The functionalization can be performed in a temperature range of 400-500 °C under moderate vacuum and requires only about 2-3 h. The functionalized materials obtained were characterized by means of Fourier-transform infrared (FTIR), Raman, UV-visible, and energy-dispersive X-ray (EDS) spectroscopy, scanning and transmission electron microscopy (SEM and TEM, respectively), and thermogravimetric analysis (TGA). TGA suggested Me(II)Pc weight contents of 30%, 17%, and 35% for NiPc, CuPc, and ZnPc, respectively (CoPc exhibited anomalous thermal decomposition behavior). The above values are consistent with those estimated from the EDS spectra, namely 24-39%, 27-36%, and 27-44% for CoPc, CuPc, and ZnPc, respectively.
A strong increase in the intensity of the D band in the Raman spectra of the SWNT‒Me(II)Pc hybrids, as compared to that of pristine nanotubes, implies very strong interactions between the Pc molecules and the SWNT sidewalls. Very high absolute binding energies of 32.46-37.12 kcal/mol, along with the distribution patterns of the highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO, respectively), calculated with density functional theory using the Perdew-Burke-Ernzerhof general gradient approximation correlation functional in combination with Grimme's empirical dispersion correction (PBE-D) and the double numerical basis set (DNP), also suggest that the interactions between the Me(II) phthalocyanines and the nanotube sidewalls are very strong. The authors thank the National Autonomous University of Mexico (grant DGAPA-IN200516) and the National Council of Science and Technology of Mexico (CONACYT, grant 250655) for financial support. The authors are also grateful to Dr. Natalia Alzate-Carvajal (CCADET of UNAM), Eréndira Martínez (IF of UNAM), and Iván Puente-Lee (Faculty of Chemistry of UNAM) for technical assistance with the FTIR and TGA measurements and TEM imaging, respectively.

Keywords: carbon nanotubes, functionalization, gas-phase, metal(II) phthalocyanines

Procedia PDF Downloads 101
60 Partial Discharge Characteristics of Free-Moving Particles in HVDC-GIS

Authors: Philipp Wenger, Michael Beltle, Stefan Tenbohlen, Uwe Riechert

Abstract:

The integration of renewable energy introduces new challenges to the transmission grid, as power generation is located far from load centers. The associated long-range power transmission increases the demand for high-voltage direct current (HVDC) transmission lines and DC distribution grids. HVDC gas-insulated switchgear (GIS) is considered a key technology due to the combination of DC technology with the long operational experience of AC-GIS. To ensure the long-term reliability of such systems, insulation defects must be detected at an early stage. Operational experience with AC systems has shown that most failures attributable to breakdowns of the insulation system can be detected and identified beforehand via partial discharge (PD) measurements. In AC systems, the identification of defects relies on the phase-resolved partial discharge pattern (PRPD). Since there is no phase information in DC systems, this method cannot be transferred to DC PD diagnostics. Furthermore, the behaviour of, e.g., free-moving particles differs significantly at DC: under the influence of a constant direct electric field, charge carriers can accumulate on particle surfaces. As a result, a particle can lift off, oscillate between the inner conductor and the enclosure, or rapidly bounce at just one electrode, which is known as firefly motion. Depending on the motion and the relative position of the particle to the electrodes, broadband electromagnetic PD pulses are emitted, which can be recorded by ultra-high frequency (UHF) measuring methods. PDs are often accompanied by light emission at the particle's tip, which enables optical detection. This contribution investigates the PD characteristics of free-moving metallic particles in a commercially available 300 kV SF6-insulated HVDC-GIS. The influences of various defect parameters on the particle motion and the PD characteristics are evaluated experimentally.
Several particle geometries, such as cylinder, lamella, spiral, and sphere, with different lengths, diameters, and weights, are investigated. The applied DC voltage is increased stepwise from the inception voltage up to UDC = ±400 kV. Different physical detection methods are used simultaneously in a time-synchronized setup. Firstly, the electromagnetic waves emitted by the particle are recorded by a UHF measuring system. Secondly, a photomultiplier tube (PMT) detects light emission with wavelengths in the range of λ = 185…870 nm. Thirdly, a high-speed camera (HSC) tracks the particle's motion trajectory with high accuracy. Furthermore, an electrically insulated electrode is attached to the grounded enclosure and connected to a current shunt in order to detect low-frequency ion currents; the shunt measuring system's sensitivity is in the range of 10 nA at a measuring bandwidth of bw = DC…1 MHz. Currents of charge carriers generated at the particle's tip migrate through the gas gap to the electrode and can be recorded by the current shunt. All recorded PD signals are analyzed in order to identify the characteristic properties of different particles. This includes, e.g., the repetition rates and amplitudes of successive pulses, characteristic frequency ranges, and the detected signal energy of single PD pulses. In conclusion, an advanced understanding of the physical phenomena underlying particle motion in a constant electric field can be derived.
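Of the characteristic properties listed above, the repetition rate of successive pulses is the simplest: the number of inter-pulse intervals divided by the observation time. A minimal sketch over an invented pulse record (times in seconds, amplitudes in volts, not measured data):

```python
# Invented UHF pulse record: (arrival_time_s, amplitude_V).
pulses = [(0.00, 0.8), (0.02, 1.1), (0.05, 0.9), (0.09, 1.3)]

duration = pulses[-1][0] - pulses[0][0]
repetition_rate = (len(pulses) - 1) / duration   # inter-pulse intervals per second
mean_amplitude = sum(a for _, a in pulses) / len(pulses)

print(round(repetition_rate, 1))  # prints 33.3
print(round(mean_amplitude, 3))   # prints 1.025
```

Frequency-range and pulse-energy features would additionally require the sampled pulse waveforms rather than just arrival times and peak amplitudes.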

Keywords: current shunt, free moving particles, high-speed imaging, HVDC-GIS, UHF

Procedia PDF Downloads 135
59 Mapping of Urban Micro-Climate in Lyon (France) by Integrating Complementary Predictors at Different Scales into Multiple Linear Regression Models

Authors: Lucille Alonso, Florent Renard

Abstract:

The characterization of urban heat islands (UHI) and their interactions with climate change and urban climates is a major research and public health issue, due to the increasing urbanization of the population. Addressing it requires better knowledge of the UHI and the micro-climate in urban areas, combining measurements and modelling. This study contributes to this topic by evaluating microclimatic conditions in dense urban areas of the Lyon Metropolitan Area (France) using a combination of traditionally used data, such as topography, together with LiDAR (Light Detection And Ranging) data, Landsat 8 and Sentinel satellite observations, and ground measurements by bike. These bicycle-borne weather data collections are used to build the database of the variable to be modelled, the air temperature, over Lyon's hyper-center. This study aims to model the air temperature, measured during 6 mobile campaigns in Lyon in clear weather, using multiple linear regressions based on 33 explanatory variables. They fall into various categories, such as meteorological parameters from remote sensing, topographic variables, vegetation indices, the presence of water, humidity, bare soil, buildings, radiation, urban morphology, or proximity and density to various land uses (water surfaces, vegetation, bare soil, etc.). The acquisition sources are multiple: the Landsat 8 and Sentinel satellites, LiDAR points, and cartographic products downloaded from an open data platform in Greater Lyon. Regarding the presence of low, medium, and high vegetation, buildings, and ground, several buffers around these factors were tested (5, 10, 20, 25, 50, 100, 200, and 500 m). The buffers with the best linear correlations with air temperature are 5 m around the measurement points for ground and for low and medium vegetation, 50 m for buildings, and 100 m for high vegetation.
The explanatory model of the dependent variable is obtained by multiple linear regression on the remaining explanatory variables (Pearson correlation |r| < 0.7 and VIF < 5), integrating a stepwise sorting algorithm. Moreover, holdout cross-validation (80% training, 20% testing) is performed, due to its ability to detect over-fitting of multiple regression, even though multiple regression provides internal validation and randomization. Multiple linear regression explained, on average, 72% of the variance for the study days, with an average RMSE of only 0.20 °C. Surface temperature is the most important variable in the estimation of air temperature. Other recurrent variables include distance to subway stations, distance to water areas, NDVI, the digital elevation model, the sky view factor, average vegetation density, and building density. Changing urban morphology influences the city's thermal patterns. The thermal atmosphere in dense urban areas can only be analysed at the microscale, in order to consider the local impact of trees, streets, and buildings. There is currently no network of fixed weather stations sufficiently deployed in central Lyon or in most major urban areas. It is therefore necessary to use mobile measurements, followed by modelling, to characterize the city's multiple thermal environments.
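The holdout validation described above (80% training, 20% testing, RMSE on the held-out points) can be sketched with synthetic data. For brevity, air temperature is regressed here on a single predictor, surface temperature (the study's most important variable), rather than the full set of 33, and the split is taken sequentially rather than randomized:

```python
import math
import random

random.seed(0)

# Synthetic data: air temperature as a noisy linear function of surface temperature.
surface = [20 + i * 0.5 for i in range(50)]
air = [0.6 * s + 5 + random.gauss(0, 0.2) for s in surface]

split = int(0.8 * len(surface))          # 80% training, 20% testing
xs, ys = surface[:split], air[:split]
xt, yt = surface[split:], air[split:]

# Ordinary least squares for y = a + b*x on the training set.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# RMSE on the held-out 20%.
rmse = math.sqrt(sum((a + b * x - y) ** 2 for x, y in zip(xt, yt)) / len(xt))
print(round(rmse, 2))
```

With a multivariate model, the same split-fit-score loop applies; only the fitting step generalizes to the full design matrix with the stepwise variable selection described above.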

Keywords: air temperature, LiDAR, multiple linear regression, surface temperature, urban heat island

Procedia PDF Downloads 109
58 Identification of Tangible and Intangible Heritage and Preparation of Conservation Proposal for the Historic City of Karanja Laad

Authors: Prachi Buche Marathe

Abstract:

Karanja Laad is a city in the Vidarbha region of the state of Maharashtra, India. It possesses a wealth of tangible and intangible heritage in the form of monuments, precincts, groups of structures, festivals, and procession routes, which is being neglected and lost over time. Three religions, Hinduism, Islam, and Jainism, together with the town's association as the birthplace of Swami Nrusinha Saraswati, an exponent of the Datta Sampradaya sect, and a British colonial layer, have shaped the culture and society of the place over the centuries. The combination of these historic layers has given the architecture of Karanja Laad a unique historic and cultural value. Karanja Laad is also a traditional historic trading town with a distinctive hybrid architectural style, and it has good potential to develop as a tourist destination alongside its present image as a pilgrimage destination of the Datta Sampradaya. The aim of the research is to prepare a conservation proposal for the historic town, along with a management framework. The objectives are to study the evolution of the town, to identify the cultural resources and the issues of the historic core, and to understand the Datta Sampradaya, the contribution of Saint Nrusinha Saraswati to the religious sect, and his association with Karanja as an important personality. The methodology comprises site visits to Karanja, field surveys for documentation, and discussions and questionnaires with residents, in order to establish the heritage and identify the potential and issues within the historic core, thereby building a case for conservation. Field surveys cover town-level aspects: land use, open spaces, occupancy, ownership, traditional commodities and communities, infrastructure, streetscapes, and precinct activities during festival and non-festival periods.
The building-level study establishes typologies, residential, institutional, commercial, religious, and traditional infrastructure such as waterbodies (kund), lakes, and wells drawn from mythological references. One of the main issues is the loss of the traditional footprint and of traditional open spaces, caused by new illegal encroachments and the lack of guidelines requiring new additions to conserve the original fabric of the structures. Traditional commodities are also disappearing, since skills such as pottery and painting are not promoted. Lavish bungalows such as the Kannava mansion and the main temple Wada (the birthplace of the saint) have great potential to be developed as museums through adaptive re-use, which would in turn attract visitors during festivals and boost the economy. Festival procession routes can be identified and a heritage walk developed to highlight the traditional features of the town. The overall study has produced a heritage map identifying 137 potential heritage structures. The conservation proposal is worked out at town, precinct, and building level, with interventions such as construction guidelines for further development and the establishment of a heritage cell of architects and engineers to uplift the existing rich heritage of the city.

Keywords: built heritage, conservation, Datta Sampradaya, Karanja Laad, Swami Nrusinha Saraswati, procession route

Procedia PDF Downloads 133
57 Bio-Inspired Information Complexity Management: From Ant Colony to Construction Firm

Authors: Hamza Saeed, Khurram Iqbal Ahmad Khan

Abstract:

Effective information management is crucial to the success of any construction project. Information is generated primarily on the construction site or in the design office, and different types of information are required at different stages of construction, involving various stakeholders and thus creating complexity. Managing these information flows effectively reduces the uncertainty underlying that complexity. Nature provides a unique perspective on dealing with complexity, and with information complexity in particular, while system dynamics methodology provides the modeling and simulation tools to address it. Nature has been dealing with complex systems since its beginning some 4.5 billion years ago, perfecting them through evolution, resilience to sudden change, and the extinction of species no longer fit for their environment. Nature has been accommodating changing factors and handling complexity throughout; humans have started to look to their natural counterparts for inspiration and solutions to their own problems. This opens the possibility of using a biomimetic approach to improve management practices in the construction sector. Ants inhabit diverse habitats: Cataglyphis and Pogonomyrmex live in deserts, leafcutter ants reside in rainforests, and pharaoh ants are native to urban developments in tropical areas. Detailed studies have been carried out on fifty of the roughly fourteen thousand species discovered, offering the opportunity to study how interactions in diverse environments generate collective behavior. Animals evolve to adapt better to their environment. The collective behavior of ants emerges from feedback through interactions among individuals, based on a combination of three basic factors: the patchiness of resources in time and space, the operating cost, and environmental stability together with the threat of rupture.
If resources appear in patches through time and space, the response is accelerating and non-linear; if resources are scattered, the response follows a linear pattern. If energy is acquired through food faster than it is spent obtaining it, the default is to continue an activity unless it is halted for some reason; if the energy spent exceeds the energy gained, the default changes to staying put unless activated. Finally, if the environment is stable and the threat of rupture is low, the activation and amplification rate is slow but steady; otherwise it is fast and sporadic. To study these effects further and eliminate environmental bias, the behavior of four ant species was examined: red harvester ants (Pogonomyrmex barbatus), Argentine ants (Linepithema humile), turtle ants (Cephalotes goniodontus), and leafcutter ants (genus Atta). This study aims to improve the information system in the construction sector by providing a nature-inspired guideline with a systems-thinking approach, using system dynamics as a tool. The identified factors and their interdependencies were analyzed in the form of a causal loop diagram (CLD), and construction industry professionals were interviewed on the basis of the developed CLD, which their responses validated. These factors and interdependencies in the natural system correspond to those in man-made systems, providing a guideline for the effective use and flow of information.
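One possible toy encoding of the three factors above (an interpretation for illustration, not a model from the literature): patchiness switches the response between accelerating and linear, the energy balance sets the default activity level, and environmental stability damps the amplification gain.

```python
def foraging_response(stimulus, patchy, net_energy_positive, stable_env):
    """Toy encoding of the three behavioural factors (illustrative only).

    stimulus: rate of successful forager returns, normalised to [0, 1].
    patchy: resources clumped in time/space -> accelerating response.
    net_energy_positive: food is gained faster than energy is spent.
    stable_env: stable environment with low threat of rupture.
    """
    # Factor 1: patchy resources give a non-linear, accelerating response;
    # scattered resources give a linear one.
    response = stimulus ** 2 if patchy else stimulus
    # Factor 2: the default switches with the energy balance -- keep
    # foraging by default when gains exceed costs, stay put otherwise.
    baseline = 0.2 if net_energy_positive else 0.0
    # Factor 3: stable environments damp the amplification rate.
    gain = 0.5 if stable_env else 1.0
    return min(1.0, baseline + gain * response)

# A desert species facing patchy seeds in an unstable environment reacts
# differently to the same stimulus than one with scattered, stable resources.
print(foraging_response(0.6, patchy=True, net_energy_positive=True, stable_env=False))
print(foraging_response(0.6, patchy=False, net_energy_positive=True, stable_env=True))
```

In a system dynamics treatment, each of these three switches would become a feedback loop in the causal loop diagram rather than a hard-coded parameter.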

Keywords: biomimetics, complex systems, construction management, information management, system dynamics

Procedia PDF Downloads 114
56 An Initial Assessment of the Potential Contribution of 'Community Empowerment' to Mitigating the Drivers of Deforestation and Forest Degradation in Giam Siak Kecil-Bukit Batu Biosphere Reserve

Authors: Arzyana Sunkar, Yanto Santosa, Siti Badriyah Rushayati

Abstract:

Indonesia experiences annual forest fires that have rapidly destroyed and degraded its forests. Fires in the peat swamp forests of Riau Province have made the problem worse, this being the ecosystem most prone to fires and the one in which they are most difficult to extinguish. Despite various efforts to curb deforestation and forest degradation, severe forest fires still occur. To find an effective solution, the basic causes of the problem must be identified; an in-depth understanding of the underlying causal factors contributing to deforestation and forest degradation as a whole is therefore critical to reducing their rates. An assessment of the drivers of deforestation and forest degradation was carried out in order to design and implement measures that could slow these destructive processes. The research was conducted in the Giam Siak Kecil–Bukit Batu Biosphere Reserve (GSKBB BR) in the Riau Province of Sumatera, Indonesia. A biosphere reserve was selected as the study site because such reserves aim to reconcile conservation with sustainable development: through a zoning system, a biosphere reserve should promote a range of local human activities and development values that are spatially and economically in line with the area's conservation values. Moreover, GSKBB BR is an area of vast peatlands that experiences forest fires annually. Various factors were analysed to assess the drivers of deforestation and forest degradation in GSKBB BR; data were collected through focus group discussions with stakeholders, key informant interviews with key stakeholders, field observation, and a literature review. Landsat satellite imagery was used to map forest-cover changes over various periods.
Analysis of Landsat images from the period 2010-2014 revealed that, within the non-protected area of the core zone, peat swamp forest areas were decreasing while land clearance and community oil-palm and rubber plantations were increasing. Fire was used for land clearing, and most forest fires occurred in the most populous area (the transition area). The study found a relationship between the deforested/degraded areas and certain distance variables, i.e. distance from roads, from villages, and from the border between the core area and the buffer zone: the further from the core area of the reserve, the higher the degree of deforestation and forest degradation. The findings suggest that agricultural expansion may be the direct cause of deforestation and forest degradation in the reserve, whereas socio-economic factors were the underlying drivers of forest-cover change, these factors comprising a combination of socio-cultural, infrastructural, technological, institutional (policy and governance), demographic (population pressure), and economic (market demand) considerations. They indicate that local factors and problems were the critical causes of deforestation and degradation in GSKBB BR. The research therefore concludes that reductions in deforestation and forest degradation in GSKBB BR could be achieved through 'local actor'-tailored approaches such as community empowerment.

Keywords: actor-led solution, community empowerment, drivers of deforestation and forest degradation, Giam Siak Kecil–Bukit Batu Biosphere Reserve

Procedia PDF Downloads 330