Search results for: system of system decision making
542 Surface Plasmon Resonance Imaging-Based Epigenetic Assay for Blood DNA Post-Traumatic Stress Disorder Biomarkers
Authors: Judy M. Obliosca, Olivia Vest, Sandra Poulos, Kelsi Smith, Tammy Ferguson, Abigail Powers Lott, Alicia K. Smith, Yang Xu, Christopher K. Tison
Abstract:
Post-Traumatic Stress Disorder (PTSD) is a mental health problem that people may develop after experiencing traumatic events such as combat, natural disasters, and major emotional challenges. Tragically, the number of military personnel with PTSD correlates directly with the number of veterans who attempt suicide, with the highest rate in the Army. Research has shown epigenetic risks in those who are prone to several psychiatric dysfunctions, particularly PTSD. Once initiated in response to trauma, epigenetic alterations, in particular DNA methylation in the form of 5-methylcytosine (5mC), alter chromatin structure and repress gene expression. Current methods to detect DNA methylation, such as bisulfite-based genomic sequencing techniques, are laborious, require extensive analysis workflows, and still have high error rates. A faster and simpler detection method of high sensitivity and precision would be useful in a clinical setting to confirm potential PTSD etiologies, prevent other psychiatric disorders, and improve military health. A nano-enhanced Surface Plasmon Resonance imaging (SPRi)-based assay that simultaneously detects site-specific 5mC bases (termed PTSD bases) in methylated genes related to PTSD is being developed. The arrays on a sensing chip were first constructed for parallel detection of PTSD bases using synthetic and genomic DNA (gDNA) samples. The gDNA sample extracted from the whole blood of a PTSD patient was first digested using specific restriction enzymes, and the fragments were denatured to obtain single-stranded methylated target genes (ssDNA). The resulting mixture of ssDNA was then injected into the assay platform, where targets were captured by specific DNA aptamer probes previously immobilized on the surface of a sensing chip. The PTSD bases in the targets were detected by an anti-5-methylcytosine antibody (anti-5mC), and the resulting signals were then enhanced by a universal nanoenhancer.
Preliminary results showed successful detection of a PTSD base in a gDNA sample. Brighter spot images and higher delta values (control-subtracted reflectivity signals) relative to those of the control were observed. We also implemented an in-house surface activation system for detection and developed disposable SPRi chips. Multiplexed PTSD base detection of target methylated genes in blood DNA from PTSD patients of varying severity (asymptomatic and severe) was conducted. The diagnostic capability being developed is a platform technology; upon successful implementation for PTSD, it could be reconfigured for the study of a wide variety of neurological disorders such as traumatic brain injury, Alzheimer's disease, schizophrenia, and Huntington's disease, and can be extended to the analysis of other sample matrices such as urine and saliva.
Keywords: epigenetic assay, DNA methylation, PTSD, whole blood, multiplexing
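The "delta value" readout mentioned above is plain arithmetic on reflectivity signals. A minimal sketch of how such control-subtracted values might be computed and thresholded follows; the spot names, reflectivity numbers and the 0.5 threshold are hypothetical illustrations, not data from the study:

```python
def delta_value(spot_reflectivity, control_reflectivity):
    """Control-subtracted reflectivity signal for one array spot."""
    return spot_reflectivity - control_reflectivity

# Hypothetical percent-reflectivity readings, not measurements from the study.
spots = {"target_gene_A": 4.8, "target_gene_B": 3.1, "control": 1.2}
control = spots["control"]

deltas = {name: delta_value(r, control)
          for name, r in spots.items() if name != "control"}

# A spot is called positive when its delta clearly exceeds the control baseline
# (the 0.5 cutoff here is an arbitrary illustrative choice).
positives = [name for name, d in deltas.items() if d > 0.5]
```

In a real assay the cutoff would be set from replicate control measurements rather than fixed by hand.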
Procedia PDF Downloads 125
541 Monte Carlo Risk Analysis of a Carbon Abatement Technology
Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emanuele
Abstract:
Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature has risen almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959, and global sea levels have risen more than 5.5 cm since 1990. Power plants are the major culprits of GHG emission to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less researched advanced zero emission power plant. The advanced zero emission power plant makes use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process, first introduced in 1899 when Walther Hermann Nernst investigated electric current between metals and solutions. He found that when a dense ceramic is heated, a current of oxygen ions moves through it. In the bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low carbon cycle known as the advanced zero emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP cycle drew a lot of attention because of its ability to capture ~100% of CO2; it also boasts an estimated 30-50% cost reduction compared to other carbon abatement technologies, its efficiency penalty is not as severe as that of its counterparts, and it achieves almost zero NOx emissions due to very low nitrogen concentrations in the working fluid. The advanced zero emission power plant differs from a conventional gas turbine in that its combustor is replaced by the mixed conductive membrane reactor (MCM reactor).
The MCM reactor is made up of the combustor, the low-temperature heat exchanger (LTHX, referred to by some authors as the air pre-heater), the mixed conductive membrane responsible for oxygen transfer, the high-temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of 2 MPa. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the higher temperature also facilitates oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to the inlet of the LTHX. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air going through the membrane. The AZEP cycle was modeled in Fortran, and the economic analysis was conducted in Excel and MATLAB, followed by an optimization case study. This paper discusses a techno-economic and Monte Carlo risk analysis of four possible layouts of the AZEP cycle: the simple bleed gas heat exchange layout (100% CO2 capture), the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture), the pre-expansion reheating (sequential burning) layout - AZEP 85% (85% CO2 capture), and the pre-expansion reheating (sequential burning) layout with flue gas turbine - AZEP 85% (85% CO2 capture).
Keywords: gas turbine, global warming, greenhouse gases, power plants
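The Monte Carlo risk analysis named in the title propagates uncertainty in economic inputs through the cycle's cost model and inspects the spread of the result. A toy sketch of the idea follows; the cost expression and all input distributions are invented for illustration and bear no relation to the paper's actual model or numbers:

```python
import random

random.seed(42)  # reproducible sampling for this illustration

def lcoe(capital_cost, fuel_price, efficiency):
    """Toy levelized-cost figure for one sampled input set (illustrative only)."""
    fuel_term = fuel_price / efficiency   # fuel cost per unit of electricity
    capital_term = capital_cost * 1e-4    # crude annualized capital-cost share
    return capital_term + fuel_term

# Uncertain inputs drawn from hypothetical distributions (not the paper's values).
samples = []
for _ in range(10_000):
    capital = random.gauss(1500.0, 150.0)  # $/kW installed
    fuel = random.uniform(4.0, 8.0)        # $/GJ
    eff = random.uniform(0.45, 0.55)       # cycle efficiency after capture
    samples.append(lcoe(capital, fuel, eff))

# Risk summary: mean and the 5th/95th percentile spread of the cost metric.
samples.sort()
mean = sum(samples) / len(samples)
p5, p95 = samples[len(samples) // 20], samples[-(len(samples) // 20)]
```

The same pattern scales to each of the four layouts, yielding a cost distribution per layout rather than a single deterministic figure.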
Procedia PDF Downloads 472
540 [Keynote Talk]: New Generations and Employment: An Exploratory Study about Tensions between the Psycho-Social Characteristics of the Generation Z and Expectations and Actions of Organizational Structures Related with Employment (CABA, 2016)
Authors: Esteban Maioli
Abstract:
Generational studies have an important research tradition in the social and human sciences. On the one hand, the speed of social change in the context of globalization imposes the need to research the transformations identified both in the subjectivity of the agents involved and in their inclusion in the institutional matrix, specifically employment. Generation Z (generally considered the population group born after 1995) has unique psycho-social characteristics. Gen Z is characterized by a different set of values, beliefs, attitudes and ambitions that impact its members' concrete actions in organizational structures. On the other hand, managers often have to deal with generational differences in the workplace. Organizations have members who belong to different generations; they had never before faced the challenge of having such a diverse group of members. The members of each historical generation are characterized by a different set of values, beliefs, attitudes and ambitions that are manifest in their concrete actions in organizational structures. Gen Z is the only generation that can fully be considered "global," since its members were born in the consolidated context of globalization. Some salient features of Generation Z can be summarized as follows. They are the first generation fully born into a digital world; social networks and technology are integrated into their lives. They are concerned about the challenges of the modern world (poverty, inequality, climate change, among others). They are self-expressive, more liberal and open to change. They are easily bored, with short attention spans, and do not like routine tasks. They want to achieve a good work-life balance and are interested in a flexible work environment, as opposed to a traditional work schedule. They are critical thinkers who come up with innovative and creative ideas. The research design considered methodological triangulation.
Data were collected with two techniques: a self-administered survey with multiple-choice questions and attitudinal scales, applied over a non-probabilistic sample chosen by reasoned decision. In keeping with the multi-method strategy, in-depth interviews were also conducted. Organizations constantly face new challenges, and one of the biggest is learning to manage a multi-generational workforce. While Gen Z has not yet been fully incorporated into the workforce (it is expected to be within five years or so), many organizations have already begun to implement a series of changes in their recruitment and development practices. The main obstacle to retaining young talent is the gap between the expectations of iGen applicants and what companies offer. Members of the iGen expect not only a good salary and job stability but also a clear career plan. Generation Z needs immediate feedback on its tasks. However, many organizations have yet to improve both their motivation and monitoring practices. It is essential for companies to review organizational practices anchored in the culture of the organization.
Keywords: employment, expectations, generation Z, organizational culture, organizations, psycho-social characteristics
Procedia PDF Downloads 201
539 The Rehabilitation of The Covered Bridge Leclerc (P-00249) Passing Over the Bouchard Stream in LaSarre, Quebec
Authors: Nairy Kechichian
Abstract:
The original Leclerc Bridge is a covered wooden bridge considered a Quebec heritage structure, with an index of 60, making it a very important provincial bridge from a historical point of view. It was constructed in 1927 and is located in the rural region of Abitibi-Témiscamingue. It is a "town Québécois" type of structure, which is generally rare but common for covered bridges in Abitibi-Témiscamingue. This type of structure is composed of two trusses, one on each side, formed with diagonals, internal bracings, uprights, and top and bottom chords to allow the transmission of loads. The structure is mostly known for its solidity, light weight, and ease of construction. It is a single-span bridge with a length of 25.3 meters and allows the passage of one vehicle at a time on a 4.22-meter driving lane. The structure is composed of two trusses running along the deck, two gabion foundations at the ends, uprights, and top and bottom chords. WSP (Williams Sale Partnership) Canada Inc. was mandated by Quebec's Ministry of Transport in 2019 to increase the capacity of the bridge from 5 tons to 30.6 tons and rehabilitate it, as it had deteriorated quite significantly over the years. The bridge was damaged by material deterioration over time, exposure to humidity, high load effects and insect infestation. To allow the passage of three-axle trucks, as well as to keep the integrity of this heritage structure, the final design chosen to rehabilitate the bridge involved adding a new deck independent of the roof structure of the bridge. Essentially, new steel beams support the deck loads and the desired vehicle loads. The roof of the bridge is linked to the steel deck for lateral support, but it is isolated from the wooden deck. The roof is preserved for aesthetic reasons and remains intact, as it is a heritage piece.
Due to strict traffic management constraints, an efficient construction method was put in place: a temporary bridge was built and the existing roof moved onto it, allowing vehicles to circulate on one side of the temporary bridge while providing working space on the other side for the roof repairs to proceed simultaneously. In parallel, this method allowed the demolition and reconstruction of the existing foundations, the building of a new steel deck, and the moving of the roof back onto the new bridge. One of the main criteria for the rehabilitation of the wooden bridge was to preserve, as much as possible, the existing patrimonial architectural design of the bridge. The project was completed successfully by the end of 2021.
Keywords: covered bridge, wood-steel, short span, town Québécois structure
Procedia PDF Downloads 67
538 A Critical Evaluation of Occupational Safety and Health Management Systems' Implementation: Case of Mutare Urban Timber Processing Factories, Zimbabwe
Authors: Johanes Mandowa
Abstract:
The study evaluated the status of Occupational Safety and Health Management Systems' (OSHMSs) implementation by Mutare urban timber processing factories. A descriptive cross-sectional survey method was utilized. Questionnaires, interviews and direct observations were the techniques employed to extract primary data from the respondents. Secondary data were acquired from the OSH encyclopedia, OSH journals, newspaper articles, the internet, past research papers, the African Newsletter on OSH and NSSA On-guard magazines, among others. The data collected were analyzed using statistical and descriptive methods. Results revealed a disappointingly low uptake rate (16%) of OSH Management Systems by Mutare urban timber processing factories. On a comparative basis, low implementation levels were more pronounced in small timber processing factories than in large ones. The low uptake rate of OSH Management Systems revealed by the study validates the observation of the Government of Zimbabwe and its social partners that Zimbabwe's dismal OSH performance is largely due to the non-implementation of safety systems at most workplaces. The results exhibited a relationship between the availability of a SHE practitioner in Mutare urban timber processing factories and OSHMS implementation. All respondents and interviewees agreed that OSH Management Systems are effective in curbing occupational injuries and diseases. It emerged from the study that the top barriers to the implementation of safety systems are lack of adequate financial resources, lack of top management commitment and lack of OSHMS implementation expertise. Key motivators for OSHMS establishment were cited as provision of adequate resources (76%), strong employee involvement (64%) and strong senior management commitment and involvement (60%). Study results demonstrated that both OSHMS implementation barriers and motivators affect all Mutare urban timber processing factories irrespective of size.
The study recommends the enactment of a law by the Ministry of Public Service, Labour and Social Welfare, in consultation with NSSA, to make the availability of an OSHMS and a qualified SHE practitioner mandatory at every workplace. Moreover, the enacted law should prescribe the minimum educational qualification required for one to practice as a SHE practitioner. The Ministry of Public Service, Labour and Social Welfare and NSSA should also devise incentives, such as reduced WCIF premiums for good OSH performance, to cushion Mutare urban timber processing factories from OSHMS implementation costs. The study recommends the incorporation of an OSH module in the academic curricula of all programmes offered at tertiary institutions so as to ensure that graduates who later assume influential management positions in Mutare urban timber processing factories are abreast of the necessity of OSHMSs in preventing occupational injuries and diseases. In the quest to further boost management's awareness of the importance of OSHMSs, NSSA and SAZ are urged by the study to conduct periodic OSHMS awareness breakfast meetings targeting executive management. The Government of Zimbabwe, through the Ministry of Public Service, Labour and Social Welfare, should also engage the ILO Country Office for Zimbabwe to solicit ILO's technical assistance so as to enhance the effectiveness of NSSA's and SAZ's OSHMS promotional programmes.
Keywords: occupational safety and health management system, national social security authority, standards association of Zimbabwe, Mutare urban timber processing factories, ministry of public service, labour and social welfare
Procedia PDF Downloads 337
537 Quality in Healthcare: An Autism-Friendly Hospital Emergency Waiting Room
Authors: Elena Bellini, Daniele Mugnaini, Michele Boschetto
Abstract:
People with an Autism Spectrum Disorder and an Intellectual Disability who need to attend a hospital emergency waiting room frequently present high levels of discomfort and challenging behaviors due to stress-related hyperarousal, sensory sensitivity, novelty-anxiety, and communication and self-regulation difficulties. Increased agitation and acting out also disturb the diagnostic and therapeutic processes and the emergency room climate. Architectural design disciplines aimed at reducing distress in hospitals or creating autism-friendly environments are called upon to find effective answers to this particular need. A growing number of researchers consider the physical environment an important point of intervention for people with autism. It has been shown that providing the right setting can help enhance confidence and self-esteem and can have a profound impact on health and wellbeing. Environmental psychology has evaluated the perceived quality of care, looking at the design of hospital rooms, paths and circulation, waiting rooms, services and devices. Furthermore, many studies have investigated the influence of the hospital environment on patients, in terms of stress reduction and speed of therapeutic intervention, but also on health professionals and their work. Several services around the world are organizing autism-friendly hospital environments, which involve both the architecture and specific staff training. In Italy, the association Spes contra spem promoted and published, in 2013, the "Charter of disabled people in the hospital". It stipulates that disabled people should have equal rights to accessible and high-quality care. There are a few Italian examples of therapeutic programmes for autistic people, such as the DAMA project in Milan and the recent experience of the Children and Autism Foundation in Pordenone. Careggi Hospital's emergency waiting room in Florence has been built to meet this challenge.
This research project comes from a collaboration between the technical staff of Careggi Hospital, the Center for Autism PAMAPI and architects with expertise in sensory environments. The focus group methodology involved architects, psychologists and professionals in transdisciplinary research centered on the links between spatial characteristics and the clinical state of people with ASD. The relationship between architectural space and quality of life is studied to pay maximum attention to users' needs and to support the medical staff in their work through a specific training program. The result of this research is a set of criteria used to design the emergency waiting room, which will be illustrated. A protected room with a clear spatial design maximizes comprehension and predictability. The multisensory environment is intended to help sensory integration and relaxation. Visual communication through an iPad allows an anticipated understanding of medical procedures, and a specific technological system supports requests, choices and self-determination in order to fit sensory stimulation to personal preferences, especially for hypo- and hypersensitive people. All these characteristics should ensure better regulation of arousal, fewer behavior problems, and improved treatment accessibility, safety and effectiveness. First results on patient-satisfaction levels will be presented.
Keywords: accessibility of care, autism-friendly architecture, personalized therapeutic process, sensory environment
Procedia PDF Downloads 265
536 Design Aspects for Developing a Microfluidics Diagnostics Device Used for Low-Cost Water Quality Monitoring
Authors: Wenyu Guo, Malachy O’Rourke, Mark Bowkett, Michael Gilchrist
Abstract:
Many devices for real-time monitoring of surface water have been developed in the past few years to provide early warning of pollution and so decrease the risk of environmental damage efficiently. One of the most common methodologies used in such detection systems is a colorimetric process, in which a container of fixed volume is filled with target ions and reagents that combine to form a colorimetric dye. The colored product sensitively absorbs a radiation beam of a specific wavelength, and its absorbance is proportional to the concentration of the fully developed product, indicating the concentration of target nutrients in the pre-mixed water samples. In order to achieve precise and rapid detection, channels with dimensions in the order of micrometers, i.e., microfluidic systems, have been developed and introduced into these diagnostic studies. Microfluidics technology greatly increases the surface-to-volume ratio and decreases sample/reagent consumption significantly. However, species transport in such miniaturized channels is limited by the low Reynolds numbers in these regimes. Thus, the flow is strongly laminar, and diffusion is the dominant mass transport process throughout the microfluidic channels. The objective of the present work has been to analyse the mixing effect and chemical kinetics in a stop-flow microfluidic device measuring nitrite concentrations in fresh water samples. In order to improve the temporal resolution of the nitrite microfluidic sensor, we have used computational fluid dynamics to investigate the influence that the effectiveness of the mixing process between the sample and reagent within a microfluidic device exerts on the time to completion of the resulting chemical reaction. This computational approach has been complemented by physical experiments.
The kinetics of the Griess reaction, involving the conversion of sulphanilic acid to a diazonium salt by reaction with nitrite in acidic solution, is set as a laminar finite-rate chemical reaction in the model. Initially, a methodology was developed to assess the degree of mixing of the sample and reagent within the device. This enabled different designs of the mixing channel to be compared, such as straight, square-wave and serpentine geometries. Thereafter, the time to completion of the Griess reaction within a straight-mixing-channel device was modeled and the reaction time validated with experimental data. Further simulations have been run to compare the reaction time to effective mixing within straight, square-wave and serpentine geometries. Results show that square-wave channels significantly improve the mixing effect and provide a low standard deviation of the nitrite and reagent concentrations, while for straight-channel microfluidic patterns the corresponding values are 2-3 orders of magnitude greater, and the streams are consequently less efficiently mixed. This has allowed us to design novel channel patterns for micro-mixers with more effective mixing that can be used to detect and monitor levels of nutrients present in water samples, in particular nitrite. Future generations of water quality monitoring and diagnostic devices will easily exploit this technology.
Keywords: nitrite detection, computational fluid dynamics, chemical kinetics, mixing effect
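The mixing metric described above (the standard deviation of concentrations across the channel) can be sketched as a simple coefficient-of-variation index. The concentration samples below are invented for illustration and are not CFD output from the study:

```python
import statistics

def mixing_index(concentrations):
    """Coefficient of variation of concentration samples across a channel
    cross-section: 0 means perfectly mixed, larger means poorly mixed."""
    mean = statistics.mean(concentrations)
    return statistics.pstdev(concentrations) / mean

# Hypothetical normalized nitrite concentrations sampled across the outlet
# of two channel geometries (illustrative numbers only).
straight = [0.05, 0.20, 0.80, 0.95, 0.50]     # sharp gradient: poorly mixed
square_wave = [0.48, 0.50, 0.52, 0.49, 0.51]  # nearly uniform: well mixed

# A lower index for the square-wave channel reflects the better mixing
# reported for that geometry.
well_mixed = mixing_index(square_wave) < mixing_index(straight)
```

On real CFD data, the same index would be evaluated over all cells of an outlet cross-section rather than five hand-picked samples.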
Procedia PDF Downloads 202
535 The Optimization of Topical Antineoplastic Therapy Using Controlled Release Systems Based on Amino-functionalized Mesoporous Silica
Authors: Lacramioara Ochiuz, Aurelia Vasile, Iulian Stoleriu, Cristina Ghiciuc, Maria Ignat
Abstract:
Topical administration of chemotherapeutic agents (e.g., carmustine, bexarotene, mechlorethamine) in the local treatment of cutaneous T-cell lymphoma (CTCL) is accompanied by multiple side effects, such as contact hypersensitivity, pruritus, skin atrophy or even secondary malignancies. A known method of reducing the side effects of anticancer agents is the development of modified drug release systems using drug encapsulation in biocompatible nanoporous inorganic matrices, such as mesoporous MCM-41 silica. Mesoporous MCM-41 silica is characterized by a large specific surface area, high pore volume, uniform porosity, stable dispersion in aqueous medium, excellent biocompatibility, in vivo biodegradability and the capacity to be functionalized with different organic groups. Therefore, MCM-41 is an attractive candidate for a wide range of biomedical applications, such as controlled drug release, bone regeneration, and protein and enzyme immobilization. The main advantage of this material lies in its ability to host a large amount of the active substance in a uniform pore system with size adjustable within the mesoscopic range. Silanol groups allow controlled surface functionalization, leading to control of drug loading and release. This study shows (i) the optimization of amino-grafting of the mesoporous MCM-41 silica matrix by means of co-condensation during synthesis and by post-synthesis grafting using APTES (3-aminopropyltriethoxysilane); (ii) the loading of the therapeutic agent (carmustine) to obtain modified drug release systems; (iii) the determination of the in vitro carmustine release profile from these systems; and (iv) the assessment of carmustine release kinetics by fitting to four mathematical models. The obtained powders have been characterized in terms of structure, texture and morphology, as well as by thermogravimetric analysis. The concentration of the therapeutic agent in the dissolution medium has been determined by an HPLC method. In vitro dissolution tests have been performed using an Enhancer cell over a 12-hour interval.
Analysis of carmustine release kinetics from the mesoporous systems was made by fitting to the zero-order, first-order, Higuchi and Korsmeyer-Peppas models. Results showed that both types of highly ordered mesoporous silica (amino-grafted by co-condensation or post-synthesis) are stable in aqueous medium. As regards the degree and efficiency of loading with the therapeutic agent, an increase of around 10% was noticed when the co-condensation method was applied. This result shows that direct co-condensation leads to an even distribution of amino groups on the pore walls, while in the case of post-synthesis grafting many amino groups are concentrated near the pore openings and/or on the external surface. In vitro dissolution tests showed extended carmustine release (more than 86% m/m) both from systems based on silica functionalized by co-condensation and from those functionalized post-synthesis. Assessment of carmustine release kinetics revealed diffusion-driven release from all studied systems, as shown by the fit to the Higuchi model. The results of this study proved that amino-functionalized mesoporous silica may be used as a matrix for optimizing anti-cancer topical therapy by loading carmustine and developing prolonged-release systems.
Keywords: carmustine, silica, controlled release
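The model-fitting step can be illustrated with the Korsmeyer-Peppas power law Q = k·tⁿ, which is commonly fitted by linear regression on log-transformed data; an exponent n near 0.5 indicates Fickian diffusion, consistent with the Higuchi square-root law. The release data below are synthetic, generated to follow the square-root law exactly, and are not the study's measurements:

```python
import math

def fit_korsmeyer_peppas(times, released):
    """Fit Q = k * t**n by linear regression of log(Q) on log(t);
    returns (k, n). n ~ 0.5 suggests Fickian diffusion (Higuchi-type)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(q) for q in released]
    n_pts = len(xs)
    mx, my = sum(xs) / n_pts, sum(ys) / n_pts
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.exp(intercept), slope

# Synthetic release data following Q = 25 * sqrt(t) (illustrative, not measured).
times = [0.5, 1, 2, 4, 8, 12]                  # hours
released = [25 * math.sqrt(t) for t in times]  # percent released

k, n = fit_korsmeyer_peppas(times, released)   # n recovers ~0.5 here
```

With measured release profiles, the same fit is usually restricted to the first ~60% of drug released, the range where the power law is considered valid.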
Procedia PDF Downloads 264
534 Poly(Trimethylene Carbonate)/Poly(ε-Caprolactone) Phase-Separated Triblock Copolymers with Advanced Properties
Authors: Nikola Toshikj, Michel Ramonda, Sylvain Catrouillet, Jean-Jacques Robin, Sebastien Blanquer
Abstract:
Biodegradable and biocompatible block copolymers have risen to become the materials of choice in both medical and environmental applications. Moreover, if their architecture is controlled, even more advanced applications can be foreseen. Meanwhile, organocatalytic ROP has been promoted as a more rapid and cleaner route, compared to traditional organometallic catalysis, towards the efficient synthesis of block copolymer architectures. Herein we therefore report a novel organocatalytic pathway using a guanidine catalyst (TBD) for the synthesis of poly(trimethylene carbonate) blocks initiated by poly(caprolactone) as a pre-polymer. A pristine PTMC-b-PCL-b-PTMC block copolymer structure, without any residual products and with the desired block proportions, was achieved within 1.5 hours at room temperature and verified by NMR spectroscopy and size-exclusion chromatography. Besides, when elaborating block copolymer films, further stability and improved mechanical properties can be achieved via an additional crosslinking step of the previously methacrylated block copolymers. Subsequently, prompted by the insufficient study of the phase-separation/crystallinity relationship in these semi-crystalline block copolymer systems, their intrinsic thermal and morphological properties were investigated by differential scanning calorimetry and atomic force microscopy. Firstly, in the DSC measurements, the block copolymers with χABN values greater than 20 presented two distinct glass transition temperatures, close to those of the respective homopolymers, a first indication of a phase-separated system. Meanwhile, the existence of the crystalline phase was supported by the presence of a melting temperature. As expected, a crystallinity-driven phase-separated morphology predominated in the AFM analysis of the block copolymers. Not even crosslinking in the melt state, and hence the creation of a dense polymer network, disturbed the crystallization phenomena.
However, the latter proved sensitive to rapid liquid-nitrogen quenching directly from the melt. Accordingly, AFM analysis of liquid-nitrogen-quenched and crosslinked block copolymer films demonstrated a thermodynamically driven phase separation clearly predominating over the originally crystalline one. These AFM films remained stable, with their morphology unchanged, even after 4 months at room temperature. However, as demonstrated by DSC analysis, once the temperature was raised above the melting temperature of the PCL block, neither the crosslinking nor the liquid-nitrogen quenching disrupted the semi-crystalline network, while access to thermodynamically phase-separated structures was possible at temperatures below the poly(caprolactone) melting point. Precisely this coexistence of dual crosslinked/crystalline networks in the same copolymer structure allowed us to establish, for the first time, shape-memory properties in such materials, as verified by thermomechanical analysis. Moreover, the response temperature for recovering the material's original shape depended on the block placement, i.e., whether PTMC or PCL forms the end blocks. It has therefore been possible to reach a block copolymer with a transition temperature around 40°C, thus opening up potential real-life medical applications. In conclusion, the initial study of the phase-separation/crystallinity relationship in PTMC-b-PCL-b-PTMC block copolymers led to the discovery of novel shape-memory materials with superior properties, widely demanded in modern-life applications.
Keywords: biodegradable block copolymers, organocatalytic ROP, self-assembly, shape-memory
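The χABN criterion invoked above combines the Flory-Huggins interaction parameter χ with the total degree of polymerization N; larger products mean stronger segregation between blocks. A back-of-the-envelope sketch, using the Hildebrand solubility-parameter approximation with entirely hypothetical inputs (not values for PTMC or PCL), is:

```python
R = 8.314  # gas constant, J/(mol*K)

def flory_huggins_chi(delta_a, delta_b, v_ref, temperature):
    """Hildebrand estimate chi ~ v_ref * (dA - dB)**2 / (R * T), with the
    solubility parameters in (J/cm^3)**0.5 and v_ref in cm^3/mol."""
    # Convert v_ref * (J/cm^3) to J/mol: the cm^3 units cancel directly.
    return v_ref * (delta_a - delta_b) ** 2 / (R * temperature)

# Illustrative inputs (hypothetical, not measurements from the study).
chi = flory_huggins_chi(delta_a=20.8, delta_b=19.5,
                        v_ref=100.0, temperature=298.0)
N = 500            # total degree of polymerization (hypothetical)
chi_N = chi * N    # ~34 here: well above the mean-field order-disorder
                   # threshold (~10.5 for a symmetric diblock), consistent
                   # with a microphase-separated melt
```

The comparison to ~10.5 is the classic symmetric-diblock result; triblocks and asymmetric compositions shift the threshold, so this serves only as an order-of-magnitude check.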
Procedia PDF Downloads 128
533 Application of Fatty Acid Salts for Antimicrobial Agents in Koji-Muro
Authors: Aya Tanaka, Mariko Era, Shiho Sakai, Takayoshi Kawahara, Takahide Kanyama, Hiroshi Morita
Abstract:
Objectives: Aspergillus niger and Aspergillus oryzae are used as koji fungi in brewing. Since koji-muro (the room for making koji) has a low level of airtightness, microbial contamination has long been a concern in alcoholic beverage production. We therefore focused on fatty acid salts, the main components of soap, which have been reported to show antibacterial and antifungal activity. This study examined antimicrobial activities against Aspergillus and Bacillus spp., with the aim of establishing the effectiveness of fatty acid salts as antimicrobial agents in koji-muro. Materials & Methods: A. niger NBRC 31628, A. oryzae NBRC 5238, A. oryzae (Akita Konno store) and Bacillus subtilis NBRC 3335 were chosen as test strains. Nine fatty acid salts (FASs), including potassium butyrate (C4K), caproate (C6K), caprylate (C8K), caprate (C10K), laurate (C12K), myristate (C14K), oleate (C18:1K), linoleate (C18:2K) and linolenate (C18:3K), at 350 mM and pH 10.5, were tested for antimicrobial activity. FASs and spore suspensions were prepared in plastic tubes. The spore suspension of each fungus (3.0×10⁴ spores/mL) or the bacterial suspension (3.0×10⁵ CFU/mL) was mixed with each of the fatty acid salts (final concentration 175 mM). The mixtures were incubated at 25 ℃. Samples were counted at 0, 10, 60, and 180 min by plating (100 µL) on potato dextrose agar. Fungal and bacterial colonies were counted after incubation for 1 or 2 days at 30 ℃. The MIC (minimum inhibitory concentration) is defined as the lowest concentration of drug sufficient to inhibit visible growth of spores after 10 min of incubation. MICs against fungi and bacteria were determined using the two-fold dilution method. Each fatty acid salt was separately inoculated with 400 µL of Aspergillus spp. or B. subtilis NBRC 3335 at 3.0 × 10⁴ spores/mL or 3.0 × 10⁵ CFU/mL. Results: No obvious antimicrobial effect was observed for the tested fatty acid salts against A. niger and A. oryzae.
However, C12K showed an antibacterial effect of 5 log units within 10 min of incubation against B. subtilis; that is, C12K suppressed 99.999 % of bacterial growth. C10K showed a 5 log-unit antibacterial effect against B. subtilis after 180 min of incubation, while C18:1K, C18:2K and C18:3K each showed a 5 log-unit effect within 10 min. However, saturated fatty acid salts are lower in cost than unsaturated ones. These results suggest that C12K has potential for use in koji-muro. In the future, its antimicrobial activity against other fungi and bacteria should also be evaluated.
Keywords: Aspergillus, antimicrobial, fatty acid salts, koji-muro
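The log-unit figures above follow directly from the base-10 ratio of initial to surviving counts. A minimal sketch (the surviving count of ~3 CFU/mL is an illustrative assumption derived from the inoculum size quoted in the abstract):

```python
import math

def log_reduction(initial_count, surviving_count):
    """Log10 reduction in viable count after antimicrobial treatment."""
    return math.log10(initial_count / surviving_count)

def percent_suppression(log_units):
    """Percentage of cells killed for a given log-unit reduction."""
    return 100.0 * (1.0 - 10.0 ** (-log_units))

# A 5 log-unit reduction corresponds to 99.999 % suppression, as reported
# for C12K against B. subtilis (e.g. 3.0e5 CFU/mL reduced to ~3 CFU/mL).
print(log_reduction(3.0e5, 3.0))
print(percent_suppression(5))
```

This is why the abstract equates a 5 log-unit effect with 99.999 % suppression of bacterial growth.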
Procedia PDF Downloads 554
532 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves
Authors: Shengnan Chen, Shuhua Wang
Abstract:
Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority for society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations that enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, delivering data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data and production data for thousands of wells was first established to discover valuable insights and knowledge related to tight oil reserve development. Several data analysis methods are introduced to analyze this huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data.
Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs, higher oil recovery and greater economic return for future wells in unconventional oil reserves.
Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves
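Two of the methods named in this abstract, principal component analysis and K-means clustering, can be sketched in a few lines of NumPy. The synthetic "well groups", their feature values, and the plain Lloyd-style K-means below are all illustrative assumptions, not the authors' actual database or workflow:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "well" records: two hypothetical groups of wells with different
# completion intensity and production rate (feature names are assumptions).
group_a = rng.normal([10.0, 5.0], 1.0, size=(50, 2))
group_b = rng.normal([30.0, 20.0], 1.0, size=(50, 2))
X = np.vstack([group_a, group_b])

# Principal component analysis via SVD on mean-centred data: emphasises the
# directions of largest variation, making the dataset easy to explore.
Xc = X - X.mean(axis=0)
_, s, vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ vt.T                      # wells projected onto the PCs
explained = s ** 2 / np.sum(s ** 2)     # variance ratio per component

def kmeans(data, k=2, iters=50):
    """Plain Lloyd-style K-means (k=2); deterministic init using the first
    point and the point farthest from it."""
    c0 = data[0]
    c1 = data[np.argmax(np.linalg.norm(data - c0, axis=1))]
    centers = np.array([c0, c1])
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.array([data[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(scores)
print(float(explained[0]), int((labels == labels[0]).sum()))
```

With two well-separated groups, the first principal component captures nearly all the variance and K-means recovers the two groups exactly, which is the kind of structure discovery the abstract describes.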
Procedia PDF Downloads 283
531 GIS-Based Flash Flood Runoff Simulation Model of the Upper Teesta River Basin Using ASTER DEM and Meteorological Data
Authors: Abhisek Chakrabarty, Subhraprakash Mandal
Abstract:
Flash floods are among the most catastrophic natural hazards in the mountainous regions of India. The recent flood on the Mandakini River in Kedarnath (14-17 June 2013) is a classic example of a flash flood that devastated Uttarakhand, killing thousands of people. The disaster was the integrated effect of high-intensity rainfall, the sudden breach of Chorabari Lake, and very steep topography. Every year in the Himalayan region, flash floods occur due to intense rainfall over a short period of time, cloudbursts, glacial lake outbursts, and the collapse of artificial check dams, all of which cause high river flows. In the Sikkim-Darjeeling Himalaya, one of the probable flash flood occurrence zones is the Teesta watershed. The Teesta River is a right tributary of the Brahmaputra, draining a mountain area of approximately 8600 sq. km. It originates in the Pauhunri massif (7127 m), and the mountain section of the river is 182 km long. The Teesta is characterized by a complex hydrological regime: the river is fed not only by precipitation but also by melting glaciers and snow, as well as groundwater. The present study describes an attempt to model surface runoff in the upper Teesta basin, which is directly related to catastrophic flood events, by creating a system based on GIS technology. The main objective was to construct a direct unit hydrograph for an excess rainfall by estimating the streamflow response at the outlet of the watershed. Specifically, the methodology was based on the creation of a spatial database in a GIS environment and on data editing. Rainfall time-series data were collected from the Indian Meteorological Department and processed to calculate flow time and runoff volume. Apart from the meteorological data, background data such as topography, drainage network, land cover and geological data were also collected.
The watershed was clipped from the entire area, streamlines were generated for the Teesta watershed, and cross-sectional profiles were plotted across the river at various locations from ASTER DEM data using ERDAS IMAGINE 9.0 and ArcGIS 10.0. Different hydraulic models for detecting flash flood probability were analysed using HEC-RAS, Flow-2D, and HEC-HMS software, which was of great importance in achieving the final result. With an input rainfall intensity above 400 mm per day for three days, the flood runoff simulation models show outbursts of lakes and check dams, individually or in combination with runoff, causing severe damage to downstream settlements. The model output shows that an area of 313 sq. km was found to be most vulnerable to flash floods, including Melli, Jourthang, Chungthang, and Lachung, and 655 sq. km moderately vulnerable, including Rangpo, Yathang, Dambung, Bardang, Singtam, Teesta Bazar and the Thangu Valley. The model was validated using rainfall data from a flood event that took place in August 1968; 78% of the actual flooded area was reflected in the model output. Lastly, preventive and curative measures were suggested to reduce losses from probable flash flood events.
Keywords: flash flood, GIS, runoff, simulation model, Teesta river basin
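Rainfall-runoff estimation of the kind described above is commonly illustrated with the SCS curve-number relation. The sketch below is a generic example of that relation, not the authors' exact model; the curve number and rainfall depth are assumed values chosen only to echo the >400 mm/day intensity quoted in the abstract:

```python
def scs_runoff_mm(rainfall_mm, curve_number):
    """Direct runoff depth (mm) from the SCS curve-number method.
    S is the potential maximum retention (mm); runoff begins once rainfall
    exceeds the initial abstraction Ia = 0.2 * S."""
    s = 25400.0 / curve_number - 254.0
    ia = 0.2 * s
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm + 0.8 * s)

# For a 400 mm daily rainfall on steep, thin-soiled terrain (curve number 85
# assumed), most of the rainfall is converted to direct runoff.
print(scs_runoff_mm(400.0, 85))
```

Higher curve numbers (steeper, less permeable catchments) convert a larger share of the same storm into runoff, which is why steep Himalayan sub-basins respond so violently to intense rainfall.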
Procedia PDF Downloads 317
530 Development of Peptide Inhibitors against Dengue Virus Infection by in Silico Design
Authors: Aussara Panya, Nunghathai Sawasdee, Mutita Junking, Chatchawan Srisawat, Kiattawee Choowongkomon, Pa-Thai Yenchitsomanus
Abstract:
Dengue virus (DENV) infection is a global public health problem, with approximately 100 million infected cases a year. Presently, there is no approved vaccine or effective drug available; therefore, the development of anti-DENV drugs is urgently needed. Clinical reports have revealed a positive association between disease severity and viral titer, suggesting that anti-DENV drug therapy could ameliorate disease severity. Although several anti-DENV agents have shown inhibitory activity against DENV infection, to date none has reached clinical use in patients. The surface envelope (E) protein of DENV is critical for the viral entry step, which includes attachment and membrane fusion; thus, blocking the envelope protein is an attractive strategy for anti-DENV drug development. In search of a safe anti-DENV agent, this study aimed to find novel peptide inhibitors that counter DENV infection by targeting the E protein, using a structure-based in silico design. Two strategies were used: identifying peptide inhibitors that interfere with the membrane fusion process, with the hydrophobic pocket on the E protein as the target; and destabilising the virion structural organization by disrupting the interaction between the envelope and membrane proteins. In the first strategy, molecular docking was used to search for peptide inhibitors that bind specifically to the hydrophobic pocket. In the second, the peptide inhibitor was designed to mimic the ectodomain portion of the membrane protein and thereby disrupt the protein-protein interaction. The designed peptides were tested for their effects on cell viability, to measure peptide toxicity to the cells, and for their ability to inhibit DENV infection in Vero cells.
Furthermore, their antiviral effects on viral replication, intracellular protein level and viral production were assessed using qPCR, cell-based flavivirus immunodetection and immunofluorescence assays. None of the tested peptides showed a significant effect on cell viability. The small peptide inhibitor obtained from molecular docking, Glu-Phe (EF), effectively inhibited DENV infection in a cell culture system. Its most potent effect was observed for DENV2, with a half-maximal inhibitory concentration (IC50) of 96 μM, while it only partially inhibited the other serotypes. Treatment of infected cells with 200 µM EF also significantly reduced the viral genome and protein to 83.47% and 84.15%, respectively, corresponding to the reduction in the number of infected cells. An additional approach was carried out using a peptide mimicking the membrane (M) protein, namely MLH40. Treatment with MLH40 reduced foci formation in all four DENV serotypes (DENV1-4), with IC50 values of 24-31 μM. Further characterization suggested that MLH40 specifically blocked viral attachment to the host membrane, and treatment with 100 μM could diminish viral attachment by 80%. In summary, targeting the hydrophobic pocket and the M-binding site on the E protein with peptide inhibitors could inhibit DENV infection. The results provide proof-of-concept for the development of therapeutic peptide inhibitors against DENV infection through structure-based design targeting conserved viral proteins.
Keywords: dengue virus, dengue virus infection, drug design, peptide inhibitor
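The IC50 values quoted above are, by definition, the concentrations giving half-maximal inhibition. A small sketch of the standard Hill (log-logistic) dose-response relation that underlies such estimates (the Hill coefficient of 1 is an assumption; the 96 μM value is taken from the EF/DENV2 result in the abstract):

```python
def percent_inhibition(conc_uM, ic50_uM, hill=1.0):
    """Percent inhibition from the standard Hill dose-response model:
    inhibition = 100 / (1 + (IC50 / c)^h)."""
    return 100.0 / (1.0 + (ic50_uM / conc_uM) ** hill)

# At the IC50 itself, inhibition is exactly half-maximal, e.g. for the
# EF dipeptide against DENV2 (IC50 = 96 uM reported in the abstract):
print(percent_inhibition(96.0, 96.0))   # 50.0
```

Concentrations above the IC50 give more than 50 % inhibition and those below give less, which is how a fitted curve of this form pins down the reported 96 μM and 24-31 μM values.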
Procedia PDF Downloads 357
529 Membrane Permeability of Middle Molecules: A Computational Chemistry Approach
Authors: Sundaram Arulmozhiraja, Kanade Shimizu, Yuta Yamamoto, Satoshi Ichikawa, Maenaka Katsumi, Hiroaki Tokiwa
Abstract:
Drug discovery is shifting from small-molecule drugs targeting local active sites to middle molecules (MMs) targeting large, flat, groove-shaped binding sites, for example, protein-protein interfaces, because at least half of all targets assumed to be involved in human disease have been classified as "difficult to drug" with traditional small molecules. Hence, MMs such as peptides, natural products, glycans, and nucleic acids with highly potent bioactivities have become important targets for drug discovery programs in recent years, as they could be used for "undruggable" intracellular targets. Cell membrane permeability is one of the key properties of pharmacodynamically active MM drug compounds, so evaluating this property for potential MMs is crucial. Computational prediction of the cell membrane permeability of molecules is very challenging; however, recent advances in molecular dynamics simulations help to solve this issue partially. It is expected that MMs with high membrane permeability will enable drug discovery research to expand its borders towards intracellular targets. Further, to understand the chemistry behind the permeability of MMs, it is necessary to investigate their conformational changes during permeation through the membrane, and for that, their interactions with the membrane field should be studied reliably, because these interactions involve various non-bonding interactions such as hydrogen bonding, π-stacking, charge transfer, polarization, dispersion, and non-classical weak hydrogen bonding. Therefore, parameter-based classical mechanics calculations are hardly sufficient to investigate these interactions; rather, quantum mechanical (QM) calculations are essential. The fragment molecular orbital (FMO) method can be used for this purpose, as it performs ab initio QM calculations by dividing the system into fragments.
The present work aims to study the cell permeability of middle molecules using molecular dynamics simulations and FMO-QM calculations. For this purpose, the natural compound syringolin and its analogues were considered. Molecular simulations were performed using the NAMD and Gromacs programs with the CHARMM force field. FMO calculations were performed using the PAICS program at the correlated resolution-of-identity second-order Møller-Plesset (RI-MP2) level with the cc-pVDZ basis set. The simulations clearly show that while syringolin could not permeate the membrane, selected analogues pass through the medium on the nanosecond scale. This correlates well with existing experimental evidence that these syringolin analogues are membrane-permeable compounds. Further analyses indicate that intramolecular π-stacking interactions in the syringolin analogues influenced their permeability positively. These intramolecular interactions reduce the polarity of the analogues so that they can permeate the lipophilic cell membrane. In conclusion, the cell membrane permeability of various middle molecules with potent bioactivities was efficiently studied using molecular dynamics simulations, and insight into this behavior was thoroughly investigated using FMO-QM calculations. Results obtained in the present study indicate that non-bonding intramolecular interactions such as hydrogen bonding and π-stacking, along with the conformational flexibility of MMs, are essential for amicable membrane permeation. These results are a nice example of how this theoretical approach can be used to study the permeability of other middle molecules. This work was supported by the Japan Agency for Medical Research and Development (AMED) under Grant Number 18ae0101047.
Keywords: fragment molecular orbital theory, membrane permeability, middle molecules, molecular dynamics simulation
Procedia PDF Downloads 189
528 Evaluation of Polymerisation Shrinkage of Randomly Oriented Micro-Sized Fibre Reinforced Dental Composites Using Fibre-Bragg Grating Sensors and Their Correlation with Degree of Conversion
Authors: Sonam Behl, Raju, Ginu Rajan, Paul Farrar, B. Gangadhara Prusty
Abstract:
Reinforcing dental composites with micro-sized fibres can significantly improve their physio-mechanical properties. Short fibres can be oriented randomly within dental composites, providing quasi-isotropic reinforcing efficiency, unlike unidirectional/bidirectional fibre-reinforced composites, which enhance anisotropic properties. Thus, short-fibre-reinforced dental composites are becoming popular among practitioners. However, despite their popularity, resin-based dental composites are prone to failure on account of shrinkage during photopolymerisation. Shrinkage in the structure may lead to marginal gap formation, causing secondary caries and ultimately inducing failure of the restoration. Traditional methods of evaluating polymerisation shrinkage, using strain gauges, density-based measurements, dilatometers, or the bonded-disk technique, focus on the average value of volumetric shrinkage. Moreover, the results obtained from traditional methods are sensitive to the specimen geometry. The present research aims to evaluate the real-time shrinkage strain at selected locations in the material with the help of optical fibre Bragg grating (FBG) sensors. Due to their miniature size (diameter 250 µm), FBG sensors can be easily embedded into small samples of dental composites. Furthermore, an FBG array can map the real-time shrinkage strain at different regions of the composite. Real-time monitoring of shrinkage values may help to optimise the physio-mechanical properties of composites. Previously, FBG sensors have been used to reliably measure the polymerisation strains of anisotropic (unidirectional or bidirectional) reinforced dental composites. However, very few studies exist establishing the validity of FBG-based sensors for evaluating the volumetric shrinkage of composites reinforced with randomly oriented fibres.
The present study aims to fill this research gap and is focused on establishing the use of FBG-based sensors for evaluating the shrinkage of dental composites reinforced with randomly oriented fibres. Three groups of specimens were prepared by mixing the resin (80% UDMA/20% TEGDMA) with 55% silane-treated BaAlSiO₂ particulate fillers, or by adding 5% micro-sized fibres of diameter 5 µm and length 250/350 µm along with 50% silane-treated BaAlSiO₂ particulate fillers into the resin. For measurement of polymerisation shrinkage strain, an array of three fibre Bragg grating sensors was embedded at a depth of 1 mm into a circular Teflon mould of diameter 15 mm and depth 2 mm. The results obtained were compared with the traditional density-based method for evaluating volumetric shrinkage. Degree of conversion was measured using FTIR spectroscopy (Spotlight 400 FT-IR from PerkinElmer). It is expected that the average polymerisation shrinkage strain values for dental composites reinforced with micro-sized fibres will correlate directly with the measured degree of conversion values, implying that greater conversion of C=C double bonds to C-C single bonds also leads to higher shrinkage strain within the composite. Moreover, it could be established that the photonics approach helps assess the shrinkage at any point of interest in the material, suggesting that fibre Bragg grating sensors are a suitable means of measuring real-time polymerisation shrinkage strain for randomly oriented fibre-reinforced dental composites as well.
Keywords: dental composite, glass fibre, polymerisation shrinkage strain, fibre-Bragg grating sensors
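For context, an FBG converts strain into a Bragg wavelength shift through the standard relation Δλ_B/λ_B = (1 − p_e)·ε at constant temperature. The sketch below assumes a typical silica-fibre photoelastic coefficient p_e ≈ 0.22 and an illustrative 1550 nm grating and shift, none of which are stated in the abstract:

```python
def fbg_strain(delta_lambda_nm, bragg_lambda_nm, photoelastic_coeff=0.22):
    """Axial strain recovered from a Bragg wavelength shift, assuming
    constant temperature: delta_lambda / lambda_B = (1 - p_e) * strain."""
    return delta_lambda_nm / (bragg_lambda_nm * (1.0 - photoelastic_coeff))

# e.g. a -0.4 nm shift on a 1550 nm grating during curing; the negative
# sign indicates compressive (shrinkage) strain, reported in microstrain.
strain = fbg_strain(-0.4, 1550.0)
print(strain * 1e6)
```

Because each grating in the embedded array reports its own wavelength shift, this single relation is what allows the three-sensor array to map shrinkage strain at different locations in the mould in real time.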
Procedia PDF Downloads 154
527 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved
Authors: Michael N. O'Sullivan, Con Sheahan
Abstract:
Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, there is evidently a huge gap between their academic prevalence and their industry adoption. For the fuzzy front-end in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, the tools are rarely used, as they carry a learning curve that is too steep and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of 3 years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows the team to play through an example of a new product development in order to understand the process and the tools before using it for their own product development efforts. A complementary website enhances the physical toolkit, providing more examples of the tools in use as well as deeper discussions of each topic, allowing teams to adapt the process to their skills, preferences and product type.
Teams found the solution very useful and intuitive, and they experienced significantly less confusion and fewer mistakes with the process than teams who did not use it. Those with a design background found it especially useful for engineering principles like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer-requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as further examples of how it can be used, creating a loop that helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools for those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers' needs and wants.
Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer
Procedia PDF Downloads 108
526 Japanese and European Legal Frameworks on Data Protection and Cybersecurity: Asymmetries from a Comparative Perspective
Authors: S. Fantin
Abstract:
This study is the result of legal research on cybersecurity and data protection within the EUNITY (Cybersecurity and Privacy Dialogue between Europe and Japan) project, aimed at fostering the dialogue between the European Union and Japan. Based on the research undertaken therein, the author offers an outline of the main asymmetries in the laws governing these fields in the two regions. The research is a comparative analysis of the two legal frameworks, taking into account specific provisions, ratio legis and policy initiatives. Recent doctrine was taken into account too, as well as empirical interviews with EU and Japanese stakeholders and project partners. With respect to the protection of personal data, the European Union has recently reformed its legal framework with a package which includes a regulation (the General Data Protection Regulation) and a directive (Directive 680 on personal data processing in the law enforcement domain). In turn, the Japanese law under scrutiny in this study has been the Act on Protection of Personal Information. Based on the comparative analysis, some asymmetries arise. The main ones concern the definition of personal information and the scope of the two frameworks. Furthermore, the rights of data subjects are articulated differently in the two regions, while the nature of sanctions takes two opposite approaches. Regarding the cybersecurity framework, the situation looks similarly misaligned. Japan's main text of reference is the Basic Cybersecurity Act, while the European Union has a more fragmented legal structure (to name a few instruments, the Network and Information Security Directive, the Critical Infrastructure Directive and the Directive on Attacks against Information Systems).
On a relevant note, unlike the more industry-oriented European approach, the concept of cyber hygiene is neatly embedded in the Japanese legal framework, with a number of provisions that alleviate operators' liability by turning such a burden into a set of recommendations to be observed primarily by citizens. The reasons for filling such normative gaps are mostly grounded on three bases. Firstly, the cross-border nature of cybercrime requires considering both the magnitude of the issue and its regulatory stance globally. Secondly, empirical findings from the EUNITY project showed how recent data breaches and cyber-attacks had shared implications for Europe and Japan. Thirdly, the geopolitical context is currently moving in the direction of significant agreements between the two regions, from a trade standpoint but also from a data protection perspective (with the imminent signature by both parties of a so-called 'Adequacy Decision'). The research conducted in this study reveals two asymmetric legal frameworks on cybersecurity and data protection. In view of the future challenges presented by the strengthening of the collaboration between the two regions and the transnational character of cybercrime, it is urged that solutions be found to fill in such gaps, in order to allow the European Union and Japan to strengthen their partnership wisely.
Keywords: cybersecurity, data protection, European Union, Japan
Procedia PDF Downloads 124
525 Monitoring Key Biomarkers Related to the Risk of Low Breastmilk Production in Women, Leading to a Positive Impact in Infant’s Health
Authors: R. Sanchez-Salcedo, N. H. Voelcker
Abstract:
Currently, low breast milk production in women is one of the leading causes of health complications in infants. It has been demonstrated that exclusive breastfeeding, especially up to a minimum of 6 months, significantly reduces respiratory and gastrointestinal infections, which are among the main causes of death in infants. However, current data show that a high percentage of women stop breastfeeding their children because they perceive an inadequate supply of milk, and only 45% of children are breastfed up to 6 months. There is, therefore, a clear need to design and develop a biosensor sensitive and selective enough to identify and validate a panel of milk biomarkers that allows the early diagnosis of this condition. In this context, electrochemical biosensors could be a powerful tool, meeting the requirements in terms of reliability, selectivity, sensitivity, cost efficiency and potential for multiplexed detection. Moreover, they are suitable for the development of point-of-care (POC) devices and wearable sensors. In this work, we report the development of two types of sensing platforms towards several biomarkers, including miRNAs and hormones, that are present in breast milk and dysregulated in this pathological condition. The first type of sensing platform is an enzymatic sensor for the detection of lactose, one of the main components of milk. In this design, we used a gold surface as the electrochemical transducer owing to its several advantages, such as the variety of strategies available for its rapid and efficient functionalization with bioreceptors or capture molecules. For the second type of sensing platform, a nanoporous silicon (pSi) film was chosen as the electrode material for the design of DNA sensors and aptasensors targeting miRNAs and hormones, respectively.
The pSi matrix offers a large surface area with an abundance of active sites for the immobilization of bioreceptors, along with tunable characteristics that increase selectivity and specificity, making it an ideal alternative material. The analytical performance of the designed biosensors was not only characterized in buffer but also validated in minimally treated breast milk samples. We have demonstrated the potential of electrochemical transducers on pSi and gold surfaces for monitoring clinically relevant biomarkers associated with a heightened risk of low milk production in women. This approach, in which the nanofabrication techniques and functionalization methods were optimized to increase the efficacy of the biosensor, provides a foundation for further research and the development of targeted diagnostic strategies.
Keywords: biosensors, electrochemistry, early diagnosis, clinical markers, miRNAs
Procedia PDF Downloads 19
524 Smallholder’s Agricultural Water Management Technology Adoption, Adoption Intensity and Their Determinants: The Case of Meda Welabu Woreda, Oromia, Ethiopia
Authors: Naod Mekonnen Anega
Abstract:
The objective of this paper was to empirically identify technology-tailored determinants of the adoption and adoption intensity (extent of use) of agricultural water management technologies in Meda Welabu Woreda, Oromia regional state, Ethiopia. Meda Welabu Woreda, one of the administrative Woredas of the Oromia regional state, was selected purposively, as it is one of the Woredas in the region where small-scale irrigation practices and the use of agricultural water management technologies can be found among smallholders. Using existing water management practices (use of water management technologies) and land use pattern as criteria, Genale Mekchira Kebele was selected for the study. A total of 200 smallholders were selected from the Kebele using the technique developed by Krejcie and Morgan. The study employed Logit and Tobit models to estimate and identify the economic, social, geographical, household, institutional, psychological and technological factors that determine the adoption and adoption intensity of water management technologies. The study revealed that while 55 of the sampled households are adopters of agricultural water management technology, the remaining 140 are non-adopters. Among the adopters included in the sample, 97% are using (traditional) river diversion technology with traditional canals, while the rest 7% are using pond with treadle pump technology. The Logit estimation revealed that while adoption of river diversion is positively and significantly affected by membership in local institutions, active labor force, income, access to credit and land ownership, adoption of treadle pump technology is positively and significantly affected by family size, education level, access to credit, extension contact, income, access to market, and slope.
The Logit estimation also revealed that whereas group action requirement, distance to farm, and size of active labor force negatively and significantly influenced adoption of river diversion, age and perception negatively and significantly influenced the adoption decision for treadle pump technology. The Tobit estimation, in turn, revealed that adoption intensity (extent of use) of agricultural water management is positively and significantly affected by education, access to credit, extension contact, access to market, and income. This study revealed that technology-tailored studies on the adoption of agricultural water management technologies (AWMTs) should be considered in order to identify and scale up best agricultural water management practices. In fact, in countries like Ethiopia, where social, economic, cultural, environmental, and agro-ecological conditions differ even within the same Kebele, a technology-tailored study that fits the conditions of each Kebele would help to identify and scale up best practices in agricultural water management.
Keywords: water management technology, adoption, adoption intensity, smallholders, technology tailored approach
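The binary adoption model described above can be illustrated with a short sketch. Everything below is a hypothetical stand-in (synthetic covariates and assumed effect sizes, not the study's survey data), and the fit is a plain Newton-Raphson maximum likelihood Logit rather than a statistical package:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 households and three illustrative covariates
# (standardized income, credit access, active labor force). Hypothetical,
# not the survey data used in the study.
n = 200
X = np.column_stack([
    np.ones(n),                           # intercept
    rng.normal(0.0, 1.0, n),              # income (standardized)
    rng.integers(0, 2, n).astype(float),  # access to credit (0/1)
    rng.normal(0.0, 1.0, n),              # active labor force (standardized)
])
true_beta = np.array([-0.5, 0.8, 1.0, 0.6])   # assumed effect sizes
prob = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, prob)                 # 1 = adopter, 0 = non-adopter

# Logit maximum likelihood via Newton-Raphson (IRLS).
beta = np.zeros(X.shape[1])
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))       # fitted adoption probabilities
    grad = X.T @ (y - mu)                        # score vector
    hess = X.T @ (X * (mu * (1 - mu))[:, None])  # observed information
    beta += np.linalg.solve(hess, grad)

print(np.round(beta, 2))  # estimates should land near true_beta
```

A Tobit model for adoption intensity would follow the same maximum likelihood logic but with a censored normal likelihood in place of the Bernoulli one.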
Procedia PDF Downloads 454
523 Harnessing the Benefits and Mitigating the Challenges of Neurosensitivity for Learners: A Mixed Methods Study
Authors: Kaaryn Cater
Abstract:
People vary in how they perceive, process, and react to internal, external, social, and emotional environmental factors; some are more sensitive than others. Sensitive people have a highly reactive nervous system and are more impacted by both positive and negative environmental conditions (Differential Susceptibility). Further, some sensitive individuals are disproportionately able to benefit from positive and supportive environments without necessarily suffering negative impacts in less supportive environments (Vantage Sensitivity). Environmental sensitivity is underpinned by physiological, genetic, and personality/temperamental factors, and the phenotypic expression of high sensitivity is Sensory Processing Sensitivity. The hallmarks of Sensory Processing Sensitivity are deep cognitive processing, emotional reactivity, high levels of empathy, noticing environmental subtleties, a tendency to observe in new and novel situations, and a propensity to become overwhelmed when over-stimulated. Educational advantages associated with high sensitivity include creativity, enhanced memory, divergent thinking, giftedness, and metacognitive monitoring. High sensitivity can also lead to educational challenges, particularly managing multiple conflicting demands and negotiating low sensory thresholds. A mixed methods study was undertaken. In the first, quantitative, study, participants completed the Perceived Success in Study Survey (PSISS) and the Highly Sensitive Person Scale (HSPS-12). The inclusion criterion was current or previous postsecondary education experience. The survey was distributed on social media, and snowball recruitment was employed (n=365). The Excel spreadsheets were uploaded to the Statistical Package for the Social Sciences (SPSS) version 26, and descriptive statistics found normal distribution.
T-tests and analysis of variance (ANOVA) calculations found no difference in the responses of demographic groups, and Principal Components Analysis and post-hoc Tukey calculations identified positive associations between high sensitivity and three of the five PSISS factors. Further ANOVA calculations found positive associations between the PSISS and two of the three sensitivity subscales. This study included a response field to register interest in further research. Respondents who scored in the 70th percentile on the HSPS-12 were invited to participate in a semi-structured interview. Thirteen interviews were conducted remotely (12 female). Reflexive inductive thematic analysis was employed to analyse the data, and a descriptive approach was employed to present data reflective of participant experience. The results of this study found that sensitive students prioritize work-life balance; employ a range of practical metacognitive study and self-care strategies; value independent learning; connect with learning that is meaningful; and are bothered by aspects of the physical learning environment, including lighting, noise, and indoor environmental pollutants. There is a dearth of research investigating sensitivity in the educational context, and these studies highlight the need to promote widespread education-sector awareness of environmental sensitivity and to include sensitivity in sector and institutional diversity and inclusion initiatives.
Keywords: differential susceptibility, highly sensitive person, learning, neurosensitivity, sensory processing sensitivity, vantage sensitivity
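The group comparisons reported above rest on a standard one-way ANOVA. A minimal sketch with hypothetical score data (invented for illustration, not the PSISS responses) shows how the F statistic is formed:

```python
import numpy as np

def one_way_anova(groups):
    """Return the F statistic and degrees of freedom for k independent groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between, df_within = k - 1, n_total - k
    F = (ss_between / df_between) / (ss_within / df_within)
    return F, df_between, df_within

rng = np.random.default_rng(1)
# Hypothetical survey scores for three demographic groups drawn from the same
# distribution, mirroring the "no difference between groups" finding.
groups = [rng.normal(3.5, 0.6, 40) for _ in range(3)]
F, df_b, df_w = one_way_anova(groups)
print(df_b, df_w)  # 2 117
```

A small F relative to the F(2, 117) reference distribution is what "no difference in the responses of demographic groups" amounts to numerically.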
Procedia PDF Downloads 65
522 Tracing the Developmental Repertoire of the Progressive: Evidence from L2 Construction Learning
Abstract:
Research investigating language acquisition from a constructionist perspective has demonstrated that language is learned as constructions at various linguistic levels, a process related to factors of frequency, semantic prototypicality, and form-meaning contingency. However, previous research on construction learning has tended to focus on clause-level constructions such as verb argument constructions; few attempts have been made to study morpheme-level constructions such as the progressive construction, which is regarded as a source of acquisition problems for English learners from diverse L1 backgrounds, especially those whose L1 lacks an equivalent construction, such as German and Chinese. To trace the developmental trajectory of Chinese EFL learners’ use of the progressive with respect to verb frequency, verb-progressive contingency, and verbal prototypicality and generality, a learner corpus consisting of three sub-corpora representing three different English proficiency levels was extracted from the Chinese Learners of English Corpora (CLEC). As the reference point, a native speakers’ corpus extracted from the Louvain Corpus of Native English Essays was also established. All the texts were annotated with the C7 tagset using part-of-speech tagging software. After annotation, all valid progressive hits were retrieved with AntConc 3.4.3, followed by a manual check.
Frequency-related data showed that from the lowest to the highest proficiency level, (1) the type-token ratio increased steadily from 23.5% to 35.6%, approaching the 36.4% observed in the native speakers’ corpus, indicating a wider use of verbs in the progressive; (2) the normalized entropy value rose from 0.776 to 0.876, moving towards the target score of 0.886 in the native speakers’ corpus, revealing that upper-intermediate learners exhibited a more even distribution and more productive use of verbs in the progressive; (3) activity verbs (i.e., verbs with prototypical progressive meanings like running and singing) dropped from 59% to 34%, while non-prototypical verbs such as state verbs (e.g., being and living) and achievement verbs (e.g., dying and finishing) were increasingly used in the progressive. Apart from raw frequency analyses, collostructional analyses were conducted to quantify verb-progressive contingency and to determine which verbs were distinctively associated with the progressive construction. Results were in line with the raw frequency findings, showing that contingency between the progressive and non-prototypical verbs, represented by light verbs (e.g., going, doing, making, and coming), increased as English proficiency developed. These findings altogether suggest that beginning Chinese EFL learners were less productive in using the progressive construction: they were constrained by a small set of verbs with concrete and typical progressive meanings (e.g., the activity verbs). But as English proficiency increased, their use of the progressive began to spread to marginal members such as the light verbs.
Keywords: construction learning, corpus-based, progressives, prototype
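The two frequency measures reported above, type-token ratio and normalized entropy, can be computed directly. The toy verb lists below are invented for illustration and are not drawn from the CLEC:

```python
import numpy as np
from collections import Counter

def type_token_ratio(tokens):
    """Distinct verb types divided by total progressive tokens."""
    return len(set(tokens)) / len(tokens)

def normalized_entropy(tokens):
    """Shannon entropy of the verb distribution, divided by log2 of the
    number of types so that 1.0 means a perfectly even distribution."""
    counts = np.array(list(Counter(tokens).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() / np.log2(len(counts)))

# Invented progressive-verb tokens for two proficiency levels (not CLEC data).
beginner = ["running"] * 6 + ["singing"] * 3 + ["playing"]
advanced = ["running", "singing", "living", "dying", "going",
            "making", "doing", "being", "finishing", "coming"]

print(type_token_ratio(beginner))    # 0.3 -> narrow verb range
print(type_token_ratio(advanced))    # 1.0 -> every token is a distinct type
print(normalized_entropy(beginner))  # ~0.817 -> distribution skewed to "running"
```

A rising type-token ratio and a normalized entropy approaching 1.0, as in the reported 0.776 to 0.876 trajectory, together indicate a wider and more evenly spread verb repertoire.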
Procedia PDF Downloads 128
521 Making Unorganized Social Groups Responsible for Climate Change: Structural Analysis
Authors: Vojtěch Svěrák
Abstract:
Climate change ethics have recently shifted away from individualistic paradigms towards concepts of shared or collective responsibility. Despite this evolving trend, a noticeable gap remains: a lack of research exclusively addressing the moral responsibility of specific unorganized social groups. The primary objective of the article is to fill this gap. The article employs the structuralist methodological approach proposed by some feminist philosophers, utilizing structural analysis to explain the existence of social groups. The argument is made for the integration of this framework with the so-called forward-looking Social Connection Model (SCM) of responsibility, which ascribes responsibilities to individuals based on their participation in social structures. The article offers an extension of this model to justify the responsibility of unorganized social groups. The major finding of the study is that although members of unorganized groups are loosely connected, collectively they instantiate specific external social structures, share social positioning, and the notion of responsibility could be based on that. Specifically, if the structure produces harm or perpetuates injustices, and the group both benefits from and possesses the capacity to significantly influence the structure, a greater degree of responsibility should be attributed to the group as a whole. This thesis is applied and justified within the context of climate change, based on the asymmetrical positioning of different social groups. Climate change creates a triple inequality: in contribution, vulnerability, and mitigation. The study posits that different degrees of group responsibility could be drawn from these inequalities. 
Two social groups serve as case studies for the article: first, the Pakistani lower class, consisting of people living below the national poverty line, with a low greenhouse gas emissions rate, severe climate change-related vulnerability due to the lack of adaptation measures, and very limited options to participate in the mitigation of climate change; second, the so-called polluter elite, defined by members' investments in polluting companies and high-carbon lifestyles, and thus with an interest in the continuation of the structures leading to climate change. The first group cannot be held responsible for climate change, but its group interest lies in structural change and should be collectively pursued. On the other hand, the responsibility of the second group is significant and can be fulfilled through a justified demand for political change. The proposed approach to group responsibility is suggested as a way to navigate climate justice discourse and environmental policies, thus helping with the sustainability transition.
Keywords: collective responsibility, climate justice, climate change ethics, group responsibility, social ontology, structural analysis
Procedia PDF Downloads 60
520 Assessment of Microclimate in Abu Dhabi Neighborhoods: On the Utilization of Native Landscape in Enhancing Thermal Comfort
Authors: Maryam Al Mheiri, Khaled Al Awadi
Abstract:
Urban population is continuously increasing worldwide, and the speed at which cities urbanize creates major challenges, particularly in terms of creating sustainable urban environments. Rapid urbanization often leads to negative environmental impacts and changes in urban microclimates. Moreover, when rapid urbanization is paired with limited landscape elements, the effects on human health due to increased pollution, and on thermal comfort due to Urban Heat Island effects, are magnified. Urban Heat Island (UHI) describes the increase of temperatures in urban areas in comparison to their rural surroundings and, as we discuss in this paper, it impacts pedestrian comfort, reducing the number of walking trips and public space use. It is thus necessary to investigate the quality of outdoor built environments in order to improve the quality of life in cities. The main objective of this paper is to address the morphology of Emirati neighborhoods, setting a quantitative baseline by which to assess and compare the spatial characteristics and microclimate performance of existing typologies in Abu Dhabi. This morphological mapping and analysis will help in understanding the built landscape of Emirati neighborhoods in this city, whose form has changed and evolved across different periods. This will eventually help to model the use of different design strategies, such as landscaping, to mitigate UHI effects and enhance outdoor urban comfort. Further, the study examines the impact of different native plant types and species in reducing UHI effects and enhancing outdoor urban comfort, allowing for an assessment of the effect of increasing landscaped areas in these neighborhoods. This study uses ENVI-met, an analytical, three-dimensional, high-resolution microclimate modeling software.
This micro-scale urban climate model will be used to evaluate existing conditions and generate scenarios in different residential areas, with different vegetation surfaces and landscaping, and to examine their impact on surface temperatures during summer and autumn. In parallel to these simulations, field measurements will be used to calibrate the ENVI-met model. This research therefore takes an experimental approach, using simulation software, together with a case study strategy for the evaluation of a sample of residential neighborhoods. A comparison of the results of these scenarios constitutes a first step towards making recommendations about what constitutes sustainable landscapes for Abu Dhabi neighborhoods.
Keywords: landscape, microclimate, native plants, sustainable neighborhoods, thermal comfort, urban heat island
Procedia PDF Downloads 310
519 A Supply Chain Risk Management Model Based on Both Qualitative and Quantitative Approaches
Authors: Henry Lau, Dilupa Nakandala, Li Zhao
Abstract:
In today’s business environment, it is well recognized that risk is an important factor that needs to be taken into consideration before a decision is made. Studies indicate that both the number of risks faced by organizations and their potential consequences are growing. Supply chain risk management has become one of the major concerns for practitioners and researchers, and supply chain leaders and scholars are now focusing on the importance of managing supply chain risk. In order to meet the challenge of managing and mitigating supply chain risk (SCR), we must first identify the different dimensions of SCR and assess its probability and severity. SCR has been classified in many different ways; there are no consistently accepted dimensions of SCR, and several different classifications are reported in the literature. Basically, supply chain risks can be classified into two dimensions, namely disruption risk and operational risk. Disruption risks are those caused by events such as bankruptcy, natural disasters, and terrorist attacks. Operational risks are related to supply and demand coordination and uncertainty, such as uncertain demand and uncertain supply. Disruption risks are rare but severe and hard to manage, while operational risks can be reduced through effective SCM activities. Other SCRs include supply risk, process risk, demand risk, and technology risk. In fact, the disorganized classification of SCR has created confusion for SCR scholars, and practitioners still need to identify and assess SCR. As such, it is important to have an overarching framework tying all these SCR dimensions together, for two reasons. First, it helps researchers use these terms to communicate ideas based on the same concepts. Second, a shared understanding of the SCR dimensions will support researchers in focusing on the more important research objective: the operationalization of SCR, which is essential for assessing SCR.
In general, the fresh food supply chain is subject to certain risks, such as supply risk (low quality, delivery failure, hot weather, etc.) and demand risk (seasonal food imbalance, new competitors). Effective strategies to mitigate fresh food supply chain risk are required to enhance operations. Before implementing effective mitigation strategies, we need to identify the risk sources and evaluate the risk level. However, assessing supply chain risk is not an easy matter, and existing research mainly uses qualitative methods, such as the risk assessment matrix. To address the relevant issues, this paper analyzes the risk factors of the fresh food supply chain using an approach comprising both fuzzy logic and hierarchical holographic modeling techniques. This novel approach is able to take advantage of the benefits of both of these well-known techniques and at the same time offset their drawbacks in certain aspects. In order to develop this integrated approach, substantial research work is needed to effectively combine these two techniques in a seamless way. To validate the proposed integrated approach, a case study in a fresh food supply chain company was conducted to verify the feasibility of its functionality in a real environment.
Keywords: fresh food supply chain, fuzzy logic, hierarchical holographic modelling, operationalization, supply chain risk
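The fuzzy logic component of such an approach can be sketched with a toy rule base. Everything below is an assumed illustration (the 0-10 scales, the triangular membership terms, and the three rules are not taken from the paper); it shows only how fuzzy inference turns a likelihood and a severity rating into a single risk score, and the hierarchical holographic modeling layer is not shown:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Assumed linguistic terms on a 0-10 scale (illustrative, not the paper's).
terms = {"low": (0.0, 2.0, 4.5), "medium": (3.0, 5.0, 7.0), "high": (5.5, 8.0, 10.0)}

def fuzzy_risk(likelihood, severity):
    """Mamdani-style inference with three toy rules, centroid defuzzification."""
    x = np.linspace(0, 10, 1001)
    rules = [
        # IF likelihood is high AND severity is high THEN risk is high
        (min(tri(likelihood, *terms["high"]), tri(severity, *terms["high"])), "high"),
        # IF likelihood is low AND severity is low THEN risk is low
        (min(tri(likelihood, *terms["low"]), tri(severity, *terms["low"])), "low"),
        # IF either input is medium THEN risk is medium
        (max(tri(likelihood, *terms["medium"]), tri(severity, *terms["medium"])), "medium"),
    ]
    agg = np.zeros_like(x)
    for strength, term in rules:               # clip each output set, take the union
        agg = np.maximum(agg, np.minimum(strength, tri(x, *terms[term])))
    return float((x * agg).sum() / agg.sum())  # centroid of the aggregated set

# e.g. a hot-weather supply risk judged fairly likely and quite severe (toy inputs)
print(round(fuzzy_risk(7.0, 8.0), 2))
```

The appeal over a plain risk assessment matrix is that judgments like "fairly likely" enter as graded memberships rather than hard category cut-offs.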
Procedia PDF Downloads 243
518 Decolonizing Print Culture and Bibliography Through Digital Visualizations of Artists’ Books at the University of Miami
Authors: Alejandra G. Barbón, José Vila, Dania Vazquez
Abstract:
This study seeks to contribute to the advancement of library and archival sciences in the areas of records management, knowledge organization, and information architecture, particularly focusing on the enhancement of bibliographic description through the incorporation of interactive visual designs aimed at enriching the library user's experience. In an era of heightened awareness about the legacy of hiddenness across special and rare collections in libraries and archives, along with the need for inclusivity in academia, the University of Miami Libraries has embarked on an innovative project that intersects the realms of print culture, decolonization, and digital technology. This proposal presents an initiative to revitalize the study of Artists' Books collections by employing digital visual representations to decolonize the bibliographic records of some of the most unique materials and foster a more holistic understanding of cultural heritage. Artists' Books, a dynamic and interdisciplinary art form, challenge conventional bibliographic classification systems, making them ripe for the exploration of alternative approaches. This project involves the creation of a digital platform that combines multimedia elements for digital representations, interactive information retrieval systems, innovative information architecture, current bibliographic cataloging and metadata initiatives, and collaborative curation to transform how we engage with and understand these collections. By embracing the potential of technology, we aim to transcend traditional constraints and address the historical biases that have influenced bibliographic practices. In essence, this study showcases a groundbreaking endeavor at the University of Miami Libraries that seeks not only to enhance bibliographic practices but also to confront the legacy of hiddenness across special and rare collections in libraries and archives while strengthening conventional bibliographic description.
By embracing digital visualizations, we aim to provide new pathways for understanding Artists' Books collections in a manner that is more inclusive, dynamic, and forward-looking. This project exemplifies the University’s dedication to fostering critical engagement, embracing technological innovation, and promoting diverse and equitable classifications and representations of cultural heritage.
Keywords: decolonizing bibliographic cataloging frameworks, digital visualizations information architecture platforms, collaborative curation and inclusivity for records management, engagement and accessibility increasing interaction design and user experience
Procedia PDF Downloads 75
517 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when their algorithm performed better than the best known classical algorithm for Max-Cut. Whilst classical algorithms have since improved and returned to being faster and more efficient, this was a huge milestone for quantum computing, and that work is often used as a benchmarking tool and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover’s or Shor’s, highlights the potential that quantum computing holds. It also presents the prospect of a real quantum advantage where, if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization problem.
The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, which is a linear approximation based method, and in some instances it can even find a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
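The hybrid idea, an evolutionary outer loop replacing the gradient-based optimizer of QAOA's two circuit angles, can be sketched classically. The graph, population size, and mutation scheme below are toy assumptions (not the authors' implementation), and the depth-1 circuit is simulated as a dense statevector rather than run on hardware:

```python
import numpy as np
from itertools import product

# Toy Max-Cut instance (assumed): 4 nodes, 5 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
dim = 2 ** n

# Diagonal cost operator: number of cut edges for each computational basis state.
states = np.array(list(product([0, 1], repeat=n)))
cost = np.array([sum(s[i] != s[j] for i, j in edges) for s in states], float)

def qaoa_expectation(gamma, beta):
    """<C> for a depth-1 QAOA circuit, simulated as a dense statevector."""
    psi = np.full(dim, 1 / np.sqrt(dim), complex)        # uniform |+>^n state
    psi = np.exp(-1j * gamma * cost) * psi               # cost layer (diagonal)
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],   # single-qubit e^{-i beta X}
                   [-1j * np.sin(beta), np.cos(beta)]])
    U = np.array([[1.0 + 0j]])
    for _ in range(n):                                   # mixer on every qubit
        U = np.kron(U, rx)
    psi = U @ psi
    return float(np.real(np.conj(psi) @ (cost * psi)))

# Gradient-free evolution strategy over the angle pair (gamma, beta): the
# fitness evaluations inside each generation are independent, which is what
# makes the population loop straightforward to parallelize.
rng = np.random.default_rng(7)
pop = rng.uniform(0, np.pi, size=(20, 2))
for gen in range(40):
    fitness = np.array([qaoa_expectation(g, b) for g, b in pop])
    parents = pop[np.argsort(fitness)[-5:]]              # keep the 5 fittest
    pop = np.repeat(parents, 4, axis=0)
    pop = pop + rng.normal(0, 0.1, pop.shape)            # Gaussian mutation

best = max(qaoa_expectation(g, b) for g, b in pop)
print(best)  # at gamma = beta = 0 the baseline is exactly 2.5 cut edges
```

Because no gradient of the expectation is ever computed, flat regions of the angle landscape only slow the mutation-driven search rather than stalling it outright, which is the barren-plateau argument made above.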
Procedia PDF Downloads 60
516 Mineralogical Study of the Triassic Clay of Maaziz and the Miocene Marl of Akrach in Morocco: Analysis and Evaluation of the Two Geomaterials for the Construction of Ceramic Bricks
Authors: Sahar El Kasmi, Ayoub Aziz, Saadia Lharti, Mohammed El Janati, Boubker Boukili, Nacer El Motawakil, Mayom Chol Luka Awan
Abstract:
Two types of geomaterials (red Triassic clay from the Maaziz region and yellow Pliocene clay from the Akrach region) were used to create different mixtures for the fabrication of ceramic bricks. This study investigated the influence of the Pliocene clay on the overall composition and mechanical properties of the Triassic clay. The red Triassic clay, sourced from Maaziz, underwent various mechanical processes and treatments to facilitate its transformation into ceramic bricks for construction. The Triassic clay was placed in a drying chamber and a heating chamber at 100°C to remove moisture. Subsequently, the dried clay samples were processed using a planetary ball mill to reduce particle size and improve homogeneity. The resulting clay material was sieved, and the fine particles below 100 µm were collected for further analysis. In parallel, the Miocene marl obtained from the Akrach region was fragmented into finer particles and subjected to the same drying, grinding, and sieving procedures as the Triassic clay. The two clay samples were then amalgamated and homogenized in different proportions. Precise measurements were taken using a weighing balance, and mixtures of 90%, 80%, and 70% Triassic clay with 10%, 20%, and 30% yellow clay, respectively, were prepared. To evaluate the impact of the Pliocene marl on the composition, the prepared clay mixtures were spread evenly and treated with a water modifier to enhance plasticity. The clay was then molded using a brick-making machine, and the initial manipulation process was observed. Additional batches were prepared with incremental amounts of Pliocene marl to further investigate its effect on the fracture behavior of the clay, specifically its resistance. The molded clay bricks were subjected to compression tests to measure their strength and resistance to deformation.
Additional tests, such as water absorption tests, were also conducted to assess the overall performance of the ceramic bricks fabricated from the different clay mixtures. The results were analyzed to determine the influence of the Pliocene marl on the strength and durability of the Triassic clay bricks. The results indicated that the incorporation of Pliocene clay reduced the fracturing of the Triassic clay, with a noticeable reduction observed at 10% addition; no fractures were observed when 20% or 30% of yellow clay was added. These findings suggest that yellow clay can enhance the mechanical properties and structural integrity of red clay-based products.
Keywords: triassic clay, pliocene clay, mineralogical composition, geo-materials, ceramics, akrach region, maaziz region, morocco
Procedia PDF Downloads 88
515 Enhancing Precision in Abdominal External Beam Radiation Therapy: Exhale Breath Hold Technique for Respiratory Motion Management
Authors: Stephanie P. Nigro
Abstract:
The Exhale Breath Hold (EBH) technique presents a promising approach to enhancing the precision and efficacy of External Beam Radiation Therapy (EBRT) for abdominal tumours, including those of the liver, pancreas, kidneys, and adrenal glands. These tumours are challenging to treat due to their proximity to organs at risk (OARs) and the significant motion induced by respiration and physiological variations, such as stomach filling. Respiratory motion can cause up to 40 mm of displacement in abdominal organs, complicating accurate targeting. While current practices like fasting help reduce motion related to digestive processes, they do not address respiratory motion. 4DCT scans are used to assess this motion, but they require extensive workflow time and expose patients to higher doses of radiation. The EBH technique, which involves holding the breath at the end of exhalation, stabilizes internal organ motion, thereby reducing respiratory-induced motion. The primary benefit of EBH is the reduction in treatment volume sizes, specifically the Internal Target Volume (ITV) and Planning Target Volume (PTV), as demonstrated by smaller ITVs when gated in EBH. This reduction also improves the quality of 3D Cone Beam CT (CBCT) images by minimizing respiratory artifacts, facilitating soft tissue matching akin to stereotactic treatments. Patients suitable for EBH must meet criteria including the ability to hold their breath for at least 15 seconds and maintain a consistent breathing pattern. Those who do not qualify follow the traditional 4DCT protocol. The implementation involves an EBH planning scan and additional short EBH scans to ensure reproducibility and assist in contouring and volume expansions, with a Free Breathing (FB) scan used for setup purposes. Treatment planning on EBH scans leads to smaller PTVs, though intrafractional and interfractional breath hold variations must be accounted for in the margins.
The treatment decision process includes performing CBCT in EBH intervals, with careful matching and adjustment based on soft tissue and fiducial markers. Initial studies at two sites will evaluate the necessity of multiple CBCTs, assessing shifts and the benefits of an initial versus a mid-treatment CBCT. Considerations for successful implementation include thorough patient coaching, staff training, and verification of breath holds, despite potential disadvantages such as longer treatment times and patient exhaustion. Overall, the EBH technique offers significant improvements in the accuracy and quality of abdominal EBRT, paving the way for more effective and safer treatments for patients.
Keywords: abdominal cancers, exhale breath hold, radiation therapy, respiratory motion
Procedia PDF Downloads 27
514 Development of Perovskite Quantum Dots Light Emitting Diode by Dual-Source Evaporation
Authors: Antoine Dumont, Weiji Hong, Zheng-Hong Lu
Abstract:
Light emitting diodes (LEDs) are steadily becoming the new standard for luminescent display devices because of their energy efficiency, relatively low cost, and the purity of the light they emit. Our research focuses on the optical properties of the lead halide perovskite CsPbBr₃ and its family, which is showing steadily improving performance in LEDs and solar cells. The objective of this work is to investigate CsPbBr₃, made by physical vapor deposition instead of the usual solution processing, as an emitting layer for LEDs. Deposition in vacuum eliminates any risk of contaminants as well as the need for chemical ligands in the synthesis of quantum dots. Initial results show the versatility of the dual-source evaporation method, which allowed us to create different phases in bulk form by altering the mole ratio or deposition rates of CsBr and PbBr₂. The distinct phases Cs₄PbBr₆, CsPbBr₃, and CsPb₂Br₅, confirmed through X-ray photoelectron spectroscopy (XPS) and X-ray diffraction analysis, have different optical properties and morphologies that can be used for specific applications in optoelectronics. We are particularly focused on the blue shift expected from quantum dots (QDs) and the stability of the perovskite in this form. We have already obtained proof of the formation of QDs through our dual-source evaporation method with electron microscope imaging and photoluminescence testing, which we understand is a first in the community. We have also incorporated the QDs in an LED structure to test the electroluminescence and the effect on performance, and have already observed a significant wavelength shift. The goal is to reach 480 nm, shifting from the original 528 nm bulk emission. The hole transport layer (HTL) material onto which the CsPbBr₃ is evaporated is a critical part of this study, as the surface energy interaction dictates the behaviour of the QD growth. A thorough study to determine the optimal HTL is in progress.
A strong blue shift for a typically green-emitting material like CsPbBr₃ would eliminate the need for blue-emitting Cl-based perovskite compounds and could prove more stable in a QD structure. The final aim is to make a perovskite QD LED with strong blue luminescence, fabricated through a dual-source evaporation technique that could be scaled to industry level, making this device a viable and cost-effective alternative to current commercial LEDs.
Keywords: material physics, perovskite, light emitting diode, quantum dots, high vacuum deposition, thin film processing
Procedia PDF Downloads 161
513 Social and Culture Capital in Patthana Soi Ranongklang Community, Dusit District, Bangkok
Authors: Phusit Phukamchanoad, Bua Srikos
Abstract:
This research aimed to study the characteristics of a community in its social, economic, and cultural context, using interviews and surveys of members of the Patthana Soi Ranongklang community, Dusit District, Bangkok. The results are as follows. In terms of overall conditions and characteristics, the Patthana Soi Ranongklang community is located on property of the Treasury Department. Fifty years ago, the area consisted of paddy fields with limited transportation: Rama V Road was only a small, narrow road served by three-wheelers, with no buses. The majority of community members moved in from Makkhawan Rangsan Bridge; most were workers or government officials, as they were not the owners of the land, and there were no primary occupations within the 7 acres of the community. Development of the community started in 1981. At present, the community is continuously being developed, and modernization is rapidly flowing in, partly because the main roads were improved, especially Rama V Road, which allows more convenient transportation for citizens. In terms of the economy and society, the research found that the development and expansion of Rama V Road changed the conditions of the area and its buildings. Some buildings were improved and changed over time, and new facilities were developed, leading community members to become steadily more materialistic. Jobs appeared within the community, and areas were improved to allow for new building and housing businesses. Work became more varied, both at home (labor, merchandizing, and small family businesses) and outside the community, which became much more convenient as drivers grew used to the narrow roads inside the community.
The location of the community next to Rama V Road also allows help from government agencies to reach the community with ease. Moreover, the welfare of the community is well taken care of by the community committee. In terms of education, the research found two schools providing education within the community: Wat Pracharabuedham School and Wat Noi Noppakun School. The majority of community members hold Bachelor's degrees. In the area of culture, the research found that the culture, traditions, and beliefs of people in the community were mainly transferred from the old community, especially beliefs in Buddhism, as the majority are Buddhists. The main reason is that the old community was situated near Wat Makut Kasattriyaram, so community members have always had Buddhist temples as the center of the community. In later years, more citizens moved in and brought their own culture, traditions, and beliefs with them. The community members also took part in building a Dharma hall named Wat Duang Jai 72 Years Ranong Klang. Traditions that community members have adhered to since the establishment of the community are the New Year merit-making and the Songkran tradition.
Keywords: social capital, culture, Patthana Soi Ranongklang community, way of life