Search results for: low light image enhancement

138 Navigating the Future: Evaluating the Market Potential and Drivers for High-Definition Mapping in the Autonomous Vehicle Era

Authors: Loha Hashimy, Isabella Castillo

Abstract:

In today's rapidly evolving technological landscape, the importance of precise navigation and mapping systems cannot be overstated. As various sectors undergo transformative changes, the market potential for Advanced Mapping and Management Systems (AMMS) emerges as a critical focus area. The Galileo/GNSS-Based Autonomous Mobile Mapping System (GAMMS) project, specifically targeted toward high-definition mapping (HDM), endeavours to provide insights into this market within the broader context of the geomatics and navigation fields. With the growing integration of Autonomous Vehicles (AVs) into our transportation systems, the relevance and demand for sophisticated mapping solutions like HDM have become increasingly pertinent. The research employed a meticulous, lean, stepwise, and interconnected methodology to ensure a comprehensive assessment. Beginning with the identification of pivotal project results, the study progressed into a systematic market screening. This was complemented by an exhaustive desk research phase that delved into existing literature, data, and trends. To ensure the holistic validity of the findings, extensive consultations were conducted. Academia and industry experts provided invaluable insights through interviews, questionnaires, and surveys. This multi-faceted approach facilitated a layered analysis, juxtaposing secondary data with primary inputs, ensuring that the conclusions were both accurate and actionable. Our investigation unearthed a plethora of drivers steering the HD maps landscape. These ranged from technological leaps, nuanced market demands, and influential economic factors to overarching socio-political shifts. The meteoric rise of AVs and the shift towards app-based transportation solutions, such as Uber, stood out as significant market pull factors. A nuanced PESTEL analysis further enriched our understanding, shedding light on political, economic, social, technological, environmental, and legal facets influencing the HD maps market trajectory. Simultaneously, potential roadblocks were identified. Notable among these were barriers related to high initial costs, concerns around data quality, and the challenges posed by a fragmented and evolving regulatory landscape. The GAMMS project serves as a beacon, illuminating the vast opportunities that lie ahead for the HD mapping sector. It underscores the indispensable role of HDM in enhancing navigation, ensuring safety, and providing pinpoint, accurate location services. As our world becomes more interconnected and reliant on technology, HD maps emerge as a linchpin, bridging gaps and enabling seamless experiences. The research findings accentuate the imperative for stakeholders across industries to recognize and harness the potential of HD mapping, especially as we stand on the cusp of a transportation revolution heralded by Autonomous Vehicles and advanced geomatic solutions.

Keywords: high-definition mapping (HDM), autonomous vehicles, PESTEL analysis, market drivers

Procedia PDF Downloads 52
137 Examining Influence of The Ultrasonic Power and Frequency on Microbubbles Dynamics Using Real-Time Visualization of Synchrotron X-Ray Imaging: Application to Membrane Fouling Control

Authors: Masoume Ehsani, Ning Zhu, Huu Doan, Ali Lohi, Amira Abdelrasoul

Abstract:

Membrane fouling poses severe challenges in membrane-based wastewater treatment applications. Ultrasound (US) has been considered an effective fouling remediation technique in filtration processes. Bubble cavitation in the liquid medium results from the alternating rarefaction and compression cycles during US irradiation at sufficiently high acoustic pressure. Cavitation microbubbles generated under US irradiation can cause eddy currents and turbulent flow within the medium, either by oscillating or by discharging energy to the system through microbubble explosion. The turbulent flow regime and shear forces created close to the membrane surface disturb the cake layer and dislodge the foulants, which in turn improves the cleaning efficiency and filtration performance. Therefore, the number, size, velocity, and oscillation pattern of the microbubbles created in the liquid medium play a crucial role in foulant detachment and permeate flux recovery. The goal of the current study is to gain an in-depth understanding of the influence of the US power intensity and frequency on the dynamics and characteristics of the microbubbles generated under US irradiation. In comparison with other imaging techniques, the synchrotron in-line Phase Contrast Imaging technique at the Canadian Light Source (CLS) allows in-situ observation and real-time visualization of microbubble dynamics. At the CLS biomedical imaging and therapy (BMIT) polychromatic beamline, the effective parameters were optimized to enhance the contrast at the gas/liquid interface for accurate qualitative and quantitative analysis of bubble cavitation within the system. With the high photon flux and the high-speed camera, a high projection speed was achieved, and each projection of microbubbles in water was captured in 0.5 ms. ImageJ software was used for post-processing the raw images for the detailed quantitative analyses of microbubbles. The imaging was performed at US power intensity levels of 50 W, 60 W, and 100 W and US frequency levels of 20 kHz, 28 kHz, and 40 kHz. For the 2-second duration of imaging, the effect of the US power and frequency on the average number, size, and fraction of the area occupied by bubbles was analyzed. Microbubble dynamics in terms of bubble velocity in water was also investigated. For an increase in US power from 50 W to 100 W, the average bubble number and the average bubble diameter increased from 746 to 880 and from 36.7 µm to 48.4 µm, respectively. In terms of the influence of US frequency, fewer bubbles were created at 20 kHz (an average of 176 bubbles rather than 808 bubbles at 40 kHz), while the average bubble size was significantly larger than at 40 kHz (almost seven times). The majority of bubbles were captured close to the membrane surface in the filtration unit. According to the study observations, membrane cleaning efficiency is expected to be improved at higher US power and lower US frequency due to the higher energy release to the system through an increased number of bubbles or their growth in size during oscillation (the optimum condition is expected to be 20 kHz and 100 W).
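
The quantitative bubble statistics above (count, mean diameter, occupied area fraction) were extracted from the X-ray projections with ImageJ. As an illustration of that kind of per-frame analysis, a minimal Python sketch is given below; it assumes 8-bit grayscale projections in which bubbles contrast with the liquid, and the threshold method, minimum object size, pixel size, and file name are illustrative placeholders, not parameters from the study.

```python
# Hypothetical sketch of per-frame bubble quantification (the study used ImageJ);
# assumes grayscale projections in which bubbles appear darker than the background.
import numpy as np
from skimage import io, filters, measure, morphology

def quantify_bubbles(path, pixel_size_um=1.0):
    img = io.imread(path, as_gray=True)
    thresh = filters.threshold_otsu(img)              # global Otsu threshold
    mask = img < thresh                               # bubbles assumed darker than liquid
    mask = morphology.remove_small_objects(mask, min_size=20)   # drop noise specks
    labels = measure.label(mask)
    props = measure.regionprops(labels)
    diameters_um = [p.equivalent_diameter * pixel_size_um for p in props]
    area_fraction = mask.sum() / mask.size            # fraction of frame occupied by bubbles
    mean_d = float(np.mean(diameters_um)) if diameters_um else 0.0
    return len(props), mean_d, area_fraction

# Example (illustrative file name and pixel size):
# n_bubbles, mean_diameter_um, frac = quantify_bubbles("frame_0001.tif", pixel_size_um=6.5)
```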

Keywords: bubble dynamics, cavitational bubbles, membrane fouling, ultrasonic cleaning

Procedia PDF Downloads 122
136 Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Annulus Pulley

Authors: Bijit Kalita, K. V. N. Surendra

Abstract:

The pulley works under both compressive loading, due to the contacting belt in tension, and a central torque that causes rotation. In a power transmission system, the belt pulley assemblies offer a contact problem in the form of two mating cylindrical parts. In this work, we modeled a pulley as a heavy two-dimensional circular disk. Stress analysis due to contact loading in the pulley mechanism is performed. Finite element analysis (FEA) is conducted for a pulley to investigate the stresses experienced on its inner and outer periphery. The belt drive is one of the most frequently used mechanisms to transmit power in heavy-duty applications such as automotive engines and industrial machines. Usually, very heavy circular disks are used as pulleys. A pulley can be regarded as a drum and may have a groove between two flanges around the circumference. A rope, belt, cable or chain can be the driving element of a pulley system that runs over the pulley inside the groove. A pulley experiences normal and shear tractions on its contact region in the process of motion transmission. The region may be the belt-pulley contact surface or the pulley-shaft contact surface. In 1895, Hertz solved the elastic contact problem for point contact and line contact of an ideal smooth object. This solution has since been generally utilized for computing the actual contact zone. Detailed stress analysis in the contact region of such pulleys is quite necessary to prevent early failure. In this paper, the results of the finite element analyses carried out on the compressed disk of a belt pulley arrangement using fracture mechanics concepts are shown. Based on the literature on contact stress problems in a wide field of applications, the stress distribution generated on the shaft-pulley and belt-pulley interfaces due to the application of high tension and torque was evaluated in this study using FEA concepts. Finally, the results obtained from ANSYS (APDL) were compared with the Hertzian contact theory. The study is mainly focused on the fatigue life estimation of a rotating part as a component of an engine assembly using the well-known Paris equation. Digital Image Correlation (DIC) analyses have been performed using open-source software. From the displacements computed from the images acquired at the minimum and maximum force, the displacement field amplitude is obtained. From these fields, the crack path is defined, and the stress intensity factors and crack tip position are extracted. A non-linear least-squares projection is used to estimate fatigue crack growth. Further study will be extended to various applications of rotating machinery, such as rotating flywheel disks, jet engines, compressor disks, roller disk cutters, etc., where Stress Intensity Factor (SIF) calculation plays a significant role in the accuracy and reliability of a safe design. Additionally, this study will be extended to predict crack propagation in the pulley using the maximum tangential stress (MTS) criterion for mixed mode fracture.
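
For reference, the fatigue life estimation mentioned above relies on the Paris crack growth law; a standard textbook form of the relation (not the authors' specific calibration of C, m, or the geometry factor Y) is:

```latex
\frac{da}{dN} = C\,(\Delta K)^{m}, \qquad
\Delta K = Y\,\Delta\sigma\,\sqrt{\pi a}, \qquad
N_f = \int_{a_i}^{a_f} \frac{da}{C\,\left(Y\,\Delta\sigma\,\sqrt{\pi a}\right)^{m}},
```

where C and m are material constants, ΔK is the stress intensity factor range, and the number of cycles N_f needed to grow a crack from an initial size a_i to a final size a_f follows by integration (assuming Y varies slowly with crack length).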

Keywords: crack-tip deformations, contact stress, stress concentration, stress intensity factor

Procedia PDF Downloads 103
135 The Impact of China’s Waste Import Ban on the Waste Mining Economy in East Asia

Authors: Michael Picard

Abstract:

This proposal offers to shed light on the changing legal geography of the global waste economy. Global waste recycling has become a multi-billion-dollar industry. NASDAQ predicts the emergence of a worldwide 1,296G$ waste management market between 2017 and 2022. Underlining this evolution, a new generation of preferential waste-trade agreements has emerged in the Pacific. In the last decade, Japan has concluded a series of bilateral treaties with Asian countries, and most recently with China. An agreement between Tokyo and Beijing was formalized on 7 May 2008, which forged an economic partnership on waste transfer and mining. The agreement set up International Recycling Zones, where certified recycling plants in China process industrial waste imported from Japan. Under the joint venture, Chinese companies salvage the embedded value from Japanese industrial discards, reprocess them and send them back to Japanese manufacturers, such as Mitsubishi and Panasonic. This circular economy is designed to convert surplus garbage into surplus value. Ever since the opening of Sino-Japanese eco-parks, millions of tons of plastic and e-waste have been exported from Japan to China every year. Yet, quite unexpectedly, China has recently closed its waste market to imports, jeopardizing Japan’s billion-dollar exports to China. China notified the WTO that, by the end of 2017, it would no longer accept imports of plastics and certain metals. Given China’s share of Japanese waste exports, a complete closure of China’s market would require Japan to find new uses for its recyclable industrial trash generated domestically every year. It remains to be seen how China will effectively implement its ban on waste imports, considering the economic interests at stake. At this stage, what remains to be clarified is whether China's ban on waste imports will negatively affect the recycling trade between Japan and China. What is clear, though, is the rapid transformation in the legal geography of waste mining in East-Asia. For decades, East-Asian waste trade had been tied up in an ‘ecologically unequal exchange’ between the Japanese core and the Chinese periphery. This global unequal waste distribution could be measured by the Environmental Stringency Index, which revealed that waste regulation was 39% weaker in the Global South than in Japan. This explains why Japan could legally export its hazardous plastic and electronic discards to China. The asymmetric flow of hazardous waste between Japan and China carried the colonial heritage of international law. The legal geography of waste distribution was closely associated to the imperial construction of an ecological trade imbalance between the Japanese source and the Chinese sink. Thus, China’s recent decision to ban hazardous waste imports is a sign of a broader ecological shift. As a global economic superpower, China announced to the world it would no longer be the planet’s junkyard. The policy change will have profound consequences on the global circulation of waste, re-routing global waste towards countries south of China, such as Vietnam and Malaysia. By the time the Berlin Conference takes place in May 2018, the presentation will be able to assess more accurately the effect of the Chinese ban on the transboundary movement of waste in Asia.

Keywords: Asia, ecological unequal exchange, global waste trade, legal geography

Procedia PDF Downloads 193
134 Rheological Properties of Thermoresponsive Poly(N-Vinylcaprolactam)-g-Collagen Hydrogel

Authors: Serap Durkut, A. Eser Elcin, Y. Murat Elcin

Abstract:

Stimuli-sensitive polymeric hydrogels have received extensive attention in the biomedical field due to their sensitivity to physical and chemical stimuli (temperature, pH, ionic strength, light, etc.). This study describes the rheological properties of a novel thermoresponsive poly(N-vinylcaprolactam)-g-collagen hydrogel. In the study, we first synthesized a novel carboxyl group-terminated thermoresponsive poly(N-vinylcaprolactam)-COOH (PNVCL-COOH) via a facile free radical polymerization. Further, this compound was effectively grafted with native collagen by utilizing the covalent bond between the carboxylic acid groups at the end of the chains and the amine groups of the collagen using a cross-linking agent (EDC/NHS), forming PNVCL-g-Col. The newly formed hybrid hydrogel displayed novel properties, such as increased mechanical strength and thermoresponsive characteristics. PNVCL-g-Col showed a lower critical solution temperature (LCST) at 38ºC, which is very close to body temperature. Rheological studies determine the structural-mechanical properties of materials and serve as a valuable characterization tool. The rheological properties of hydrogels are described in terms of two dynamic mechanical properties: the elastic modulus G′ (also known as dynamic rigidity), representing the reversible stored energy of the system, and the viscous modulus G″, representing the irreversible energy loss. In order to characterize PNVCL-g-Col, the rheological properties were measured as a function of temperature and time during the phase transition. Below the LCST, favorable interactions allowed the dissolution of the polymer in water via hydrogen bonding. At temperatures above the LCST, PNVCL molecules within PNVCL-g-Col aggregated due to dehydration, causing the hydrogel structure to become dense. When the temperature reached ~36ºC, the G′ and G″ values crossed over. This indicates that PNVCL-g-Col underwent a sol-gel transition, forming an elastic network. Following the temperature plateau at 38ºC, near human body temperature, the sample displayed stable elastic network characteristics. The G′ and G″ values of the PNVCL-g-Col solutions sharply increased in the 6-9 minute interval, due to the rapid transformation into a gel-like state and the formation of elastic networks. Copolymerization with collagen leads to an increase in G′, as the collagen structure contains flexible polymer chains, which bestow its elastic properties. The elasticity of the proposed structure correlates with the number of intermolecular cross-links in the hydrogel network, which also increases the viscosity. However, at 8 minutes, the G′ and G″ values sharply decreased for pure collagen solutions due to the decomposition of the elastic and viscous network. Complex viscosity is related to the mechanical performance and the resistance opposing deformation of the hydrogel. The complex viscosity of the PNVCL-g-Col hydrogel changed drastically with temperature, and the mechanical performance of the PNVCL-g-Col hydrogel network increased, exhibiting less deformation. Rheological assessment of the novel thermoresponsive PNVCL-g-Col hydrogel showed that the network has stronger mechanical properties due to both permanent, stable covalent bonds and temperature-dependent physical interactions, such as hydrogen bonds and hydrophobic interactions.
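
For reference, the dynamic quantities discussed above are connected by the standard relations of small-amplitude oscillatory rheology (general definitions, not results specific to PNVCL-g-Col):

```latex
G^{*}(\omega) = G'(\omega) + i\,G''(\omega), \qquad
\tan\delta = \frac{G''}{G'}, \qquad
|\eta^{*}(\omega)| = \frac{\sqrt{G'^{2} + G''^{2}}}{\omega},
```

so the G′-G″ crossover observed near ~36ºC (tan δ = 1) marks the sol-gel transition, and the reported complex viscosity follows directly from the two moduli at the measurement frequency.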

Keywords: poly(N-vinylcaprolactam)-g-collagen, thermoresponsive polymer, rheology, elastic modulus, stimuli-sensitive

Procedia PDF Downloads 221
133 Genomic and Proteomic Variability in Glycine Max Genotypes in Response to Salt Stress

Authors: Faheema Khan

Abstract:

To investigate the ability of sensitive and tolerant genotypes of Glycine max to adapt to a saline environment in the field, we examined the growth performance, water relations and activities of antioxidant enzymes in relation to photosynthetic rate, chlorophyll a fluorescence, photosynthetic pigment concentration, protein and proline in plants exposed to salt stress. Ten soybean genotypes (Pusa-20, Pusa-40, Pusa-37, Pusa-16, Pusa-24, Pusa-22, BRAGG, PK-416, PK-1042, and DS-9712) were selected and grown hydroponically. After 3 days of proper germination, the seedlings were transferred to Hoagland’s solution (Hoagland and Arnon 1950). The growth chamber was maintained at a photosynthetic photon flux density of 430 μmol m−2 s−1, 14 h of light, 10 h of dark and a relative humidity of 60%. The nutrient solution was bubbled with sterile air and changed on alternate days. Ten-day-old seedlings were given seven levels of salt in the form of NaCl viz., T1 = 0 mM NaCl, T2=25 mM NaCl, T3=50 mM NaCl, T4=75 mM NaCl, T5=100 mM NaCl, T6=125 mM NaCl, T7=150 mM NaCl. The investigation showed that genotypes Pusa-24, PK-416 and Pusa-20 appeared to be the most salt-sensitive genotypes, as inferred from their significantly reduced length, fresh weight and dry weight in response to the NaCl exposure. Pusa-37 appeared to be the most tolerant genotype since no significant effect of NaCl treatment on growth was found. We observed a greater decline in the photosynthetic variables, like photosynthetic rate, chlorophyll fluorescence and chlorophyll content, in the salt-sensitive genotype (Pusa-24) than in the salt-tolerant Pusa-37 under high salinity. Numerous primers obtained from Operon Technologies were screened on the ten soybean genotypes, among which 30 RAPD primers showed high polymorphism and genetic variation. Jaccard’s similarity coefficient values were calculated for each pairwise comparison between cultivars, and a similarity coefficient matrix was constructed. Varieties closer together in the cluster behaved similarly in their response to salinity. Intra-clustering within the two clusters precisely grouped the 10 genotypes into sub-clusters, as expected from the physiological findings. The salt-tolerant genotype Pusa-37 was further analysed by two-dimensional gel electrophoresis to examine the differential expression of proteins under high salt stress. In the present study, 173 protein spots were identified. Of these, 40 proteins responsive to salinity were either up- or down-regulated in Pusa-37. Proteomic analysis of the salt-tolerant genotype (Pusa-37) led to the detection of proteins involved in a variety of biological processes, such as protein synthesis (12 %), redox regulation (19 %), primary and secondary metabolism (25 %), or disease- and defence-related processes (32 %). In conclusion, the soybean plants in our study responded to salt stress by changing their protein expression pattern. The photosynthetic, biochemical and molecular study showed that there is variability in salt tolerance behaviour among soybean genotypes. Pusa-24 is the salt-sensitive and Pusa-37 is the salt-tolerant genotype. Moreover, this study gives new insights into the salt-stress response in soybean and demonstrates the power of genomic and proteomic approaches in plant biology studies, which could finally help in identifying the possible regulatory switches (gene/s) controlling the salt-tolerant genotype of crop plants and their possible role in the defence mechanism.
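
The cluster analysis above rests on pairwise Jaccard similarity computed from the binary RAPD band-scoring matrix (band present/absent for each primer locus). A minimal sketch of that computation is given below, assuming genotypes are rows of a 0/1 matrix; the example data are purely illustrative, not the study's band scores.

```python
# Minimal sketch: Jaccard similarity matrix from a binary RAPD band matrix
# (rows = genotypes, columns = scored bands); the example data are illustrative.
import numpy as np

def jaccard_matrix(bands):
    bands = np.asarray(bands, dtype=bool)
    n = bands.shape[0]
    sim = np.eye(n)                                   # self-similarity = 1
    for i in range(n):
        for j in range(i + 1, n):
            a, b = bands[i], bands[j]
            shared = np.logical_and(a, b).sum()       # bands present in both genotypes
            either = np.logical_or(a, b).sum()        # bands present in at least one
            sim[i, j] = sim[j, i] = shared / either if either else 0.0
    return sim

# Example with 3 genotypes and 6 bands:
print(jaccard_matrix([[1, 0, 1, 1, 0, 1],
                      [1, 1, 1, 0, 0, 1],
                      [0, 0, 1, 1, 1, 0]]))
```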

Keywords: glycine max, salt stress, RAPD, genomic and proteomic variability

Procedia PDF Downloads 396
132 Mesenchymal Stem Cells (MSC)-Derived Exosomes Could Alleviate Neuronal Damage and Neuroinflammation in Alzheimer’s Disease (AD) as Potential Therapy-Carrier Dual Roles

Authors: Huan Peng, Chenye Zeng, Zhao Wang

Abstract:

Alzheimer’s disease (AD) is an age-related neurodegenerative disease that is a leading cause of dementia syndromes and has become a huge burden on society and families. The main pathological features of AD involve excessive deposition of β-amyloid (Aβ) and Tau proteins in the brain, resulting in loss of neurons, expansion of neuroinflammation, and cognitive dysfunction in patients. Researchers have found effective drugs to clear the brain of erroneously accumulated proteins or to slow the loss of neurons, but their direct administration faces key bottlenecks such as single-drug limitations, rapid blood clearance, the impenetrable blood-brain barrier (BBB), and poor ability to target tissues and cells. Therefore, we are committed to seeking a suitable and efficient delivery system. Inspired by the possibility that exosomes may be involved in the secretion and transport mechanism of many signaling molecules or proteins in the brain, exosomes have attracted extensive attention as natural nanoscale drug carriers. We selected exosomes derived from bone marrow mesenchymal stem cells (MSC-EXO), with low immunogenicity, and exosomes derived from hippocampal neurons (HT22-EXO), which may have excellent homing ability, to overcome the deficiencies of oral or injectable routes and to bypass the BBB through nasal administration, and evaluated their delivery ability and effect on AD. First, MSCs and HT22 cells were isolated and cultured, and the MSCs were identified by microimaging and flow cytometry. Then MSC-EXO and HT22-EXO were obtained by gradient centrifugation and a qEV SEC separation column, and a series of physicochemical characterizations was performed by transmission electron microscopy, western blot, nanoparticle tracking analysis and dynamic light scattering. Next, exosomes labeled with a lipophilic fluorescent dye were administered to WT mice and APP/PS1 mice to obtain fluorescence images of various organs at different times. Finally, APP/PS1 mice were administered the two exosome preparations intranasally 20 times over 40 days, 20 μL each time. Behavioral analysis and pathological section analysis of the hippocampus were performed after the experiment. The results showed that MSC-EXO and HT22-EXO were successfully isolated and characterized, and they had good biocompatibility. MSC-EXO showed excellent brain enrichment in APP/PS1 mice after intranasal administration and could ameliorate neuronal damage and reduce inflammation levels in the hippocampus of APP/PS1 mice, and the improvement was significantly better than with HT22-EXO. However, intranasal administration of the two exosomes did not cause depressive or anxiety-like phenotypes in APP/PS1 mice, did not significantly improve the short-term or spatial learning and memory ability of APP/PS1 mice, and had no significant effect on the content of Aβ plaques in the hippocampus, which also means that MSC-EXO could exploit their own advantages in combination with other drugs to clear Aβ plaques. The possibility of realizing highly effective non-invasive synergistic treatment for AD provides new strategies and ideas for clinical research.

Keywords: Alzheimer’s disease, exosomes derived from mesenchymal stem cell, intranasal administration, therapy-carrier dual roles

Procedia PDF Downloads 30
131 Leveraging the HDAC Inhibitory Pharmacophore to Construct Deoxyvasicinone Based Tractable Anti-Lung Cancer Agent and pH-Responsive Nanocarrier

Authors: Ram Sharma, Esha Chatterjee, Santosh Kumar Guru, Kunal Nepali

Abstract:

A tractable anti-lung cancer agent was identified via the installation of a Ring C expanded synthetic analogue of the alkaloid vasicinone [7,8,9,10-tetrahydroazepino[2,1-b] quinazolin-12(6H)-one (TAZQ)] as a surface recognition part in the HDAC inhibitory three-component model. It is noteworthy that TAZQ was deemed suitable for accommodation in the HDAC inhibitory pharmacophore as per the results of the fragment recruitment process conducted by our laboratory. TAZQ was pinpointed through the fragment screening program as a synthetically flexible fragment endowed with moderate cell growth inhibitory activity against the lung cancer cell lines, and it was anticipated that the use of the aforementioned fragment to generate hydroxamic acid functionality (zinc-binding motif) bearing HDAC inhibitors would boost the antitumor efficacy of TAZQ. Consistent with our aim of applying epigenetic targets to the treatment of lung cancer, a strikingly potent anti-lung cancer scaffold (compound 6) was pinpointed through a series of in-vitro experiments. Notably, compound 6 manifested a remarkable activity profile against KRAS and EGFR mutant lung cancer cell lines (IC50 = 0.80 - 0.96 µM), and the effects were found to be mediated through preferential HDAC6 inhibition (IC50 = 12.9 nM). In addition to HDAC6 inhibition, compound 6 also elicited HDAC1 and HDAC3 inhibitory activity, with IC50 values of 49.9 nM and 68.5 nM, respectively. The HDAC inhibitory ability of compound 6 was also confirmed by the results of the western blot experiment, which revealed its potential to decrease the expression levels of HDAC isoforms (HDAC1, HDAC3, and HDAC6). Notably, complete downregulation of the HDAC6 isoform was exerted by compound 6 at 0.5 and 1 µM. Moreover, in another western blot experiment, treatment with hydroxamic acid 6 led to upregulation of H3 acK9 and α-Tubulin acK40 levels, confirming its inhibitory activity toward both the class I HDACs and class IIB HDACs. The results of other assays were also encouraging, as treatment with compound 6 led to suppression of the colony formation ability of A549 cells, induction of apoptosis, and an increase in autophagic flux. In silico studies allowed us to rationalize the results of the experimental assays, and some key interactions of compound 6 with the amino acid residues of HDAC isoforms were identified. In light of the impressive activity spectrum of compound 6, a pH-responsive nanocarrier (hyaluronic acid-compound 6 nanoparticles) was prepared. The dialysis bag approach was used for the assessment of the nanoparticles under both normal and acidic conditions, and the pH-sensitive nature of the hyaluronic acid-compound 6 nanoparticles was confirmed. Encouragingly, the nanoformulation was devoid of cytotoxicity against L929 mouse fibroblast cells (normal cells) and exhibited selective cytotoxicity towards the A549 lung cancer cell line. In a nutshell, compound 6 appears to be a promising adduct, and a detailed investigation of this compound might yield a therapeutic for the treatment of lung cancer.

Keywords: HDAC inhibitors, lung cancer, scaffold, hyaluronic acid, nanoparticles

Procedia PDF Downloads 70
130 DSF Elements in High-Rise Timber Buildings

Authors: Miroslav Premrov, Andrej Štrukelj, Erika Kozem Šilih

Abstract:

The utilization of prefabricated timber-wall elements with double glazing, called double-skin façade (DSF) elements, represents an innovative structural approach in the context of new high-rise timber construction, simultaneously combining sustainable solutions with improved energy efficiency and living quality. In addition to minimizing the energy needs of buildings, the design of modern buildings is also increasingly focused on optimal indoor comfort, in particular on sufficient natural light indoors. An optimally energy-designed building with an optimal layout of glazed areas around the building envelope represents great potential in modern timber construction. Usually, because of energy benefits, these transparent façade elements are primarily asymmetrically oriented, and if they are considered as non-resisting against a horizontal load impact, strong torsional effects can appear in the building. The problem of structural stability of such modern timber buildings against a strong horizontal load impact especially increases in the case of high-rise structures, where additional bracing elements have to be used. In such a case, special diagonal bracing systems or other bracing solutions with common timber wall elements have to be incorporated into the structure of the building to satisfy all prescribed resistance requirements given by the standards. However, such structural solutions are usually not environmentally friendly, do not contribute to improved living comfort, or are not accepted by the architects at all. Consequently, there is a particular need to develop innovative timber-glass wall elements which are at the same time environmentally friendly, can increase internal comfort in the building, and are also load-bearing. The newly developed load-bearing DSF elements can be a good answer to all these requirements. The timber-glass DSF wall elements consist of two transparent layers, a thermally insulated three-layered glass pane on the internal side and an additional single-layered glass pane on the external side of the wall. The two panes are separated by an air channel, which can be of any dimensions and can have a significant influence on the thermal insulation or acoustic response of such a wall element. Most previously published studies on DSF elements deal only with energy and LCA aspects and do not address any structural problems. Previous studies based on experimental analysis and mathematical modeling have already presented a possible benefit of such load-bearing DSF elements, especially in comparison with previously developed load-bearing single-skin timber wall elements, but they have not yet been applied in any high-rise timber structure. Therefore, in the presented study, a specially selected 10-storey prefabricated timber building constructed with a cross-laminated timber (CLT) structural wall system is analyzed using the developed DSF elements with the aim of increasing the lateral stability of the whole building. The results clearly highlight the importance of the load-bearing DSF elements, as their incorporation can have a significant impact on the overall behavior of the structure through their influence on the stiffness properties. Taking these considerations into account is crucial to ensure compliance with seismic design codes and to improve the structural resilience of high-rise timber buildings.

Keywords: glass, high-rise buildings, numerical analysis, timber

Procedia PDF Downloads 20
129 Autophagy Promotes Vascular Smooth Muscle Cell Migration in vitro and in vivo

Authors: Changhan Ouyang, Zhonglin Xie

Abstract:

In response to proatherosclerotic factors such as oxidized lipids, or to therapeutic interventions such as angioplasty, stents, or bypass surgery, vascular smooth muscle cells (VSMCs) migrate from the media to the intima, resulting in intimal hyperplasia, restenosis, graft failure, or atherosclerosis. These proatherosclerotic factors also activate autophagy in VSMCs. However, the functional role of autophagy in vascular health and disease remains poorly understood. In the present study, we determined the role of autophagy in the regulation of VSMC migration. Autophagy activity in cultured human aortic smooth muscle cells (HASMCs) and mouse carotid arteries was measured by Western blot analysis of microtubule-associated protein 1 light chain 3 B (LC3B) and P62. VSMC migration was determined by scratch wound assay and transwell migration assay. Ex vivo smooth muscle cell migration was determined using the aortic ring assay. In vivo SMC migration was examined by staining carotid artery sections for smooth muscle alpha actin (alpha SMA) after carotid artery ligation. To examine the relationship between autophagy and neointimal hyperplasia, C57BL/6J mice were subjected to carotid artery ligation. Seven days after injury, protein levels of Atg5, Atg7, Beclin1, and LC3B drastically increased and remained higher in the injured arteries three weeks after the injury. In parallel with the activation of autophagy, vascular injury induced neointimal hyperplasia, as estimated by an increased intima/media ratio. En face staining of the carotid artery showed that vascular injury enhanced alpha SMA staining in the intimal cells as compared with the sham operation. Treatment of HASMCs with platelet-derived growth factor (PDGF), one of the major factors for vascular remodeling in response to vascular injury, increased Atg7 and LC3 II protein levels and enhanced autophagosome formation. In addition, the aortic ring assay demonstrated that PDGF-treated aortic rings displayed an increase in neovessel formation compared with control rings. Whole-mount staining for CD31 and alpha SMA in PDGF-treated neovessels revealed that the neovessel structures were stained for alpha SMA but not CD31. In contrast, pharmacological and genetic suppression of autophagy inhibited VSMC migration. In particular, gene silencing of Atg7 inhibited VSMC migration induced by PDGF. Furthermore, three weeks after ligation, markedly decreased neointimal formation was found in mice treated with chloroquine, an inhibitor of autophagy. Quantitative morphometric analysis of the injured vessels revealed a marked reduction in the intima/media ratio in the mice treated with chloroquine. Conclusion: Autophagy activation increases VSMC migration, while autophagy suppression inhibits it. These findings suggest that autophagy suppression may be an important therapeutic strategy for atherosclerosis and intimal hyperplasia.

Keywords: autophagy, vascular smooth muscle cell, migration, neointimal formation

Procedia PDF Downloads 286
128 Innovative Technologies of Distant Spectral Temperature Control

Authors: Leonid Zhukov, Dmytro Petrenko

Abstract:

Optical thermometry has no alternative in many cases where the most effective continuous industrial temperature control is required. Classical optical thermometry technologies can be used on objects that are accessible to pyrometers and have stable radiation characteristics and stable transmissivity of the intermediate medium. Without temperature corrections, this is possible for a “black” body in energy pyrometry and for “black” and “grey” bodies in spectral ratio pyrometry, or, with corrections, for any colored bodies. Consequently, with an increasing number of operating wavelengths, the possibilities of optical thermometry to reduce methodical errors expand significantly. That is why, in the last 25-30 years, research has been reoriented toward more advanced spectral (multicolor) thermometry technologies. Two physical substances are involved in optical thermometry: matter (the controlled object) and the electromagnetic field (thermal radiation). Heat is transferred by radiation; therefore, radiation has energy, entropy, and temperature. Optical thermometry originated simultaneously with the development of thermal radiation theory, when the concept and the term “radiation temperature” were not yet used, and therefore the concepts and terms “conditional temperatures” or “pseudo temperature” of controlled objects were introduced. They do not correspond to the physical sense and definitions of temperature in thermodynamics, molecular-kinetic theory, and statistical physics. The discussion launched by the scientific thermometric community about the possibility of measuring the temperatures of objects, including colored bodies, using the temperatures of their radiation is not finished. Is the information about controlled objects carried by their radiation sufficient for temperature measurements? The positive and negative answers to this fundamental question have divided experts into two opposite camps. Recent achievements of spectral thermometry develop events in its favour and leave no hope for the skeptics. This article presents the results of investigations and developments in the field of spectral thermometry carried out by the authors in the Department of Thermometry and Physics-Chemical Investigations. The authors have many years of experience in the field of modern optical thermometry technologies. Innovative technologies of optical continuous temperature control have been developed: symmetric-wave and two-color compensative technologies, as well as linear, two-range, and parabolic technologies based on the obtained nonlinearity equation of the spectral emissivity distribution. The technologies are based on direct measurements of the physically substantiated radiation temperatures proposed by Prof. L. Zhukov, with subsequent calculation of the controlled-object temperature using these radiation temperatures and the corresponding mathematical models. The technologies significantly increase the metrological characteristics of continuous contactless and light-guide temperature control in energy, metallurgical, ceramic, glass, and other industries. For example, under the same conditions, the methodical errors of the proposed technologies are 2 times and 3-13 times smaller than the errors of known spectral and classical technologies, respectively. The innovative technologies enable quality production at the lowest possible resource costs, including energy costs.
More than 600 publications have been issued on the completed developments, including more than 100 domestic patents, as well as 34 patents in Australia, Bulgaria, Germany, France, Canada, the USA, Sweden, and Japan. The developments have been implemented in enterprises in the USA, as well as in Western Europe and Asia, including Germany and Japan.
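
As background to the spectral ratio ideas discussed above, a minimal sketch of classical two-color (spectral ratio) pyrometry in the Wien approximation is shown below. It is the textbook relation under a gray-body assumption, not the authors' symmetric-wave or compensative technologies, and the wavelengths and intensities are illustrative.

```python
# Minimal sketch: classical two-color (spectral ratio) pyrometry in the Wien
# approximation; NOT the authors' symmetric-wave/compensative technologies.
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def ratio_temperature(i1, i2, lam1, lam2, eps_ratio=1.0):
    """Temperature from spectral intensities i1, i2 at wavelengths lam1, lam2 (in m).
    eps_ratio = eps(lam1)/eps(lam2); 1.0 corresponds to the gray-body assumption."""
    num = C2 * (1.0 / lam2 - 1.0 / lam1)
    den = math.log(i1 / i2) - math.log(eps_ratio) - 5.0 * math.log(lam2 / lam1)
    return num / den

# Illustrative self-check against a Wien-law source at 1800 K:
lam1, lam2, T = 0.65e-6, 0.90e-6, 1800.0
wien = lambda lam: lam**-5 * math.exp(-C2 / (lam * T))
print(ratio_temperature(wien(lam1), wien(lam2), lam1, lam2))  # ~1800 K
```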

Keywords: emissivity, radiation temperature, object temperature, spectral thermometry

Procedia PDF Downloads 75
127 Magneto-Luminescent Biocompatible Complexes Based on Alloyed Quantum Dots and Superparamagnetic Iron Oxide Nanoparticles

Authors: A. Matiushkina, A. Bazhenova, I. Litvinov, E. Kornilova, A. Dubavik, A. Orlova

Abstract:

Magnetic-luminescent complexes based on superparamagnetic iron oxide nanoparticles (SPIONs) and semiconductor quantum dots (QDs) have been recognized as a new class of materials that have high potential in modern medicine. These materials can serve for theranostics of oncological diseases and also as targeting agents for drug delivery. They combine the qualities characteristic of magnetic nanoparticles, that is, magnetic controllability and the ability of local heating under an external magnetic field, with those of phosphors, whose luminescence makes, for example, early tumor imaging possible. The difficulty in creating such complexes is the energy transfer between particles, which quenches the luminescence of QDs in complexes with SPIONs. In this regard, a relatively new type of alloyed (CdₓZn₁₋ₓSeᵧS₁₋ᵧ)-ZnS QDs is used in our work. The presence of a sufficiently thick gradient semiconductor shell in alloyed QDs makes it possible to reduce the probability of energy transfer from QDs to SPIONs in complexes. At the same time, Förster Resonance Energy Transfer (FRET) is a perfect instrument to confirm the formation of complexes based on QDs and different types of energy acceptors. The formation of complexes in the aprotic dipolar solvent dimethyl sulfoxide is ensured by coordination of the carboxyl group of the QD-stabilizing molecule (L-cysteine) to the surface iron atoms of the SPIONs. An analysis of the photoluminescence (PL) spectra has shown that a sequential increase in the SPION concentration in the samples is accompanied by effective quenching of the luminescence of QDs. However, this alone does not yet confirm the formation of complexes, because the PL intensity of QDs can also decrease due to reabsorption of light by SPIONs. Therefore, the PL kinetics of QDs at different SPION concentrations were studied, which demonstrated that an increase in the SPION concentration is accompanied by a symbatic reduction in all characteristic PL decay times. This confirms FRET from QDs to SPIONs, which indicates QD/SPION complex formation rather than spontaneous aggregation of QDs, which is usually accompanied by a sharp increase in the percentage of the QD fraction with the shortest characteristic PL decay time. The complexes have been studied by magnetic circular dichroism (MCD) spectroscopy, which allows one to estimate the response of the magnetic material to an applied magnetic field and can also be useful to check for SPION aggregation. An analysis of the MCD spectra has shown that the complexes have zero residual magnetization, which is an important factor for use in biomedical applications, and do not contain SPION aggregates. Cell penetration, biocompatibility, and stability of QD/SPION complexes in cancer cells have been studied using the HeLa cell line. We have found that the complexes penetrate into HeLa cells and do not demonstrate a cytotoxic effect up to a 25 nM concentration. Our results clearly demonstrate that alloyed (CdₓZn₁₋ₓSeᵧS₁₋ᵧ)-ZnS QDs can be successfully used in complexes with SPIONs to reach new hybrid nanostructures, which combine bright luminescence for tumor imaging with magnetic properties for targeted drug delivery and magnetic hyperthermia of tumors. Acknowledgements: This work was supported by the Ministry of Science and Higher Education of the Russian Federation, goszadanie no. 2019-1080, and was financially supported by the Government of the Russian Federation, Grant 08-08.
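
The lifetime analysis described above is the standard way of quantifying FRET; for reference, the usual relations (generic FRET expressions, not parameters measured for this particular QD/SPION system) are:

```latex
E = 1 - \frac{\tau_{DA}}{\tau_{D}} = \frac{R_0^{6}}{R_0^{6} + r^{6}},
```

where τ_DA and τ_D are the donor (QD) PL lifetimes with and without the acceptor (SPION), R_0 is the Förster radius, and r is the donor-acceptor distance; the symbatic shortening of all characteristic decay times reported above corresponds to a nonzero transfer efficiency E.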

Keywords: alloyed quantum dots, magnetic circular dichroism, magneto-luminescent complexes, superparamagnetic iron oxide nanoparticles

Procedia PDF Downloads 90
126 An Integrated Approach to the Carbonate Reservoir Modeling: Case Study of the Eastern Siberia Field

Authors: Yana Snegireva

Abstract:

Carbonate reservoirs are known for their heterogeneity, resulting from various geological processes such as diagenesis and fracturing. These complexities may cause great challenges in understanding fluid flow behavior and predicting the production performance of naturally fractured reservoirs. The investigation of carbonate reservoirs is crucial, as many petroleum reservoirs are naturally fractured, which can be difficult due to the complexity of their fracture networks. This can lead to geological uncertainties, which are important for global petroleum reserves. The problem outlines the key challenges in carbonate reservoir modeling, including the accurate representation of fractures and their connectivity, as well as capturing the impact of fractures on fluid flow and production. Traditional reservoir modeling techniques often oversimplify fracture networks, leading to inaccurate predictions. Therefore, there is a need for a modern approach that can capture the complexities of carbonate reservoirs and provide reliable predictions for effective reservoir management and production optimization. The modern approach to carbonate reservoir modeling involves the utilization of the hybrid fracture modeling approach, including the discrete fracture network (DFN) method and implicit fracture network, which offer enhanced accuracy and reliability in characterizing complex fracture systems within these reservoirs. This study focuses on the application of the hybrid method in the Nepsko-Botuobinskaya anticline of the Eastern Siberia field, aiming to prove the appropriateness of this method in these geological conditions. The DFN method is adopted to model the fracture network within the carbonate reservoir. This method considers fractures as discrete entities, capturing their geometry, orientation, and connectivity. But the method has significant disadvantages since the number of fractures in the field can be very high. Due to limitations in the amount of main memory, it is very difficult to represent these fractures explicitly. By integrating data from image logs (formation micro imager), core data, and fracture density logs, a discrete fracture network (DFN) model can be constructed to represent fracture characteristics for hydraulically relevant fractures. The results obtained from the DFN modeling approaches provide valuable insights into the East Siberia field's carbonate reservoir behavior. The DFN model accurately captures the fracture system, allowing for a better understanding of fluid flow pathways, connectivity, and potential production zones. The analysis of simulation results enables the identification of zones of increased fracturing and optimization opportunities for reservoir development with the potential application of enhanced oil recovery techniques, which were considered in further simulations on the dual porosity and dual permeability models. This approach considers fractures as separate, interconnected flow paths within the reservoir matrix, allowing for the characterization of dual-porosity media. The case study of the East Siberia field demonstrates the effectiveness of the hybrid model method in accurately representing fracture systems and predicting reservoir behavior. The findings from this study contribute to improved reservoir management and production optimization in carbonate reservoirs with the use of enhanced and improved oil recovery methods.
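
As an illustration of the DFN idea described above (fractures treated as discrete entities with position, length, and orientation), a minimal, hypothetical 2D generator is sketched below; the distributions and parameter values are illustrative placeholders, not statistics calibrated from the Nepsko-Botuobinskaya image-log and core data.

```python
# Minimal, hypothetical 2D discrete fracture network generator: each fracture is a
# segment with a random center, a power-law length, and an orientation drawn from a
# dominant set. Parameter values are illustrative, not field-calibrated.
import numpy as np

rng = np.random.default_rng(0)

def generate_dfn(n_fractures, domain=1000.0, l_min=10.0, alpha=2.5,
                 mean_strike_deg=45.0, strike_std_deg=10.0):
    centers = rng.uniform(0.0, domain, size=(n_fractures, 2))
    # Power-law (Pareto) length distribution, common for natural fracture sets
    lengths = l_min * (1.0 - rng.uniform(size=n_fractures)) ** (-1.0 / (alpha - 1.0))
    strikes = np.deg2rad(rng.normal(mean_strike_deg, strike_std_deg, n_fractures))
    half = 0.5 * lengths[:, None] * np.c_[np.cos(strikes), np.sin(strikes)]
    # Each row: (x1, y1, x2, y2) endpoints of one fracture segment
    return np.hstack([centers - half, centers + half])

segments = generate_dfn(500)
print(segments.shape)  # (500, 4)
```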

Keywords: carbonate reservoir, discrete fracture network, fracture modeling, dual porosity, enhanced oil recovery, implicit fracture model, hybrid fracture model

Procedia PDF Downloads 54
125 Diffusion MRI: Clinical Application in Radiotherapy Planning of Intracranial Pathology

Authors: Pomozova Kseniia, Gorlachev Gennadiy, Chernyaev Aleksandr, Golanov Andrey

Abstract:

In clinical practice, and especially in stereotactic radiosurgery planning, the significance of diffusion-weighted imaging (DWI) is growing. This makes software that is capable of quickly processing and reliably visualizing diffusion data, and that is equipped with tools for their analysis for different tasks, increasingly necessary. We are developing the «MRDiffusionImaging» software in standard C++. The subject-specific part has been moved to separate class libraries and can be used on various platforms. The user interface uses Windows WPF (Windows Presentation Foundation), a technology for building Windows applications with access to all components of the .NET 5 or .NET Framework platform ecosystem. One of the important features is the use of a declarative markup language, XAML (eXtensible Application Markup Language), with which you can conveniently create, initialize and set properties of objects with hierarchical relationships. Graphics are generated using the DirectX environment. The MRDiffusionImaging software package has been implemented for processing diffusion magnetic resonance imaging (dMRI) data and allows loading and viewing images sorted by series. An algorithm for "masking" dMRI series based on T2-weighted images was developed using a deformable surface model to exclude tissues that are not related to the area of interest from the analysis. An algorithm for distortion correction using deformable image registration based on autocorrelation of local structure has been developed. The maximum voxel dimension was 1.03 ± 0.12 mm. In an elementary brain volume, the diffusion tensor is geometrically interpreted as an ellipsoid, which is an isosurface of the probability density of molecular diffusion. For the first time, non-parametric intensity distributions, neighborhood correlations, and inhomogeneities are combined in one algorithm for segmentation of white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF). A tool for calculating the mean diffusion coefficient and fractional anisotropy has been created, on the basis of which it is possible to build quantitative maps for solving various clinical problems. Functionality has been created that allows clustering and segmenting images to individualize the clinical volume of radiation treatment and further assess the response (median Dice score = 0.963 ± 0.137). White matter tracts of the brain were visualized using two algorithms: a deterministic one (fiber assignment by continuous tracking) and a probabilistic one using the Hough transform. The proposed algorithms test candidate curves in each voxel, assigning to each one a score computed from the diffusion data, and then select the curves with the highest scores as potential anatomical connections. White matter fibers were visualized using a Hough transform tractography algorithm. In the context of functional radiosurgery, it is possible to reduce the irradiation volume of the internal capsule receiving 12 Gy from 0.402 cc to 0.254 cc. «MRDiffusionImaging» will improve the efficiency and accuracy of diagnostics and stereotactic radiotherapy of intracranial pathology. We are developing software with integrated, intuitive support for processing and analysis, and for inclusion in the process of radiotherapy planning and the evaluation of its results.
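
The quantitative maps mentioned above follow from the eigenvalues of the diffusion tensor in each voxel. A minimal sketch of the standard mean diffusivity (MD) and fractional anisotropy (FA) formulas is given below; it is independent of the «MRDiffusionImaging» implementation, and the example eigenvalues are illustrative.

```python
# Minimal sketch: mean diffusivity (MD) and fractional anisotropy (FA) from the
# per-voxel diffusion tensor eigenvalues; standard DTI formulas, not the
# MRDiffusionImaging source code.
import numpy as np

def md_fa(eigvals):
    """eigvals: array of shape (..., 3) with the diffusion tensor eigenvalues."""
    lam = np.asarray(eigvals, dtype=float)
    md = lam.mean(axis=-1)                                        # mean diffusivity
    num = np.sqrt(((lam - md[..., None]) ** 2).sum(axis=-1))      # deviation from MD
    den = np.sqrt((lam ** 2).sum(axis=-1))
    fa = np.sqrt(1.5) * np.divide(num, den, out=np.zeros_like(den), where=den > 0)
    return md, fa

# Example: an isotropic voxel gives FA = 0, a stick-like voxel gives FA close to 1
print(md_fa([[1.0e-3, 1.0e-3, 1.0e-3],
             [1.7e-3, 0.2e-3, 0.2e-3]]))
```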

Keywords: diffusion-weighted imaging, medical imaging, stereotactic radiosurgery, tractography

Procedia PDF Downloads 54
124 Academia as Creator of Emerging, Innovative Communities of Practice and Learning

Authors: Francisco Julio Batle Lorente

Abstract:

The present paper aims at presenting a new category of role for academia: proactive creator/promoter of communities of practice in emerging areas of innovation. It is based on research among practitioners in three different areas: social entrepreneurship, alumni engaged in entrepreneurship and innovation, and digital nomads. The concept of CoP is related to an intentionally created space to share experiences and collectively reflect on the cases arising from practice. Such an endeavour is not explicitly contemplated in the literature on academic roles. The goal of the paper is to provide a framework for this function and to shed some light on the perceptions and priorities of members of emerging communities (78 alumni, 154 social entrepreneurs, and 231 digital nomads) regarding community, learning, engagement, and networking, areas in which the university can help and, by doing so, contribute to signalling the emerging area and creating new opportunities for academia. The research methodology was based on survey research, a specific type of field study that involves the collection of data from a sample of elements drawn from a well-defined population through the use of a questionnaire. It was considered that survey research might be valuable to the present project and help outline the utility of various study designs and future projects with the emerging communities that are the object of the investigation. Open questions were used for different topics, as well as the critical incident technique. A standard technique for survey sampling and questionnaire design was used. Finally, a procedure for pretesting questionnaires and for data collection was defined. The questionnaire was distributed by means of Google Forms. The results indicate that the members of emerging, innovative communities of practice and learning, such as the ones selected for this investigation, lack cohesion, inspiration, networking, opportunities for the creation of social capital, and opportunities for collaboration beyond their existing and close networks. The opportunities that arise for academia from proactively helping to articulate CoPs (and communities of learning) are related to key elements of any CoP/CoL: community construction approaches, technological infrastructure, benefits, participation issues and urgent challenges, trust, networking, technical ability/training/development, and collaboration. Beyond training, three other areas (networking, collaboration and urgent challenges) were the ones in which the contribution of universities to the communities was considered most interesting and workable by practitioners. The analysis of the responses to the open questions related to the perception of universities offers options for terra incognita to be explored by universities (signalling new areas, establishing broader collaborations with research, government, media and corporations, and attracting investment). Based on the findings from this research, there is some evidence that CoPs can offer a formal and informal method of professional and interprofessional development for members of any emerging and innovative community and can decrease social and professional isolation. The opportunity that it offers to academia can increase the entrepreneurial and engaged university identity. It also moves academia into a realm of civic confrontation of present and future challenges in a more proactive way.

Keywords: social innovation, new roles of academia, community of learning, community of practice

Procedia PDF Downloads 57
123 Correlation Analysis of Reactivity in the Oxidation of Para and Meta-Substituted Benzyl Alcohols by Benzimidazolium Dichromate in Non-Aqueous Media: A Kinetic and Mechanistic Aspects

Authors: Seema Kothari, Dinesh Panday

Abstract:

An observed correlation of the reaction rates with changes in the nature of the substituent present on one of the reactants often reveals the nature of the transition state. Selective oxidation of organic compounds in non-aqueous media is an important transformation in synthetic organic chemistry. Since inorganic chromates and dichromates are drastic oxidants and are generally insoluble in most organic solvents, a number of different chromium(VI) derivatives have been synthesized. Benzimidazolium dichromate (BIDC) is one of the recently reported Cr(VI) reagents; it is neither hygroscopic nor light sensitive and is, therefore, quite stable. Not many reports on the kinetics of oxidations by BIDC seem to be available in the literature. In the present investigation, the kinetics and mechanism of the oxidation of benzyl alcohol (BA) and a number of para- and meta-substituted benzyl alcohols by benzimidazolium dichromate (BIDC) in dimethyl sulphoxide are reported. The reactions were followed spectrophotometrically at 364 nm by monitoring the decrease in [BIDC] for up to 85-90% reaction, the temperature being constant. The observed oxidation product is the corresponding benzaldehyde. The reactions were first order with respect to both the alcohol and BIDC. The reactions are acid-catalyzed, and the dependence is of the form: kobs = a + b[H+]. The reactions thus follow both acid-dependent and acid-independent paths. The oxidation of [1,1-2H2]benzyl alcohol exhibited a substantial kinetic isotope effect (kH/kD = 6.20 at 298 K). This indicated the cleavage of an α-C-H bond in the rate-determining step. An analysis of the temperature dependence of the deuterium isotope effect showed that the loss of hydrogen proceeds through a concerted cyclic process. The rate of oxidation of BA was determined in 19 organic solvents. An analysis of the solvent effect by Swain’s equation indicated that though both the anion- and cation-solvating powers of the solvent contribute to the observed solvent effect, the role of cation solvation is the major one. The rates of the para and meta compounds, at 298 K, failed to exhibit a significant correlation in terms of Hammett or Brown's substituent constants. The rates were then subjected to analyses in terms of dual substituent parameter (DSP) equations. The rates of oxidation of the para-substituted benzyl alcohols show an excellent correlation with Taft's σI and σR(BA) values. However, the rates for the meta-substituted benzyl alcohols show an excellent correlation with σI and σR(0). The polar reaction constants are negative, indicating an electron-deficient transition state. Hence the overall mechanism is proposed to involve the formation of a chromate ester in a fast pre-equilibrium and then a decomposition of the ester in a subsequent slow step via a cyclic concerted symmetrical transition state, involving hydride-ion transfer, leading to the product. The first-order dependence on the alcohol may be accounted for in terms of the small value of the formation constant of the ester intermediate. Another reaction mechanism, accounting for the acid catalysis, involves the formation of protonated BIDC prior to the formation of an ester intermediate, which subsequently decomposes in a slow step leading to the product.
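
For reference, the single-parameter Hammett correlation and the Taft dual substituent parameter (DSP) treatment referred to above take the standard forms (generic expressions; the ρ values are the fitted reaction constants):

```latex
\log\frac{k}{k_{0}} = \rho\,\sigma
\qquad\text{and}\qquad
\log\frac{k}{k_{0}} = \rho_{I}\,\sigma_{I} + \rho_{R}\,\sigma_{R},
```

where σI and σR are the inductive and resonance substituent constants (σR(BA) for the para series and σR(0) for the meta series in this work); the negative ρ values found here point to an electron-deficient transition state.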

Keywords: benzimidazolium dichromate, benzyl alcohols, correlation analysis, kinetics, oxidation

Procedia PDF Downloads 316
122 Seismic History and Liquefaction Resistance: A Comparative Study of Sites in California

Authors: Tarek Abdoun, Waleed Elsekelly

Abstract:

Introduction: Liquefaction of soils during earthquakes can have significant consequences for the stability of structures and infrastructure. This study focuses on comparing two liquefaction case histories in California, namely the response of the Wildlife site in the Imperial Valley to the 2010 El-Mayor Cucapah earthquake (Mw = 7.2, amax = 0.15g) and the response of the Treasure Island Fire Station (F.S.) site in the San Francisco Bay area to the 1989 Loma Prieta Earthquake (Mw = 6.9, amax = 0.16g). Both case histories involve liquefiable layers of silty sand with non-plastic fines, similar shear wave velocities, low CPT cone penetration resistances, and groundwater tables at similar depths. Liquefaction charts based on field shear wave velocity measurements predict liquefaction at both sites. However, a significant difference arises in their pore pressure responses during the earthquakes. The Wildlife site did not experience liquefaction, as evidenced by piezometer data, while the Treasure Island F.S. site did liquefy during the shaking. Objective: The primary objective of this study is to investigate and understand the reasons for the contrasting pore pressure responses observed at the Wildlife site and the Treasure Island F.S. site despite their similar geological characteristics and predicted liquefaction potential. By conducting a detailed analysis of the similarities and differences between the two case histories, the study aims to identify the factors that contributed to the higher liquefaction resistance exhibited by the Wildlife site. Methodology: To achieve this objective, the geological and seismic data available for both sites were gathered and analyzed, followed by an analysis of their soil profiles, seismic characteristics, and liquefaction potential as predicted by shear-wave-velocity-based liquefaction charts. Furthermore, the seismic histories of both regions were examined. The number of previous earthquakes capable of generating significant excess pore pressures in each critical layer was assessed. This analysis involved estimating the total seismic activity that the Wildlife and Treasure Island F.S. critical layers experienced over time. In addition to historical data, centrifuge and large-scale experiments were conducted to explore the impact of prior seismic activity on liquefaction resistance. These findings served as supporting evidence for the investigation. Conclusions: The higher liquefaction resistance observed at the Wildlife site and other sites in the Imperial Valley can be attributed to preshaking by previous earthquakes. The Wildlife critical layer was subjected to a substantially greater number of seismic events capable of generating significant excess pore pressures over time compared to the Treasure Island F.S. layer. This crucial disparity arises from the difference in seismic activity between the two regions in the past century. In conclusion, this research sheds light on the complex interplay between geological characteristics, seismic history, and liquefaction behavior. It emphasizes the significant impact of past seismic activity on liquefaction resistance and can provide valuable insights for evaluating the stability of sandy sites in other seismic regions.
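For readers unfamiliar with the chart-based evaluations mentioned above, the demand side of such an evaluation is the cyclic stress ratio (CSR) of the standard simplified procedure of Seed and Idriss. The sketch below is illustrative only and is not part of the study: the layer depth and the total and effective vertical stresses are invented, and only the two amax values quoted in the abstract are taken from it.

```python
# Illustrative sketch: simplified cyclic stress ratio (CSR) at a hypothetical
# critical layer, using the Youd & Idriss (2001) form of the stress reduction
# coefficient rd. Soil parameters below are made up for demonstration.
def stress_reduction_rd(z_m):
    """Depth-dependent stress reduction coefficient rd."""
    if z_m <= 9.15:
        return 1.0 - 0.00765 * z_m
    return 1.174 - 0.0267 * z_m

def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """CSR = 0.65 * (a_max/g) * (sigma_v / sigma_v') * rd."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * stress_reduction_rd(depth_m)

# Hypothetical critical layer at 4 m depth; stresses in kPa
for site, a_max in (("Wildlife", 0.15), ("Treasure Island F.S.", 0.16)):
    csr = cyclic_stress_ratio(a_max, sigma_v=72.0, sigma_v_eff=45.0, depth_m=4.0)
    print(f"{site}: CSR = {csr:.3f}")
```

Comparing such a CSR against the cyclic resistance inferred from field shear wave velocity is what the liquefaction charts cited in the abstract encapsulate graphically.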

Keywords: liquefaction, case histories, centrifuge, preshaking

Procedia PDF Downloads 51
121 Cultural Cognition and Voting: Understanding Values and Perceived Risks in the Colombian Population

Authors: Andrea N. Alarcon, Julian D. Castro, Gloria C. Rojas, Paola A. Vaca, Santiago Ortiz, Gustavo Martinez, Pablo D. Lemoine

Abstract:

Recently, electoral results across many countries have been inconsistent with rational decision theory, which states that individuals make decisions by maximizing benefits and reducing risks. An alternative explanation has emerged: fear- and rage-driven voting has proved to be highly effective for political persuasion and mobilization. This phenomenon was evident in the 2016 elections in the United States, the 2006 elections in Mexico, the 1998 elections in Venezuela, and the 2004 elections in Bolivia. In Colombia, it occurred recently in the 2016 plebiscite for peace and the 2018 presidential elections. The aim of this study is to explain this phenomenon using cultural cognition theory, which refers to the psychological predisposition individuals have to believe that their own and their peers' behavior is correct and, therefore, beneficial to the entire society. Cultural cognition refers to the tendency of individuals to fit perceived risks and factual beliefs into group-shared values; the Cultural Cognition Worldview Scales (CCWS) measure cultural perceptions along two dimensions: individualism-communitarianism and hierarchy-egalitarianism. The latter refers to attitudes towards social dominance based on conspicuous and static characteristics (sex, ethnicity, or social class), while the former refers to attitudes towards a social ordering in which individuals are expected to guarantee their own wellbeing without society's or the government's intervention. A probabilistic national sample was obtained from different polls conducted by the consulting and public opinion company Centro Nacional de Consultoría. Sociodemographic data were obtained along with CCWS scores; a subjective measure of left-right ideological placement and vote intention for the 2019 mayoral elections were also included in the questionnaires. Finally, the question "In your opinion, what is the greatest risk Colombia is facing right now?" was included to identify perceived risks in the population. Preliminary results show that Colombians are largely distributed between hierarchical communitarians and egalitarian individualists (30.9% and 31.7%, respectively), and to a lesser extent between hierarchical individualists and egalitarian communitarians (19% and 18.4%, respectively). Males tended to be more hierarchical (p < .001) and communitarian (p = .009) than females. ANOVAs revealed statistically significant differences between groups (quadrants) for level of schooling, left-right ideological orientation, and stratum (p < .001 for all), and proportion differences revealed statistically significant differences between age groups (p < .001). Differences and distributions for vote intention and perceived risks are still being processed, and those results are yet to be analyzed. Results show that Colombians are differentially distributed among quadrants with regard to sociodemographic data and left-right ideological orientation. These preliminary results indicate that this study may shed some light on why Colombians vote the way they do, and future qualitative data will show the fears emerging from the identified values in the CCWS and the relation these have with vote intention.

Keywords: communitarianism, cultural cognition, egalitarianism, hierarchy, individualism, perceived risks

Procedia PDF Downloads 121
120 Effect of Amiodarone on the Thyroid Gland of Adult Male Albino Rat and the Possible Protective Role of Vitamin E Supplementation: A Histological and Ultrastructural Study

Authors: Ibrahim Abdulla Labib, Medhat Mohamed Morsy, Gamal Hosny, Hanan Dawood Yassa, Gaber Hassan

Abstract:

Amiodarone is a very effective drug, widely used for arrhythmia. Unfortunately, it has many side effects involving many organs, especially the thyroid gland. The current work was conducted to elucidate the effect of amiodarone on the thyroid gland and the possible protective role of vitamin E. Fifty adult male albino rats weighing 200-250 grams were divided into five groups of ten rats each. Group I (Control): five rats were sacrificed after three weeks and five rats were sacrificed after six weeks. Group II (Sham control): each rat received sunflower oil, the solvent of vitamin E, orally for three weeks. Group III (Amiodarone-treated): each rat received an oral dose of amiodarone, 150 mg/kg/day, for three weeks. Group IV (Recovery): each rat received amiodarone as in group III, then the drug was stopped for three weeks to evaluate recovery. Group V (Amiodarone + Vitamin E-treated): each rat received amiodarone as in group III followed by 100 mg/kg/day vitamin E orally for three weeks. The thyroid glands of the sacrificed animals were dissected out and prepared for light and electron microscopic studies. Amiodarone administration resulted in loss of the normal follicular architecture, as many follicles appeared shrunken, empty, or containing scanty pale colloid. Some follicles appeared lined by more than one layer of cells, while others showed interruption of their membranes. Masson's trichrome-stained sections showed increased collagen fibers between the thyroid follicles. Ultrastructurally, the apical border of the follicular cells showed few, irregular, detached microvilli. The nuclei of the follicular cells were mostly irregular with chromatin condensation. The cytoplasm of most follicular cells revealed numerous dilated rough endoplasmic reticulum profiles with numerous lysosomes. After three weeks of stopping amiodarone, the follicles were nearly regular in outline. Some follicles were filled with homogeneous eosinophilic colloid and others had shrunken pale colloid or were empty. A few follicles showed exfoliated cells in their lumina and others were still lined by more than one layer of follicular cells. Moderate amounts of collagen fibers were observed between the thyroid follicles. Ultrastructurally, many follicular cells had rounded euchromatic nuclei, a moderate number of lysosomes, and moderately dilated rough endoplasmic reticulum. However, a few follicular cells still showed irregular nuclei, dilated rough endoplasmic reticulum, and many cytoplasmic vacuoles. Administration of vitamin E with amiodarone for three weeks resulted in obvious structural improvement. Most of the follicles were lined by a single layer of cuboidal cells, and the lumina were filled with homogeneous eosinophilic colloid with very few vacuolations. The majority of follicular cells had rounded nuclei, with occasional ballooned cells and dark nuclei. Scanty collagen fibers were detected among the thyroid follicles. Ultrastructurally, most follicular cells exhibited rounded euchromatic nuclei, with a few short microvilli projecting into the colloid. Few lysosomes were also noticed. It was concluded that amiodarone administration leads to many adverse histological changes in the thyroid gland. Some of these changes are reversible during the recovery period; however, concomitant vitamin E administration with amiodarone has a major protective role in preventing many of these changes.

Keywords: amiodarone, recovery, ultrastructure, vitamin E

Procedia PDF Downloads 328
119 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine

Authors: D. Madhushanka, Y. Liu, H. C. Fernando

Abstract:

Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes, and atmospheric effects that affect people's lives and properties. Generally, the severity of a fire is calculated with the Normalized Burn Ratio (NBR) index. This is usually performed manually by comparing two images, one obtained before the fire and one afterward; the bitemporal difference of the preprocessed satellite images then gives the dNBR. The burnt area is classified as either unburnt (dNBR < 0.1) or burnt (dNBR >= 0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas using classification levels proposed by USGS and comprises seven classes. This procedure generates a burn severity report for an area chosen manually by the user. This study was carried out with the objective of producing an automated tool for the above-mentioned process, namely the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), a free cloud-computing platform for satellite data processing with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI was chosen to support regular burnt-area severity mapping using a medium spatial resolution sensor (15m). The tool uses machine learning classification techniques to identify burnt areas using NBR and to classify their severity over the user-selected extent and period automatically. Cloud coverage is one of the biggest concerns when fire severity mapping is performed. In WWSAT, based on GEE, we present a fully automatic workflow to aggregate cloud-free Sentinel-2 images for both pre-fire and post-fire image compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool. The tool includes a Graphical User Interface (GUI) to make it user-friendly. Its main advantage is the ability to obtain burn area severity over large extents and extended temporal periods. Two case studies were carried out to demonstrate the performance of the tool. The Blue Mountains National Park forest affected by the Australian fire season between 2019 and 2020 is used to describe the workflow of the WWSAT. At this site, more than 7809 km2 of burnt area was detected using Sentinel-2 data, giving an error below 6.5% when compared with the area detected in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt out, comprising high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests, Colorado, USA, affected by the Cameron Peak fire in 2020, were chosen for the second case study. It was found that around 983 km2 had burned out, comprising high severity (2.73%), moderate-high severity (1.57%), moderate-low severity (1.18%), and low severity (5.45%). These spots can also be detected through visual inspection made possible by the cloud-free images generated by WWSAT. The tool is cost-effective in calculating the burnt area, since satellite images are free and the cost of field surveys is avoided.
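As an illustration of the core computation the tool automates, the sketch below (not the authors' implementation) shows how a pre-/post-fire dNBR layer could be derived in the GEE Python API from cloud-filtered Sentinel-2 composites. The area of interest, dates, and cloud threshold are placeholder values, and the severity breakpoints are the commonly cited USGS dNBR levels rather than values taken from the abstract.

```python
import ee

ee.Initialize()  # assumes an authenticated Earth Engine account

# Placeholder area of interest and pre-/post-fire windows
aoi = ee.Geometry.Rectangle([150.0, -34.0, 151.0, -33.0])
pre_start, pre_end = '2019-09-01', '2019-10-31'
post_start, post_end = '2020-02-01', '2020-03-31'

def cloud_free_composite(start, end):
    """Median composite of low-cloud Sentinel-2 surface reflectance scenes."""
    return (ee.ImageCollection('COPERNICUS/S2_SR')
            .filterBounds(aoi)
            .filterDate(start, end)
            .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
            .median())

def nbr(image):
    """NBR = (NIR - SWIR2) / (NIR + SWIR2), i.e. Sentinel-2 bands B8 and B12."""
    return image.normalizedDifference(['B8', 'B12'])

dnbr = (nbr(cloud_free_composite(pre_start, pre_end))
        .subtract(nbr(cloud_free_composite(post_start, post_end)))
        .rename('dNBR'))

# Commonly cited USGS dNBR severity breakpoints (values below 0.1 are unburnt)
severity = (dnbr.gte(0.10).add(dnbr.gte(0.27))
                .add(dnbr.gte(0.44)).add(dnbr.gte(0.66))
                .rename('severity_class'))  # 0 = unburnt ... 4 = high severity
```

Reducing the severity image over the area of interest (for example with a pixel-area image and a grouped region reduction) would then yield per-class burnt areas of the kind reported in the case studies.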

Keywords: burnt area, burnt severity, fires, google earth engine (GEE), sentinel-2

Procedia PDF Downloads 204
118 New Territories: Materiality and Craft from Natural Systems to Digital Experiments

Authors: Carla Aramouny

Abstract:

Digital fabrication, between advancements in software and machinery, is pushing practice today towards more complexity in design, allowing for unparalleled explorations. It is giving designers the immediate capacity to translate their imagined objects into physical results. Yet at no time have questions of material knowledge become more relevant and crucial, as technological advancements approach a radical re-invention of the design process. As more and more designers look towards tactile crafts for material know-how, an interest in natural behaviors has also emerged, aiming to embed intelligence from nature into designed objects. Concerned with enhancing their immediate environment, designers today are pushing the boundaries of design by bringing in natural systems, materiality, and advanced fabrication as essential processes to produce active designs. New Territories, a yearly architecture and design course on digital design and materiality, allows students to explore processes of digital fabrication in intersection with natural systems and hands-on experiments. This paper will highlight the importance of learning from nature and from physical materiality in a digital design process, and how the simultaneous move between the digital and physical realms has become an essential design method. It will detail the work done over the course of three years, on themes of natural systems, crafts, concrete plasticity, and active composite materials. The aim throughout the course is to explore the design of products and active systems, be they modular facades, intelligent cladding, or adaptable seating, by embedding current digital technologies with an understanding of natural systems and a physical know-how of material behavior. From this aim, three main themes of inquiry have emerged through the varied explorations across the three years, each one approaching materiality and digital technologies through a different lens. The first theme involves crossing the study of natural systems, as precedents for intelligent formal assemblies, with traditional craft methods. The students worked on designing performative facade systems, starting from the study of relevant natural systems and a specific craft, and then using parametric modeling to develop their modular facades. The second theme looks at the cross of craft and digital technologies through form-finding techniques and elastic material properties, bringing flexible formwork into the digital fabrication process. Students explored concrete plasticity and behaviors with natural references as they worked on the design of an exterior seating installation using lightweight concrete composites and complex casting methods. The third theme brings in bio-composite material properties with additive fabrication and environmental concerns to create performative cladding systems. Students experimented with concrete composite materials, biomaterials, and clay 3D printing to produce different cladding and tiling prototypes that actively enhance their immediate environment. This paper will thus detail the work process carried out by the students under these three themes of inquiry, describing their material experimentation, digital and analog design methodologies, and their final results. It aims to shed light on the persisting importance of material knowledge as it intersects with advanced digital fabrication, and on the significance of learning from natural systems and biological properties to embed an active performance in today's design process.

Keywords: digital fabrication, design and craft, materiality, natural systems

Procedia PDF Downloads 107
117 Continued Usage of Wearable Fitness Technology: An Extended UTAUT2 Model Perspective

Authors: Rasha Elsawy

Abstract:

Aside from the rapid growth of global information technology and the Internet, another key trend is the swift proliferation of wearable technologies, whose future as an emerging technological revolution is very bright. Beyond this, individual continuance intention toward IT is an important area that has drawn academics' and practitioners' attention. The literature shows that continuance usage is an important concern that needs to be addressed for any technology to be advantageous and for consumers to succeed. However, consumers noticeably abandon their wearable devices soon after purchase, losing all subsequent benefits that can only be achieved through continued usage. Purpose: This thesis aims to develop an integrated model designed to explain and predict consumers' behavioural intention (BI) and continued use (CU) of wearable fitness technology (WFT), in order to identify the determinants of the CU of technology. The question therefore arises as to whether there are differences between technology adoption and post-adoption (CU) factors. Design/methodology/approach: The study employs the unified theory of acceptance and use of technology 2 (UTAUT2), which has the best explanatory power, as an underpinning framework, extending it with further factors along with user-specific personal characteristics as moderators. All items will be adapted from previous literature and slightly modified for the WFT/SW context. A longitudinal investigation will be carried out to examine the research model, wherein a survey will include the constructs involved in the conceptual model. A quantitative approach based on a questionnaire survey will collect data from existing wearable technology users. Data will be analysed using the structural equation modelling (SEM) method based on IBM SPSS Statistics and AMOS 28.0. Findings: The research findings will provide unique perspectives on user behaviour, intention, and actual continuance usage when accepting WFT. Originality/value: Unlike previous works, the current thesis comprehensively explores the factors that affect consumers' decisions to continue using wearable technology, as influenced by technological/utilitarian, affective, emotional, psychological, and social factors, along with the role of the proposed moderators. A novel research framework is proposed by extending the UTAUT2 model with additional contextual variables classified into performance expectancy, effort expectancy, social influence (societal pressure regarding body image), facilitating conditions, hedonic motivation (split into two concepts: perceived enjoyment and perceived device annoyance), price value, and habit-forming techniques, and by adding technology upgradability as a determinant of consumers' behavioural intention and continuance usage of information technology (IT). Further, drawing on personality trait theory, relevant user-specific personal characteristics (openness to technological innovativeness, conscientiousness in health, extraversion, neuroticism, and agreeableness) are proposed to moderate the research model. Thus, the present thesis aims to obtain a more convincing explanation that is expected to provide theoretical foundations for future emerging IT (such as wearable fitness devices) research from a behavioural perspective.
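To make the structural logic of the proposed model concrete, the sketch below shows a deliberately simplified single-equation stand-in for the structural portion of the planned SEM. It is not the thesis's SPSS/AMOS analysis and omits latent variables and the measurement model; the file name and column names are hypothetical.

```python
# Simplified stand-in for the structural part of the extended UTAUT2 model:
# ordinary least squares regression of behavioural intention (BI) on the
# proposed predictors. All names below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wft_survey.csv")  # hypothetical survey export, one row per respondent

formula = ("BI ~ performance_expectancy + effort_expectancy + social_influence"
           " + facilitating_conditions + perceived_enjoyment + device_annoyance"
           " + price_value + habit + upgradability")

bi_model = smf.ols(formula, data=df).fit()
print(bi_model.summary())

# Continued use (CU) measured at a later wave could then be regressed on BI and
# habit, mirroring the longitudinal design described above.
cu_model = smf.ols("CU ~ BI + habit", data=df).fit()
print(cu_model.params)
```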

Keywords: wearable technology, wearable fitness devices/smartwatches, continuance use, behavioural intention, upgradability, longitudinal study

Procedia PDF Downloads 83
116 Comparing Perceived Restorativeness in Natural and Urban Environment: A Meta-Analysis

Authors: Elisa Menardo, Margherita Pasini, Margherita Brondino

Abstract:

A growing body of empirical research from different areas of inquiry suggests that brief contact with natural environments restores mental resources. Attention Restoration Theory (ART) is the most widely used and empirically founded theory developed to explain why exposure to nature helps people to recover cognitive resources. It assumes that contact with nature allows people to free (and then recover) voluntary attention resources and thus allows them to recover from cognitive fatigue. However, it has been suggested that some people could obtain more cognitive benefit after exposure to urban environments. The objective of this study is to report the results of a meta-analysis of studies (peer-reviewed articles) comparing the restorativeness (the quality of being restorative) perceived in natural environments with that perceived in urban environments. This meta-analysis intended to estimate how much more restorative natural environments (forests, parks, boulevards) are perceived to be than urban ones (i.e., the magnitude of the difference in perceived restorativeness). Moreover, given the methodological differences between studies, it examined the potential moderating role of variables such as participants (students or other), instrument used (Perceived Restorativeness Scale or other), and procedure (in laboratory or in situ). The PsycINFO, PsycARTICLES, Scopus, SpringerLINK, and Web of Science online databases were used to identify all peer-reviewed articles on restorativeness published to date (k = 167). The reference sections of the obtained papers were examined for additional studies. Only 22 independent studies (with a total of 1371 participants) met the inclusion criteria (direct exposure to the environment, comparison of one outdoor environment with natural elements and one without, and restorativeness measured by a self-report scale) and were included in the meta-analysis. To estimate the average effect size, a random-effects model (restricted maximum-likelihood estimator) was used, because the studies included in the meta-analysis were conducted independently and used different methods in different populations, so no common effect size was expected. The presence of publication bias was checked using the trim-and-fill approach. Univariate moderator analyses (mixed-effects model) were run to determine whether the coded variables moderated the difference in perceived restorativeness. Results show that natural environments are perceived to be more restorative than urban environments, confirming from an empirical point of view what is now considered established knowledge in environmental psychology. The relevant information emerging from this study is the magnitude of the estimated average effect size, which is particularly high (d = 1.99) compared to effect sizes commonly observed in psychology. Significant heterogeneity between studies was found (Q(19) = 503.16, p < 0.001), and the variability between studies was very high (I2 [C.I.] = 96.97% [94.61 - 98.62]). Subsequent univariate moderator analyses were not significant: methodological differences (participants, instrument, and procedure) did not explain the variability between studies. Other methodological differences (e.g., research design, environment characteristics, lighting conditions) could explain this variability. In the meanwhile, the variability between studies could also be due not to methodological differences but to individual differences (age, gender, education level) and characteristics (connection to nature, environmental attitude). Further moderator analyses are in progress.
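A minimal numerical sketch of the kind of random-effects pooling reported above is given below. It is not the authors' analysis: it uses the simpler DerSimonian-Laird estimator of between-study variance rather than REML, and the effect sizes and sampling variances are invented for illustration.

```python
import numpy as np

# Hypothetical standardized mean differences (d) and their sampling variances
yi = np.array([2.4, 1.1, 0.8, 3.0, 1.7, 2.2])
vi = np.array([0.20, 0.15, 0.10, 0.30, 0.12, 0.18])

# Fixed-effect weights and Q statistic (heterogeneity)
w = 1.0 / vi
mu_fe = np.sum(w * yi) / np.sum(w)
Q = np.sum(w * (yi - mu_fe) ** 2)
df = len(yi) - 1
I2 = max(0.0, (Q - df) / Q) * 100          # % of total variance due to heterogeneity

# DerSimonian-Laird between-study variance and random-effects pooled estimate
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)
w_re = 1.0 / (vi + tau2)
mu_re = np.sum(w_re * yi) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"Q({df}) = {Q:.2f}, I2 = {I2:.1f}%, tau2 = {tau2:.3f}")
print(f"Random-effects d = {mu_re:.2f} "
      f"(95% CI {mu_re - 1.96 * se_re:.2f} to {mu_re + 1.96 * se_re:.2f})")
```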

Keywords: meta-analysis, natural environments, perceived restorativeness, urban environments

Procedia PDF Downloads 147
115 Understanding Beginning Writers' Narrative Writing with a Multidimensional Assessment Approach

Authors: Huijing Wen, Daibao Guo

Abstract:

Writing is thought to be the most complex facet of the language arts. Assessing writing is difficult and subjective, and few scientifically validated assessments exist. Research has proposed evaluating writing using a multidimensional approach, including both qualitative and quantitative measures of handwriting, spelling, and prose. Given that narrative writing has historically been a staple of literacy instruction in the primary grades and is one of the three major genres the Common Core State Standards (CCSS) require students to acquire starting in kindergarten, it is essential for teachers to understand how to measure beginning writers' writing development and the sources of writing difficulties through narrative writing. Guided by theoretical models of early written expression and using empirical data, this study examines ways teachers can enact a comprehensive approach to understanding beginning writers' narrative writing through three writing rubrics developed for a curriculum-based measurement (CBM). The goal is to help classroom teachers structure a framework for assessing early writing in primary classrooms. Participants in this study included 380 first-grade students from 50 classrooms in 13 schools in three school districts in a Mid-Atlantic state. Three writing tests were used to assess first graders' writing skills in relation to both transcription (i.e., handwriting fluency and spelling tests) and translational skills (i.e., a narrative prompt). First graders were asked to respond to a narrative prompt in 20 minutes. Grounded in theoretical models of early written expression and empirical evidence of key contributors to early writing, all written responses to the narrative prompt were coded in three ways for different dimensions of writing: length, quality, and genre elements. To measure the quality of the narrative writing, a traditional holistic rating rubric was developed by the researchers based on the CCSS and the general traits of good writing. Students' genre knowledge was measured using a separate analytic rubric for narrative writing. Findings showed that first graders had emerging and limited transcriptional and translational skills, with a nascent knowledge of genre conventions. The findings of the study provided support for the Not-So-Simple View of Writing, in that fluent written expression, measured by length, and other important linguistic resources, measured by the overall quality and genre knowledge rubrics, are fundamental in early writing development. Our study echoed previous research findings on children's narrative development. The study has practical classroom application as it informs writing instruction and assessment. It offers practical guidelines for classroom instruction by providing teachers with a better understanding of first graders' narrative writing skills and knowledge of genre conventions. Understanding students' narrative writing provides teachers with more insights into the specific strategies students might use during writing and their understanding of good narrative writing. Additionally, it is important for teachers to differentiate writing instruction given the individual differences shown by our multiple writing measures. Overall, the study sheds light on beginning writers' narrative writing, indicating the complexity of early writing development.

Keywords: writing assessment, early writing, beginning writers, transcriptional skills, translational skills, primary grades, simple view of writing, writing rubrics, curriculum-based measurement

Procedia PDF Downloads 47
114 Content Analysis of Gucci’s ‘Blackface’ Sweater Controversy across Multiple Media Platforms

Authors: John Mark King

Abstract:

Beginning on Feb. 7, 2019, the luxury brand Gucci was met with a firestorm on social media over fashion runway images of its black balaclava sweater, which covered the bottom half of the face and featured large, shiny, bright red lips surrounding the mouth cutout. Many observers on social media and in the news media noted that the garment resembled racist "blackface." This study aimed to measure how items were framed across multiple media platforms. The unit of analysis was any headline or lead paragraph published using the search terms "Gucci" and "sweater" or "jumper" or "balaclava" during the one-year timeframe of Feb. 7, 2019, to Feb. 6, 2020. The analysis was limited to headlines and lead paragraphs published in English and indexed in the LexisNexis database. Independent variables were the nation in which the item was published and the platform (newspapers, blogs, web-based publications, newswires, magazines, or broadcast news). Dependent variables were tone toward Gucci (negative, neutral, or positive), frame (blackface/racism/racist, boycott/celebrity boycott, sweater/balaclava/jumper/fashion, apology/pulling the product/diversity initiatives by Gucci, or frames unrelated to the controversy but still involving Gucci sweaters), and word count. Two coders achieved 100% agreement on all variables except tone (94.2%) and frame (96.3%). The search yielded 276 items published by 155 sources in 18 nations. The tone toward Gucci during this period was negative (69.9%). Items that were neutral (16.3%) or positive (13.8%) toward the brand were overwhelmingly related to other Gucci sweaters worn by celebrities or fashion reviews of other Gucci sweaters. The most frequent frame was apology/pulling the product/diversity initiatives by Gucci (35.5%). The tone was most frequently negative across all regions, including the Middle East (83.3% negative), Asia (81.8%), North America (76.6%), Australia/New Zealand (66.7%), and Europe (59.8%). Newspapers/magazines/newswires/broadcast news transcripts (72.4%) were more negative than blogs/web-based publications (63.6%). The most frequent frames used by newspapers/magazines/newswires/broadcast news transcripts were apology/pulling the product/diversity initiatives by Gucci (38.7%) and blackface/racism/racist (26.1%). Blogs/web-based publications most frequently used frames unrelated to the controversial garment but about other Gucci sweaters (42.9%) and apology/pulling the product/diversity initiatives by Gucci (27.3%). Sources in both Western nations (34.7%) and Eastern nations (47.1%) most frequently used the frame of apology/pulling the product/diversity initiatives by Gucci. Mean word count was higher for negative items (583.58) than for positive items (404.76). Items framed as blackface/racism/racist or boycott/celebrity boycott had a higher mean word count (668.97) than items framed as sweater/balaclava/jumper/fashion or apology/pulling the product/diversity initiatives by Gucci (498.22). The author concluded that during the year-long period, Gucci's image was likely damaged by the release of the garment at the center of the controversy, given the near-universally negative items published; however, the apology/pulling the product/diversity initiatives frame and items about other Gucci sweaters worn by celebrities or fashion reviews of other Gucci sweaters were the most common frames across multiple media platforms, which may have mitigated the damage to the brand.
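Intercoder agreement figures such as the 94.2% reported for tone can be computed, and supplemented with a chance-corrected statistic, in a few lines of code. The sketch below is illustrative only: the coder labels are invented, and Cohen's kappa is an addition not reported in the abstract.

```python
# Hypothetical tone codes from two coders for a handful of items
# (the real study coded 276 items; these labels are invented).
from sklearn.metrics import cohen_kappa_score

coder_a = ["negative", "negative", "neutral", "positive", "negative", "neutral"]
coder_b = ["negative", "negative", "neutral", "positive", "negative", "negative"]

percent_agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
kappa = cohen_kappa_score(coder_a, coder_b)   # chance-corrected agreement

print(f"Percent agreement: {percent_agreement:.1%}")
print(f"Cohen's kappa: {kappa:.2f}")
```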

Keywords: Blackface, branding, Gucci, media framing

Procedia PDF Downloads 123
113 Identification of Tangible and Intangible Heritage and Preparation of Conservation Proposal for the Historic City of Karanja Laad

Authors: Prachi Buche Marathe

Abstract:

Karanja Laad is a city located in the Vidarbha region of the state of Maharashtra, India. It has a wealth of tangible and intangible heritage in the form of monuments, precincts, groups of structures, festivals, and procession routes, much of which has been neglected and lost over time. Three different religions (Hinduism, Islam, and Jainism), the town's association with being the birthplace of Swami Nrusinha Saraswati, an exponent of the Datta Sampradaya sect, and the British colonial layer have shaped the culture and society of the place over the years. The architecture of Karanja Laad, a combination of all these historic layers, has enhanced its unique historic and cultural value. Karanja Laad is also a traditional trading town with a unique hybrid architectural style and has good potential for development as a tourist destination alongside its present image as a pilgrimage destination of the Datta Sampradaya. The aim of the research is to prepare a conservation proposal for the historic town along with a management framework. The objectives of the research are to study the evolution of the town, to identify its cultural resources along with the issues of the historic core, and to understand the Datta Sampradaya, the contribution of Saint Nrusinha Saraswati to the religious sect, and his association with Karanja as an important personality. The methodology of the research consists of site visits to Karanja, field surveys for documentation, and discussions and questionnaires with residents to establish the heritage and identify the potential and issues within the historic core, thereby establishing a case for conservation. Field surveys were conducted for town-level study of land use, open spaces, occupancy, ownership, traditional commodities and communities, infrastructure, streetscapes, and precinct activities during festival and non-festival periods. Building-level study includes establishing various typologies, such as residential, institutional, commercial, and religious structures, along with traditional infrastructure drawn from mythological references, such as waterbodies (kund), lakes, and wells. One of the main issues is the loss of the traditional footprint as well as the traditional open spaces due to new illegal encroachments and the lack of guidelines for new additions that would conserve the original fabric of the structures. Traditional commodities are being lost since there is no promotion of skills such as pottery and painting. Lavish bungalows like the Kannava mansion and the main temple Wada (birthplace of the saint) have huge potential to be developed as museums through adaptive re-use, which will, in turn, attract many visitors during festivals and boost the economy. Festival procession routes can be identified and a heritage walk developed so as to highlight the traditional features of the town. The overall study has resulted in a heritage map with 137 structures identified as potential heritage. The conservation proposal is worked out at the town, precinct, and building levels, with interventions such as developing construction guidelines for further development and establishing a heritage cell consisting of architects and engineers for the upliftment of the existing rich heritage of Karanja.

Keywords: built heritage, conservation, Datta Sampradaya, Karanja Laad, Swami Nrusinha Saraswati, procession route

Procedia PDF Downloads 133
112 Protected Cultivation of Horticultural Crops: Increases Productivity per Unit of Area and Time

Authors: Deepak Loura

Abstract:

The most contemporary method of producing horticultural crops both qualitatively and quantitatively is protected cultivation, or greenhouse cultivation, which has gained widespread acceptance in recent decades. Protected farming, commonly referred to as controlled environment agriculture (CEA), is extremely productive, land- and water-efficient, and environmentally friendly. The technology entails growing horticultural crops in a controlled environment where variables such as temperature, humidity, light, soil, water, fertilizer, etc. are adjusted to achieve optimal output and enable a consistent supply even during the off-season. Over the past ten years, protected cultivation of high-value crops and cut flowers has demonstrated remarkable potential. More and more agricultural and horticultural crop production systems are moving to protected environments as a result of the growing demand for high-quality products in global markets. By covering the crop, it is possible to control the macro- and microenvironment, enhancing plant performance and allowing for longer production times, earlier harvests, and higher yields of better quality. These shielding structures alter the environment of the plant while also offering protection from wind, rain, and insects. Protected farming opens up hitherto unexplored opportunities in agriculture as the liberalised economy and improved agricultural technologies advance. Typically, the revenues from fruit, vegetable, and flower crops are 4 to 8 times higher than those from other crops, and this profit can be multiplied if any of these high-value crops are cultivated in protected environments like greenhouses, net houses, tunnels, etc. Vegetable and cut flower post-harvest losses are extremely high (20–0%); however, sheltered growing techniques and year-round cropping can greatly minimize post-harvest losses and enhance yield by 5–10 times. Seasonality and weather have a big impact on the production of vegetables and flowers, and the variety of produce results in significant price and quality fluctuations for vegetables. Achieving a balance between year-round availability of vegetables and flowers, minimal environmental impact, and competitiveness is a significant challenge for the application of current technology in crop production. The future of agriculture lies in protected cultivation, since population growth is reducing the size of landholdings. Protected agriculture is a particularly profitable endeavor for small landholdings: farmers can build small greenhouses, net houses, nurseries, and low tunnel greenhouses to increase their income. The case for protected agriculture is also strengthened by the rise in biotic and abiotic stress factors. As a result of the greater productivity levels, these technologies are opening up opportunities not only for producers with larger landholdings but also for those with smaller holdings. Protected cultivation can be thought of as a kind of precise, forward-thinking, parallel agriculture that covers almost all aspects of farming, though it remains subject to further scrutiny regarding its technical applicability to local circumstances, farmer economics, and market economics.

Keywords: protected cultivation, horticulture, greenhouse, vegetable, controlled environment agriculture

Procedia PDF Downloads 54
111 Exploring the Ethics and Impact of Slum Tourism in Kenya: A Critical Examination on the Ethical Implications, Legalities and Beneficiaries of This Trade and Long-Term Implications to the Slum Communities

Authors: Joanne Ndirangu

Abstract:

Delving into the intricate landscape of slum tourism in Kenya, this study critically evaluates its ethical implications, legal frameworks, and beneficiaries. By examining the complex interplay between tourism operators, visitors, and slum residents, it seeks to uncover the long-term consequences for the communities involved. Through an exploration of ethical considerations, legal parameters, and the distribution of benefits, this examination aims to shed light on the broader socio-economic impacts of slum tourism in Kenya, particularly on the lives of those residing in these marginalized communities. The study assesses the ethical considerations surrounding slum tourism in Kenya, including the potential exploitation of residents and cultural sensitivities; examines the legal frameworks governing slum tourism in Kenya and evaluates their effectiveness in protecting the rights and well-being of slum dwellers; identifies the primary beneficiaries of slum tourism in Kenya, including tour operators, local businesses, and residents, and analyses the distribution of economic benefits; explores the long-term socio-economic impacts of slum tourism on the lives of residents, including changes in living conditions, access to resources, and community development; seeks to understand the motivations and perceptions of tourists participating in slum tourism in Kenya and to assess their role in shaping the industry's dynamics; and investigates the potential for sustainable and responsible forms of slum tourism that prioritize community empowerment, cultural exchange, and mutual respect. It also provides recommendations for policymakers, tourism stakeholders, and community organizations to promote ethical and sustainable practices in slum tourism in Kenya. The main contributions of researching slum tourism in Kenya include the following. Ethical awareness: by critically examining the ethical implications of slum tourism, the research can raise awareness among tourists, operators, and policymakers about the potential exploitation of marginalized communities. Beneficiary analysis: by identifying the primary beneficiaries of slum tourism, the research can inform discussions on the fair distribution of economic benefits and potential strategies for ensuring that local communities derive meaningful advantages from tourism activities. Socio-economic understanding: by exploring the long-term socio-economic impacts of slum tourism, the research can deepen understanding of how tourism activities affect the lives of slum residents, potentially informing policies and initiatives aimed at improving living conditions and promoting community development. Tourist perspectives: understanding the motivations and perceptions of tourists participating in slum tourism can provide valuable insights into consumer behaviour and preferences, informing the development of responsible tourism practices and marketing strategies. Promotion of responsible tourism: by providing recommendations for promoting ethical and sustainable practices in slum tourism, the research can contribute to the development of guidelines and initiatives aimed at fostering responsible tourism and minimizing negative impacts on host communities. Overall, the research can contribute to a more comprehensive understanding of slum tourism in Kenya and its broader implications, while also offering practical recommendations for promoting ethical and sustainable tourism practices.

Keywords: slum tourism, dark tourism, ethical tourism, responsible tourism

Procedia PDF Downloads 28
110 Developing and Testing a Questionnaire of Music Memorization and Practice

Authors: Diana Santiago, Tania Lisboa, Sophie Lee, Alexander P. Demos, Monica C. S. Vasconcelos

Abstract:

Memorization has long been recognized as an arduous and anxiety-evoking task for musicians, and yet it is an essential aspect of performance. Research shows that musicians are often not taught how to memorize. While the memorization and practice strategies of professionals have been studied, little research has examined how student musicians learn to practice and memorize music in different cultural settings. We present the process of developing and testing a questionnaire of music memorization and musical practice for student musicians in the UK and Brazil. The survey was developed for a cross-cultural research project aiming to examine how young orchestral musicians (aged 7–18 years) in different learning environments and cultures engage in instrumental practice and memorization. The questionnaire was developed by a UK/US/Brazil research team of music educators and performance science researchers. A pool of items was developed for each aspect of practice and memorization identified, based on the literature and personal experience, or adapted from existing questionnaires. Item development took into consideration the varying levels of cognitive and social development of the target populations, as well as the diverse target learning environments. Items were initially grouped according to a single underlying construct/behavior. The questionnaire comprised three sections: a demographics section, a section on practice (containing 29 items), and a section on memorization (containing 40 items). Next, the response process was considered, and a 5-point Likert scale ranging from 'always' to 'never', with a verbal label and an image assigned to each response option, was selected, following effective questionnaire design for children and youths. Finally, a pilot study was conducted with young orchestral musicians from diverse learning environments in Brazil and the United Kingdom. Data collection took place in either one-to-one or group settings to accommodate the participants. Cognitive interviews were utilized to establish response-process validity by confirming the readability and accurate comprehension of the questionnaire items or highlighting the need for item revision. Internal reliability was investigated by measuring the consistency of the item groups using Cronbach's alpha. The pilot study successfully relied on the questionnaire to generate data about the engagement in instrumental practice and memorization of young musicians of different levels and instruments, across different learning and cultural environments. Interaction analysis of the cognitive interviews undertaken with these participants, however, exposed the fact that certain items, and the response scale, could be interpreted in multiple ways. The questionnaire text was therefore revised accordingly. The low Cronbach's alpha scores of many item groups indicated another issue with the original questionnaire: its low level of internal reliability. Several reasons for this poor reliability can be suggested, including the issues with item interpretation revealed through interaction analysis of the cognitive interviews, the small number of participants (34), and the elusive nature of the construct in question. The revised questionnaire measures 78 specific behaviors or opinions. It can be seen to provide an efficient means of gathering information about the engagement of young musicians in practice and memorization on a large scale.
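As an aside on the reliability check described above, Cronbach's alpha for an item group can be computed directly from its definition. The sketch below is illustrative only, with invented responses on the 1-5 scale rather than data from the project.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an items matrix (rows = respondents, cols = items)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented responses from 8 participants to one 4-item group (1 = never ... 5 = always)
group = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 2],
    [1, 2, 2, 1],
    [4, 4, 5, 5],
    [2, 2, 3, 3],
    [3, 4, 4, 3],
]
print(f"Cronbach's alpha = {cronbach_alpha(group):.2f}")
```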

Keywords: cross-cultural, memorization, practice, questionnaire, young musicians

Procedia PDF Downloads 103
109 Bacterial Exposure and Microbial Activity in Dental Clinics during Cleaning Procedures

Authors: Atin Adhikari, Sushma Kurella, Pratik Banerjee, Nabanita Mukherjee, Yamini M. Chandana Gollapudi, Bushra Shah

Abstract:

Different sharp instruments, drilling machines, and high-speed rotary instruments are routinely used in dental clinics during dental cleaning. These cleaning procedures therefore release many oral microorganisms, including bacteria, into clinic air and may pose significant occupational bioaerosol exposure risks for dentists, dental hygienists, patients, and dental clinic employees. Two major goals of this study were to quantify volumetric airborne concentrations of bacteria and to assess overall microbial activity in this type of occupational environment. The study was conducted in several dental clinics of southern Georgia, and 15 dental cleaning procedures were targeted for sampling of airborne bacteria and testing of overall microbial activity in settled dust on clinic floors. For air sampling, a Biostage viable cascade impactor was utilized, which comprises an inlet cone, a precision-drilled 400-hole impactor stage, and a base that holds an agar plate (tryptic soy agar). A high-flow Quick-Take-30 pump connected to this impactor pulls airborne microorganisms at a 28.3 L/min flow rate through the holes (jets), where they are collected on the agar surface for approximately five minutes. After sampling, agar plates containing the samples were placed in an ice chest with blue ice, and the plates were incubated at 30±2°C for 24 to 72 h. Colonies were counted and converted to airborne concentrations (CFU/m3) after positive-hole correction. The most abundant bacterial colonies (selected by visual screening) were identified by PCR amplicon sequencing of 16S rRNA genes. To understand the overall microbial activity on clinic floors and estimate the general cleanliness of clinic surfaces during or after dental cleaning procedures, ATP levels were determined in swabbed dust samples collected from 10 cm2 floor surfaces. The concentration of ATP may indicate both the cell viability and the metabolic status of settled microorganisms in this situation. An ATP-measuring kit was used, which utilized the standard luciferin-luciferase bioluminescence reaction and a luminometer that quantified ATP levels as relative light units (RLU). Three air and dust samples were collected during each cleaning procedure: at the beginning, during cleaning, and immediately after the procedure was completed (n = 45). Concentrations at the beginning, during, and after dental cleaning procedures were 671±525, 917±1203, and 899±823 CFU/m3, respectively, for airborne bacteria, and 91±101, 243±129, and 139±77 RLU/sample, respectively, for ATP levels. The concentrations of bacteria were significantly higher than in typical indoor residential environments. Although an increasing trend for airborne bacteria was observed during cleaning, the data collected at the three time points were not significantly different (ANOVA: p = 0.38), probably due to the high standard deviations of the data. The ATP levels, however, demonstrated a significant difference (ANOVA: p < 0.05), indicating a significant change in microbial activity on floor surfaces during dental cleaning. The most common bacterial genera identified were Neisseria sp., Streptococcus sp., Chryseobacterium sp., Paenisporosarcina sp., and Vibrio sp., in order of frequency of occurrence. The study concluded that bacterial exposure in dental clinics could be a notable occupational biohazard, and appropriate respiratory protection for employees is urgently needed.
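The conversion from raw colony counts to the CFU/m3 values reported above can be sketched as follows. This is an illustrative implementation, not the authors' code: it applies the standard positive-hole correction for a 400-hole impactor and divides by the sampled air volume (28.3 L/min for five minutes); the raw colony count used is invented.

```python
def positive_hole_correction(raw_count, holes=400):
    """Expected number of organisms given 'raw_count' occupied holes on a
    multi-hole impactor (standard positive-hole correction)."""
    return holes * sum(1.0 / (holes - i) for i in range(raw_count))

def cfu_per_m3(raw_count, flow_l_per_min=28.3, minutes=5.0, holes=400):
    """Airborne concentration from a corrected plate count and sampled air volume."""
    corrected = positive_hole_correction(raw_count, holes)
    air_volume_m3 = flow_l_per_min * minutes / 1000.0   # litres -> cubic metres
    return corrected / air_volume_m3

# Invented example: 85 colonies counted on the plate
raw = 85
print(f"Corrected count: {positive_hole_correction(raw):.1f} CFU")
print(f"Airborne concentration: {cfu_per_m3(raw):.0f} CFU/m3")
```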

Keywords: bioaerosols, hospital hygiene, indoor air quality, occupational biohazards

Procedia PDF Downloads 291