Search results for: quality of environment
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16702

262 Fair Federated Learning in Wireless Communications

Authors: Shayan Mohajer Hamidi

Abstract:

Federated Learning (FL) has emerged as a promising paradigm for training machine learning models on distributed data without the need for centralized data aggregation. In the realm of wireless communications, FL has the potential to leverage the vast amounts of data generated by wireless devices to improve model performance and enable intelligent applications. However, the fairness aspect of FL in wireless communications remains largely unexplored. This abstract presents an idea for fair federated learning in wireless communications, addressing the challenges of imbalanced data distribution, privacy preservation, and resource allocation. Firstly, the proposed approach aims to tackle the issue of imbalanced data distribution in wireless networks. In typical FL scenarios, the distribution of data across wireless devices can be highly skewed, resulting in unfair model updates. To address this, we propose a weighted aggregation strategy that assigns higher importance to devices with fewer samples during the aggregation process. By incorporating fairness-aware weighting mechanisms, the proposed approach ensures that each participating device's contribution is proportional to its data distribution, thereby mitigating the impact of data imbalance on model performance. Secondly, privacy preservation is a critical concern in federated learning, especially in wireless communications where sensitive user data is involved. The proposed approach incorporates privacy-enhancing techniques, such as differential privacy, to protect user privacy during the model training process. By adding carefully calibrated noise to the gradient updates, the proposed approach ensures that the privacy of individual devices is preserved without compromising the overall model accuracy. 
Moreover, the approach considers the heterogeneity of devices in terms of computational capabilities and energy constraints, allowing devices to adaptively adjust the level of privacy preservation to strike a balance between privacy and utility. Thirdly, efficient resource allocation is crucial for federated learning in wireless communications, as devices operate under limited bandwidth, energy, and computational resources. The proposed approach leverages optimization techniques to allocate resources effectively among the participating devices, considering factors such as data quality, network conditions, and device capabilities. By intelligently distributing the computational load, communication bandwidth, and energy consumption, the proposed approach minimizes resource wastage and ensures a fair and efficient FL process in wireless networks. To evaluate the performance of the proposed fair federated learning approach, extensive simulations and experiments will be conducted. The experiments will involve a diverse set of wireless devices, ranging from smartphones to Internet of Things (IoT) devices, operating in various scenarios with different data distributions and network conditions. The evaluation metrics will include model accuracy, fairness measures, privacy preservation, and resource utilization. The expected outcomes of this research include improved model performance, fair allocation of resources, enhanced privacy preservation, and a better understanding of the challenges and solutions for fair federated learning in wireless communications. The proposed approach has the potential to revolutionize wireless communication systems by enabling intelligent applications while addressing fairness concerns and preserving user privacy.
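The fairness-aware weighted aggregation and calibrated-noise mechanism described above can be illustrated with a minimal sketch. The inverse-sample-count weighting rule and the Gaussian noise scale below are illustrative assumptions, not the authors' exact design:

```python
import numpy as np

def fair_aggregate(updates, sample_counts, noise_std=0.01, rng=None):
    """Aggregate client model updates with fairness-aware weights.

    Devices with fewer samples receive proportionally higher weight
    (inverse-count weighting, an illustrative choice), and Gaussian
    noise stands in for a differential-privacy mechanism applied to
    each client's contribution.
    """
    rng = np.random.default_rng(rng)
    counts = np.asarray(sample_counts, dtype=float)
    weights = (1.0 / counts) / np.sum(1.0 / counts)  # favor data-poor devices
    noisy = [u + rng.normal(0.0, noise_std, size=u.shape) for u in updates]
    return sum(w * u for w, u in zip(weights, noisy))

# Three devices with skewed data: 1000, 100 and 10 samples each.
updates = [np.full(4, 1.0), np.full(4, 2.0), np.full(4, 3.0)]
agg = fair_aggregate(updates, [1000, 100, 10], noise_std=0.0)
# With noise disabled, the 10-sample device dominates the weighted average.
```

In practice the noise standard deviation would be calibrated to the gradient sensitivity and the chosen privacy budget, rather than fixed as here.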

Keywords: federated learning, wireless communications, fairness, imbalanced data, privacy preservation, resource allocation, differential privacy, optimization

Procedia PDF Downloads 50
261 Analysis of Potential Associations of Single Nucleotide Polymorphisms in Patients with Schizophrenia Spectrum Disorders

Authors: Tatiana Butkova, Nikolai Kibrik, Kristina Malsagova, Alexander Izotov, Alexander Stepanov, Anna Kaysheva

Abstract:

Relevance. The genetic risk of developing schizophrenia is determined by two factors: single nucleotide polymorphisms and gene copy number variations. The search for serological markers for early diagnosis of schizophrenia is driven by the fact that the first five years of the disease are accompanied by significant biological, psychological, and social changes. It is during this period that pathological processes are most amenable to correction. The aim of this study was to analyze single nucleotide polymorphisms (SNPs) that are hypothesized to potentially influence the onset and development of the endogenous process. Materials and Methods. We analyzed 73 single nucleotide polymorphism variants. The study included 48 patients undergoing inpatient treatment at "Psychiatric Clinical Hospital No. 1" in Moscow, comprising 23 females and 25 males. Inclusion criteria: - Patients aged 18 and above. - Diagnosis according to ICD-10: F20.0, F20.2, F20.8, F21.8, F25.1, F25.2. - Voluntary informed consent from patients. Exclusion criteria included: - The presence of concurrent somatic or neurological pathology, neuroinfections, epilepsy, organic central nervous system damage of any etiology, and regular use of medication. - Substance abuse and alcohol dependence. - Women who were pregnant or breastfeeding. Clinical and psychopathological assessment was complemented by psychometric evaluation using the PANSS scale at the beginning and end of treatment. The duration of observation during therapy was 4-6 weeks. Total DNA extraction was performed using the QIAamp DNA kit. Blood samples were processed on the Illumina HiScan and genotyped for 652,297 markers on the Infinium Global Screening Array-24 v2.0; imputation was performed with the IMPUTE2 program using parameters Ne=20,000 and k=90. Additional filtration was performed based on INFO>0.5 and genotype probability>0.5. Quality control of the obtained DNA was conducted using agarose gel electrophoresis, with each tested sample having a volume of 100 µL.
Results. It was observed that several SNPs exhibited gender dependence. We identified groups of single nucleotide polymorphisms with a membership of 80% or more in either the female or male gender. These SNPs included rs2661319, rs2842030, rs4606, rs11868035, rs518147, rs5993883, and rs6269. Another noteworthy finding was the limited combination of SNPs sufficient to manifest clinical symptoms leading to hospitalization. Among all 48 patients, each of whom was analyzed for deviations in 73 SNPs, it was discovered that the combination of SNPs involved in the manifestation of pronounced clinical symptoms of schizophrenia was 19±3 out of 73 possible. In this study, the frequency of occurrence of single nucleotide polymorphisms also varied. The most frequently observed SNPs were rs4849127 (in 90% of cases), rs1150226 (86%), rs1414334 (75%), rs10170310 (73%), rs2857657, and rs4436578 (71%). Conclusion. Thus, the results of this study provide additional evidence that these genes may be associated with the development of schizophrenia spectrum disorders. However, we cannot rule out the hypothesis that these polymorphisms may be in linkage disequilibrium with other functionally significant polymorphisms that may actually be involved in schizophrenia spectrum disorders. It has been shown that missense SNPs by themselves are likely not causative of the disease but are in strong linkage disequilibrium with non-functional SNPs that may indeed contribute to disease predisposition.
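The post-imputation filtration step described in the methods (keeping only calls with INFO > 0.5 and genotype probability > 0.5) can be sketched as a simple filter. The field names and values below are illustrative toy data, not the study's:

```python
# Toy imputation output; field names and values are illustrative assumptions.
calls = [
    {"snp": "rs4849127", "info": 0.92, "gen_prob": 0.99},
    {"snp": "rs1150226", "info": 0.61, "gen_prob": 0.55},
    {"snp": "rs1414334", "info": 0.48, "gen_prob": 0.90},  # fails INFO filter
    {"snp": "rs9999999", "info": 0.87, "gen_prob": 0.45},  # fails probability filter
]

# Keep only calls passing both quality thresholds, as in the abstract.
kept = [c["snp"] for c in calls if c["info"] > 0.5 and c["gen_prob"] > 0.5]
```

In a real pipeline these thresholds would be applied to the full IMPUTE2 output files rather than an in-memory list.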

Keywords: gene polymorphisms, genotyping, single nucleotide polymorphisms, schizophrenia

Procedia PDF Downloads 45
260 Interactive Virtual Patient Simulation Enhances Pharmacology Education and Clinical Practice

Authors: Lyndsee Baumann-Birkbeck, Sohil A. Khan, Shailendra Anoopkumar-Dukie, Gary D. Grant

Abstract:

Technology-enhanced education tools are being rapidly integrated into health programs globally. These tools provide an interactive platform for students and can be used to deliver topics in various modes including games and simulations. Simulations are of particular interest to healthcare education, where they are employed to enhance clinical knowledge and help to bridge the gap between theory and practice. Simulations will often assess competencies for practical tasks, yet limited research examines the effects of simulation on student perceptions of their learning. The aim of this study was to determine the effects of an interactive virtual patient simulation for pharmacology education and clinical practice on student knowledge, skills and confidence. Ethics approval for the study was obtained from Griffith University Research Ethics Committee (PHM/11/14/HREC). The simulation was intended to replicate the pharmacy environment and patient interaction. The content was designed to enhance knowledge of proton-pump inhibitor pharmacology, role in therapeutics and safe supply to patients. The tool was deployed into a third-year clinical pharmacology and therapeutics course. A number of core practice areas were examined, including the competency domains of questioning, counselling, referral and product provision. Baseline measures of student self-reported knowledge, skills and confidence were taken prior to the simulation using a specifically designed questionnaire. A more extensive questionnaire was deployed following the virtual patient simulation, which also included measures of student engagement with the activity. A quiz assessing student factual and conceptual knowledge of proton-pump inhibitor pharmacology and related counselling information was also included in both questionnaires. Sixty-one students (response rate >95%) from two cohorts (2014 and 2015) participated in the study. Chi-square analyses were performed and data analysed using Fisher's exact test. 
Results demonstrate that student knowledge, skills and confidence within the competency domains of questioning, counselling, referral and product provision show improvement following the implementation of the virtual patient simulation. Statistically significant (p<0.05) improvement occurred in ten of the possible twelve self-reported measurement areas. The greatest magnitude of improvement occurred in the area of counselling (student confidence p<0.0001). Student confidence in all domains (questioning, counselling, referral and product provision) showed a marked increase. Student performance in the quiz also improved, demonstrating a 10% improvement overall for pharmacology knowledge and clinical practice following the simulation. Overall, 85% of students reported the simulation to be engaging and 93% of students felt the virtual patient simulation enhanced learning. The data suggest that the interactive virtual patient simulation developed for clinical pharmacology and therapeutics education enhanced students' knowledge, skills and confidence, with respect to the competency domains of questioning, counselling, referral and product provision. These self-reported measures appear to translate to learning outcomes, as demonstrated by the improved student performance in the quiz assessment item. Future research on education using virtual simulation should seek to incorporate modern quantitative measures of student learning and engagement, such as eye tracking.
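The abstract reports significance testing with Fisher's exact test. A minimal standard-library sketch of the one-sided version, applied to hypothetical before/after confidence counts (the study's own contingency tables are not given), might look like this; `scipy.stats.fisher_exact` provides the conventional two-sided test:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    hypergeometric lower-tail probability of observing a count in cell
    (0, 0) as small as or smaller than a, with the margins held fixed."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    lo = max(0, col1 - (c + d))  # smallest feasible cell (0, 0) count
    tail = sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(lo, a + 1))
    return tail / comb(n, col1)

# Hypothetical counts: confident vs. not confident, before and after the
# simulation (61 students per row; the numbers are illustrative only).
p = fisher_one_sided(20, 41, 48, 13)
# A p-value below 0.05 would indicate a significant shift in confidence.
```

For small samples Fisher's exact test is preferred over the chi-square approximation, which is presumably why the study applied it alongside chi-square analyses.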

Keywords: clinical simulation, education, pharmacology, simulation, virtual learning

Procedia PDF Downloads 303
259 Explanation of Sentinel-1 Sigma 0 by Sentinel-2 Products in Terms of Crop Water Stress Monitoring

Authors: Katerina Krizova, Inigo Molina

Abstract:

The ongoing climate change affects various natural processes, resulting in significant changes in human life. Since there is still a growing human population on the planet with more or less limited resources, agricultural production has become an issue, and a satisfactory food supply has to be ensured. To achieve this, agriculture is being studied in a very wide context. The main aim here is to increase primary production on a spatial unit while consuming as few resources as possible. In Europe, nowadays, the staple issue stems from significant changes in the spatial and temporal distribution of precipitation. Recent growing seasons have been considerably affected by long drought periods that have led to quantitative as well as qualitative yield losses. To cope with such conditions, new techniques and technologies are being implemented in current practices. However, behind assessing the right management, there is always a set of necessary information about plot properties that needs to be acquired. Remotely sensed data have gained attention in recent decades since they provide spatial information about the studied surface based on its spectral behavior. A number of space platforms have been launched carrying various types of sensors. Spectral indices based on calculations with reflectance in visible and NIR bands are nowadays quite commonly used to describe crop status. However, this kind of data still has a staple limitation: cloudiness. The relatively frequent revisits of modern satellites cannot be fully utilized since the information is hidden under the clouds. Therefore, microwave remote sensing, which can penetrate the atmosphere, is on the rise today. The scientific literature describes the potential of radar data to estimate staple soil (roughness, moisture) and vegetation (LAI, biomass, height) properties. 
Although all of these are highly demanded in terms of agricultural monitoring, crop moisture content is the most important parameter in terms of agricultural drought monitoring. The idea behind this study was to exploit the unique combination of SAR (Sentinel-1) and optical (Sentinel-2) data from one provider (ESA) to describe potential crop water stress during the dry cropping season of 2019 at six winter wheat plots in the central Czech Republic. For the period of January to August, Sentinel-1 and Sentinel-2 images were obtained and processed. Sentinel-1 imagery carries information about C-band backscatter in two polarisations (VV, VH). Sentinel-2 was used to derive vegetation properties (LAI, FCV, NDWI, and SAVI) as support for the Sentinel-1 results. For each term and plot, summary statistics were computed, including precipitation data and soil moisture content obtained through data loggers. Results were presented as summary layouts of VV and VH polarisations and related plots describing other properties. All plots behaved in accordance with the basic SAR backscatter equation. Considering the needs of practical applications, vegetation moisture content may be assessed using SAR data to predict the drought impact on final product quality and yields independently of cloud cover over the studied scene.
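Of the Sentinel-2 indices named above, NDWI and SAVI follow standard formulas. A small sketch under the common band conventions (Gao's NDWI from NIR and SWIR reflectance, e.g. Sentinel-2 bands B8A and B11; SAVI with the usual soil factor L = 0.5) is shown below; the exact recipe the authors used is an assumption:

```python
import numpy as np

def ndwi(nir, swir):
    """Normalized Difference Water Index (Gao 1996), sensitive to
    vegetation water content; for Sentinel-2, commonly bands B8A and B11."""
    return (nir - swir) / (nir + swir)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index (Huete 1988), with the usual
    soil-brightness correction factor L = 0.5."""
    return (nir - red) / (nir + red + L) * (1.0 + L)

# Toy per-pixel surface reflectances (0-1) for a small plot.
nir = np.array([0.35, 0.40])
swir = np.array([0.20, 0.15])
red = np.array([0.08, 0.05])
w = ndwi(nir, swir)  # higher values -> wetter canopy
v = savi(nir, red)
```

In a full workflow these indices would be computed per plot from atmospherically corrected (Level-2A) Sentinel-2 reflectance and then compared with the Sentinel-1 VV/VH backscatter statistics.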

Keywords: precision agriculture, remote sensing, Sentinel-1, SAR, water content

Procedia PDF Downloads 97
258 Heritage, Cultural Events and Promises for Better Future: Media Strategies for Attracting Tourism during the Arab Spring Uprisings

Authors: Eli Avraham

Abstract:

The Arab Spring was widely covered in the global media and the number of Western tourists traveling to the area began to fall. The goal of this study was to analyze which media strategies marketers in Middle Eastern countries chose to employ in their attempts to repair the negative image of the area in the wake of the Arab Spring. Several studies have been published concerning image-restoration strategies of destinations during crises around the globe; however, these strategies were not part of an overarching theory, conceptual framework or model from the fields of crisis communication and image repair. The conceptual framework used in the current study was the ‘multi-step model for altering place image’, which offers three types of strategies: source, message and audience. Three research questions were used: 1. What public relations crisis techniques and advertising campaign components were used? 2. What media policies and relationships with the international media were adopted by Arab officials? 3. Which marketing initiatives (such as cultural and sports events) were promoted? This study is based on qualitative content analysis of four types of data: (1) advertising components (slogans, visuals and text); (2) press interviews with Middle Eastern officials and marketers; (3) official media policy adopted by government decision-makers (e.g. boycotting or arresting newspeople); and (4) marketing initiatives (e.g. organizing heritage festivals and cultural events). The data was located in three channels from December 2010, when the events started, to September 30, 2013: (1) Internet and video-sharing websites: YouTube and Middle Eastern countries' national tourism board websites; (2) News reports from two international media outlets, The New York Times and Ha’aretz; these are considered quality newspapers that focus on foreign news and tend to criticize institutions; (3) Global tourism news websites: eTurboNews and ‘Cities and countries branding’. 
Using the ‘multi-step model for altering place image,’ the analysis reveals that Middle Eastern marketers and officials used three kinds of strategies to repair their countries' negative image: 1. Source (cooperation and media relations; complying, threatening and blocking the media; and finding alternatives to the traditional media); 2. Message (ignoring, limiting, narrowing or reducing the scale of the crisis; acknowledging the negative effect of an event’s coverage and assuring a better future; promoting multiple facets and exhibitions and softening the ‘hard’ image; hosting spotlight sporting and cultural events; spinning liabilities into assets; geographic dissociation from the Middle East region; ridiculing the existing stereotype); and 3. Audience (changing the target audience by addressing others; emphasizing similarities and relevance to specific target audiences). It appears that dealing with their image problems will continue to be a challenge for officials and marketers of Middle Eastern countries until the region stabilizes and its regional conflicts are resolved.

Keywords: Arab spring, cultural events, image repair, Middle East, tourism marketing

Procedia PDF Downloads 259
257 Cost-Conscious Treatment of Basal Cell Carcinoma

Authors: Palak V. Patel, Jessica Pixley, Steven R. Feldman

Abstract:

Introduction: Basal cell carcinoma (BCC) is the most common skin cancer worldwide and requires substantial resources to treat. When choosing between indicated therapies, providers consider their associated adverse effects, efficacy, cosmesis, and function preservation. The patient’s tumor burden, infiltrative risk, and risk of tumor recurrence are also considered. Treatment cost is often left out of these discussions. This can lead to financial toxicity, which describes the harm and quality-of-life reductions inflicted by high care costs. Methods: We studied the guidelines set forth by the American Academy of Dermatology for the treatment of BCC. A PubMed literature search was conducted to identify the costs of each recommended therapy. We discuss costs alongside treatment efficacy and side-effect profile. Results: Surgical treatment for BCC can be cost-effective if the appropriate treatment is selected for the presenting tumor. Curettage and electrodesiccation can be used in low-grade, low-recurrence tumors in aesthetically unimportant areas. The benefits of cost-conscious care are not likely to be outweighed by the risks of poor cosmesis or tumor return ($471 for a BCC of the cheek). When tumor burden is limited, Mohs micrographic surgery (MMS) offers better cure rates and lower recurrence rates than surgical excision (SE), with comparable costs (MMS $1263; SE $949). Surgical excision with permanent sections may be indicated when tumor burden is more extensive or if molecular testing is necessary. The utility of surgical excision with frozen sections, which costs substantially more than MMS without comparable outcomes, is less clear (SE with frozen sections $2334-$3085). Less data exists on non-surgical treatments for BCC. These techniques cost less, but recurrence risk is high. Side effects of nonsurgical treatment are limited to local skin reactions, and cosmesis is good. 
Cryotherapy, 5-FU, and MAL-PDT are all more affordable than surgery, but high recurrence rates increase the risk of secondary financial and psychosocial burden (recurrence rates 21-39%; cost $100-270). Radiation therapy offers better clearance rates than other nonsurgical treatments but is associated with similar recurrence rates and a significantly larger financial burden ($2591-$3460 for BCC of the cheek). Treatments for advanced or metastatic BCC are extremely costly, but few patients require their use, and the societal cost burden remains low. Vismodegib and sonidegib have good response rates but substantial side effects, and therapy should be combined with multidisciplinary care and palliative measures. Expert review has found sonidegib to be the less expensive and more efficacious option (vismodegib $128,358; sonidegib $122,579). Platinum therapy, while not FDA-approved, is also effective but expensive (~$91,435). Immunotherapy offers a new line of treatment in patients intolerant of hedgehog inhibitors ($683,061). Conclusion: Dermatologists working within resource-compressed practices and with resource-limited patients must prudently manage the healthcare dollar. Surgical therapies for BCC offer the lowest risk of recurrence at the most reasonable cost. Non-surgical therapies are more affordable, but high recurrence rates increase the risk of secondary financial and psychosocial burdens. Treatments for advanced BCC are incredibly costly, but the low incidence means the overall cost to the system is low.

Keywords: nonmelanoma skin cancer, basal cell skin cancer, squamous cell skin cancer, cost of care

Procedia PDF Downloads 93
256 Application of the Carboxylate Platform in the Consolidated Bioconversion of Agricultural Wastes to Biofuel Precursors

Authors: Sesethu G. Njokweni, Marelize Botes, Emile W. H. Van Zyl

Abstract:

An alternative strategy to the production of bioethanol is examining the degradability of biomass in a natural system such as the rumen of mammals. This anaerobic microbial community has higher cellulolytic activities than microbial communities from other habitats and degrades cellulose to produce volatile fatty acids (VFA), methane and CO₂. VFAs have the potential to serve as intermediate products for electrochemical conversion to hydrocarbon fuels. In vitro mimicking of this process would be more cost-effective than bioethanol production as it does not require chemical pre-treatment of biomass, a sterile environment or added enzymes. The strategies of the carboxylate platform and co-cultures of a bovine ruminal microbiota from cannulated cows were combined in order to investigate and optimize the bioconversion of agricultural biomass (apple and grape pomace, citrus pulp, sugarcane bagasse and triticale straw) to high-value VFAs as intermediates for biofuel production in a consolidated bioprocess. Optimisation of reactor conditions was investigated using five different ruminal inoculum concentrations: 5, 10, 15, 20 and 25%, with pH fixed at 6.8 and temperature at 39 °C. The ANKOM 200/220 fiber analyser was used to analyse in vitro neutral detergent fiber (NDF) disappearance of the feedstuffs. Fresh and cryo-frozen (5% DMSO and 50% glycerol for 3 months) rumen cultures were tested for retention of fermentation capacity and durability in 72 h fermentations in 125 ml serum vials, using a FURO medical solutions 6-valve gas manifold to induce anaerobic conditions. Fermentation of apple pomace, triticale straw, and grape pomace showed no significant difference (P > 0.05) in the effect of 15 and 20% inoculum concentrations on the total VFA yield. 
However, high-performance liquid chromatographic separation within the two inoculum concentrations showed a significant difference (P < 0.05) in acetic acid yield, with the 20% inoculum concentration being the optimum at 4.67 g/l. NDF disappearance of 85% in 96 h and a total VFA yield of 11.5 g/l in 72 h (A/P ratio = 2.04) for apple pomace indicated that it was the optimal feedstuff for this process. The NDF disappearance and VFA yield of DMSO-stored (82% NDF disappearance and 10.6 g/l VFA) and glycerol-stored (90% NDF disappearance and 11.6 g/l VFA) rumen also showed statistically similar degradability of apple pomace, with no treatment-effect differences compared to a fresh rumen control (P > 0.05). This lack of treatment effects indicates that preservation did not alter fermentation performance: retention of fermentation capacity within the preserved cultures suggests that their metabolic characteristics were preserved due to the resilience and redundancy of the rumen culture. The extent of degradability and VFA yield achieved within a short span was similar to that of other carboxylate platforms with longer run times. This study shows that, by virtue of faster rates and a high extent of degradability, small-scale alternatives to bioethanol such as rumen microbiomes and other natural fermenting microbiomes can be employed to enhance the feasibility of large-scale biofuel implementation.

Keywords: agricultural wastes, carboxylate platform, rumen microbiome, volatile fatty acids

Procedia PDF Downloads 106
255 The Return of the Rejected Kings: A Comparative Study of Governance and Procedures of Standards Development Organizations under the Theory of Private Ordering

Authors: Olia Kanevskaia

Abstract:

Standardization has been in the limelight of numerous academic studies. Typically described as ‘any set of technical specifications that either provides or is intended to provide a common design for a product or process’, standards do not only set quality benchmarks for products and services, but also spur competition and innovation, resulting in advantages for manufacturers and consumers. Their contribution to globalization and technology advancement is especially crucial in the Information and Communication Technology (ICT) and telecommunications sector, which is also characterized by weaker state regulation and expert-based rule-making. Most of the standards developed in that area are interoperability standards, which allow technological devices to establish ‘invisible communications’ and to ensure their compatibility and proper functioning. This type of standard supports a large share of our daily activities, ranging from traffic coordination by traffic lights to the connection to Wi-Fi networks, transmission of data via Bluetooth or USB and building the network architecture for the Internet of Things (IoT). A large share of ICT standards is developed in specialized voluntary platforms, commonly referred to as Standards Development Organizations (SDOs), which gather experts from various industry sectors, private enterprises, governmental agencies and academia. The institutional architecture of these bodies can vary from semi-public bodies, such as the European Telecommunications Standards Institute (ETSI), to industry-driven consortia, such as the Internet Engineering Task Force (IETF). The past decades witnessed a significant shift of standard setting to those institutions: while operating independently from state regulation, they offer a rather informal setting, which enables fast-paced standardization and places technical supremacy and flexibility of standards above other considerations. 
Although technical norms and specifications developed by such nongovernmental platforms are not binding, they appear to create significant regulatory impact. In the United States (US), private voluntary standards can be used by regulators to achieve their policy objectives; in the European Union (EU), compliance with harmonized standards developed by voluntary European Standards Organizations (ESOs) can grant a product a free-movement pass. Moreover, standards can de facto manage the functioning of the market when other regulatory alternatives are not available. Hence, by establishing (potentially) mandatory norms, SDOs assume regulatory functions commonly exercised by States and shape their own legal order. The purpose of this paper is threefold: First, it attempts to shed some light on SDOs’ institutional architecture, focusing on private, industry-driven platforms and comparing their regulatory frameworks with those of formal organizations. Drawing upon the relevant scholarship, the paper then discusses the extent to which the formulation of technological standards within SDOs constitutes a private legal order, operating in the shadow of governmental regulation. Ultimately, this contribution seeks to advise whether state intervention in industry-driven standard setting is desirable, and whether the increasing regulatory importance of SDOs should be addressed in legislation on standardization.

Keywords: private order, standardization, standard-setting organizations, transnational law

Procedia PDF Downloads 138
254 Ecological Planning Method of Reclamation Area Based on Ecological Management of Spartina Alterniflora: A Case Study of Xihu Harbor in Xiangshan County

Authors: Dong Yue, Hua Chen

Abstract:

The study region, Xihu Harbor in Xiangshan County, Ningbo City, is located on the central coast of Zhejiang Province. To address wave dissipation, the Ningbo government first introduced Spartina alterniflora in the 1980s. In the 1990s, S. alterniflora spread so rapidly that it has now created a ‘grassland’ in the sea. It has become the most important invasive plant of China’s coastal tidal flats. Although S. alterniflora has some ecological and economic functions, it has also brought a series of hazards. It has ecological hazards in many aspects, including biomass and biodiversity, hydrodynamic force and sedimentation processes, nutrient cycling of tidal flats, and the succession sequence of soil and plants. In engineering terms, it causes problems of poor drainage and channel blocking. Economically, the hazard is mainly reflected in the threat to the aquaculture industry. The purpose of this study is to explore an ecological, feasible and economical way to manage S. alterniflora and use the land formed by it, taking Xihu Harbor in Xiangshan County as a case. Comparison, mathematical modeling, and qualitative and quantitative analysis are utilized to conduct the study. The main outcomes are as follows. A series of S. alterniflora management methods, including the combination of mechanical cutting and hydraulic reclamation, waterlogging, herbicide, and biological substitution, were compared from three standpoints: ecology, engineering and economy. It is inferred that the combination of mechanical cutting and hydraulic reclamation ranks among the top S. alterniflora management methods. This combination means using large-scale mechanical equipment, such as a large screw seagoing dredger, to excavate the S. alterniflora together with its roots and mud. The mix of mud and grass is then transported by pipelines and deposited on the nearby coastal tidal zone, where it can cushion the silt of the tidal zone to form land. 
However, as man-made land by the coast, the reclamation area’s ecological sensitivity is quite high, and it faces a high possibility of flooding. Therefore, the reclamation area is subject to many requirements, including ones on location, specific scope, water surface rate, direction of the main watercourse, site of the water gate, and the ratio of ecological land to urban construction land. These requirements all became an important basis when the planning was being made. The water system planning, green space system planning, road structure and land use all need to accommodate the ecological requirements. Besides, the profits from the formed land are the managing project’s source of funding, so efficient land utilization is another consideration in the planning. It is concluded that, for managing a large area of S. alterniflora, the combination of mechanical cutting and hydraulic reclamation is an ecological, feasible and economical method. The planning of the reclamation area should fully respect the natural environment and possible disasters. Planning that makes land use efficient, reasonable and ecological will promote the development of the area’s city construction.

Keywords: ecological management, ecological planning method, reclamation area, Spartina alterniflora, Xihu harbor

Procedia PDF Downloads 289
253 Functions and Challenges of New County-Based Regional Plan in Taiwan

Authors: Yu-Hsin Tsai

Abstract:

A new, mandated county regional plan system has been initiated nationwide in Taiwan since 2010, with its role situated between the policy-led cross-county regional plan and the blueprint-led city plan. This new regional plan contains both urban and rural areas in a single plan, which provides a more complete planning territory, i.e., the city region within the county’s jurisdiction, to be executed and managed effectively by the county government. However, the full picture of its functions and characteristics, compared with other levels of plans, is still not entirely clear; nor is it clear which planning goals and issues can be most appropriately dealt with at this spatial scale. In addition, the extent to which sustainability ideals and measures to cope with climate change have been incorporated is unclear. Based on the above issues, this study aims to clarify the roles of the county regional plan, to analyze the extent to which its measures address sustainability, climate change and the forecasted population decline, and to identify the success factors and issues faced in the planning process. The methodology includes a literature review, plan quality evaluation, and interviews with officials of the central and local governments and with the urban planners involved, covering all 23 counties in Taiwan. The preliminary results show, first, that growth-management-related policies have been widely implemented and are expected to be effective, including incorporating resource capacity to determine the maximum population for the city region as a whole, developing an overall vision of an urban growth boundary for the whole city region, prioritizing infill development, and favoring the use of building land within urbanized areas over rural areas to accommodate urban growth. Secondly, planning-oriented zoning is adopted in urban areas, while demand-oriented planning permission is applied in rural areas with designated plans.
Third, public participation has evolved to the next level, overseeing all of the government’s planning and review processes, owing to decreasing trust in the government and the development of public forums on the internet. Next, fertile agricultural land is preserved to maintain the food self-sufficiency goal out of national security concerns. More adaptation-based than mitigation-based measures have been applied to cope with global climate change. Finally, better land use and transportation planning, in terms of avoiding the development of rail transit stations and corridors in rural areas, is promoted. Even though many promising, prompt measures have been adopted, challenges remain. First, overall urban density, which likely affects the success of the urban growth boundary and the use of rural agricultural land, has not been incorporated, possibly due to implementation difficulties. Second, land-use-related measures to mitigate climate change seem less clear and hence are less employed. Smart decline has not drawn enough attention as a response to the predicted population decrease in the next decade. Then, some reluctance by county governments to implement the county regional plan can be vaguely observed, possibly because limits have been set on further development of agricultural land and sensitive areas. Finally, resolving the issue of existing illegal factories on agricultural land remains the most challenging dilemma.

Keywords: city region plan, sustainability, global climate change, growth management

Procedia PDF Downloads 323
252 Blended Learning Instructional Approach to Teach Pharmaceutical Calculations

Authors: Sini George

Abstract:

Active learning pedagogies are valued for their success in increasing 21st-century learners’ engagement, developing transferable skills like critical thinking or quantitative reasoning, and creating deeper and more lasting educational gains. 'Blended learning' is an active learning pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively in the subject matter. This project aimed to develop a blended learning instructional approach to teaching concepts around pharmaceutical calculations to year 1 pharmacy students. The wrong dose, strength or frequency of a medication accounts for almost a third of medication errors in the NHS; therefore, progression to year 2 requires a 70% pass in this calculation test, in addition to the standard progression requirements. Many students had struggled to achieve this requirement in the past. It was also challenging to teach these concepts to a large class (>130 students) with mixed mathematical abilities, especially within a traditional didactic lecture format. Therefore, short screencasts with the lecturer’s voice-over were provided in advance of a total of four teaching sessions (two hours per session), incorporating the core content of each session and talking through how to approach the calculations in order to model metacognition. Links to the screencasts were posted on the learning management system. Viewership counts were used to confirm that the students were indeed accessing and watching the screencasts on schedule. In the classroom, students had to apply the knowledge learned beforehand to a series of increasingly difficult questions.
Students were then asked to create a question in group settings (two students per group) and to discuss the questions created by their peers within their groups, to promote deep conceptual learning. Students were also given time for a question-and-answer period to seek clarification on the concepts covered. Student responses to this instructional approach and their test grades were collected. After collecting and organizing the data, statistical analysis was carried out to calculate binomial statistics for the two data sets, namely the test grades of students who received blended learning instruction and those of students who received instruction in a standard lecture format, in order to compare the effectiveness of each type of instruction. Student responses and performance data on the assessment indicate that learning the content through the blended learning instructional approach led to higher levels of student engagement, satisfaction, and more substantial learning gains. The blended learning approach enabled each student to learn how to do calculations at their own pace, freeing class time for interactive application of this knowledge. Although time-consuming for an instructor to implement, the findings of this research demonstrate that the blended learning instructional approach improves student academic outcomes and represents a valuable method for incorporating active learning methodologies while still maintaining broad content coverage. Satisfaction with this approach was high, and we are currently developing more pharmacy content for delivery in this format.
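The binomial comparison of the two cohorts' pass rates described above can be sketched as a standard two-proportion z-test. The cohort sizes and pass counts below are hypothetical placeholders for illustration, not the study's actual data.

```python
import math

def two_proportion_ztest(pass_a, n_a, pass_b, n_b):
    """Two-proportion z-test comparing the pass rates of two cohorts."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    p_pool = (pass_a + pass_b) / (n_a + n_b)  # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical cohorts: blended-learning vs. standard-lecture pass counts
z, p = two_proportion_ztest(pass_a=118, n_a=130, pass_b=97, n_b=130)
```

A small p-value here would indicate that the difference in pass rates between the two instruction formats is unlikely to be due to chance alone.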

Keywords: active learning, blended learning, deep conceptual learning, instructional approach, metacognition, pharmaceutical calculations

Procedia PDF Downloads 145
251 Eco-Politics of Infrastructure Development in and Around Protected Areas in Kenya: The Case of Nairobi National Park

Authors: Teresa Wanjiru Mbatia

Abstract:

On 7 June 2011, the Minister of Roads in Kenya announced the proposed construction of a major highway, known as the southern bypass, to run along the northern border of Nairobi National Park. The following day, 8 June 2011, the chairperson of the Friends of Nairobi National Park (FoNNAP) posted a protest statement on the organization’s website under the heading ‘Nairobi Park is Not a Cake’, alerting its members and conservation groups with the aim of gaining support for the campaign against the government’s intention to hive off a section of the park for road construction. This was the first of a series of events that culminated in a campaign by conservationists and some other members of the public against the government’s plan to hive off sections of the park to build road and railway infrastructure in or around it. Together with other non-state actors, mostly non-governmental organisations in conservation/environment and tourism businesses, FoNNAP issued a series of further statements on social, print and electronic media against road and railway construction. This paper examines the strategies, outcomes and interests of the actors involved in opposing or proposing the development of transport infrastructure in and around Nairobi National Park. Specifically, the objectives were to analyse: (1) the arguments put forward by the eco-warriors to protest infrastructure development; (2) the background and interests of the eco-warriors; (3) the needs, interests and opinions of ordinary citizens on transport infrastructure development, particularly in and around the urban nature reserve; and (4) the final outcomes of the eco-politics surrounding infrastructure development in and around Nairobi National Park. The methodological approach used was environmental history and the social construction of nature.
The study collected qualitative data using four main approaches: grounded theory, narratives, case studies and a phenomenological approach. The information collected was analysed using critical discourse analysis. The major finding of the study is that, under the guise of “public participation,” influential non-state actors have the capacity to perpetuate socio-spatial inequalities by curtailing the majority’s access to common public goods. A case in point is how the efforts of powerful conservationists, environmentalists and tourism businesspersons repeatedly stalled the construction of much-needed road and railway infrastructure through litigation in lengthy environmental court processes, involving injunctions and stop orders against the government bodies in charge. Moreover, powerful non-state actors were found to have formed informal, and sometimes formal, coalitions with politicians pursuing selfish interests, which serves to deepen exclusionary practices at the expense of the common good. The study concludes that non-state actors, mostly composed of certain types of elites (NGOs, business communities, politicians and privileged socio-cultural groups), have used participatory policies to advance their own interests at the expense of the majority whom they claim to represent. These practices are traced to the historically unjust social, political and economic forces involved in the production of space in Nairobi.

Keywords: eco-politics, exclusion, infrastructure, Nairobi national park, non-state actors, protests

Procedia PDF Downloads 154
250 A Computer-Aided System for Tooth Shade Matching

Authors: Zuhal Kurt, Meral Kurt, Bilge T. Bal, Kemal Ozkan

Abstract:

Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, the shade matching procedure was implemented through dentists’ visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers and digital image analysis systems are used for instrumental shade selection. Instrumental devices have the advantage that readings are quantifiable and can be obtained rapidly, simply, objectively and precisely. However, these devices have noticeable drawbacks. For example, the translucent structure and irregular surfaces of teeth lead to measurement defects with these devices. Inconsistencies may also arise between results acquired by devices with different measurement principles. It is therefore necessary to search for new methods for the dental shade matching process. The digital camera, a computer-aided system device, has developed rapidly in recent years, and advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging, a much cheaper process than the use of traditional contact-type color measurement devices. Digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken of teeth capture both the morphology and the color texture of the teeth. In recent decades, a method was proposed for comparing the color of shade tabs captured by a digital camera using color features; it showed that visual and computer-aided shade matching systems should be used in combination. However, commonly used feature extraction techniques are based on shape description and do not use color information.
Color, however, is mostly experienced as an essential property in depicting and extracting features from the objects in the world around us. When a local feature descriptor is extended by concatenating a color descriptor with the shape descriptor, the resulting descriptor is effective for visual object recognition and classification tasks. Because the color descriptor is used in combination with a shape descriptor, it does not need to contain any spatial information, which leads us to use local histograms. This local color histogram method remains reliable under photometric changes, geometrical changes and variations in image quality. Therefore, in the proposed method, local color feature extraction methods are used to extract color features, and the Scale Invariant Feature Transform (SIFT) descriptor is used for shape description. After the combination of these descriptors, the state-of-the-art descriptor named Color-SIFT is used in this study. Finally, the image feature vectors obtained from a quantization algorithm are fed to classifiers such as k-Nearest Neighbor (kNN), Naive Bayes or Support Vector Machines (SVM) to determine the label(s) of the visual object category or the matching shade. In this study, SVMs are used as the classifiers for color determination and shade matching. Experimental results of this method are compared with those of other recent studies. It is concluded that the proposed method is a remarkable development in computer-aided tooth shade determination.
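The idea of concatenating a spatial-information-free color descriptor with a shape descriptor can be sketched as follows. This is a minimal illustration, not the authors' exact Color-SIFT pipeline: a per-channel local color histogram is concatenated with a SIFT-like histogram of gradient orientations, and the patch data is synthetic.

```python
import numpy as np

def color_shape_descriptor(patch_rgb, bins=8):
    """Concatenate a local color histogram (no spatial information)
    with a SIFT-like gradient-orientation histogram for one patch.
    patch_rgb: (H, W, 3) uint8 array."""
    # color part: per-channel intensity histogram, L1-normalized
    color_hist = np.concatenate([
        np.histogram(patch_rgb[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    color_hist /= color_hist.sum() + 1e-9

    # shape part: magnitude-weighted histogram of gradient orientations
    gray = patch_rgb.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)          # orientations in [-pi, pi]
    shape_hist, _ = np.histogram(ori, bins=bins, range=(-np.pi, np.pi),
                                 weights=mag)
    shape_hist = shape_hist.astype(float)
    shape_hist /= shape_hist.sum() + 1e-9

    return np.concatenate([color_hist, shape_hist])

# Hypothetical 32x32 tooth-image patch (random data for illustration)
patch = np.random.default_rng(0).integers(0, 256, (32, 32, 3)).astype(np.uint8)
vec = color_shape_descriptor(patch)
```

In a full pipeline, such descriptors would be computed at many keypoints, quantized into a vocabulary, and the resulting feature vectors fed to an SVM, as the abstract describes.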

Keywords: classifiers, color determination, computer-aided system, tooth shade matching, feature extraction

Procedia PDF Downloads 405
249 Sandstone-Hosted Copper Mineralization in Oligo-Miocene Red-Bed Strata, Chalpo, North East of Iran: Constraints from Lithostratigraphy, Lithogeochemistry, Mineralogy, Mass Change Technique, and REE Distribution

Authors: Mostafa Feiz, Hossein Hadizadeh, Mohammad Safari

Abstract:

The Chalpo copper area is located in northeastern Iran, within the structural zone of central Iran and the Sabzevar back-arc basin. This sedimentary basin, filled with clastic Oligo-Miocene sediments, is named the Nasr-Chalpo-Sangerd (NCS) basin. The sedimentary layers in this basin originated mainly from Upper Cretaceous ophiolitic rocks and intermediate to mafic post-ophiolitic volcanic rocks, deposited as a nonconformity. The mineralized sandstone layers in the Chalpo area include leached zones (5 to 8 m thick) and mineralized lenses 0.5 to 0.7 m thick. Ore minerals include primary sulfides, such as chalcocite, chalcopyrite and pyrite, as well as secondary minerals, such as covellite, digenite, malachite and azurite, formed in three stages comprising primary, concurrent and supergene mineralization. The main factors controlling mineralization in this area are the permeability of the host rocks, the presence of fault zones acting as conduits for oxidized copper-bearing solutions, and significant amounts of plant fossils, which create a reducing environment for the deposition of the mineralized layers. Statistical studies of the copper layers indicate that Ag, Cd, Mo and S have the strongest positive correlation with Cu, whereas TiO₂, Fe₂O₃, Al₂O₃, Sc, Tm, Sn and the REEs correlate negatively. Mass-change calculations on the copper-bearing layers relative to the primary sandstone layers indicate that Pb, As, Cd, Te and Mo are enriched in the mineralized zones, whereas SiO₂, TiO₂, Fe₂O₃, V, Sr and Ba are depleted. The combination of geological, stratigraphic and geochemical studies suggests that the copper may have originated from the underlying red strata, which contained hornblende, plagioclase, biotite, alkali feldspar and labile minerals.
Dehydration and hydrolysis of these minerals during diagenesis caused the leaching of copper and associated elements by circulating fluids, forming an oxidizing hydrothermal solution. Copper and silver in this oxidizing solution may have moved upwards through the basin fault zones and been deposited in reducing environments in sandstone layers with abundant organic matter. Copper in these solutions was probably carried by chloride complexes. The meeting of oxidizing and reducing solutions caused the deposition of Cu and Ag, whereas some elements that are stable in oxidizing environments (e.g., Fe₂O₃, TiO₂, SiO₂, REEs) become unstable under reducing conditions; the copper-bearing sandstones in the study area are therefore depleted in these elements as a result of leaching. The results indicate that during the mineralization stage, LREEs and MREEs were depleted, while Cu, Ag and S were enriched. Based on field evidence, the best model for the precipitation of the copper-sulfide minerals appears to be the circulation of connate fluids in the red-bed strata, produced by diagenetic processes, encountering reduced facies formed earlier by abundant fossil-plant debris in the sandstones.
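A common way to perform the mass-change calculations mentioned above is an isocon-style mass balance (after Grant, 1986), in which one component assumed immobile during alteration scales the altered composition back to the precursor. The sketch below assumes TiO₂ is immobile (consistent with its reported depletion behaving conservatively), and the compositions are invented for illustration, not the study's data.

```python
def isocon_mass_change(c_fresh, c_altered, immobile="TiO2"):
    """Percent gain/loss of each component relative to the precursor,
    assuming the `immobile` component was conserved during alteration."""
    scale = c_fresh[immobile] / c_altered[immobile]
    return {el: 100.0 * (scale * c_altered[el] - c_fresh[el]) / c_fresh[el]
            for el in c_fresh if el != immobile}

# Hypothetical precursor vs. mineralized-sandstone compositions
# (TiO2 and SiO2 in wt%, Cu and Ag in ppm)
fresh = {"TiO2": 0.60, "Cu": 40.0, "Ag": 0.05, "SiO2": 68.0}
mineralized = {"TiO2": 0.66, "Cu": 9000.0, "Ag": 4.0, "SiO2": 60.0}
changes = isocon_mass_change(fresh, mineralized)
```

With these invented numbers, Cu and Ag show strong enrichment and SiO₂ a net loss, mirroring the enrichment/depletion pattern the abstract reports for the mineralized zones.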

Keywords: Chalpo, oligo-miocene red beds, sandstone-hosted copper mineralization, mass change, LREEs, MREEs

Procedia PDF Downloads 40
248 Delicate Balance between Cardiac Stress and Protection: Role of Mitochondrial Proteins

Authors: Zuzana Tatarkova, Ivana Pilchova, Michal Cibulka, Martin Kolisek, Peter Racay, Peter Kaplan

Abstract:

Introduction: Normal functioning of mitochondria is crucial for cardiac performance. Mitochondria undergo mitophagy and biogenesis, and mitochondrial proteins are subject to extensive post-translational modifications. The state of mitochondrial homeostasis reflects overall cellular fitness and longevity. Perturbed mitochondria produce less ATP, release greater amounts of reactive molecules, and are more prone to apoptosis. Mitochondrial turnover is therefore an integral aspect of quality control, in which dysfunctional mitochondria are selectively eliminated through mitophagy. Currently, the progressive deterioration of physiological functions is seen as the accumulation of modified/damaged proteins with limited regenerative ability, together with disturbance of the affected protein-protein communication, throughout aging in myocardial cells. Methodologies: Our study used immunohistochemistry and biochemical methods (spectrophotometry, western blotting and immunodetection), as well as the more sophisticated 2D electrophoresis and mass spectrometry, to evaluate protein-protein interactions and specific post-translational modifications. Results and Discussion: The mitochondrial stress response to reactive species was evaluated in terms of electron transport chain (ETC) complexes, redox-active molecules, and their possible communication. Protein-protein interaction analysis revealed a strong linkage between age and ETC protein subunits. The redox state was strongly affected in senescent mitochondria, with a shift toward more pro-oxidizing conditions within cardiomyocytes. Acute myocardial ischemia and ischemia-reperfusion (IR) injury affected ETC complexes I, II and IV, with no change in complex III. Ischemia induced a decrease in total antioxidant capacity and in MnSOD, GSH and catalase activity, with some recovery during reperfusion. While MnSOD protein content was higher in the IR group, its activity returned to 95% of the control value.
Nitric oxide is one of the biological molecules that can outcompete MnSOD for superoxide, producing peroxynitrite. This process is faster than dismutation and led to a 10-fold higher production of nitrotyrosine after IR injury in adult hearts, with greater protection in senescent ones. 2D protein profiling revealed 140 mitochondrial proteins, 12 of them with significant changes after IR injury, and 36 individual nitrotyrosine-modified proteins further identified by mass spectrometry. Linking these two groups, 5 proteins were both altered after IR and nitrated, but only one showed massive nitration relative to its decreasing protein content after IR injury in the adult heart. Conclusions: Senescent cells have a greater proportion of protein content that might be modulated by several post-translational modifications. If these protein modifications can be connected to functional consequences and the underlying protein-protein interactions revealed, this link may lead to a solution. Taken together, dysfunctional proteostasis can play a causative role, and restoration of the protein homeostasis machinery is protective against aging and possibly age-related disorders. This work was supported by the project VEGA 1/0018/18 and by the project 'Competence Center for Research and Development in the field of Diagnostics and Therapy of Oncological diseases', ITMS: 26220220153, co-financed from EU sources.

Keywords: aging heart, mitochondria, proteomics, redox state

Procedia PDF Downloads 145
247 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques

Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu

Abstract:

Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis, the build-up of fatty deposits inside the arteries. This study introduces an innovative methodology that leverages cloud-based platforms such as AWS live streaming and Artificial Intelligence (AI) to detect and help prevent CHD symptoms early through web applications. By employing novel simulation processes and AI algorithms, this research aims to significantly mitigate the health and societal impacts of CHD. Methodology: This study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles and ECG readings, were collected through user interactions with web-based applications as well as API integration. The novel simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing the AI algorithms to be refined and trained under controlled conditions without compromising patient privacy. AWS live streaming was utilized to capture real-time health data, which was then processed and analysed using advanced AI techniques. The novel aspect of our methodology lies in simulating CHD symptom progression, which provides a dynamic training environment for our AI models, enhancing their predictive accuracy and robustness. Model Development: We developed a machine learning model trained on both real and simulated datasets, incorporating a variety of algorithms, including neural networks and ensemble learning models, to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve, adapting to new data inputs and improving its predictive performance over time. Results and Findings: The deployment of our model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms, surpassing existing models.
The precision and recall metrics stood at 89% and 91%, respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI for the early detection of CHD. Societal Implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.
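The reported validation metrics follow directly from confusion-matrix counts. The sketch below computes accuracy, precision and recall from hypothetical counts chosen to be consistent with the figures quoted above (92% accuracy, 89% precision, 91% recall); they are not the study's raw validation data.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision and recall from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of flagged patients, how many truly at risk
    recall = tp / (tp + fn)      # of at-risk patients, how many were flagged
    return accuracy, precision, recall

# Hypothetical validation counts consistent with the reported figures
acc, prec, rec = classification_metrics(tp=91, fp=11, fn=9, tn=139)
```

For a screening application, recall is usually the metric to protect, since a false negative (a missed at-risk patient) is costlier than a false positive.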

Keywords: coronary heart disease, cloud-based ai, machine learning, novel simulation techniques, early detection, preventive healthcare

Procedia PDF Downloads 32
246 Multifunctional Epoxy/Carbon Laminates Containing Carbon Nanotubes-Confined Paraffin for Thermal Energy Storage

Authors: Giulia Fredi, Andrea Dorigato, Luca Fambri, Alessandro Pegoretti

Abstract:

Thermal energy storage (TES) is the storage of heat for later use, thus filling the gap between energy demand and supply. The most widely used materials for TES are organic solid-liquid phase change materials (PCMs), such as paraffin. These materials store/release a high amount of latent heat thanks to their high specific melting enthalpy, operate in a narrow temperature range, and have a tunable working temperature. However, they suffer from low thermal conductivity and need to be confined to prevent leakage. These two issues can both be tackled by confining PCMs with carbon nanotubes (CNTs). TES applications include the buildings industry, solar thermal energy collection, and the thermal management of electronics. In most cases, TES systems are an additional component added to the main structure, but if weight and volume savings are key issues, it would be advantageous to embed the TES functionality directly in the structure. Such multifunctional materials could be employed in the automotive industry, where the diffusion of lightweight structures could complicate the thermal management of the cockpit environment or of other temperature-sensitive components. This work aims to produce epoxy/carbon structural laminates containing CNT-stabilized paraffin. CNTs were added to molten paraffin at a fraction of 10 wt%, as this was the minimum amount at which no leakage was detected above the melting temperature (45 °C). The paraffin/CNT blend was cryogenically milled to obtain particles with an average size of 50 µm. These were added at various weight percentages (20, 30 and 40 wt%) to an epoxy/hardener formulation, which was used as the matrix to produce laminates through a wet layup technique by stacking five plies of a plain-weave carbon fiber fabric. The samples were characterized microstructurally, thermally and mechanically.
Differential scanning calorimetry (DSC) tests showed that the paraffin kept its ability to melt and crystallize in the laminates, and the melting enthalpy was almost proportional to the paraffin weight fraction. These thermal properties were retained after fifty heating/cooling cycles. Laser flash analysis showed that the through-thickness thermal conductivity increased with increasing PCM content, due to the presence of the CNTs. The ability of the developed laminates to contribute to thermal management was also assessed by monitoring their cooling rates with a thermal camera. Three-point bending tests showed that the flexural modulus was only slightly impaired by the presence of the paraffin/CNT particles, while a more marked decrease was detected in the stress and strain at break and in the interlaminar shear strength. Optical and scanning electron microscope images revealed that this could be attributed to the preferential location of the PCM in the interlaminar region. These results demonstrate the feasibility of multifunctional structural TES composites and highlight that the PCM size and distribution affect the mechanical properties. With this in mind, the group is working on the encapsulation of paraffin in a sol-gel derived organosilica shell. Submicron spheres have been produced, and current activity focuses on optimizing the synthesis parameters to increase the emulsion efficiency.
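The near-proportionality between melting enthalpy and paraffin content observed by DSC can be sketched with a simple rule of mixtures. The specific enthalpy of 180 J/g is an assumed typical value for paraffin, not a measurement from this work, and the blend is taken as 90 wt% paraffin / 10 wt% CNT as described above.

```python
def matrix_melting_enthalpy(blend_fraction, dh_paraffin=180.0,
                            paraffin_in_blend=0.90):
    """Rule-of-mixtures estimate of the melting enthalpy (J/g) of an
    epoxy matrix containing a paraffin/CNT blend: only the paraffin
    portion of the blend contributes latent heat."""
    return blend_fraction * paraffin_in_blend * dh_paraffin

for w in (0.20, 0.30, 0.40):
    print(f"{w:.0%} blend -> ~{matrix_melting_enthalpy(w):.1f} J/g")
```

Doubling the blend fraction doubles the estimated latent heat, which is the proportionality the DSC results confirm; deviations from it would indicate paraffin loss or incomplete melting in the laminate.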

Keywords: carbon fibers, carbon nanotubes, lightweight materials, multifunctional composites, thermal energy storage

Procedia PDF Downloads 132
245 Challenges for Reconstruction: A Case Study from 2015 Gorkha, Nepal Earthquake

Authors: Hari K. Adhikari, Keshab Sharma, K. C. Apil

Abstract:

The Gorkha, Nepal earthquake of moment magnitude (Mw) 7.8 hit the central region of Nepal on April 25, 2015, with its epicenter about 77 km northwest of the Kathmandu Valley. This paper explores the challenges of reconstruction in the rural earthquake-stricken areas of Nepal. The earthquake significantly affected livelihoods and the overall economy of Nepal, causing severe damage and destruction in central Nepal, including the nation’s capital. A large part of the affected area is difficult to access, with rugged terrain and scattered settlements, which posed unique challenges to reconstruction and rehabilitation efforts on a massive scale. About 800 thousand buildings were affected, leaving 8 million people homeless. Reconstructing up to 800 thousand houses is an arduous task for Nepal against the backdrop of its turbulent political scenario and weak governance. Despite the significant number of actors involved in the reconstruction process, little appreciable relief has reached the ground, which is reflected in the frustration of the affected people. The 2015 Gorkha earthquake is one of the most devastating disasters in the modern history of Nepal. To the best of our knowledge, there is no comprehensive study of post-disaster reconstruction in modern Nepal that integrates the information necessary to deal with the challenges and opportunities of reconstruction. The study was conducted using a qualitative content analysis method. Thirty engineers and ten social mobilizers working in reconstruction, as well as more than a hundred local social workers, local party leaders and earthquake victims, were selected. Information was collected through semi-structured interviews with open-ended questions, focus group discussions and field notes, with no prior assumptions.
The authors also reviewed literature and documents covering academic and practitioner studies on the challenges of post-earthquake reconstruction in developing countries, such as the 2001 Gujarat, 2005 Kashmir, 2003 Bam and 2010 Haiti earthquakes, which involved building typologies and economic, political, geographical and geological conditions very similar to Nepal’s. Secondary data were collected from reports, action plans and reflection papers of governmental entities, non-governmental organizations, private sector businesses, and online news. This study concludes that inaccessibility, the absence of local government, weak governance, weak infrastructure, lack of preparedness, knowledge gaps, and manpower shortages are the key challenges of reconstruction after the 2015 earthquake in Nepal. After scrutinizing the different challenges and issues, the study suggests that good governance, integrated information, public participation, and short- and long-term strategies to tackle technical issues are crucial factors for timely and high-quality reconstruction in the context of Nepal. The sample collected for this study is relatively small and may not be fully representative of the stakeholders involved in reconstruction. However, the key findings are ones that need to be recognized by academics, governments and implementation agencies, and considered in the implementation of post-disaster reconstruction programs in developing countries.

Keywords: Gorkha earthquake, reconstruction, challenges, policy

Procedia PDF Downloads 373
244 Development and Implementation of An "Electric Island" Monitoring Infrastructure for Promoting Energy Efficiency in Schools

Authors: Vladislav Grigorovitch, Marina Grigorovitch, David Pearlmutter, Erez Gal

Abstract:

The concept of an “electric island” refers to achieving a balance between an educational institution's self-generation capability and its energy consumption demand. Photovoltaic (PV) solar systems installed on the roofs of educational buildings are a common way to absorb the available solar energy and generate electricity for self-consumption and even for feeding back to the grid. The main objective of this research is to develop and implement an “electric island” monitoring infrastructure for promoting energy efficiency in educational buildings. A microscale monitoring methodology will be developed to provide a platform for estimating energy consumption performance classified by rooms and subspaces, rather than the more common macroscale monitoring of the whole building. The monitoring platform will be established at the experimental sites, enabling estimation and further analysis of a variety of environmental and physical conditions. For each building, a separate measurement configuration will be applied, taking into account the specific requirements, restrictions, location, and infrastructure issues. The direct results of the measurements will be analyzed to provide a deeper understanding of the impact of environmental conditions and sustainable construction standards, not only on the energy demand of public buildings but also on the energy consumption habits of the children who study in those schools and of the educational and administrative staff responsible for providing thermal comfort conditions and a healthy studying atmosphere for the children. The monitoring methodology being developed in this research provides online access to real-time data from the IoT sensors from any mobile phone or computer by simply browsing a dedicated website, offering powerful tools for policy makers to make better decisions while developing PV production infrastructure to achieve “electric islands” in educational buildings.
A detailed measurement configuration was technically designed based on the specific conditions and restrictions of each of the pilot buildings. The monitoring and analysis methodology covers a large variety of environmental parameters inside and outside the schools, to investigate the impact of environmental conditions both on the energy performance of the school and on the educational abilities of the children. Indoor measurements are mandatory to acquire energy consumption data, temperature, humidity, carbon dioxide, and other air quality conditions in different parts of the building. In addition, we aim to study users' awareness of energy considerations and thus the impact on their energy consumption habits. The monitoring of outdoor conditions is vital for the proper design of the off-grid energy supply system and validation of its sufficient capacity. The expected outcomes of this research include: 1. designing both experimental sites to have PV production and storage capabilities; 2. developing an online information feedback platform that will provide consumer-dedicated information to academic researchers, municipality officials, and educational staff and students; 3. designing an environmental work path for educational staff regarding optimal conditions and efficient hours for operating air conditioning, natural ventilation, closing of blinds, etc.
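The “electric island” balance pursued above, matching PV self-generation to the monitored consumption, can be summarized by a self-sufficiency ratio computed from interval time series. The sketch below is a minimal illustration; the hourly values and function name are assumptions for demonstration, not measurements or software from the pilot schools:

```python
def self_sufficiency(generation_kwh, consumption_kwh):
    """Share of consumption covered directly by on-site PV generation.

    In each interval only generation up to the simultaneous load counts
    as self-consumed; any surplus is assumed to be exported to the grid.
    """
    self_consumed = sum(min(g, c) for g, c in zip(generation_kwh, consumption_kwh))
    total_load = sum(consumption_kwh)
    return self_consumed / total_load if total_load else 0.0

# Illustrative hourly profile for one school day (kWh per hour)
pv = [0, 0, 1.2, 3.5, 5.0, 4.8, 2.1, 0]
load = [2, 2, 3.0, 4.0, 4.5, 4.0, 3.0, 2]
ratio = self_sufficiency(pv, load)  # fraction of the day's load met by PV
```

A ratio of 1.0 over the accounting period would correspond to a fully self-balanced “electric island”; room-level series would give the microscale view described above.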

Keywords: sustainability, electric island, IOT, smart building

Procedia PDF Downloads 155
243 Start with the Art: Early Results from a Study of Arts-Integrated Instruction for Young Children

Authors: Juliane Toce, Steven Holochwost

Abstract:

A substantial and growing literature has demonstrated that arts education benefits young children’s socioemotional and cognitive development. Less is known about the capacity of arts-integrated instruction to yield benefits in similar domains, particularly among demographically and socioeconomically diverse groups of young children. However, the small literature on this topic suggests that arts-integrated instruction may foster young children’s socioemotional and cognitive development by presenting opportunities to 1) engage in instructional content in diverse ways, 2) experience and regulate strong emotions, 3) experience growth-oriented feedback, and 4) engage in collaborative work with peers. Start with the Art is a new program of arts-integrated instruction currently being implemented in four schools in a school district that serves students from a diverse range of backgrounds. The program employs a co-teaching model in which teaching artists and classroom teachers engage in collaborative lesson planning and instruction over the course of the academic year and is currently the focus of an impact study featuring a randomized controlled design, as well as an implementation study, both of which are funded through an Educational Innovation and Research grant from the United States Department of Education. The paper will present the early results from the Start with the Art implementation study. These results will provide an overview of the extent to which the program was implemented in accordance with its design, with a particular emphasis on the degree to which the four opportunities enumerated above (e.g., opportunities to engage in instructional content in diverse ways) were presented to students.
The paper will also review key factors that may influence the fidelity of implementation, including classroom teachers' reception of the program and the extent to which existing conditions in the classroom (e.g., the overall level of classroom organization) may have affected implementation fidelity. With the explicit purpose of creating a program that values and meets the needs of teachers and students, Start with the Art incorporates feedback from individuals participating in the intervention. Tracing its trajectory from inception through ongoing development, and examining the adaptive changes made in response to teachers' transformative experiences in the post-pandemic classroom, Start with the Art continues to solicit input from experts in integrating artistic content into core curricula within educational settings serving students from backgrounds under-represented in the arts. Leveraging the input of this rich consortium of experts has allowed for a comprehensive evaluation of the program's implementation. The early findings from the implementation study emphasize the potential of arts-integrated instruction to incorporate restorative practices. Such practices serve as a crucial support system for both students and educators, providing avenues for children to express themselves, heal emotionally, and develop socially, while empowering teachers to create more empathetic, inclusive, and supportive learning environments. This analysis highlights Start with the Art's adaptability to any learning environment through the program's effectiveness, resilience, and capacity to transform, through art, the classroom experience within the ever-evolving landscape of education.

Keywords: arts-integration, social emotional learning, diverse learners, co-teaching, teaching artists, post-pandemic teaching

Procedia PDF Downloads 37
242 A Perspective on Allelopathic Potential of Corylus avellana L.

Authors: Tugba G. Isin Ozkan, Yoshiharu Fujii

Abstract:

Weeds are one of the most important constraints that decrease crop yields, and increasing amounts of chemical herbicides are used every day to control them. The environmental effects of chemical herbicides, and the limitations on their use, have led to nonchemical alternatives in weed management. The application of allelopathy as a non-herbicidal innovation for controlling weed populations in integrated weed management is increasingly needed, not only because of public concern about herbicide use but also because of rising agricultural costs and herbicide-resistant weeds. Allelopathy is a common biological phenomenon: a direct or indirect interaction in which one plant or organism produces biochemicals that influence the physiological processes of a neighboring plant or organism. The biochemicals involved in allelopathy are called allelochemicals; they can beneficially or detrimentally influence the growth, survival, development, and reproduction of other plants or organisms. All plant parts can contain allelochemicals, which are secondary plant metabolites. Allelochemicals released into the environment influence the germination and seedling growth of neighboring weeds; this is how allelopathy is applied for weed control. Crop cultivars differ significantly in their ability to inhibit the growth of certain weeds. Therefore, a crop of high commercial value, Corylus avellana L., together with its byproducts, was chosen as the subject of this research on allelopathic potential. The edible nut of Corylus avellana L., commonly known as hazelnut, is a commercially valuable crop with several byproducts: skin, hard shell, green leafy cover, and tree leaf. Research on the allelopathic potential of a plant using the sandwich bioassay method, and investigation of its growth inhibitory activity, is the first step toward developing new and environmentally friendly alternatives for weed control. Thus, the objective of this research is to determine the allelopathic potential of C.
avellana L. and its byproducts using the sandwich method, and to determine the effective concentrations (EC) of their extracts that induce half-maximal inhibition of radicle elongation in the test plant (EC₅₀). The sandwich method is a reliable and fast bioassay, very useful for allelopathic screening under laboratory conditions. In the experiments, lettuce (Lactuca sativa L.) seeds will be used as the test plant because of their high sensitivity to inhibition by allelochemicals and their reliable germination. In the sandwich method, the radicle lengths of lettuce seeds treated with dry material and of control lettuce seeds will be measured, and the inhibition of radicle elongation will be determined. Lettuce seeds will also be treated with methanol extracts of dry hazelnut parts to calculate EC₅₀ values, the concentrations required to induce half-maximal inhibition of growth, expressed as mg dry weight equivalent mL⁻¹. The inhibitory activity of the extracts against lettuce seedling elongation will be evaluated, as in the sandwich method, by comparing the radicle lengths of treated seeds with those of control seeds, and EC₅₀ values will be determined. The research samples are dry parts of the Turkish hazelnut, C. avellana L. The results should indicate the allelopathic potential of C. avellana L. and its byproducts in plant-plant interactions, might be utilized in further research, and could be helpful in finding bioactive chemicals from natural products and developing natural herbicides.
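The EC₅₀ determination described above, the extract concentration inducing half-maximal inhibition of lettuce radicle elongation, can be sketched as a log-linear interpolation between the two tested concentrations that bracket 50% inhibition. The dose-response values below are invented for illustration, not data from this study:

```python
import math

def ec50(concentrations, inhibition_pct):
    """Estimate EC50 by log-linear interpolation.

    concentrations: tested doses (mg dry weight equivalent / mL), ascending
    inhibition_pct: observed % inhibition of radicle elongation vs. control
    """
    pairs = list(zip(concentrations, inhibition_pct))
    for (c_lo, i_lo), (c_hi, i_hi) in zip(pairs, pairs[1:]):
        if i_lo <= 50.0 <= i_hi:
            # interpolate on log10(concentration), since dose-response
            # curves are approximately linear on a log-dose axis
            frac = (50.0 - i_lo) / (i_hi - i_lo)
            log_c = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10 ** log_c
    raise ValueError("50% inhibition not bracketed by the tested range")

# Illustrative dose-response series
doses = [1, 3, 10, 30, 100]   # mg DW eq. / mL
inhib = [5, 20, 45, 70, 95]   # % inhibition of radicle elongation
estimate = ec50(doses, inhib)
```

In practice a fitted sigmoidal (e.g. four-parameter logistic) model is often preferred when enough dose levels are available; the interpolation above is only the simplest defensible estimate.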

Keywords: allelopathy, Corylus avellana L., EC50, Lactuca sativa L., sandwich method, Turkish hazelnut

Procedia PDF Downloads 148
241 The Stability of Vegetable-Based Synbiotic Drink during Storage

Authors: Camelia Vizireanu, Daniela Istrati, Alina Georgiana Profir, Rodica Mihaela Dinica

Abstract:

Globally, there is great interest in promoting the consumption of fruit and vegetables to improve health. Because of their content of essential compounds such as antioxidants, substantial amounts of fruits and vegetables should be included in the daily diet. Juices are good sources of vitamins and can also help increase overall fruit and vegetable consumption. Starting from this trend (the introduction of more vegetables and fruits into the daily diet), as well as the desire to diversify the range of functional products for both adults and children, a root-vegetable-based juice fermented with probiotic microorganisms was developed, with potential beneficial effects in the diet of children, vegetarians, and people with lactose intolerance. The three vegetables selected for this study (red beet, carrot, and celery) make a significant contribution of functional compounds such as carotenoids, flavonoids, betalain, vitamins B and C, minerals, and fiber. Fermentation increases the functional value of the vegetable juice by improving the stability of these compounds. The combination of probiotic microorganisms and vegetable fibers resulted in a nutrient-rich synbiotic product. The stability of the nutritional and sensory qualities of the obtained synbiotic product was tested throughout its shelf life.
The evaluation of the physico-chemical changes of the synbiotic drink during storage confirmed that: (i) vegetable juice enriched with honey and vegetable pulp is an important source of nutritional compounds, especially carbohydrates and fiber; (ii) the microwave treatment used to inhibit pathogenic microflora did not significantly affect the nutritional compounds in the vegetable juice: the vitamin C concentration remained at its baseline level, and the beta-carotene concentration increased due to increased bioavailability; (iii) fermentation improved the nutritional quality of the vegetable juice by increasing the content of B vitamins, polyphenols, and flavonoids, and the juice retained good antioxidant capacity throughout the shelf life; (iv) the FTIR and Raman spectra corroborated the results obtained using physicochemical methods. Based on the analysis of IR absorption frequencies, the most prominent bands are at 3330 cm⁻¹, 1636 cm⁻¹, and 1050 cm⁻¹, specific to groups of compounds such as polyphenols, carbohydrates, fatty acids, and proteins. Statistical data processing revealed a good correlation between the contents of flavonoids, betalain, β-carotene, ascorbic acid, and polyphenols, the fermented juice having stable antioxidant activity. Principal component analysis also showed a negative correlation between the evolution of the concentration of B vitamins and antioxidant activity. Acknowledgment: This study has been funded by the Francophone University Agency, Project Réseau régional dans le domaine de la santé, la nutrition et la sécurité alimentaire (SaIN), No. at Dunarea de Jos University of Galati 21899/ 06.09.2017, and by the Sectorial Operational Programme Human Resources Development of the Romanian Ministry of Education, Research, Youth and Sports through the Financial Agreement POSDRU/159/1.5/S/132397 ExcelDOC.
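The correlations reported above between compound contents and antioxidant activity can be illustrated with a plain Pearson coefficient between two measured series. The weekly storage values below are invented for demonstration, not measurements from the study:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative weekly values over storage
flavonoids = [42, 40, 39, 37, 35]    # e.g. mg / 100 mL
antioxidant = [78, 75, 74, 70, 67]   # e.g. % radical inhibition
r = pearson(flavonoids, antioxidant)
```

A value of r near +1 would match the reported good correlation between antioxidant compounds and antioxidant activity; a negative r would match the relationship reported for B vitamins.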

Keywords: bioactive compounds, fermentation, synbiotic drink from vegetables, stability during storage

Procedia PDF Downloads 128
240 Exploring Closed-Loop Business Systems Which Eliminates Solid Waste in the Textile and Fashion Industry: A Systematic Literature Review Covering the Developments Occurred in the Last Decade

Authors: Bukra Kalayci, Geraldine Brennan

Abstract:

Introduction: Over the last decade, a proliferation of literature related to the textile and fashion business in the context of sustainable production and consumption has emerged. However, the economic and environmental benefits of solid waste recovery have not been comprehensively researched, so end-of-life and end-of-use textile waste management remains a gap. Solid textile waste reuse and recycling principles of the circular economy need to be developed to close the disposal stage of the textile supply chain. Environmental problems associated with the over-production and over-consumption of textile products are growing; together with a growing population and fast fashion culture, the share of solid textile waste in municipal waste is increasing. Focusing on the post-consumer textile waste literature, this research explores the opportunities, obstacles, and enablers or success factors associated with closed-loop textile business systems. Methodology: A systematic literature review was conducted to identify best practices and gaps in the existing body of knowledge on closed-loop post-consumer textile waste initiatives over the last decade. Selected keywords, namely ‘cradle-to-cradle’, ‘circular* economy*’, ‘closed-loop*’, ‘end-of-life*’, ‘reverse* logistic*’, ‘take-back*’, ‘remanufacture*’, and ‘upcycle*’, each combined (with AND) with ‘fashion*’, ‘garment*’, ‘textile*’, ‘apparel*’, and ‘clothing*’, were used, and the time frame of the review was set between 2005 and 2017. To obtain broad coverage, the Web of Knowledge and Science Direct databases were used, and peer-reviewed journal articles were chosen. The keyword search identified 299 papers, which were further refined into 54 relevant papers that form the basis of the in-depth thematic analysis. Preliminary findings: A key finding was that the existing literature is predominantly conceptual rather than applied or empirical work.
Moreover, the enablers or success factors, obstacles, and opportunities for implementing closed-loop systems in the textile industry were not clearly articulated, and the following considerations were also largely overlooked in the literature. While the circular economy envisages multiple cycles of discarded products, components, or materials, most research to date has tended to focus on a single cycle. Thus the calculations of the environmental and economic benefits of closed-loop systems are limited to one cycle, which does not adequately explore the feasibility or potential benefits of multiple cycles. Additionally, the time period textile products spend between the point of sale and end-of-use/end-of-life return is a crucial factor. Despite past efforts to study closed-loop textile systems, a clear gap in the literature is the lack of an evaluation framework that enables manufacturers to clarify the reusability potential of textile products through consideration of indicators related to: quality, design, lifetime, length of time between manufacture and product return, volume of collected disposed products, material properties, and brand segment considerations (e.g., fast fashion versus luxury brands).
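The keyword protocol in the methodology above pairs every circular-economy term with every sector term. A small script can enumerate the resulting search strings; the AND-with-quotes syntax shown here is an assumed generic form, not the exact query language of Web of Knowledge or Science Direct:

```python
from itertools import product

# Circular-economy search terms from the review protocol (wildcards as published)
circular_terms = ["cradle-to-cradle", "circular* economy*", "closed-loop*",
                  "end-of-life*", "reverse* logistic*", "take-back*",
                  "remanufacture*", "upcycle*"]
# Sector terms to be AND-combined with each circular-economy term
sector_terms = ["fashion*", "garment*", "textile*", "apparel*", "clothing*"]

# 8 circular terms x 5 sector terms = 40 query strings in total
queries = [f'"{c}" AND "{s}"' for c, s in product(circular_terms, sector_terms)]
```

Enumerating the full Cartesian product this way makes the search protocol reproducible and auditable, which is a core requirement of systematic reviews.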

Keywords: circular fashion, closed loop business, product service systems, solid textile waste elimination

Procedia PDF Downloads 180
239 Techno-Economic Assessment of Distributed Heat Pumps Integration within a Swedish Neighborhood: A Cosimulation Approach

Authors: Monica Arnaudo, Monika Topel, Bjorn Laumert

Abstract:

Within the Swedish context, the current trend of relatively low electricity prices promotes the electrification of the energy infrastructure. The residential heating sector takes part in this transition through a proposed switch from a centralized district heating system towards a distributed heat-pump-based setting. When it comes to urban environments, two issues arise. The first, seen from an electricity-sector perspective, is that existing networks are limited with regard to their installed capacities; additional electric loads, such as heat pumps, can cause severe overloads on crucial network elements. The second, seen from a heating-sector perspective, is that indoor comfort conditions can become difficult to maintain when the operation of the heat pumps is limited by the risk of overloading the distribution grid. Furthermore, the uncertainty of future electricity market prices introduces an additional variable. This study aims to assess the extent to which distributed heat pumps can penetrate an existing heat energy network while respecting the technical limitations of the electricity grid and the thermal comfort levels in the buildings. To account for the multi-disciplinary nature of this research question, a cosimulation modeling approach was adopted, in which each energy technology is modeled in its own customized simulation environment. As part of the cosimulation methodology, a steady-state power flow analysis in pandapower was used to model the electrical distribution grid, a thermal balance model of a reference building was implemented in EnergyPlus to account for space heating, and a fluid-cycle model of a heat pump was implemented in JModelica to represent the actual heating technology.
With the models in place, different scenarios based on forecasted electricity market prices were developed for both present and future conditions of Hammarby Sjöstad, a neighborhood located in the south-east of Stockholm (Sweden). For each scenario, the technical and comfort conditions were assessed. Additionally, the average cost of heat generation was estimated in terms of the levelized cost of heat. This indicator enables a techno-economic comparison among the different scenarios. To evaluate the levelized cost of heat, a yearly performance simulation of the energy infrastructure was implemented. The scenarios based on current electricity prices show that distributed heat pumps can replace the district heating system by covering up to 30% of the heating demand. By lowering the minimum accepted indoor temperature of the apartments by 2°C, this level of penetration can increase to 40%. Within the future scenarios, if electricity prices increase, as is most likely expected within the next decade, the penetration of distributed heat pumps may be limited to 15%. In terms of the levelized cost of heat, residential heat pump technology becomes competitive only in a scenario of decreasing electricity prices; in that case, the district heating system has an average cost of heat generation 7% higher than the distributed heat pumps option.
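The levelized cost of heat used above for the techno-economic comparison is, in its standard form, discounted lifetime costs divided by discounted lifetime heat delivery. A minimal sketch follows; the capex, opex, discount rate, and heat-output figures are illustrative placeholders, not values from the Hammarby Sjöstad study:

```python
def levelized_cost_of_heat(capex, annual_opex, annual_heat_mwh,
                           discount_rate, lifetime_years):
    """LCOH = discounted lifetime costs / discounted lifetime heat output."""
    # Upfront investment plus discounted yearly operating costs
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    # Discounted yearly heat delivery over the technology's lifetime
    heat = sum(annual_heat_mwh / (1 + discount_rate) ** t
               for t in range(1, lifetime_years + 1))
    return costs / heat  # currency units per MWh of heat

# Illustrative residential heat pump (numbers are placeholders)
lcoh_hp = levelized_cost_of_heat(capex=12000, annual_opex=600,
                                 annual_heat_mwh=15, discount_rate=0.05,
                                 lifetime_years=20)
```

Because capex is paid upfront while heat is discounted, capital-intensive options like heat pumps become relatively less attractive at higher discount rates, which is consistent with the sensitivity to electricity-price scenarios discussed above.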

Keywords: cosimulation, distributed heat pumps, district heating, electrical distribution grid, integrated energy systems

Procedia PDF Downloads 126
238 Observation on the Performance of Heritage Structures in Kathmandu Valley, Nepal during the 2015 Gorkha Earthquake

Authors: K. C. Apil, Keshab Sharma, Bigul Pokharel

Abstract:

Kathmandu Valley, the capital region of Nepal, houses numerous historical monuments and religious structures dating back as far as the 4th century A.D. The city alone is home to seven of UNESCO's world heritage sites, including various public squares and religious sanctums that are often regarded as living heritage by historians and archeological explorers. On April 25, 2015, the capital city and other nearby locations were struck by the Gorkha earthquake of moment magnitude (Mw) 7.8, followed by the strongest aftershock of moment magnitude (Mw) 7.3 on May 12. This study reports structural failures and collapses of heritage structures in Kathmandu Valley during the earthquake and presents preliminary findings on the causes of the failures and collapses. Field reconnaissance was carried out immediately after the main shock and the aftershock at major heritage sites: the UNESCO world heritage sites and a number of temples and historic buildings in Kathmandu Durbar Square, Patan Durbar Square, and Bhaktapur Durbar Square. Despite the catastrophe, a significant number of heritage structures remained standing and performed very well during the earthquake. Preliminary reports from the archeological department suggest that 721 such structures were severely affected, of which 444 were within the valley, including 76 structures that completely collapsed. This study presents recorded accelerograms and the geology of Kathmandu Valley. The structural typology and architecture of the heritage structures in Kathmandu Valley are briefly described. Case histories of damaged heritage structures, the damage patterns, and the failure mechanisms are also discussed in this paper. It was observed that the performance of heritage structures was influenced by multiple factors, such as structural and architectural typology, configuration and structural deficiency, local ground site effects and ground motion characteristics, age and maintenance level, and material quality.
Most of these heritage structures are of masonry type, using bricks with earth mortar as the bonding agent. The resistance of such walls is mainly compressive: they are capable of withstanding vertical static gravitational loads but not horizontal dynamic seismic loads. There was no definitive pattern of damage to the heritage structures, as most of them behaved as composite structures. Some structures were extensively damaged in some locations, while structures with a similar configuration at nearby locations had little or no damage. Among the major heritage structures, Dome, Pagoda (2-, 3-, or 5-tiered temples), and Shikhara structures were studied with similar variables. By studying the varying degrees of damage in these structures, it was found that Shikhara structures were the most vulnerable, whereas Dome structures were the most stable, followed by Pagoda structures. The seismic performance of the masonry-timber and stone masonry structures was slightly better than that of the brick masonry structures. Regular maintenance and periodic seismic retrofitting seem to have played a pivotal role in strengthening the seismic performance of these structures. The study also recommends some key measures to strengthen the seismic performance of such structures, based on structural analysis, building material behavior, and retrofitting details. The results also recognise the importance of documenting traditional knowledge and adapting it to modern technology.

Keywords: Gorkha earthquake, field observation, heritage structure, seismic performance, masonry building

Procedia PDF Downloads 125
237 Pivoting to Fortify our Digital Self: Revealing the Need for Personal Cyber Insurance

Authors: Richard McGregor, Carmen Reaiche, Stephen Boyle

Abstract:

Cyber threats are a relatively recent phenomenon and present cyber insurers with a dynamic and intelligent peril. As individuals en masse become increasingly digitally dependent, Personal Cyber Insurance (PCI) offers an attractive option to mitigate cyber risk at a personal level. This abstract proposes a literature review that conceptualises a framework for situating PCI within the context of cyberspace. The lack of empirical research in this domain demonstrates an immediate need to define the scope of PCI, so that cyber insurers can understand personal cyber risk threats and vectors, customer awareness, capabilities, and their associated needs. Additionally, this will allow cyber insurers to conceptualise appropriate frameworks for the effective management and distribution of PCI products and services within a landscape often incongruent with the risk attributes commonly associated with traditional personal-lines insurance products. Cyberspace has significantly improved the quality of social connectivity and productivity during past decades and has enabled an enormous uplift in information sharing and communication between people and communities. Conversely, personal digital dependency furnishes ample opportunities for adverse cyber events such as data breaches and cyber-attacks, thus introducing a continuous and insidious threat of omnipresent cyber risk, particularly since the advent of the COVID-19 pandemic and the widespread adoption of ‘work-from-home’ practices. Recognition of escalating interdependencies, vulnerabilities, and inadequate personal cyber behaviours has prompted efforts by businesses and individuals alike to investigate strategies and tactics to mitigate cyber risk, of which cyber insurance is a viable, cost-effective option.
It is argued that, ceteris paribus, the nature of cyberspace intrinsically provides characteristic peculiarities that pose significant and bespoke challenges to cyber insurers, often incongruent with the risk attributes commonly associated with traditional personal-lines insurance products. These challenges include (inter alia) a paucity of historical claim/loss data for underwriting and pricing purposes, interdependencies of cyber architecture promoting high correlation of cyber risk, difficulties in evaluating cyber risk, intangibility of at-risk assets (such as data and reputation), lack of standardisation across the industry, high and undetermined tail risks, and moral hazard, among others. This study proposes a thematic overview of the literature deemed necessary to conceptualise the challenges of issuing personal cyber coverage. There is an evident absence of empirical research pertaining to PCI and the design of operational business models for this domain, especially qualitative initiatives that (1) attempt to define the scope of the peril, (2) secure an understanding of the needs of both cyber insurer and customer, and (3) identify elements pivotal to the effective management and profitable distribution of PCI. This leads the author to argue that the traditional general insurance customer journey and business model are ill-suited to the lineaments of cyberspace. The findings of the review confirm significant gaps in contemporary research within the domain of personal cyber insurance.

Keywords: cyberspace, personal cyber risk, personal cyber insurance, customer journey, business model

Procedia PDF Downloads 80
236 The Influence of English Immersion Program on Academic Performance: Case Study at a Sino-US Cooperative University in China

Authors: Leah Li Echiverri, Haoyu Shang, Yue Li

Abstract:

Wenzhou-Kean University (WKU) is a Sino-US cooperative university in China. It practices an English Immersion Program (EIP) in which all courses are taught in English, and class discussions and presentations are pervasively interwoven into the design of students' learning experiences. This WKU model has had a positive influence on students and is in some ways ahead of traditional college English-major programs. However, literature supporting perceptions of the positive outcomes of this teaching and learning model remains scarce. The distinctive profile of Chinese-ESL students in an English Medium of Instruction (EMI) environment contributes further to this scarcity, compared with existing studies conducted among ESL learners in Western educational settings. Hence, this study investigated students' perceptions of the English Immersion Program and determined how it influences Chinese-ESL students' academic performance (AP). This research can provide empirical data helpful to educators, teaching practitioners, university administrators, and other researchers in making informed decisions when developing curricular reforms, instructional and pedagogical methods, and university-wide support programs using this educational model. The purpose of the study was to establish the relationship between the English Immersion Program and academic performance among Chinese-ESL students enrolled at WKU in the academic year 2020-2021. Course length, immersion location, course type, and instructional design were the constructs of the English Immersion Program; English language learning, learning efficiency, and class participation were used to measure academic performance. A descriptive-correlational design was used in this cross-sectional research project, and a quantitative approach to data analysis was applied to determine the relationship between the English Immersion Program and Chinese-ESL students' academic performance.
The research was conducted at WKU, a Chinese-American jointly established higher education institution located in Wenzhou, Zhejiang province. Convenience, random, and snowball sampling of 283 students, a response rate of 10.5%, was applied to represent the WKU student population. The questionnaire was posted on the survey website Wenjuanxing and shared via QQ and WeChat. Cronbach's alpha was used to test the reliability of the research instrument. Findings revealed that when professors integrate technology (PowerPoint, videos, and audio) into teaching, students pay more attention, which contributes to the acquisition of more professional knowledge in their major courses. As to course immersion, students perceive WKU as a good place to study, one that gives them a high degree of confidence in talking with their professors in English; this also contributes to their English fluency and better pronunciation in their communication. In the construct of instructional design, the use of pictures, video clips, and professors' non-verbal communication, together with demonstrated concern for students, encouraged more active class participation. Findings on course length and academic performance indicated that students perceive that taking courses during the fall and spring terms moderately contributes to their academic performance. In conclusion, the findings revealed a significantly strong positive relationship between course type, immersion location, instructional design, and academic performance.
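Cronbach's alpha, used above to test the reliability of the questionnaire, can be computed directly from item-level responses as k/(k-1) * (1 - sum of item variances / variance of total scores). The Likert responses below are illustrative, not the survey's actual data:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal-consistency reliability.

    item_scores: list of k lists, each holding one item's scores
    across the same n respondents.
    """
    k = len(item_scores)
    n = len(item_scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across all k items
    totals = [sum(col[i] for col in item_scores) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in item_scores) / var(totals))

# Illustrative 5-point Likert responses: three items, five respondents
items = [[4, 5, 3, 4, 2], [4, 4, 3, 5, 2], [5, 4, 2, 4, 3]]
alpha = cronbach_alpha(items)
```

Values of alpha above roughly 0.7 are conventionally read as acceptable internal consistency for a survey scale.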

Keywords: class participation, English immersion program, English language learning, learning efficiency

Procedia PDF Downloads 150
235 Environmentally Sustainable Transparent Wood: A Fully Green Approach from Bleaching to Impregnation for Energy-Efficient Engineered Wood Components

Authors: Francesca Gullo, Paola Palmero, Massimo Messori

Abstract:

Transparent wood is considered a promising structural material for the development of environmentally friendly, energy-efficient engineered components. To obtain transparent wood from natural wood, two approaches can be used: i) bottom-up and ii) top-down. In the top-down method, the color of natural wood samples is lightened through a chemical bleaching process that acts on the chromophore groups of lignin, such as benzene rings and quinonoid, vinyl, phenolic, and carbonyl groups. These chromophoric units form complex conjugated systems responsible for the brown color of wood. There are two strategies to remove color and increase the whiteness of wood: i) lignin removal and ii) lignin bleaching. In the lignin removal strategy, strong chlorine-containing chemicals (chlorine, hypochlorite, and chlorine dioxide) and oxidizers (oxygen, ozone, and peroxide) are used to completely destroy and dissolve the lignin. In lignin bleaching methods, a mild reducing agent (hydrosulfite) or oxidizing agent (hydrogen peroxide) is commonly used to alter or remove the chromophore groups and systems of lignin, selectively discoloring it while keeping the macrostructure intact. It is, therefore, essential to manipulate nanostructured wood by precisely controlling the nanopores in the cell walls through monitoring of both the chemical treatments and the process conditions, for instance, the treatment time, the concentration of chemical solutions, the pH value, and the temperature. The elimination of light scattering is the second step in the fabrication of transparent wood materials, which can be achieved through two approaches: i) the polymer impregnation method and ii) the densification method.
In the polymer impregnation method, the wood scaffold is treated under vacuum with polymers of matching refractive index (e.g., PMMA and epoxy resins) to obtain the transparent composite material, which can finally be pressed to align the cellulose fibers and reduce interfacial defects in order to obtain a finished product with high transmittance (>90%) and excellent light-guiding properties. However, both the solution-based bleaching and the impregnation processes used to produce transparent wood generally consume large amounts of energy and chemicals, including some toxic or polluting agents, and are difficult to scale up industrially. Here, we report a method to produce optically transparent wood by modifying the lignin structure with a chemical reaction at room temperature using small amounts of hydrogen peroxide in an alkaline environment. This method preserves the lignin, which is merely deconjugated and acts as a binder, providing both a strong wood scaffold and suitable porosity for the infiltration of biobased polymers, while reducing chemical consumption, the toxicity of the reagents used, polluting waste, petroleum by-products, energy, and processing time. The resulting transparent wood demonstrates high transmittance and low thermal conductivity. Through the combination of process efficiency and scalability, the obtained materials are promising candidates for application in the field of construction for modern energy-efficient buildings.

Keywords: bleached wood, energy-efficient components, hydrogen peroxide, transparent wood, wood composites

Procedia PDF Downloads 24
234 Digitization and Morphometric Characterization of Botanical Collection of Indian Arid Zones as Informatics Initiatives Addressing Conservation Issues in Climate Change Scenario

Authors: Dipankar Saha, J. P. Singh, C. B. Pandey

Abstract:

The Indian Thar desert, the seventh largest in the world and the country’s main hot sand desert, occupies nearly 385,000 km², about 9% of the area of the country, and harbours a flora of 682 species (including 63 introduced species) belonging to 352 genera and 87 families. The degree of endemism of plant species in the Thar desert is 6.4 percent, relatively higher than that of the Sahara desert, which is very significant for conservationists to envisage. The advent and development of computer technology for digitization and database management, coupled with the rapidly increasing importance of biodiversity conservation, resulted in the emergence of biodiversity informatics as a discipline of basic science with multiple applications. Aichi Target 19, an outcome of the Convention on Biological Diversity (CBD), specifically mandates the development of an advanced and shared biodiversity knowledge base. Information on species distributions in space is the crux of effective management of biodiversity in a rapidly changing world. The efficiency of biodiversity management is being increased rapidly by various stakeholders, such as researchers, policymakers, and funding agencies, through the knowledge and application of biodiversity informatics. Herbarium specimens are a vital repository for biodiversity conservation, especially in a climate change scenario; the digitization process usually aims to improve access and to preserve delicate specimens, in doing so creating large sets of images as part of the existing repository, an arid plant information facility for long-term future usage. Leaf characters are important for describing taxa and distinguishing between them, and they can be measured from herbarium specimens as well.
As part of this activity, laminar characterization (leaves being among the most important characters in assessing climate change impact) initially resulted in the classification of more than a thousand collections belonging to ten families: Acanthaceae, Aizoaceae, Amaranthaceae, Asclepiadaceae, Anacardiaceae, Apocynaceae, Asteraceae, Aristolochiaceae, Burseraceae, and Bignoniaceae. Taxonomic diversity indices have also been worked out, this being one of the important domains of biodiversity informatics approaches. The digitization process also encompasses workflows that incorporate automated systems, enabling the digitization effort to be expanded and sped up. The digitization workflows are built on a modular system with the potential to be scaled up, as they are being developed with a geo-referencing tool and additional quality control elements, finally placing specimen images and data into a fully searchable, web-accessible database. Our effort in this paper is to elucidate the role of biodiversity informatics (BI) and to present the ongoing development of a database of the existing botanical collection in the institute repository. This effort is expected to form part of various global initiatives toward an effective biodiversity information facility. It will enable access to plant biodiversity data that are fit for use by scientists and decision makers working on biodiversity conservation and sustainable development in the region and in iso-climatic situations of the world.
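As an illustration of the taxonomic diversity indices mentioned above, a minimal sketch of the Shannon index is given below; the per-family specimen counts are hypothetical and only demonstrate the calculation.

```python
from math import log

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxon abundances."""
    total = sum(counts)
    return -sum((n / total) * log(n / total) for n in counts if n > 0)

# Hypothetical specimen counts per family in a digitized collection
family_counts = {
    "Acanthaceae": 120,
    "Amaranthaceae": 95,
    "Asteraceae": 210,
    "Apocynaceae": 60,
}
h = shannon_index(list(family_counts.values()))  # ≈ 1.29 for these counts
```

Higher H' indicates that specimens are spread more evenly across more taxa; a collection dominated by a single family would score near zero.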

Keywords: biodiversity informatics, climate change, digitization, herbarium, laminar characters, web accessible interface

Procedia PDF Downloads 203
233 Diffusion MRI: Clinical Application in Radiotherapy Planning of Intracranial Pathology

Authors: Pomozova Kseniia, Gorlachev Gennadiy, Chernyaev Aleksandr, Golanov Andrey

Abstract:

In clinical practice, and especially in stereotactic radiosurgery planning, the significance of diffusion-weighted imaging (DWI) is growing. This calls for software capable of quickly processing and reliably visualizing diffusion data, equipped with tools for its analysis in terms of different tasks. We are developing the «MRDiffusionImaging» software in standard C++. The domain logic has been moved to separate class libraries and can be used on various platforms. The user interface is built on Windows WPF (Windows Presentation Foundation), a technology for building Windows applications with access to all components of the .NET 5 or .NET Framework platform ecosystem. One of its important features is the use of a declarative markup language, XAML (eXtensible Application Markup Language), with which one can conveniently create, initialize, and set properties of objects with hierarchical relationships. Graphics are rendered using the DirectX environment. The MRDiffusionImaging software package has been implemented for processing diffusion magnetic resonance imaging (dMRI), allowing images to be loaded and viewed sorted by series. An algorithm for "masking" dMRI series based on T2-weighted images was developed using a deformable surface model to exclude tissues not related to the area of interest from the analysis. An algorithm for distortion correction using deformable image registration based on the autocorrelation of local structure has been developed. The maximum voxel dimension was 1.03 ± 0.12 mm. In an elementary volume of the brain, the diffusion tensor is geometrically interpreted as an ellipsoid, which is an isosurface of the probability density of a molecule's diffusion. For the first time, non-parametric intensity distributions, neighborhood correlations, and inhomogeneities are combined in one algorithm for segmentation of white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF).
A tool for calculating mean diffusivity and fractional anisotropy has been created, on the basis of which quantitative maps can be built for solving various clinical problems. Functionality has been created that allows clustering and segmenting images to individualize the clinical volume of radiation treatment and further assess the response (median Dice score = 0.963 ± 0.137). White matter tracts of the brain were visualized using two algorithms: a deterministic one (fiber assignment by continuous tracking) and a probabilistic one using the Hough transform. The proposed algorithms test candidate curves in each voxel, assigning to each one a score computed from the diffusion data, and then select the curves with the highest scores as the potential anatomical connections. White matter fibers were visualized using the Hough transform tractography algorithm. In the context of functional radiosurgery, it is possible to reduce the volume of the internal capsule receiving 12 Gy from 0.402 cc to 0.254 cc. «MRDiffusionImaging» will improve the efficiency and accuracy of diagnostics and stereotactic radiotherapy of intracranial pathology. We are developing software with integrated, intuitive support for processing and analysis of diffusion data and for its inclusion in radiotherapy planning and the evaluation of treatment results.
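The mean diffusivity and fractional anisotropy maps described above are derived per voxel from the eigenvalues of the diffusion tensor; a minimal sketch of the standard formulas follows (the eigenvalues shown are hypothetical, chosen to resemble a white-matter voxel).

```python
from math import sqrt

def mean_diffusivity(eigs):
    """MD: the average of the three diffusion-tensor eigenvalues."""
    return sum(eigs) / 3.0

def fractional_anisotropy(eigs):
    """FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||, in [0, 1]."""
    md = mean_diffusivity(eigs)
    num = sum((lam - md) ** 2 for lam in eigs)
    den = sum(lam ** 2 for lam in eigs)
    return sqrt(1.5 * num / den)

# Hypothetical eigenvalues (in units of 1e-3 mm^2/s) for one voxel
lams = [1.7, 0.3, 0.2]
fa = fractional_anisotropy(lams)  # ≈ 0.84: strongly anisotropic, tract-like
md = mean_diffusivity(lams)       # ≈ 0.73
```

FA near 0 corresponds to isotropic diffusion (e.g., CSF), while values approaching 1 indicate the highly directional diffusion characteristic of coherent white-matter tracts.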

Keywords: diffusion-weighted imaging, medical imaging, stereotactic radiosurgery, tractography

Procedia PDF Downloads 53