Search results for: quality of higher education
362 Testing Two Actors Contextual Interaction Theory in a Multi Actors Context: Case of COVID-19 Disease Prevention and Control Policy
Authors: Muhammad Fayyaz Nazir, Ellen Wayenberg, Shahzadaah Faahed Qureshi
Abstract:
Introduction: The study draws on the constructs of Contextual Interaction Theory (CIT) to explore the role of policy actors in implementing the COVID-19 Disease Prevention and Control (DP&C) Policy. It analyzes healthcare workers' contextual factors, such as cognition, motives, and resources, and their interactions in implementing Social Distancing (SD). In this way, we test a two-actor policy implementation theory, the CIT, in a three-actor context. Methods: Data were collected through document analysis and semi-structured interviews. Following a qualitative study design, interviews with questions on cognition, motives, and resources were conducted with the healthcare workers involved in implementing SD in the local context of Multan, Pakistan. The possible interactions resulting from the contextual factors of the policy actors (healthcare workers) were identified through a framework analysis protocol guided by CIT and supported by the trustworthiness criterion and data saturation. Results: The inquiry resulted in theory application, addition, and enrichment. The theoretical application in the three-actor context illustrates the different levels of motives, cognition, and resources of healthcare workers: senior administrators, managers, and healthcare professionals. The senior administrators working in the National Command and Operations Center (NCOC), Provincial Technical Committees (PTCs), and District COVID Teams (DCTs) played their role with high motivation; they were fully informed about the policy and moderately resourceful. The policy implementers, healthcare managers working on implementing SD within their respective hospitals, also played their role with high motivation and were fully informed about the policy; however, they lacked the resources required to implement SD. The target medical and allied healthcare professionals were moderately motivated but lacked resources and information. The interaction resulted in cooperation and in the need for learning to manage future healthcare crises; however, the lack of resources created opposition to the implementation of SD. Objectives of the Study: The study aimed to apply a two-actor theory in a multi-actor context. We took this as an opportunity to test the theory qualitatively in the novel situation of the COVID-19 pandemic and to pave the way for its quantitative application by designing a survey instrument, so that implementation researchers can apply CIT through multivariate analyses or higher-order statistical modeling. Conclusion: Applying a two-actor implementation theory to explore a complex case of healthcare intervention in a three-actor context is, to the best of our knowledge, work that has not been done before. The work will therefore contribute to policy implementation studies by applying, extending, and enriching an implementation theory in a novel case of the COVID-19 pandemic, ultimately filling a gap in the implementation literature. Policy institutions and other low- and middle-income countries can learn from this research and improve SD implementation by working on the variables with weak significance levels.
Keywords: COVID-19, disease prevention and control policy, implementation, policy actors, social distancing
Procedia PDF Downloads 58
361 Edible Active Antimicrobial Coatings onto Plastic-Based Laminates and Its Performance Assessment on the Shelf Life of Vacuum Packaged Beef Steaks
Authors: Andrey A. Tyuftin, David Clarke, Malco C. Cruz-Romero, Declan Bolton, Seamus Fanning, Shashi K. Pankaj, Carmen Bueno-Ferrer, Patrick J. Cullen, Joe P. Kerry
Abstract:
Prolonging shelf life is essential in order to address issues such as supplier demands across continents, economic profit, customer satisfaction, and the reduction of food waste. Smart packaging solutions in the form of naturally sourced, antimicrobially active packaging may be a solution to these and other issues. A gelatin film-forming solution with added naturally sourced antimicrobials is a promising tool for active smart packaging. The objective of this study was to coat a conventional hydrophobic plastic packaging material with a hydrophilic antimicrobial active beef gelatin coating and to conduct shelf-life trials on beef sub-primal cuts. The minimum inhibitory concentrations (MIC) of caprylic acid sodium salt (SO) and of the commercially available Auranta FV (AFV) (a bitter orange extract with a mixture of nutritive organic acids) were found to be 1% and 1.5%, respectively, against the bacterial strains Bacillus cereus, Pseudomonas fluorescens, Escherichia coli, Staphylococcus aureus and against aerobic and anaerobic beef microflora. Therefore, SO or AFV was incorporated into the beef gelatin film-forming solution at twice the MIC, and the solution was coated onto a conventional LDPE/PA plastic film on the inner, cold-plasma-treated polyethylene surface. Beef samples were vacuum packed in this material, stored under chilled conditions, and sampled at weekly intervals during a 42-day shelf-life study. No significant differences (p < 0.05) in cook loss were observed among the different treatments compared to control samples until day 29; only for the AFV-coated beef sample was it 3% higher (37.3%) than the control (34.4%) on day 36. The antimicrobial films were found not to protect the beef against discoloration. SO-containing packages significantly (p < 0.05) reduced total viable bacterial counts (TVC) compared to the control and AFV samples until day 35. No significant reduction in TVC was observed between the SO and AFV films on day 42, but a significant difference was observed compared to control samples, with a 1.40 log reduction in bacteria on day 42. AFV films significantly (p < 0.05) reduced TVC compared to control samples from day 14 until day 42. Control samples reached the set value of 7 log CFU/g on day 27 of testing; AFV films did not reach this set limit until day 35, and SO films until day 42 of testing. The antimicrobial AFV- and SO-coated films thus significantly prolonged the shelf life of beef steaks by 33% or 55% (7 and 14 days, respectively) compared to control film samples. It is concluded that antimicrobial coated films were successfully developed by coating the inner polyethylene layer of conventional LDPE/PA laminated films after plasma surface treatment. The results indicated that the use of antimicrobial active packaging coated with SO or AFV significantly (p < 0.05) increased the shelf life of the beef sub-primals. Overall, AFV- or SO-containing gelatin coatings have the potential to be used as effective antimicrobials for active packaging applications for muscle-based food products.
Keywords: active packaging, antimicrobials, edible coatings, food packaging, gelatin films, meat science
Procedia PDF Downloads 303
360 Cost-Conscious Treatment of Basal Cell Carcinoma
Authors: Palak V. Patel, Jessica Pixley, Steven R. Feldman
Abstract:
Introduction: Basal cell carcinoma (BCC) is the most common skin cancer worldwide and requires substantial resources to treat. When choosing between indicated therapies, providers consider their associated adverse effects, efficacy, cosmesis, and function preservation. The patient’s tumor burden, infiltrative risk, and risk of tumor recurrence are also considered. Treatment cost is often left out of these discussions. This can lead to financial toxicity, which describes the harm and quality-of-life reductions inflicted by high care costs. Methods: We studied the guidelines set forth by the American Academy of Dermatology for the treatment of BCC. A PubMed literature search was conducted to identify the costs of each recommended therapy. We discuss costs alongside treatment efficacy and side-effect profiles. Results: Surgical treatment for BCC can be cost-effective if the appropriate treatment is selected for the presenting tumor. Curettage and electrodesiccation can be used in low-grade, low-recurrence tumors in aesthetically unimportant areas. The benefits of cost-conscious care are not likely to be outweighed by the risks of poor cosmesis or tumor return ($471 for BCC of the cheek). When tumor burden is limited, MMS offers better cure rates and lower recurrence rates than surgical excision, and at comparable cost (MMS $1,263; SE $949). Surgical excision with permanent sections may be indicated when tumor burden is more extensive or if molecular testing is necessary. The utility of surgical excision with frozen sections, which costs substantially more than MMS without comparable outcomes, is less clear (SE with frozen sections $2,334-$3,085). Less data exist on non-surgical treatments for BCC. These techniques cost less, but recurrence risk is high. Side effects of nonsurgical treatment are limited to local skin reactions, and cosmesis is good. Cryotherapy, 5-FU, and MAL-PDT are all more affordable than surgery, but high recurrence rates increase the risk of secondary financial and psychosocial burden (recurrence rates 21-39%; cost $100-$270). Radiation therapy offers better clearance rates than other nonsurgical treatments but is associated with similar recurrence rates and a significantly larger financial burden ($2,591-$3,460 for BCC of the cheek). Treatments for advanced or metastatic BCC are extremely costly, but few patients require their use, and the societal cost burden remains low. Vismodegib and sonidegib have good response rates but substantial side effects, and therapy should be combined with multidisciplinary care and palliative measures. Expert review has found sonidegib to be the less expensive and more efficacious option (vismodegib $128,358; sonidegib $122,579). Platinum therapy, while not FDA-approved, is also effective but expensive (~$91,435). Immunotherapy offers a new line of treatment in patients intolerant of hedgehog inhibitors ($683,061). Conclusion: Dermatologists working within resource-compressed practices and with resource-limited patients must prudently manage the healthcare dollar. Surgical therapies for BCC offer the lowest risk of recurrence at the most reasonable cost. Non-surgical therapies are more affordable, but high recurrence rates increase the risk of secondary financial and psychosocial burdens. Treatments for advanced BCC are incredibly costly, but the low incidence means the overall cost to the system is low.
Keywords: nonmelanoma skin cancer, basal cell skin cancer, squamous cell skin cancer, cost of care
Procedia PDF Downloads 124
359 The Role of Group Interaction and Managers’ Risk-Willingness for Business Model Innovation Decisions: A Thematic Analysis
Authors: Sarah Müller-Sägebrecht
Abstract:
Today’s volatile environment challenges executives to make the right strategic decisions to gain sustainable success. Entrepreneurship scholars postulate mainly positive effects of environmental changes on entrepreneurial behavior, such as the development of new business opportunities, the promotion of ingenuity, and the filling of resource voids. A strategic solution approach to overcoming threatening environmental changes and catching new business opportunities is business model innovation (BMI). Although this research stream has gained importance in the last decade, BMI research is still insufficient; in particular, BMI barriers, such as inefficient strategic decision-making processes, need to be identified. Strategic decisions strongly impact an organization’s future and are therefore usually made in groups. Although groups draw on a more extensive information base than single individuals, group-interaction effects can influence the decision-making process in favorable but also unfavorable ways. Decisions are characterized by uncertainty and risk, whose intensity is perceived differently by each individual, and individual risk-willingness influences which option a person chooses. The special nature of strategic decisions, such as those in BMI processes, is that they are not made individually but in groups due to their broad organizational scope. These groups consist of different personalities whose individual risk-willingness can vary considerably. It is known from group decision theory that these individuals influence each other, which is observable in different group-interaction effects. The following research questions arise: i) How does group interaction shape BMI decision-making from the managers’ perspective? ii) What are the potential interrelations among managers’ risk-willingness, group biases, and BMI decision-making? After 26 in-depth interviews with executives from the manufacturing industry, the applied Gioia methodology revealed the following results: i) Risk-averse decision-makers have an increased need to be guided by facts. The more information available to them, the lower they perceive uncertainty to be and the more willing they are to pursue a specific decision option. However, the results also show that social interaction does not change individual risk-willingness in the decision-making process. ii) Generally, it could be observed that during BMI decisions, group interaction is primarily used to increase the group’s information base for making good decisions, and less for social exchange. Further, decision-makers mainly focus on information available to all decision-makers in the team and less on personal knowledge. This work contributes to the strategic decision-making literature in two ways. First, it gives insights into how group-interaction effects influence an organization’s strategic BMI decision-making. Second, it enriches risk-management research by highlighting how individual risk-willingness impacts organizational strategic decision-making. To date, BMI research has held that risk aversion is an internal BMI barrier. This study makes clear that it is not risk aversion itself that inhibits BMI; instead, it is the lack of information that prevents risk-averse decision-makers from choosing a riskier option. At the same time, the results show that risk-averse decision-makers are not easily carried away by the higher risk-willingness of their team members; instead, they use social interaction to gather missing information. Therefore, executives need to provide sufficient information to all decision-makers in order to catch promising business opportunities.
Keywords: business model innovation, cognitive biases, group-interaction effects, strategic decision-making, risk-willingness
Procedia PDF Downloads 78
358 The Return of the Rejected Kings: A Comparative Study of Governance and Procedures of Standards Development Organizations under the Theory of Private Ordering
Authors: Olia Kanevskaia
Abstract:
Standardization has been in the limelight of numerous academic studies. Typically described as ‘any set of technical specifications that either provides or is intended to provide a common design for a product or process’, standards not only set quality benchmarks for products and services but also spur competition and innovation, resulting in advantages for manufacturers and consumers. Their contribution to globalization and technological advancement is especially crucial in the Information and Communication Technology (ICT) and telecommunications sector, which is also characterized by weaker state regulation and expert-based rule-making. Most of the standards developed in this area are interoperability standards, which allow technological devices to establish ‘invisible communications’ and ensure their compatibility and proper functioning. This type of standard supports a large share of our daily activities, ranging from traffic coordination by traffic lights to connection to Wi-Fi networks, transmission of data via Bluetooth or USB, and building the network architecture for the Internet of Things (IoT). A large share of ICT standards is developed in specialized voluntary platforms, commonly referred to as Standards Development Organizations (SDOs), which gather experts from various industry sectors, private enterprises, governmental agencies, and academia. The institutional architecture of these bodies can vary from semi-public bodies, such as the European Telecommunications Standards Institute (ETSI), to industry-driven consortia, such as the Internet Engineering Task Force (IETF). The past decades witnessed a significant shift of standard setting to these institutions: while operating independently of state regulation, they offer a rather informal setting, which enables fast-paced standardization and places technical supremacy and flexibility of standards above other considerations. Although technical norms and specifications developed by such nongovernmental platforms are not binding, they appear to create significant regulatory impact. In the United States (US), private voluntary standards can be used by regulators to achieve their policy objectives; in the European Union (EU), compliance with harmonized standards developed by the voluntary European Standards Organizations (ESOs) can grant a product a free-movement pass. Moreover, standards can de facto manage the functioning of the market when other regulatory alternatives are not available. Hence, by establishing (potentially) mandatory norms, SDOs assume regulatory functions commonly exercised by states and shape their own legal order. The purpose of this paper is threefold: first, it attempts to shed some light on SDOs’ institutional architecture, focusing on private, industry-driven platforms and comparing their regulatory frameworks with those of formal organizations. Drawing upon the relevant scholarship, the paper then discusses the extent to which the formulation of technological standards within SDOs constitutes a private legal order operating in the shadow of governmental regulation. Ultimately, this contribution seeks to advise whether state intervention in industry-driven standard setting is desirable, and whether the increasing regulatory importance of SDOs should be addressed in legislation on standardization.
Keywords: private order, standardization, standard-setting organizations, transnational law
Procedia PDF Downloads 163
357 Exploring Antimicrobial Resistance in the Lung Microbial Community Using Unsupervised Machine Learning
Authors: Camilo Cerda Sarabia, Fernanda Bravo Cornejo, Diego Santibanez Oyarce, Hugo Osses Prado, Esteban Gómez Terán, Belén Diaz Diaz, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
Antimicrobial resistance (AMR) represents a significant and rapidly escalating global health threat. Projections estimate that by 2050, AMR infections could claim up to 10 million lives annually. Respiratory infections, in particular, pose a severe risk not only to individual patients but also to the broader public health system. Despite the alarming rise in resistant respiratory infections, AMR within the lung microbiome (microbial community) remains underexplored and poorly characterized. The lungs, as a complex and dynamic microbial environment, host diverse communities of microorganisms whose interactions and resistance mechanisms are not fully understood. Unlike studies that focus on individual genomes, analyzing the entire microbiome provides a comprehensive perspective on microbial interactions, resistance gene transfer, and community dynamics, which are crucial for understanding AMR. However, this holistic approach introduces significant computational challenges and exposes the limitations of traditional analytical methods, such as the difficulty of identifying AMR determinants. Machine learning has emerged as a powerful tool to overcome these challenges, offering the ability to analyze complex genomic data and uncover novel insights into AMR that might be overlooked by conventional approaches. This study investigates microbial resistance within the lung microbiome using unsupervised machine learning approaches to uncover resistance patterns and potential clinical associations. We downloaded and selected lung microbiome data from HumanMetagenomeDB based on metadata characteristics such as relevant clinical information, patient demographics, environmental factors, and sample collection methods. The metadata were further complemented by details on antibiotic usage, disease status, and other relevant descriptions. The sequencing data underwent stringent quality control, followed by functional profiling focused on identifying resistance genes through specialized databases such as the Comprehensive Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. Subsequent analyses employed unsupervised machine learning techniques to unravel the structure and diversity of resistomes in the microbial community. Clustering methods such as K-Means and hierarchical clustering enabled the identification of sample groups based on their resistance gene profiles. The work was implemented in Python, leveraging a range of libraries such as Biopython for biological sequence manipulation, NumPy for numerical operations, scikit-learn for machine learning, Matplotlib for data visualization, and pandas for data manipulation. The findings from this study provide insights into the distribution and dynamics of antimicrobial resistance within the lung microbiome. By leveraging unsupervised machine learning, we identified novel resistance patterns and potential drivers within the microbial community.
Keywords: antibiotic resistance, microbial community, unsupervised machine learning, AMR gene sequences
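The abstract names K-Means and hierarchical clustering over per-sample resistance gene profiles; a minimal sketch of that step, using the scikit-learn and pandas libraries the authors mention, might look as follows. The matrix here is synthetic and all sample and gene names are illustrative assumptions, standing in for the samples-by-genes abundance matrix produced by the CARD-based profiling.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score

# Synthetic stand-in for a samples x resistance-genes abundance matrix
# (in the study this would come from CARD-based functional profiling).
rng = np.random.default_rng(42)
profiles = pd.DataFrame(
    rng.poisson(lam=3, size=(60, 25)).astype(float),
    index=[f"sample_{i}" for i in range(60)],
    columns=[f"amr_gene_{j}" for j in range(25)],
)

# Standardize gene counts so clustering is not dominated by abundance scale.
X = StandardScaler().fit_transform(profiles)

# K-Means: pick the number of clusters by silhouette score over a small range.
best_k, best_score = None, -1.0
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score
print(f"K-Means: best k={best_k} (silhouette={best_score:.2f})")

# Hierarchical (Ward) clustering with the same cluster count for comparison.
profiles["kmeans"] = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(X)
profiles["hierarchical"] = AgglomerativeClustering(n_clusters=best_k, linkage="ward").fit_predict(X)
print(profiles[["kmeans", "hierarchical"]].head())
```

Agreement between the two label columns is one quick check that the groupings reflect structure in the resistome rather than an artifact of a single algorithm.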
Procedia PDF Downloads 24
356 Functions and Challenges of New County-Based Regional Plan in Taiwan
Authors: Yu-Hsin Tsai
Abstract:
A new, mandated county regional plan system has been initiated nationwide in Taiwan since 2010, with its role situated in between the policy-led cross-county regional plan and the blueprint-led city plan. This new regional plan contains both urban and rural areas in one single plan, which provides a more complete planning territory, i.e., the city region within the county’s jurisdiction, to be executed and managed effectively by the county government. However, the full picture of its functions and characteristics, compared with other levels of plans, still seems unclear, as does the question of which planning goals and issues can be most appropriately dealt with at this spatial scale. In addition, the extent to which sustainability ideals and measures to cope with climate change have been incorporated is unclear. Based on the above issues, this study aims to clarify the roles of the county regional plan, to analyze the extent to which its measures address sustainability, climate change, and forecasted population decline, and to identify the success factors and issues faced in the planning process. The methodology applied includes literature review, plan quality evaluation, and interviews with officials of the central and local governments and with the urban planners involved, for all 23 counties in Taiwan. The preliminary research results show, first, that growth-management-related policies have been widely implemented and are expected to be effective, including incorporating resource capacity to determine the maximum population for the city region as a whole, developing an overall urban growth boundary vision for the whole city region, prioritizing infill development, and prioritizing the use of building land within urbanized areas over rural areas to cope with urban growth. Second, planning-oriented zoning is adopted in urban areas, while demand-oriented planning permission is applied in rural areas with designated plans. Third, public participation has evolved to the next level, overseeing all of the government’s planning and review processes, owing to decreasing trust in government and the development of public forums on the internet. Next, fertile agricultural land is preserved to maintain the food self-sufficiency goal for national security reasons. More adaptation-based than mitigation-based methods have been applied to cope with global climate change. Finally, better land use and transportation planning is promoted, in terms of avoiding the development of rail transit stations and corridors in rural areas. Even though many promising, prompt measures have been adopted, challenges remain. First, overall urban density, which likely affects the success of the UGB and the use of rural agricultural land, has not been incorporated, possibly due to implementation difficulties. Second, land-use-related measures for mitigating climate change seem less clear and are hence less employed. Smart decline has not drawn enough attention as a way to cope with the predicted population decrease in the next decade. Then, some reluctance of county governments to implement the county regional plan can be vaguely observed, possibly because limits have been set on further development of agricultural land and sensitive areas. Finally, resolving the issue of existing illegal factories on agricultural land remains the most challenging dilemma.
Keywords: city region plan, sustainability, global climate change, growth management
Procedia PDF Downloads 349
355 The Senior Traveler Market as a Competitive Advantage for the Luxury Hotel Sector in the UK Post-Pandemic
Authors: Feyi Olorunshola
Abstract:
Over the last few years, the senior travel market has been noted for its potential in the wider tourism industry. The tourism sector includes hotels and hospitality, travel, transportation, and several other subdivisions that make it economically viable. In particular, hotels attract a substantial part of tourism expenditure, as suitable accommodation for relaxation, dining, entertainment, and so on is paramount to people's decision-making when they plan to travel. The global retail value of the hotel sector as of 2018 was significant for tourism. Yet despite the hotel sector's importance to the tourism industry at large, very few empirical studies are available that establish how this sector can leverage the senior demographic to achieve competitive advantage. Studies on the mature market have predominantly focused on destination tourism, with limited investigation of the hotel, which makes a significant contribution to tourism. Also, although several scholarly studies have demonstrated the importance of the senior travel market to the hotel sector, there is very little empirical research in the field exploring the driving factors that will become the accepted new normal for this niche segment post-pandemic. Given that hotels already operate in a highly saturated business environment, and that on top of this pre-existing challenge the ongoing global health outbreak has put the sector in a vulnerable position, the hotel, especially the full-service luxury category, must evolve rapidly if it is to survive in the current business environment. Hotels can no longer rely on corporate travelers to generate higher revenue: since the unprecedented onset of the pandemic in 2020, many organizations have adopted a different approach of conducting their businesses online, and hotels therefore need to anticipate a significant drop in business travellers. However, the rooms and the rest of the facilities must be occupied to keep the business operating. The way forward for the hotel lies in the leisure sector, and the question now is which demographic of travelers to focus on; in this case, seniors have repeatedly been recognized as a lucrative market because of increased discretionary income, availability of time, and global population trends. To achieve the study objectives, a mixed-methods approach will be utilized, drawing on both qualitative (netnography) and quantitative (survey) methods, cognitive and decision-making theories (means-end chain), and competitive theories to identify the salient drivers explaining seniors' hotel choice and its influence on their decision-making. The target population is repeat senior travelers aged 65 years and over who are UK residents, and those from the top tourist markets to the UK (USA, Germany, and France). Structural equation modelling will be employed to analyze the datasets. The theoretical implication is the development of new concepts using a robust research design, as well as the advancement of existing frameworks in hotel studies. Practically, the study will provide hotel management with up-to-date information to design competitive marketing strategies and activities to target the mature market post-pandemic and over the long term.
Keywords: competitive advantage, COVID-19, full-service hotel, five-star, luxury hotels
Procedia PDF Downloads 122
354 A Computer-Aided System for Tooth Shade Matching
Authors: Zuhal Kurt, Meral Kurt, Bilge T. Bal, Kemal Ozkan
Abstract:
Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, the shade matching procedure was implemented through dentists' visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers, and digital image analysis systems are used for instrumental shade selection. Instrumental devices have the advantage that readings are quantifiable and can be obtained more rapidly, simply, objectively, and precisely. However, these devices have noticeable drawbacks. For example, the translucent structure and irregular surfaces of teeth lead to measurement defects with these devices. Also, inconsistencies may arise between the results acquired by devices with different measurement principles. It is therefore necessary to search for new methods for the dental shade matching process. One computer-aided system, the digital camera, has developed rapidly up to today. Currently, advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging. This procedure is much cheaper than the use of traditional contact-type color measurement devices. Digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken of teeth show the morphology and color texture of the teeth. In recent decades, a new method was recommended that compares the color of shade tabs captured by a digital camera using color features. This method showed that visual and computer-aided shade matching systems should be used in combination. Many recent feature extraction techniques are based on shape description and do not use color information. However, color is mostly experienced as an essential property in depicting and extracting features from objects in the world around us. When local feature descriptors are extended with color information, by concatenating a color descriptor with the shape descriptor, the resulting descriptor is effective for visual object recognition and classification tasks. Since the color descriptor used in combination with a shape descriptor does not need to contain any spatial information, local histograms can be used. This local color histogram method remains reliable under photometric changes, geometric changes, and variations in image quality. Accordingly, color-based local feature extraction methods are used to extract features, and the Scale-Invariant Feature Transform (SIFT) descriptor is used for shape description in the proposed method. After the combination of these descriptors, the state-of-the-art descriptor known as Color-SIFT is used in this study. Finally, the image feature vectors obtained from a quantization algorithm are fed to classifiers such as K-Nearest Neighbor (KNN), Naive Bayes, or Support Vector Machines (SVM) to determine the label(s) of the visual object category or the match. In this study, SVMs are used as classifiers for color determination and shade matching. Finally, the experimental results of this method will be compared with other recent studies. It is concluded from the study that the proposed method is a remarkable development in computer-aided tooth shade determination systems.
Keywords: classifiers, color determination, computer-aided system, tooth shade matching, feature extraction
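As a rough illustration of the pipeline the abstract describes (SIFT keypoints, per-keypoint color histograms concatenated into a Color-SIFT-style descriptor, vector quantization into a bag-of-words histogram, and an SVM classifier), a minimal sketch using OpenCV and scikit-learn might look as follows. The synthetic images, descriptor construction, and all parameter values are simplified assumptions for demonstration, not the authors' exact implementation.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fake_tooth(base_level):
    """Synthetic textured patch standing in for a tooth/shade-tab image."""
    base = np.full((128, 128, 3), base_level, np.uint8)
    noise = rng.integers(0, 60, (128, 128, 3), dtype=np.uint8)
    return cv2.add(base, noise)

def color_sift_descriptors(image_bgr):
    """SIFT shape descriptors concatenated with local color histograms."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    keypoints, shape_desc = cv2.SIFT_create().detectAndCompute(gray, None)
    if shape_desc is None:
        return np.empty((0, 128 + 24))
    descs = []
    for kp, sd in zip(keypoints, shape_desc):
        x, y, r = int(kp.pt[0]), int(kp.pt[1]), max(int(kp.size), 4)
        patch = image_bgr[max(y - r, 0):y + r, max(x - r, 0):x + r]
        # 8-bin histogram per BGR channel, normalized -> 24-dim color part
        hist = [cv2.calcHist([patch], [c], None, [8], [0, 256]).ravel() for c in range(3)]
        color = np.concatenate(hist)
        color /= color.sum() + 1e-8
        descs.append(np.concatenate([sd, color]))
    return np.array(descs)

def bow_histogram(descs, codebook):
    """Quantize descriptors against a learned codebook into a fixed-length vector."""
    if len(descs) == 0:
        return np.zeros(codebook.n_clusters)
    hist = np.bincount(codebook.predict(descs), minlength=codebook.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)

# Toy dataset: three brightness levels standing in for shade labels A3/A2/A1.
train_images = [fake_tooth(level) for level in (150, 180, 210) for _ in range(5)]
train_labels = ["A3"] * 5 + ["A2"] * 5 + ["A1"] * 5
test_images = [fake_tooth(180)]

all_desc = np.vstack([color_sift_descriptors(img) for img in train_images])
codebook = KMeans(n_clusters=64, n_init=10, random_state=0).fit(all_desc)

X_train = np.array([bow_histogram(color_sift_descriptors(img), codebook) for img in train_images])
clf = SVC(kernel="rbf", C=10.0).fit(X_train, train_labels)
X_test = np.array([bow_histogram(color_sift_descriptors(img), codebook) for img in test_images])
print("predicted shade:", clf.predict(X_test))
```

In a real system the toy images would be replaced with calibrated photographs of shade tabs and teeth, and the codebook size, histogram bins, and SVM parameters would be tuned on held-out data.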
Procedia PDF Downloads 444
353 Phenolic Acids of Plant Origin as Promising Compounds for Elaboration of Antiviral Drugs against Influenza
Authors: Vladimir Berezin, Aizhan Turmagambetova, Andrey Bogoyavlenskiy, Pavel Alexyuk, Madina Alexyuk, Irina Zaitceva, Nadezhda Sokolova
Abstract:
Introduction: Influenza viruses infect approximately 5% to 10% of the global human population annually, resulting in serious social and economic damage. Vaccination and etiotropic antiviral drugs are used for the prevention and treatment of influenza. Vaccination is important; however, antiviral drugs represent the second line of defense against new emerging influenza virus strains for which vaccines may be unsuccessful. A significant drawback of commercial synthetic anti-flu drugs, though, is the appearance of drug-resistant influenza virus strains. Therefore, the search for and development of new anti-flu drugs efficient against drug-resistant strains is an important medical problem today. The aim of this work was to study four phenolic acids of plant origin (gallic, syringic, vanillic, and protocatechuic acids) as possible tools for treatment against influenza virus. Methods: The phenolic acids gallic, syringic, vanillic, and protocatechuic were prepared by extraction from plant tissues and purified using high-performance liquid chromatography fractionation. An avian influenza virus, strain A/Tern/South Africa/1/1961 (H5N3), and a human epidemic influenza virus, strain A/Almaty/8/98 (H3N2), resistant to the commercial anti-flu drugs rimantadine and oseltamivir, were used for testing antiviral activity. Viruses were grown in the allantoic cavity of 10-day-old chicken embryos. The chemotherapeutic index (CTI), determined as the ratio of the average toxic concentration of the tested compound (TC₅₀) to the average effective virus-inhibiting concentration (EC₅₀), was used as the criterion of specific antiviral action. Results: The results of the study showed that the structure of the phenolic acids significantly affected their ability to suppress the reproduction of the tested influenza virus strains. The highest antiviral activity among the tested phenolic acids was detected for gallic acid, which contains three hydroxyl groups in the molecule at the C3, C4, and C5 positions. The antiviral activity of gallic acid against the A/H5N3 and A/H3N2 influenza virus strains was higher than that of oseltamivir and rimantadine; gallic acid inhibited almost 100% of the infection activity of both tested viruses. Protocatechuic acid, which possesses two hydroxyl groups (C3 and C4), showed weaker antiviral activity in comparison with gallic acid and inhibited less than 10% of virus infection activity. Syringic acid, which contains two hydroxyl groups (C3 and C5), was able to suppress up to 12% of infection activity. Substitution of two hydroxyl groups by methoxy groups resulted in the complete loss of antiviral activity. Vanillic acid, which differs from protocatechuic acid by the replacement of the C3 hydroxyl group with a methoxy group, was able to suppress about 30% of the infection activity of the tested influenza viruses. Conclusion: For pronounced antiviral activity, the molecule of a phenolic acid must have at least two hydroxyl groups. Replacement of hydroxyl groups with methoxy groups leads to a reduction of antiviral properties. Gallic acid demonstrated high antiviral activity against influenza viruses, including rimantadine- and oseltamivir-resistant strains, and could be used as a potential candidate for the development of an antiviral drug against influenza virus.
Keywords: antiviral activity, influenza virus, drug resistance, phenolic acids
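In display form, the selectivity criterion described in the Methods section is simply the ratio of toxicity to efficacy; the following restatement adds only the standard interpretation that a larger value indicates a wider safety margin:

```latex
\[
\mathrm{CTI} = \frac{\mathrm{TC}_{50}}{\mathrm{EC}_{50}}
\]
% TC50: average concentration toxic to the host cells/embryos;
% EC50: average concentration inhibiting virus reproduction by 50%.
% A larger CTI means efficacy is reached well below toxic doses.
```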
Procedia PDF Downloads 141
352 Dimethyl Fumarate Alleviates Valproic Acid-Induced Autism in Wistar Rats via Activating NRF-2 and Inhibiting NF-κB Pathways
Authors: Sandy Elsayed, Aya Mohamed, Noha Nassar
Abstract:
Introduction: Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by social deficits and repetitive behavior. Multiple studies suggest that oxidative stress and neuroinflammation are key factors in the etiology of ASD and are often associated with worsening of ASD-related behaviors. Nuclear factor erythroid 2-related factor 2 (NRF-2) is a transcription factor that promotes the expression of antioxidant response element genes under oxidative stress. In ASD subjects, decreased expression of NRF-2 in the frontal cortex shifts redox homeostasis towards oxidative stress and results in inflammation, evidenced by elevation of nuclear factor kappa B (NF-κB) transcriptional activity. Dimethyl fumarate (DMF) is an NRF-2 activator used in the treatment of psoriasis and multiple sclerosis. It participates in the transcriptional control of inflammatory factors via inhibition of NF-κB and its downstream targets. This study aimed to investigate the role of DMF in alleviating the cognitive impairments and behavioral deficits associated with ASD through mitigation of oxidative stress and inflammation in the prenatal valproic acid (VPA) rat model of autism. Methods: Pregnant female Wistar rats received a single intraperitoneal injection of VPA (600 mg/kg) to induce autistic-like behavioral and neurobiological alterations in their offspring. Chronic oral gavage of DMF (150 mg/kg/day) started from postnatal day (PND) 24 and continued until PND 62 (39 days). Prenatal VPA exposure elicited autistic behaviors, including decreased social interaction and stereotyped behavior. Social interaction was evaluated using the three-chamber sociability test and calculation of a sociability index (SI), while stereotyped repetitive behavior and anxiety associated with ASD were assessed using the marble burying test (MBT). Biochemical analyses were performed on prefrontal cortex homogenates, including NRF-2 and NF-κB expression. Moreover, inducible nitric oxide synthase (iNOS) gene expression and tumor necrosis factor alpha (TNF-α) protein expression were evaluated as markers of inflammation. Results: Prenatal VPA elicited decreased social interaction, shown by a decreased SI compared to the control group (p < 0.001), and DMF enhanced the SI (p < 0.05). In the MBT, prenatal injection of VPA manifested as stereotyped behavior and an increased number of buried marbles compared to control (p < 0.05), and DMF reduced the anxiety-related behavior in rats exhibiting ASD-like behaviors (p < 0.05). In the prefrontal cortex, NRF-2 expression was downregulated in the prenatal VPA model (p < 0.0001), and DMF reversed this effect (p < 0.0001). The inflammatory transcription factor NF-κB was elevated in the prenatal VPA model (p < 0.0001) and reduced (p < 0.0001) upon NRF-2 activation by DMF. Prenatal VPA rats expressed higher levels of the proinflammatory cytokine TNF-α compared to the control group (p < 0.0001), and DMF reduced them (p < 0.0001). Finally, the gene expression of iNOS was downregulated upon NRF-2 activation by DMF (p < 0.01). Conclusion: This study proposes that DMF is a potential agent that can ameliorate autistic-like changes through NRF-2 activation along with NF-κB downregulation and is therefore a promising novel therapy for ASD.
Keywords: autism spectrum disorders, dimethyl fumarate, neuroinflammation, NRF-2
Procedia PDF Downloads 41
351 Developing a Machine Learning-Based Cost Prediction Model for Construction Projects Using Particle Swarm Optimization
Authors: Soheila Sadeghi
Abstract:
Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction
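For readers unfamiliar with how PSO can optimize an ANN, a minimal sketch of the general idea follows: each particle's position is the flattened weight vector of a small one-hidden-layer network, and the swarm minimizes training MSE. The toy dataset, network size, and PSO coefficients are illustrative assumptions, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for project features (estimates, resource use, progress) and actual costs.
X = rng.random((200, 4))
y = (X @ np.array([0.4, 0.3, 0.2, 0.1]) + 0.05 * rng.standard_normal(200)).reshape(-1, 1)

n_in, n_hidden = X.shape[1], 8
n_w = n_in * n_hidden + n_hidden + n_hidden + 1  # weights + biases of the ANN

def mse(w):
    """Training MSE of the one-hidden-layer ANN encoded by flat vector w."""
    i = 0
    W1 = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden].reshape(n_hidden, 1); i += n_hidden
    b2 = w[i]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

# Standard global-best PSO over the flattened weight vector.
n_particles, iters = 30, 200
inertia, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n_particles, n_w))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_w))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best training MSE:", pbest_f.min())
```

In practice the fitness would be evaluated on a validation split (and reported via RMSE, MAE, and R-squared as in the study) to avoid rewarding overfit weight vectors.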
Procedia PDF Downloads 60
350 Mechanical Properties of Poly(Propylene)-Based Graphene Nanocomposites
Authors: Luiza Melo De Lima, Tito Trindade, Jose M. Oliveira
Abstract:
The development of thermoplastic-based graphene nanocomposites has been of great interest not only to the scientific community but also to different industrial sectors. Due to possible performance improvements and weight reduction, thermoplastic nanocomposites show great promise as a new class of materials. These nanocomposites are of relevance for the automotive industry, namely because the CO2 emission limits imposed by European Commission (EC) regulations can be met without compromising a car's performance, by reducing its weight. Thermoplastic polymers have some advantages over thermosetting polymers, such as higher productivity, lower density, and recyclability. In the automotive industry, for example, poly(propylene) (PP) is a common thermoplastic polymer, representing more than half of the polymeric raw material used in automotive parts. Graphene-based materials (GBM) are potential nanofillers that can improve the properties of polymer matrices at very low loadings. In comparison to other composites, such as fiber-based composites, the weight reduction can positively affect their processing and future applications. However, the properties and performance of GBM/polymer nanocomposites depend on the type of GBM and polymer matrix, the degree of dispersion, and especially the type of interactions between the fillers and the polymer matrix. In order to take advantage of the superior mechanical strength of GBM, strong interfacial strength between GBM and the polymer matrix is required for efficient stress transfer from GBM to the polymer. Thus, chemical compatibilizers and physicochemical modifications have been reported as important tools during the processing of these nanocomposites. In this study, PP-based nanocomposites were obtained by a simple melt blending technique, using a Brabender-type mixer. Graphene nanoplatelets (GnPs) were applied as structural reinforcement. Two compatibilizers were used to improve the interaction between the PP matrix and the GnPs: PP grafted with maleic anhydride (PPgMA) and PPgMA modified with a tertiary amine alcohol (PPgDM). The samples for tensile and Charpy impact tests were obtained by injection molding. The results suggest that the presence of GnPs can increase the mechanical strength of the polymer. However, it was verified that the presence of GnPs can reduce the impact resistance, making the nanocomposites more brittle than neat PP. The incorporation of the compatibilizers increased the impact resistance, suggesting that the compatibilizers enhance the adhesion between PP and GnPs. Compared to neat PP, the increase in Young's modulus of the non-compatibilized nanocomposite demonstrated that GnP incorporation can improve the stiffness of the polymer. This trend can be related to the many physical crosslinking points between the PP matrix and the GnPs. Furthermore, the decrease in strain at yield of PP/GnPs, together with the enhancement of Young's modulus, confirms that GnP incorporation led to an increase in stiffness but a decrease in toughness. Moreover, the results demonstrated that the incorporation of the compatibilizers did not affect the Young's modulus and strain-at-yield results compared to the non-compatibilized nanocomposite. The incorporation of these compatibilizers improved the nanocomposites' mechanical properties compared both to the non-compatibilized nanocomposite and to a PP sample used as reference.
Keywords: graphene nanoplatelets, mechanical properties, melt blending processing, poly(propylene)-based nanocomposites
Procedia PDF Downloads 187
349 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms
Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee
Abstract:
Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate with time because of outdated materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess the condition of the composites and prevent the continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by detecting damage or defects from the static or dynamic responses induced by external loading. A variety of techniques based on detecting changes in the static or dynamic behavior of isotropic structures has been developed over the last two decades. These methods, based on analytical approaches, are limited in their capability to deal with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristic techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA), and neural networks (NN), and have applied these methods promisingly to the field of structural identification. Among them, GAs attract our attention because they do not require a considerable amount of data in advance when dealing with complex problems and, as opposed to classical gradient-based optimization techniques, make a global solution search possible. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of glass fiber-reinforced polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to detect degraded stiffness characteristics. In addition, this study presents a method to detect the fiber property variation of laminated composite plates from the micromechanical point of view. A finite element model is used to study the free vibrations of laminated composite plates with fiber stiffness degradation. In order to solve the inverse problem using the combined method, this study uses only the first mode shapes of the structure for the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences
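The inverse problem sketched above (searching for the parameters of a Gaussian-shaped stiffness degradation whose model frequencies best match the measured ones) can be illustrated with a small GA on a toy spring-mass chain standing in for the ABAQUS finite element model. The chain, the parameter bounds, and the GA settings are illustrative assumptions only, not the study's plate model or its modified bivariate form.

```python
import numpy as np

rng = np.random.default_rng(1)
n_el, k0 = 10, 1000.0                     # toy fixed-free spring-mass chain, unit masses
x = (np.arange(n_el) + 0.5) / n_el        # normalized element positions

def frequencies(params):
    """Natural frequencies of the chain under a Gaussian stiffness degradation."""
    x0, width, depth = params
    k = k0 * (1.0 - depth * np.exp(-((x - x0) ** 2) / (2 * width ** 2)))
    K = np.zeros((n_el, n_el))
    for i in range(n_el):
        K[i, i] += k[i]
        if i + 1 < n_el:
            K[i, i] += k[i + 1]
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    return np.sqrt(np.linalg.eigvalsh(K))    # unit masses, so M = I

true_params = np.array([0.35, 0.10, 0.40])    # "measured" damage to recover
f_measured = frequencies(true_params)

def fitness(p):
    return -np.linalg.norm(frequencies(p) - f_measured)   # maximize = minimize misfit

lo, hi = np.array([0.0, 0.02, 0.0]), np.array([1.0, 0.30, 0.90])
pop = rng.uniform(lo, hi, (40, 3))

for gen in range(150):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[scores.argmax()].copy()
    # binary tournament selection
    a, b = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((scores[a] > scores[b])[:, None], pop[a], pop[b])
    # uniform crossover with a shuffled copy of the parent pool
    mates = parents[rng.permutation(len(parents))]
    mask = rng.random(parents.shape) < 0.5
    children = np.where(mask, parents, mates)
    # Gaussian mutation, clipped to the search bounds
    children += rng.normal(0.0, 0.02, children.shape)
    pop = np.clip(children, lo, hi)
    pop[0] = elite                            # elitism: keep the best so far

best = pop[np.array([fitness(p) for p in pop]).argmax()]
print("recovered (center, width, depth):", best.round(3))
```

In the study's setting, the frequency evaluation inside the fitness function would be an ABAQUS modal analysis of the laminated plate, which is exactly why a derivative-free global search such as a GA is attractive here.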
Procedia PDF Downloads 274
348 On the Limits of Board Diversity: Impact of Network Effect on Director Appointments
Authors: Vijay Marisetty, Poonam Singh
Abstract:
Research on the effect of directors' network connections on investor welfare is inconclusive. Some studies suggest that directors' connections are beneficial in terms of improving earnings information and firm valuation for new investors. On the other hand, adverse effects of directorial networks are also reported, in terms of higher earnings management, option backdating fraud, reduction in firm performance, and weaker board monitoring. From a regulatory perspective, the role of directorial networks in corporate welfare is crucial. Cognizant of the possible ill effects associated with directorial networks, large investors, seeking better representation on boards, are building their own databases of prospective directors who are highly qualified but sourced from outside the highly connected directorial labor market. For instance, following the Dodd-Frank Reform Act, the California Public Employees' Retirement System (CalPERS) initiated a database for registering aspiring and highly qualified directors in order to nominate them for board seats (proxy access). Our paper stems from this background and explores the chances of obtaining directorships for outside directors who lack established network connections. The paper identifies such aspiring directors' information by accessing a unique Indian dataset sourced from an online portal that aims to match the supply of registered aspirants with the growing demand for outside directors in India. The online portal's tie-up with stock exchanges enables firms to access this new pool of directors. Such direct access to the background details of aspiring directors over a period of 10 years allows us to examine the chances of aspiring directors without a corporate network entering the directorial network. Using this resume data on 16,105 aspiring corporate directors in India with no prior board experience in the directorial labor market, the paper analyzes the entry dynamics of the corporate directors' labor market. The database also allows us to investigate the value of corporate networks by comparing non-networked new entrants with incumbent networked directors. The study develops measures of network centrality and network degree based on merit, i.e., networks of individuals belonging to elite educational institutions such as the Indian Institutes of Management (IIM) or the Indian Institutes of Technology (IIT), and based on job or company, i.e., networks of individuals serving in the same company. The paper then measures the impact of these networks on the appointment of first-time directors and on subsequent director appointments. The paper reports the following main results: 1. The likelihood of becoming a corporate director without corporate network strength is only 1 out of 100 aspirants. This is in spite of comparable educational backgrounds and similar durations of corporate experience. 2. Aspiring non-networked directors' elite educational ties help them secure directorships. However, for post-appointment outcomes, their newly acquired corporate network strength overtakes education as the main determinant of subsequent board appointments and compensation. The results thus highlight the limitations of efforts to increase board diversity.
Keywords: aspiring corporate directors, board diversity, director labor market, director networks
Procedia PDF Downloads 312
347 Comparison of Bioelectric and Biomechanical Electromyography Normalization Techniques in Disparate Populations
Authors: Drew Commandeur, Ryan Brodie, Sandra Hundza, Marc Klimstra
Abstract:
The amplitude of raw electromyography (EMG) is affected by recording conditions and often requires normalization to make meaningful comparisons. Bioelectric methods normalize with an EMG signal recorded during a standardized task or from the experimental protocol itself, while biomechanical methods often involve measurements with an additional sensor such as a force transducer. Common bioelectric normalization techniques for treadmill walking include the maximum voluntary isometric contraction (MVIC), the dynamic EMG peak (EMGPeak), and the dynamic EMG mean (EMGMean). There are several concerns with using MVICs to normalize EMG, including poor reliability and potential discomfort. A limitation of bioelectric normalization techniques is that they could misrepresent the absolute magnitude of force generated by the muscle and thus affect the interpretation of EMG between functionally disparate groups. Additionally, methods that normalize to EMG recorded during the task may eliminate some real inter-individual variability due to biological variation. This study compared biomechanical and bioelectric EMG normalization techniques during treadmill walking to assess the impact of the normalization method on the functional interpretation of EMG data. For the biomechanical method, we normalized EMG to a target torque (EMGTS); the bioelectric methods normalized to the mean and to the peak of the signal during the walking task (EMGMean and EMGPeak). The effect of normalization on muscle activation pattern, EMG amplitude, and inter-individual variability was compared between disparate cohorts of OLD (76.6 yrs, N=11) and YOUNG (26.6 yrs, N=11) adults. Participants walked on a treadmill at a self-selected pace while EMG was recorded from the right lower limb. EMG data from the soleus (SOL), medial gastrocnemius (MG), tibialis anterior (TA), vastus lateralis (VL), and biceps femoris (BF) were phase-averaged into 16 bins (phases) representing the gait cycle, with bins 1-10 associated with right stance and bins 11-16 with right swing. Pearson's correlations showed that activation patterns across the gait cycle were similar between all methods, ranging from r = 0.86 to r = 1.00 with p < 0.05. This indicates that each method can characterize the muscle activation pattern during walking. Repeated-measures ANOVA showed a main effect of age in MG for EMGPeak, but no other main effects were observed. Age-by-phase interactions in EMG amplitude between YOUNG and OLD differed across methods, leading to different statistical interpretations: EMGTS normalization characterized the fewest differences (four phases across all five muscles), while EMGMean (11 phases) and EMGPeak (19 phases) showed considerably more differences between cohorts. The second notable finding was that the coefficient of variation, representing inter-individual variability, was greatest for EMGTS and lowest for EMGMean, while EMGPeak was slightly higher than EMGMean for all muscles. This finding supports our expectation that EMGTS normalization retains inter-individual variability, which may be desirable; however, it also suggests that even when large differences are expected, a larger sample size may be required to observe them. Our findings clearly indicate that the interpretation of EMG is highly dependent on the normalization method used, and it is essential to consider the strengths and limitations of each method when drawing conclusions.
Keywords: electromyography, EMG normalization, functional EMG, older adults
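A minimal sketch of the gait-cycle binning and the two bioelectric normalizations described above might look as follows, assuming a rectified, enveloped EMG signal and detected heel-strike indices; the synthetic signal, sampling rate, and event times are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)
fs = 1000                                    # sampling rate (Hz), assumed
emg = np.abs(rng.standard_normal(10 * fs))   # stand-in for a rectified EMG envelope
heel_strikes = np.arange(0, 10 * fs, fs)     # one stride per second, illustrative

def phase_average(signal, events, n_bins=16):
    """Average each gait cycle into n_bins phases, then average across cycles."""
    cycles = []
    for start, stop in zip(events[:-1], events[1:]):
        stride = signal[start:stop]
        edges = np.linspace(0, len(stride), n_bins + 1).astype(int)
        cycles.append([stride[edges[i]:edges[i + 1]].mean() for i in range(n_bins)])
    return np.mean(cycles, axis=0)           # 16-bin ensemble average

profile = phase_average(emg, heel_strikes)

# Bioelectric normalizations: divide by the task mean or the task peak.
emg_mean_norm = profile / profile.mean()     # EMGMean
emg_peak_norm = profile / profile.max()      # EMGPeak

# A biomechanical EMGTS-style normalization would instead divide by the EMG
# amplitude recorded while the participant matched a target torque on a
# dynamometer, e.g. profile / emg_at_target_torque (not available in this toy).
print(np.round(emg_mean_norm, 2))
print(np.round(emg_peak_norm, 2))
```

Note how the EMGMean and EMGPeak denominators come from the walking signal itself, which is precisely why they can absorb real between-subject amplitude differences, whereas the torque-referenced denominator preserves them.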
Procedia PDF Downloads 91
346 Challenges for Reconstruction: A Case Study from 2015 Gorkha, Nepal Earthquake
Authors: Hari K. Adhikari, Keshab Sharma, K. C. Apil
Abstract:
The Gorkha, Nepal earthquake of moment magnitude (Mw) 7.8 hit the central region of Nepal on April 25, 2015, with the epicenter about 77 km northwest of Kathmandu Valley. This paper aims to explore the challenges of reconstruction in the rural earthquake-stricken areas of Nepal. The earthquake significantly affected people's livelihoods and the overall economy of Nepal, causing severe damage and destruction in central Nepal, including the nation's capital. A large part of the earthquake-affected area is difficult to access, with rugged terrain and scattered settlements, which posed unique challenges to reconstruction and rehabilitation on a massive scale. About 800 thousand buildings were affected, leaving eight million people homeless. The challenge of reconstructing up to 800 thousand houses is arduous for Nepal against the background of its turbulent political scenario and weak governance. Although significant actors are involved in the reconstruction process, little appreciable relief has reached the ground, which is reflected in the frustration of affected people. The 2015 Gorkha earthquake is one of the most devastating disasters in the modern history of Nepal. To the best of our knowledge, there is no comprehensive study of post-disaster reconstruction in modern Nepal that integrates the information necessary to deal with the challenges and opportunities of reconstruction. The study was conducted using a qualitative content analysis method. Thirty engineers and ten social mobilizers working on reconstruction, together with more than a hundred local social workers, local party leaders, and earthquake victims, were selected arbitrarily. Information was collected through semi-structured interviews with open-ended questions, focus group discussions, and field notes, with no prior assumptions. The authors also reviewed academic and practitioner literature on the challenges of post-earthquake reconstruction in developing countries, such as the 2001 Gujarat earthquake, the 2005 Kashmir earthquake, the 2003 Bam earthquake, and the 2010 Haiti earthquake, which share very similar building typologies and economic, political, geographical, and geological conditions with Nepal. Secondary data were collected from reports, action plans, and reflection papers of governmental entities, non-governmental organizations, private-sector businesses, and online news. This study concludes that inaccessibility, absence of local government, weak governance, weak infrastructure, lack of preparedness, knowledge gaps, and manpower shortages are the key challenges of reconstruction after the 2015 earthquake in Nepal. After scrutinizing the different challenges and issues, the study suggests that good governance, integrated information, public participation, and short- and long-term strategies for addressing technical issues are crucial factors for timely and quality reconstruction in the context of Nepal. The sample collected for this study is relatively small and may not be fully representative of the stakeholders involved in reconstruction. However, the key findings of this study are ones that need to be recognized by academics, governments, and implementing agencies, and considered in the implementation of post-disaster reconstruction programs in developing countries.
Keywords: Gorkha earthquake, reconstruction, challenges, policy
Procedia PDF Downloads 409
345 Interdisciplinary Method Development - A Way to Realize the Full Potential of Textile Resources
Authors: Nynne Nørup, Julie Helles Eriksen, Rikke M. Moalem, Else Skjold
Abstract:
Despite a growing focus on the high environmental impact of textiles, textile waste has only recently been considered part of the waste field. Consequently, there is a general lack of knowledge and data within this field. In particular, the lack of a common perception of textiles generates several problems, e.g., failing to recognize the full material potential the fraction contains, which is crucial if textiles are to enter the circular economy. This study aims to qualify a method for making the resources in textile waste visible in a way that makes it possible to move them as high up the waste hierarchy as possible. Textiles are complex and cover many different types of products, fibers, combinations of fibers, and production methods. In garments alone there is great variety, even when narrowing the scope to undergarments. However, textile waste is often reduced to one fraction, assessed solely by quantity and compared to the quantities of other waste fractions. Disregarding this complexity and reducing textiles to a single fraction that covers everything made of textiles increases the risk of neglecting the value of the materials, with regard to both their properties and their economic worth. Instead of trying to fit textile waste into the current, primarily linear waste system, where volume is a key part of the business models, this study focused on integrating textile waste as a resource in the design and production phase. The study combined interdisciplinary methods (replacement rates as used in Life Cycle Assessment, and Mass Flow Analysis) with the designer's toolbox, thereby activating the properties of textile waste in a way that can unleash its potential optimally. It was hypothesized that by activating Denmark's tradition of design and high level of craftsmanship, it is possible to find solutions that can be used today and to create circular resource models that reduce the use of virgin fibers. Through waste samples, case studies, and testing of various design approaches, this study explored how to operationalize the method so that the product, after end-use, is kept as a material and only then processed at the fiber level to obtain the best environmental utilization. The study showed that the designers' ability to decode the properties of the materials and their understanding of craftsmanship were decisive for how well the materials could be utilized today. The later in the life cycle the textiles appeared as waste, the more demanding it became for the description of the materials to be sufficient, especially if the best possible use of the resources, and thus a higher replacement rate, is to be achieved. In addition, adaptation of current production was required because the materials often varied more. The study found good indications that part of the solution is to use geodata, i.e., where in the life cycle the materials were discarded. An important conclusion is that a fully developed method can help support better utilization of textile resources. However, it still requires a better understanding of materials by designers, as well as structural changes in business and society.
Keywords: circular economy, development of sustainable processes, environmental impacts, environmental management of textiles, environmental sustainability through textile recycling, interdisciplinary method development, resource optimization, recycled textile materials and the evaluation of recycling, sustainability and recycling opportunities in the textile and apparel sector
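The replacement rate mentioned above is the quantitative hinge between the design work and the environmental accounting: it expresses how much virgin fibre one kilogram of recovered textile actually displaces. As a minimal sketch of how such a rate enters an avoided-burden calculation, assuming purely hypothetical impact factors (none of these numbers come from the study):

```python
def avoided_impact(kg_recovered, replacement_rate, virgin_impact_per_kg,
                   recycling_impact_per_kg):
    """Net environmental benefit of recovering textile material.

    replacement_rate: fraction of virgin fibre actually displaced by one kg
    of recovered material (quality losses keep it below 1).
    """
    displaced = kg_recovered * replacement_rate * virgin_impact_per_kg
    incurred = kg_recovered * recycling_impact_per_kg
    return displaced - incurred  # e.g. kg CO2-eq avoided

# Hypothetical example: 1000 kg of collected garments, 60% replacement rate
print(avoided_impact(1000, 0.6, virgin_impact_per_kg=15.0,
                     recycling_impact_per_kg=2.5))  # -> 6500.0
```

A multi-cycle extension would apply this function repeatedly with a declining replacement rate per cycle, which is exactly the kind of calculation the study argues the field still lacks.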
Procedia PDF Downloads 95
344 Development and Implementation of An "Electric Island" Monitoring Infrastructure for Promoting Energy Efficiency in Schools
Authors: Vladislav Grigorovitch, Marina Grigorovitch, David Pearlmutter, Erez Gal
Abstract:
The concept of an "electric island" involves balancing each educational institution's capacity for self-generation with its energy consumption demand. Photovoltaic (PV) solar systems installed on the roofs of educational buildings are a common way to absorb the available solar energy and generate electricity for self-consumption, and even for returning to the grid. The main objective of this research is to develop and implement an "electric island" monitoring infrastructure for promoting energy efficiency in educational buildings. A microscale monitoring methodology is developed to provide a platform for estimating energy consumption performance classified by rooms and subspaces, rather than the more common macroscale monitoring of the whole building. The monitoring platform is established at the experimental sites, enabling estimation and further analysis of a variety of environmental and physical conditions. For each building, a separate measurement configuration is applied, taking into account the specific requirements, restrictions, location, and infrastructure issues. The direct results of the measurements are analyzed to provide a deeper understanding of the impact of environmental conditions and sustainable construction standards, not only on the energy demand of public buildings but also on the energy consumption habits of the children who study in those schools and of the educational and administrative staff responsible for providing thermal comfort and a healthy studying atmosphere for the children. The monitoring methodology developed in this research provides online access to real-time measurement data from any mobile phone or computer by simply browsing the dedicated website, giving policy makers powerful tools for better decision making while developing PV production infrastructure to achieve "electric islands" in educational buildings. A detailed measurement configuration was technically designed based on the specific conditions and restrictions of each pilot building. The monitoring and analysis methodology includes a large variety of environmental parameters inside and outside the schools, to investigate the impact of environmental conditions both on the energy performance of the school and on the educational abilities of the children. Indoor measurements are mandatory to acquire energy consumption data, temperature, humidity, carbon dioxide, and other air quality conditions in different parts of the building. In addition, we aim to study the users' awareness of energy considerations and thus the impact on their energy consumption habits. The monitoring of outdoor conditions is vital for the proper design of the off-grid energy supply system and validation of its capacity. The suggested outcomes of this research include: 1) both experimental sites designed with PV production and storage capabilities; 2) an online information feedback platform providing dedicated information to academic researchers, municipality officials, educational staff, and students; 3) environmental guidelines for educational staff regarding optimal conditions and efficient hours for operating air conditioning, natural ventilation, closing of blinds, etc.
Keywords: sustainability, electric island, IOT, smart building
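As a minimal sketch of the microscale monitoring idea described above, the snippet below aggregates hypothetical room-level consumption readings against rooftop PV generation to check the "electric island" balance. The data structure and field names are illustrative assumptions, not the project's actual platform:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Reading:
    room: str
    kwh_consumed: float  # energy consumed in the measurement interval
    timestamp: str

def island_balance(readings: List[Reading], pv_generation_kwh: float) -> float:
    """Positive: PV surplus that can be stored or fed back to the grid.
    Negative: the building must draw from the grid."""
    total_consumption = sum(r.kwh_consumed for r in readings)
    return pv_generation_kwh - total_consumption

readings = [
    Reading("classroom_1", 3.2, "2023-06-01T10:00"),
    Reading("classroom_2", 2.9, "2023-06-01T10:00"),
    Reading("gym", 5.4, "2023-06-01T10:00"),
]
print(island_balance(readings, pv_generation_kwh=14.0))  # -> 2.5 (surplus)
```

Logging per room rather than per building is what allows the platform to attribute consumption to specific spaces and user habits, as the methodology intends.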
Procedia PDF Downloads 179
343 Multi-scale Geographic Object-Based Image Analysis (GEOBIA) Approach to Segment Very High Resolution Images for Extraction of New Degraded Zones. Application to The Region of Mécheria in The South-West of Algeria
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of the increasingly irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mécheria department has generated a new situation characterized by the reduction of vegetation cover, the decrease of land productivity, and sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of ancient dune cords, based on the numerical processing of PlanetScope PSB.SD sensor images acquired on September 29, 2021. As a second step, we explore the use of a multi-scale geographic object-based image analysis (GEOBIA) approach to segment high-spatial-resolution images acquired over heterogeneous surfaces that vary according to human influence on the environment. We used the fractal net evolution approach (FNEA) algorithm to segment the images (Baatz & Schäpe, 2000). Multispectral data, a digital terrain model layer, ground truth data, a normalized difference vegetation index (NDVI) layer, and a first-order texture (entropy) layer were used to segment the multispectral images at three segmentation scales, with an emphasis on accurately delineating the boundaries and components of the sand accumulation areas (dunes, dune fields, nebkas, and barchans). It is important to note that each auxiliary dataset contributed to improving the segmentation at different scales. The silted areas over the Naâma area were then classified using a nearest-neighbor approach. The classification of silted areas was successfully achieved over all study areas with an accuracy greater than 85%, although the results suggest that, overall, a higher degree of landscape heterogeneity may have a negative effect on segmentation and classification. Some areas suffered from the greatest over-segmentation and the lowest mapping accuracy (Kappa: 0.79), which was partially attributed to confounding a greater proportion of mixed siltation classes from both sandy areas and bare-ground patches. This research has demonstrated a technique based on very high-resolution images for mapping sanded and degraded areas using GEOBIA, which can be applied to the study of other lands in the steppe areas of the northern countries of the African continent.
Keywords: land development, GIS, sand dunes, segmentation, remote sensing
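The multi-scale segmentation workflow can be sketched as follows. The study itself uses the FNEA algorithm (Baatz & Schäpe, 2000), implemented in commercial GEOBIA software; in this illustrative Python sketch a freely available graph-based segmenter from scikit-image stands in for it, applied to a random placeholder array, simply to show how spectral bands and an NDVI layer are stacked and segmented at several scale settings:

```python
# Illustrative only: felzenszwalb is a stand-in for FNEA, and the image
# is random noise, not PlanetScope data.
import numpy as np
from skimage.segmentation import felzenszwalb

def ndvi(red, nir):
    """Normalized difference vegetation index, one of the auxiliary layers."""
    return (nir - red) / (nir + red + 1e-9)

# Hypothetical 4-band image array (rows, cols, bands): B, G, R, NIR
img = np.random.rand(256, 256, 4).astype(np.float32)
veg = ndvi(img[..., 2], img[..., 3])

# Stack spectral bands with the NDVI layer and segment at three "scales"
# by varying the scale parameter, mimicking multi-scale segmentation.
stack = np.dstack([img, veg])
for scale in (50, 100, 200):
    segments = felzenszwalb(stack, scale=scale, sigma=0.5, min_size=20,
                            channel_axis=-1)
    print(scale, segments.max() + 1, "segments")
```

In a real GEOBIA run, the terrain model, entropy texture, and ground truth layers would be stacked in the same way, and each resulting object would then be classified (here, by nearest neighbor) rather than each pixel.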
Procedia PDF Downloads 109
342 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing
Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto
Abstract:
In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the meniscus's functional ability and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed. However, these treatments are reported not to be comprehensive. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus in normal and injured states is carried out using FE analyses. First, an FE model of the human knee joint in the normal ("intact") state was constructed using magnetic resonance (MR) tomography images and the image construction code Materialise Mimics. Next, two meniscal injury models with radial tears of the medial and lateral menisci were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus. The material properties of the articular cartilage and meniscus were identified using stress-strain curves obtained from our compressive and tensile tests. The numerical results under a normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and its location varied between the intact and the two meniscal tear models. These compressive stress values can be used to establish the threshold for pathological change for diagnostic purposes. In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury. The following conclusions are obtained: 1) A 3D FE model consisting of the femur, tibia, articular cartilage, and meniscus was constructed from MR images of a human knee joint, using the image processing code Materialise Mimics and tetrahedral FE elements. 2) A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model; the material properties of the meniscus and articular cartilage were determined by curve fitting to the experimental results. 3) Stresses on the articular cartilage and menisci were obtained for the intact case and for the radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models showed almost the same stress values as each other, both higher than the intact case, and both meniscal tears induced stress localization in the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system for evaluating the effect of meniscal damage on the articular cartilage through mechanical functional assessment.
Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration
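The abstract names the generalized Kelvin model without reproducing its equations. For orientation, a standard Prony-series form of such a viscoelastic relaxation law (a textbook formulation, not necessarily the authors' exact equations) is

```latex
G(t) = G_{\infty} + \sum_{i=1}^{N} G_i \, e^{-t/\tau_i},
\qquad
\sigma(t) = \int_{0}^{t} G(t-s)\,\frac{\mathrm{d}\varepsilon}{\mathrm{d}s}\,\mathrm{d}s,
```

where $G_{\infty}$ is the long-term modulus and the pairs $(G_i, \tau_i)$ are the stiffnesses and relaxation times of the individual Kelvin elements; in the study these parameters would be the quantities identified by curve fitting to the compressive and tensile stress-strain data.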
Procedia PDF Downloads 246
341 Wetting Characterization of High Aspect Ratio Nanostructures by Gigahertz Acoustic Reflectometry
Authors: C. Virgilio, J. Carlier, P. Campistron, M. Toubal, P. Garnier, L. Broussous, V. Thomy, B. Nongaillard
Abstract:
The wetting efficiency of microstructures or nanostructures patterned on Si wafers is a real challenge in integrated circuit manufacturing. In fact, poor or non-uniform wetting during wet processes limits chemical reactions and can lead to incomplete etching or cleaning inside the patterns, and hence to device defectivity. This issue becomes more and more important as transistor sizes shrink, and it mainly concerns high-aspect-ratio structures. Deep Trench Isolation (DTI) structures, which provide pixel isolation in imaging devices, are subject to this phenomenon. While low-frequency acoustic reflectometry is a well-known method in non-destructive testing applications, we have recently shown that it is also well suited to characterizing the wetting of nanostructures in a higher frequency range. In this paper, we present a high-frequency acoustic reflectometry characterization of DTI wetting through a confrontation of experimental and modeling results. The proposed acoustic method is based on evaluating the reflection of a longitudinal acoustic wave generated by a 100 µm diameter ZnO piezoelectric transducer sputtered on the backside of the silicon wafer using MEMS technologies. The transducers were fabricated to work at 5 GHz, corresponding to a wavelength of 1.7 µm in silicon. The studied DTI structures are crossing trenches 200 nm wide and 4 µm deep (aspect ratio of 20) etched into the wafer frontside. In that case, the acoustic signal is reflected at the bottom and at the top of the DTI, enabling its characterization by monitoring the electrical reflection coefficient of the transducer. A Finite Difference Time Domain (FDTD) model has been developed to predict the behavior of the emitted wave. The model shows that the separation of the reflected echoes (top and bottom of the DTI) from different acoustic modes is possible at 5 GHz, and a good correspondence between experimental and theoretical signals is observed. The model also enables the identification of the different acoustic modes. The evaluation of DTI wetting is then performed by focusing on the first reflected echo, obtained through reflection at the Si bottom interface, where wetting efficiency is crucial. The reflection coefficient is measured with different water/ethanol mixtures (tunable surface tension) deposited on the wafer frontside. Two cases are studied: with and without PFTS hydrophobic treatment. For the untreated surface, the acoustic reflection coefficient values with water show that liquid imbibition is partial. For the treated surface, the acoustic reflection with water is total (no liquid in the DTI). Impalement of the liquid occurs at a specific surface tension but remains partial even for pure ethanol. The DTI bottom shape and local pattern collapse of the trenches can explain these incomplete wetting phenomena. The sensitivity of this high-frequency acoustic method, coupled with an FDTD propagation model, thus enables local determination of the wetting state of a liquid on real structures. Partial wetting states for non-hydrophobic surfaces or low-surface-tension liquids are then detectable with this method.
Keywords: wetting, acoustic reflectometry, gigahertz, semiconductor
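The measurement principle rests on the contrast in acoustic impedance between silicon and whatever sits at the trench bottom. The short sketch below evaluates the standard normal-incidence reflection coefficient with textbook material constants (not the study's measured data), showing why a wetted bottom reflects measurably less than a dry one:

```python
def impedance(density_kg_m3, velocity_m_s):
    """Specific acoustic impedance Z = rho * c, in Rayl."""
    return density_kg_m3 * velocity_m_s

def reflection(z1, z2):
    """Amplitude reflection coefficient for normal incidence, medium 1 -> 2."""
    return (z2 - z1) / (z2 + z1)

z_si = impedance(2329, 8433)     # silicon, longitudinal wave
z_water = impedance(1000, 1480)  # water
z_air = impedance(1.2, 343)      # air

print(f"Si/air:   r = {reflection(z_si, z_air):+.3f}")    # ~ -1.000 (total)
print(f"Si/water: r = {reflection(z_si, z_water):+.3f}")  # ~ -0.860 (partial)
```

The gap between near-total reflection on air and partial reflection on liquid is what lets the first bottom echo discriminate wetted from non-wetted trenches, and intermediate values indicate partial imbibition.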
Procedia PDF Downloads 327
340 The Analgesic Effect of Electroacupuncture in a Murine Fibromyalgia Model
Authors: Bernice Jeanne Lottering, Yi-Wen Lin
Abstract:
Introduction: Chronic pain lacks objective parameters for measuring and treating diseases such as fibromyalgia (FM). Persistent widespread pain and generalized tenderness are the characteristic symptoms, affecting a large majority of the global population, particularly females. This disease has shown a refractory tendency to conventional treatment, largely because the etiology and pathogenesis of the disease are incompletely understood. Emerging evidence indicates that the central nervous system (CNS) plays a critical role in the amplification of pain signals and the neurotransmitters associated therewith. Various stimuli activate the channels present on nociceptor terminals, thereby triggering nociceptive impulses along the pain pathways. The transient receptor potential vanilloid 1 (TRPV1) channel functions as a molecular integrator for numerous sensory inputs, such as nociception, and was explored in the current study. Current intervention approaches face a multitude of challenges, ranging from finding effective therapeutic interventions to the limitation of pathognomonic criteria resulting from incomplete understanding and partial evidence on the mechanisms of action of FM. It remains unclear whether electroacupuncture (EA) plays an integral role in the functioning of the TRPV1 pathway and whether it can reduce the chronic pain induced by FM. Aims: The aim of this study was to explore the mechanisms underlying the activation and modulation of the TRPV1 channel pathway in an intermittent cold stress murine model of FM, and to investigate the effect of EA in treating the mechanical and thermal pain expressed in FM. Methods: 18 C57BL/6 wild-type and 6 TRPV1 knockout (KO) mice, aged 8-12 weeks, were exposed to an intermittent cold stress-induced fibromyalgia-like pain model, with or without EA treatment at Zusanli (ST36) (2 Hz, 20 min) on days 3 to 5. Von Frey and Hargreaves behavior tests were used to analyze the mechanical and thermal pain thresholds on days 0, 3, and 5 in the control group (C), the FM group (FM), the FM with EA treatment group (FM + EA), and the FM in KO group. Results: An increase in mechanical and thermal hyperalgesia was observed in the FM, EA, and KO groups compared to the control group. This initial increase was reduced in the EA group, which highlights the efficacy of EA treatment in nociceptive sensitization and its analgesic effect in attenuating FM-associated pain. Discussion: Increased nociceptive sensitization was observed in the withdrawal thresholds of the von Frey mechanical test and the Hargreaves thermal test. TRPV1 function in mice has been associated with these nociceptive conduits, and the behavioral test results suggest that TRPV1 upregulation is central to the FM-induced hyperalgesia. This is supported by the decreased sensitivity observed in the TRPV1 KO group. Moreover, EA treatment decreased this FM-induced nociceptive sensitization, suggesting that TRPV1 upregulation and overexpression can be attenuated by EA at bilateral ST36. This evidence compellingly implies that the analgesic effect of EA is associated with TRPV1 downregulation.
Keywords: fibromyalgia, electroacupuncture, TRPV1, nociception
Procedia PDF Downloads 140
339 Exploring Closed-Loop Business Systems Which Eliminates Solid Waste in the Textile and Fashion Industry: A Systematic Literature Review Covering the Developments Occurred in the Last Decade
Authors: Bukra Kalayci, Geraldine Brennan
Abstract:
Introduction: Over the last decade, a proliferation of literature on the textile and fashion business in the context of sustainable production and consumption has emerged. However, the economic and environmental benefits of solid waste recovery have not been comprehensively researched, so end-of-life and end-of-use textile waste management remains a gap. Solid textile waste reuse and recycling principles of the circular economy need to be developed to close the disposal stage of the textile supply chain. The environmental problems associated with the over-production and over-consumption of textile products are growing: together with a growing population and a fast-fashion culture, the share of solid textile waste in municipal waste is increasing. Focusing on the post-consumer textile waste literature, this research explores the opportunities, obstacles, and enablers or success factors associated with closed-loop textile business systems. Methodology: A systematic literature review was conducted to identify best practices and gaps in the existing body of knowledge related to closed-loop post-consumer textile waste initiatives over the last decade. The selected keywords 'cradle-to-cradle', 'circular* economy*', 'closed-loop*', 'end-of-life*', 'reverse* logistic*', 'take-back*', 'remanufacture*', and 'upcycle*' were combined (AND) with 'fashion*', 'garment*', 'textile*', 'apparel*', and 'clothing*', and the time frame of the review was set from 2005 to 2017. To obtain broad coverage, the Web of Knowledge and Science Direct databases were used, and peer-reviewed journal articles were chosen. The keyword search identified 299 papers, which were further refined into 54 relevant papers that form the basis of the in-depth thematic analysis. Preliminary findings: A key finding was that the existing literature is predominantly conceptual rather than applied or empirical work. Moreover, the enablers or success factors, obstacles, and opportunities for implementing closed-loop systems in the textile industry were not clearly articulated, and the following considerations were largely overlooked in the literature. While the circular economy envisages multiple cycles of discarded products, components, or materials, most research to date has tended to focus on a single cycle; thus, calculations of the environmental and economic benefits of closed-loop systems are limited to one cycle, which does not adequately explore the feasibility or potential benefits of multiple cycles. Additionally, the time period textile products spend between the point of sale and end-of-use/end-of-life return is a crucial factor. Despite past efforts to study closed-loop textile systems, a clear gap in the literature is the lack of an evaluation framework that enables manufacturers to clarify the reusability potential of textile products through consideration of indicators related to quality, design, lifetime, length of time between manufacture and product return, volume of collected disposed products, material properties, and brand segment (e.g., fast fashion versus luxury brands).
Keywords: circular fashion, closed loop business, product service systems, solid textile waste elimination
Procedia PDF Downloads 204
338 Experience in Caring for a Patient with Terminal Aortic Dissection of Lung Cancer and Paralysis of the Lower Limbs after Surgery
Authors: Pei-Shan Liang
Abstract:
Objective: This article explores the care experience of a terminal lung cancer patient who developed lower limb paralysis after surgery for aortic dissection. The patient, diagnosed with aortic dissection during chemotherapy for lung cancer, faced post-surgical lower limb paralysis, leading to feelings of helplessness and hopelessness while approaching death with reduced mobility. Methods: The nursing period was from July 19 to July 27, during which the author, alongside the intensive care team and palliative care specialists, conducted a comprehensive assessment through observation, direct care, conversations, physical assessments, and medical record review. Gordon's eleven functional health patterns were used for a holistic evaluation, identifying four nursing health issues: "pain related to terminal lung cancer and invasive procedures," "decreased cardiac tissue perfusion due to hemodynamic instability," "impaired physical mobility related to lower limb paralysis," and "hopelessness due to the unpredictable prognosis of terminal lung cancer." Results: The medical team initially focused on symptom relief, administering morphine 5 mg in 0.9% N/S 50 ml IVD q6h for pain management and continuing chemotherapy as prescribed. Open communication was employed to address the patient's physical, psychological, and spiritual concerns. Non-pharmacological interventions, including listening, caring, companionship, opioid medication, and distraction techniques such as comfortable positioning and warm foot baths, were used to alleviate pain, reducing the pain score to 3 on the numeric rating scale and easing respiratory discomfort. The palliative care team was also involved, guiding the patient and family through the "Four Paths of Life," helping the patient achieve a good end-of-life experience and the family a peaceful life. This process also served to promote the concept of palliative care, enabling more patients and families to receive high-quality and dignified care. The patient was encouraged to express inner anxiety through drawing or writing, which helped reduce the hopelessness caused by psychological distress and uncertainty about the disease's prognosis; as assessed by the Hospital Anxiety and Depression Scale, anxiety reached a mild but acceptable level that did not affect sleep. Conclusion: What left a deep impression during the care process was the need for intensive care providers to consider the patient's psychological state, not just their physical condition, when the patient's situation changes. Family support and involvement often provide the greatest solace for the patient, emphasizing the importance of comfort and dignity. This includes oral care to maintain cleanliness and comfort, frequent repositioning to alleviate pressure and discomfort, and timely removal of invasive devices and unnecessary medications to avoid unnecessary suffering. The nursing process should also address the patient's psychological needs, offering comfort and support to ensure that they can face the end of life with peace and dignity.
Keywords: intensive care, lung cancer, aortic dissection, lower limb paralysis
Procedia PDF Downloads 26
337 Climate Change Impact on Mortality from Cardiovascular Diseases: Case Study of Bucharest, Romania
Authors: Zenaida Chitu, Roxana Bojariu, Liliana Velea, Roxana Burcea
Abstract:
A number of studies show that extreme air temperature affects mortality from cardiovascular diseases, particularly among elderly people. In Romania, summer thermal discomfort expressed by the Universal Thermal Climate Index (UTCI) is highest in the southern part of the country, where Bucharest, the largest Romanian urban agglomeration, is located. Urban characteristics such as high building density and reduced green areas enhance the increase in air temperature during summer. In Bucharest, as in many other large cities, the urban heat island effect is present and raises air temperature relative to the surrounding areas, particularly during summer heat waves. In this context, we performed a temperature-mortality analysis based on daily deaths related to cardiovascular diseases recorded between 2010 and 2019 in Bucharest. The temperature-mortality relationship was modeled by applying a distributed lag non-linear model (DLNM) that includes a bi-dimensional cross-basis function and flexible natural cubic spline functions, with three internal knots at the 10th, 75th, and 90th percentiles of the temperature distribution, for modeling both the exposure-response and lagged-response dimensions. This analysis was first applied to the present climate. Extrapolation of the exposure-response associations beyond the observed data then allowed us to estimate future effects on mortality due to temperature changes under climate change scenarios and specific assumptions. We used future projections of air temperature from five numerical experiments with regional climate models included in the EURO-CORDEX initiative, under the relatively moderate (RCP 4.5) and pessimistic (RCP 8.5) concentration scenarios. The results of this analysis show, for RCP 8.5, an ensemble-averaged increase of 6.1% in the heat-attributable mortality fraction in the future compared with the present climate (2090-2100 vs. 2010-2019), corresponding to an increase of 640 deaths/year, while the mortality fraction due to cold conditions will be reduced by 2.76%, corresponding to a decrease of 288 deaths/year. When the mortality data are stratified according to age, the ensemble-averaged increase of the heat-attributable mortality fraction for elderly people (>75 years) is even higher (6.5%). These findings reveal the necessity of carefully planning urban development in Bucharest to face the public health challenges raised by climate change. Paper details: This work is financed by the URCLIM project, which is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by the Ministry of Environment, Romania, with co-funding from the European Union (Grant 690462). Part of the work performed by one of the authors has received funding from the European Union's Horizon 2020 research and innovation programme through the EXHAUSTION project under grant agreement No 820655.
Keywords: cardiovascular diseases, climate change, extreme air temperature, mortality
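For readers unfamiliar with attributable fractions derived from a DLNM, the simplified sketch below shows the arithmetic that turns a fitted exposure-response curve into heat- and cold-attributable mortality fractions. All numbers are invented for illustration, and the study's actual analysis uses the full cross-basis model with lagged effects rather than this day-by-day shortcut:

```python
import numpy as np

temps = np.array([28.0, 31.5, 35.0, 12.0, 2.0])   # daily mean temperature, deg C
deaths = np.array([40, 46, 52, 38, 44])            # daily CVD deaths
rr = np.array([1.02, 1.10, 1.28, 1.00, 1.08])      # relative risk from fitted curve
mmt = 20.0                                         # minimum-mortality temperature

# Deaths attributable to non-optimal temperature on each day: (RR-1)/RR * deaths
attrib = (rr - 1.0) / rr * deaths
heat = attrib[temps > mmt].sum()   # days warmer than the optimum
cold = attrib[temps < mmt].sum()   # days colder than the optimum

print(f"heat-attributable fraction: {heat / deaths.sum():.2%}")
print(f"cold-attributable fraction: {cold / deaths.sum():.2%}")
```

Applying the same bookkeeping to projected future temperature series, as the study does with EURO-CORDEX projections, is what yields the change in heat- versus cold-attributable fractions between periods.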
Procedia PDF Downloads 128
336 Biochemical Effects of Low Dose Dimethyl Sulfoxide on HepG2 Liver Cancer Cell Line
Authors: Esra Sengul, R. G. Aktas, M. E. Sitar, H. Isan
Abstract:
Hepatocellular carcinoma (HCC) is a hepatocellular tumor commonly found on the surface of the chronically diseased liver. HepG2 is the most commonly used cell type in HCC studies. The main proteins remaining in blood serum after removal of plasma fibrinogen are albumin and globulin. The fact that albumin indicates hepatocellular damage and reflects the synthesis capacity of the liver was the main reason for its use. Alpha-fetoprotein (AFP) is an albumin-like embryonic globulin found in the embryonic cortex, cord blood, and fetal liver. It has been used as a marker in the follow-up of tumor growth in various malignant tumors and of the efficacy of surgical and medical treatments, so it is a useful protein to examine alongside albumin. Having observed the morphological changes dimethyl sulfoxide (DMSO) induces in HepG2 cells, we decided to investigate its biochemical effects, examining the effects of low doses of DMSO, which is widely used in cell culture, on albumin, AFP, and total protein. Material and method: Cell culture: Medium was prepared using Dulbecco's Modified Eagle Medium (DMEM), fetal bovine serum (FBS), phosphate-buffered saline, and trypsin maintained at -20 °C. Fixation of cells: HepG2 cells, which had developed appropriately by the end of the first week, were fixed with acetone; cells were stored in PBS at +4 °C until fixation was completed. Area calculation: Cell areas were calculated in ImageJ (IJ). Microscope examination: The examination was performed with a Zeiss inverted microscope; photographs were taken at 40x, 100x, 200x, and 400x. Biochemical tests: Total protein and albumin in serum samples were analyzed by spectrophotometric methods in an autoanalyzer; alpha-fetoprotein was analyzed by the ECLIA method. Results: When liver cancer cells were cultured in medium with 1% DMSO for 4 weeks, a significant difference was observed compared with the control group. As a result, we have seen that DMSO can act as an important agent in the treatment of liver cancer. Cell areas were reduced in the DMSO group compared to the control group, and the confluency ratio increased. The ability to form spheroids was also significantly higher in the DMSO group. Alpha-fetoprotein was lower than the values typical of a liver cancer patient, and the total protein amount rose into the reference range of a healthy individual. Because the albumin level was below the measurable range, numerical results for albumin could not be obtained in the biochemical examinations. We interpret these results as indicating that DMSO may act as a supportive adjunct agent. Since no single parameter was sufficient alone, we used three parameters, and the results were positive when compared in parallel with the values of a normal healthy individual. We hope to extend the study by adding new parameters and genetic analyses, increasing the number of samples, and using DMSO as an adjunct agent in the treatment of liver cancer.
Keywords: hepatocellular carcinoma, HepG2, dimethyl sulfoxide, cell culture, ELISA
Procedia PDF Downloads 135
335 Observation on the Performance of Heritage Structures in Kathmandu Valley, Nepal during the 2015 Gorkha Earthquake
Authors: K. C. Apil, Keshab Sharma, Bigul Pokharel
Abstract:
Kathmandu Valley, home to the capital city of Nepal, houses numerous historical monuments and religious structures dating back as far as the 4th century A.D. The valley alone contains seven UNESCO World Heritage sites, including various public squares and religious sanctums, which are often regarded as living heritage by historians and archaeological explorers. On April 25, 2015, the capital city and other nearby locations were struck by the Gorkha earthquake of moment magnitude (Mw) 7.8, followed by the strongest aftershock of Mw 7.3 on May 12. This study reports structural failures and collapses of heritage structures in Kathmandu Valley during the earthquake and presents preliminary findings as to the causes of the failures and collapses. Field reconnaissance was carried out immediately after the main shock and the aftershock at the major heritage sites: the UNESCO World Heritage sites and a number of temples and historic buildings in Kathmandu Durbar Square, Patan Durbar Square, and Bhaktapur Durbar Square. Despite the catastrophe, a significant number of heritage structures remained standing, performing very well during the earthquake. Preliminary reports from the archaeological department suggest that 721 such structures were severely affected, of which 444 were within the valley, including 76 structures that completely collapsed. This study presents the recorded accelerograms and the geology of Kathmandu Valley, and briefly describes the structural typology and architecture of the valley's heritage structures. Case histories of damaged heritage structures, the damage patterns, and the failure mechanisms are also discussed. It was observed that the performance of heritage structures was influenced by multiple factors, such as structural and architectural typology, configuration and structural deficiency, local ground site effects and ground motion characteristics, age and maintenance level, and material quality. Most of these heritage structures are of masonry type, using bricks with earth mortar as a bonding agent. The walls' resistance is mainly compressive, capable of withstanding vertical static gravitational loads but not horizontal dynamic seismic loads. There was no definitive pattern of damage to heritage structures, as most of them behaved as composite structures: some were extensively damaged in certain locations, while structures with similar configurations at nearby locations had little or no damage. Among the major heritage structures, dome, pagoda (2-, 3-, or 5-tiered temples), and shikhara structures were studied with similar variables. Examining the varying degrees of damage in such structures, it was found that shikhara structures were the most vulnerable, whereas dome structures were the most stable, followed by pagoda structures. The seismic performance of masonry-timber and stone masonry structures was slightly better than that of plain masonry structures. Regular maintenance and periodic seismic retrofitting seem to have played a pivotal role in strengthening the seismic performance of the structures. The study also recommends key measures to strengthen the seismic performance of such structures, based on structural analysis, building material behavior, and retrofitting details. The results also recognise the importance of documenting traditional knowledge and transforming it for use with modern technology.
Keywords: Gorkha earthquake, field observation, heritage structure, seismic performance, masonry building
Procedia PDF Downloads 151
334 Pivoting to Fortify our Digital Self: Revealing the Need for Personal Cyber Insurance
Authors: Richard McGregor, Carmen Reaiche, Stephen Boyle
Abstract:
Cyber threats are a relatively recent phenomenon and offer cyber insurers a dynamic and intelligent peril. As individuals en masse become increasingly digitally dependent, Personal Cyber Insurance (PCI) offers an attractive option for mitigating cyber risk at a personal level. This abstract proposes a literature review that conceptualises a framework for siting Personal Cyber Insurance (PCI) within the context of cyberspace. The lack of empirical research within this domain demonstrates an immediate need to define the scope of PCI so that cyber insurers can understand personal cyber risk threats and vectors, customer awareness, capabilities, and associated needs. Additionally, this will allow cyber insurers to conceptualise appropriate frameworks for the effective management and distribution of PCI products and services within a landscape often incongruent with the risk attributes commonly associated with traditional personal-line insurance products. Cyberspace has significantly improved the quality of social connectivity and productivity during past decades and has allowed an enormous capability uplift in information sharing and communication between people and communities. Conversely, personal digital dependency furnishes ample opportunities for adverse cyber events such as data breaches and cyber-attacks, thus introducing a continuous and insidious threat of omnipresent cyber risk, particularly since the advent of the COVID-19 pandemic and the widespread adoption of work-from-home practices. Recognition of escalating interdependencies, vulnerabilities, and inadequate personal cyber behaviours has prompted efforts by businesses and individuals alike to investigate strategies and tactics to mitigate cyber risk, of which cyber insurance is a viable, cost-effective option. It is argued that, ceteris paribus, the nature of cyberspace intrinsically presents characteristic peculiarities that pose significant and bespoke challenges to cyber insurers, often incongruent with the risk attributes of traditional personal-line insurance products. These challenges include, inter alia, a paucity of historical claim/loss data for underwriting and pricing purposes, interdependencies of cyber architecture promoting high correlation of cyber risk, difficulties in evaluating cyber risk, the intangibility of at-risk assets (such as data and reputation), a lack of standardisation across the industry, high and undetermined tail risks, and moral hazard, among others. This study proposes a thematic overview of the literature deemed necessary to conceptualise the challenges of issuing personal cyber coverage. There is an evident absence of empirical research appertaining to PCI and the design of operational business models for this domain, especially qualitative initiatives that (1) attempt to define the scope of the peril, (2) secure an understanding of the needs of both cyber insurer and customer, and (3) identify elements pivotal to the effective management and profitable distribution of PCI. This leads the author to argue that the traditional general insurance customer journey and business model are ill-suited to the lineaments of cyberspace. The findings of the review confirm significant gaps in contemporary research within the domain of personal cyber insurance.
Keywords: cyberspace, personal cyber risk, personal cyber insurance, customer journey, business model
Procedia PDF Downloads 103
333 Experiences of Discrimination and Coping Strategies of Second Generation Academics during the Career-Entry Phase in Austria
Authors: R. Verwiebe, L. Seewann, M. Wolf
Abstract:
This presentation addresses marginalization and discrimination as experienced by young academics with a migrant background in the Austrian labor market. Focusing on second-generation academics of Central Eastern European and Turkish descent, we explore two major issues. First, we ask whether their career entry and everyday professional life entail origin-specific barriers. As residents educated in Austria, they possess the competences which, when lacking, tend to be drawn upon to explain discrimination: excellent linguistic skills, accredited high-level training, and networks. Second, we concentrate on how this group reacts to discrimination and overcomes experiences of marginalization. To answer these questions, we utilize recent sociological and social-psychological theories that focus on the diversity of individual experiences. This distinguishes us from a long tradition of research that has dealt with the motives that inform discrimination but has less often considered the effects on those concerned. Similarly, applied coping strategies have less often been investigated, though they may provide unique insights into current problematic issues. Twenty-one problem-centered interviews form the empirical foundation of this study. The interviewees completed their entire educational careers in Austria, graduated from different universities and disciplines, and are working in their first post-graduate jobs (career-entry phase). In our analysis, we combined thematic charting with a coding method. The results emanating from our empirical material indicated a variety of discrimination experiences, ranging from barely perceptible disadvantages to directly articulated and overt marginalization. The spectrum of experiences covered stereotypical suppositions at job interviews, the disavowal of competencies, symbolic or social exclusion by new colleagues, restricted professional participation (e.g., customer contact), and non-recruitment due to religious or ethnic markers (e.g., headscarves). In these experiences, the role of the academics' education level, networks, or competences seemed to be minimal, as negative prejudice on the basis of visible 'social markers' operated ex ante. The coping strategies identified for overcoming such barriers are: an increased emphasis on effort, avoidance of potentially marginalizing situations, direct resistance (mostly in the form of verbal opposition), and dismissal of negative experiences by ignoring or ironizing the situation. In some cases, the academics drew on their specific competences, such as an intellectual approach of studying specialist literature or a focus on their intercultural competences, or planned to migrate back to their parents' country of origin. Our analysis further suggests a distinction between reactive coping strategies (acting on and responding to experienced discrimination) and preventative ones (applied to obviate discrimination). In light of our results, we would like to stress that the tension between educational and professional success experienced by academics with a migrant background, and the barriers and marginalization they continue to face, are essential issues to be introduced into socio-political discourse. It seems imperative to publicly accentuate the growing social, political, and economic significance of this group, their educational aspirations, and their experiences of achievement and difficulty.
Keywords: coping strategies, discrimination, labor market, second generation university graduates
Procedia PDF Downloads 221