Search results for: process innovation performance
900 The Administration of Infectious Diseases During the COVID-19 Pandemic and the Role of Differential Diagnosis with the Biomarker VB10
Authors: Sofia Papadimitriou
Abstract:
INTRODUCTION: The differential diagnosis between acute viral and bacterial infections is an important cost-effectiveness parameter at the treatment stage: it maximizes the benefit of therapeutic intervention at minimum cost and ensures the proper use of antibiotics. The discovery of sensitive and robust molecular diagnostic tests based on the host response to infection has enhanced the accurate diagnosis and differentiation of infections. METHOD: The study used six independent blood-sample datasets (total = 756) associated with human protein-protein interactions, each of which, at the transcription stage, expresses a different host-network response to viral versus bacterial infection. The individual blood samples were subjected to a sequence of computational filters that identify a gene panel corresponding to a stand-alone diagnostic score. The dataset and the corresponding gene panel underpin a new diagnostic score, Bangalore-Viral Bacterial (BL-VB). FINDING: We use a blood-based biomarker of 10 genes (Panel-VB) that has significant prognostic value for distinguishing viral from bacterial infections, with a weighted mean AUROC of 0.97 (95% CI: 0.96-0.99) in eleven independent datasets (n = 898). We derived a panel-based patient score (VB10) with significant diagnostic value, with a weighted mean AUROC of 0.94 (95% CI: 0.91-0.98) in 2,996 patient samples from 56 public datasets from 19 different countries. We also evaluated VB10 in a new South Indian cohort (BL-VB, n = 56) and found 97% accuracy in confirmed cases of viral and bacterial infection. We found that VB10 (a) accurately identifies the type of infection even in culture-negative, unspecified cases, (b) tracks the patient's clinical recovery, and (c) applies to all age groups, covering a wide range of acute bacterial and viral infections, including those caused by non-specific pathogens.
We applied our VB10 score to publicly available COVID-19 data and found that it correctly diagnosed viral infection in patient samples. RESULTS: The results of the study demonstrate the diagnostic power of the VB10 biomarker as a test for the accurate diagnosis of acute infections and for monitoring recovery. We anticipate that it will aid clinical decisions on antibiotic prescription and can be integrated into antibiotic stewardship policies. CONCLUSIONS: Overall, we have developed a new RNA-based biomarker and a new blood test to differentiate between viral and bacterial infections, assisting physicians in designing optimal treatment regimens, contributing to the proper use of antibiotics, and reducing the burden of antimicrobial resistance (AMR).
Keywords: acute infections, antimicrobial resistance, biomarker, blood transcriptome, systems biology, classifier diagnostic score
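Weighted mean AUROC figures of the kind reported above can be reproduced from per-cohort scores and labels. The sketch below (the toy cohorts are illustrative, not the study's datasets) computes each cohort's AUROC via the Mann-Whitney U statistic and then size-weights the average across cohorts:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic; tied scores get mid-ranks."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    ranks = np.empty(len(scores))
    order = scores.argsort()
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):          # mid-rank correction for ties
        tied = scores == s
        ranks[tied] = ranks[tied].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2.0
    return u / (n_pos * n_neg)

def weighted_mean_auroc(cohorts):
    """Size-weighted mean AUROC over independent (scores, labels) cohorts."""
    sizes = np.array([len(labels) for _, labels in cohorts], dtype=float)
    aucs = np.array([auroc(scores, labels) for scores, labels in cohorts])
    return float((aucs * sizes).sum() / sizes.sum())

# Two toy cohorts: a higher score indicates viral infection (label 1)
cohorts = [
    ([0.1, 0.4, 0.5, 0.8], [0, 1, 0, 1]),
    ([0.2, 0.9, 0.7], [0, 1, 1]),
]
wa = weighted_mean_auroc(cohorts)
```

Weighting by cohort size, as in the abstract, keeps large validation sets from being diluted by small ones.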
Procedia PDF Downloads 155
899 Earthquake Preparedness of School Community and E-PreS Project
Authors: A. Kourou, A. Ioakeimidou, S. Hadjiefthymiades, V. Abramea
Abstract:
During the last decades, the task of engaging governments, communities and citizens to reduce the risk and vulnerability of populations has made variable progress. Experience has demonstrated that lack of awareness, education and preparedness may result in significant material and other losses at the onset of a disaster. Schools play a vital role in the community and are important carriers of the values and culture of society. A proper school education not only teaches children but is also a key factor in promoting a safety culture in the wider community. In Greece, a School Earthquake Safety Initiative has been undertaken by the Earthquake Planning and Protection Organization (EPPO) with specific actions (seminars, lectures, guidelines, educational material, campaigns, national or EU projects, drills, etc.). The objective of this initiative is to develop disaster-resilient school communities through awareness, self-help, cooperation and education. School preparedness requires the participation of principals, teachers, students, parents, and competent authorities. Preparation and earthquake readiness involve: a) learning what should be done before, during, and after an earthquake; b) doing or preparing to do these things now, before the next earthquake; and c) developing teachers' and students' skills to cope efficiently in case of an earthquake. Within this framework, this paper presents the results of a survey aimed at identifying the level of education and preparedness of the school community in Greece. More specifically, the survey questionnaire investigates issues regarding earthquake protection actions, appropriate attitudes and behaviors during an earthquake, and the existence of contingency plans at elementary and secondary schools. The questionnaires were administered to principals and teachers from different regions of the country who attended the EPPO national training project 'Earthquake Safety at Schools'.
A closed-ended questionnaire was developed for the survey, containing questions on the following: a) knowledge of self-protective actions, b) existence of emergency planning at home, and c) existence of emergency planning at school (hazard mitigation actions, evacuation plan, and performance of drills). Survey results revealed that a high percentage of teachers have taken the appropriate preparedness measures concerning non-structural hazards at schools, the emergency school plan, and yearly simulation drills. In order to improve action-planning for ongoing school disaster risk reduction, the implementation of earthquake drills, the involvement of students with disabilities, and the evaluation of school emergency plans, EPPO participates in the E-PreS project. The main objective of this project is to create smart tools that define, simulate and evaluate all-hazards emergency steps customized to each district and school. The project provides a holistic methodology using real-time evaluation involving different categories of actors, districts, steps and metrics. The project is supported by the EU Civil Protection Financial Instrument with a duration of two years. The coordinator is the Kapodistrian University of Athens, and the partners come from four countries: Greece, Italy, Romania and Bulgaria.
Keywords: drills, earthquake, emergency plans, E-PreS project
Procedia PDF Downloads 235
898 Effects of Glucogenic and Lipogenic Diets on Ruminal Microbiota and Metabolites in Vitro
Authors: Beihai Xiong, Dengke Hua, Wouter Hendriks, Wilbert Pellikaan
Abstract:
To improve the energy status of dairy cows in early lactation, much work has been done on adjusting the starch-to-fiber ratio in the diet. As a complex ecosystem, the rumen contains a large population of microorganisms that play a crucial role in feed degradation. Further study of the microbiota alterations and metabolic changes under different dietary energy sources is essential and valuable to better understand the function of ruminal microorganisms and thereby optimize rumen function and improve feed efficiency. The present study focuses on the effects of two glucogenic diets (G: ground corn and corn silage; S: steam-flaked corn and corn silage) and a lipogenic diet (L: sugar beet pulp and alfalfa silage) on rumen fermentation, gas production, the ruminal microbiota and metabolome, and their correlations in vitro. Gas production was recorded continuously, and the gas volumes and production rates at 6, 12, 24, and 48 h were calculated separately. The fermentation end-products were measured after fermenting for 48 h. The ruminal bacterial and archaeal communities were determined by 16S rRNA sequencing; the metabolome was profiled by LC-MS. Compared to diets G and S, the L diet had a lower dry matter digestibility, propionate production, and ammonia-nitrogen concentration. The two glucogenic diets performed worse in controlling methane and lactic acid production compared to the L diet. The S diet produced the greatest cumulative gas volume at all time points during incubation compared to the G and L diets. The metabolic analysis revealed that lipid digestion was up-regulated by diet L relative to the other diets. At the subclass level, most metabolites belonging to fatty acids and conjugates were higher, but most metabolites belonging to amino acids, peptides, and analogs were lower in diet L than in the others.
Differences in rumen fermentation characteristics were associated with (or resulted from) changes in the relative abundance of bacterial and archaeal genera. Most highly abundant bacteria were stable or only slightly influenced by diet, while several amylolytic and cellulolytic bacteria were sensitive to the dietary changes. The L diet had a significantly higher abundance of cellulolytic bacteria, including the genera Ruminococcus, Butyrivibrio, Eubacterium, Lachnospira, unclassified Lachnospiraceae, and unclassified Ruminococcaceae. The relative abundances of amylolytic bacterial genera, including Selenomonas_1, Ruminobacter, and Succinivibrionaceae_UCG-002, were higher in diets G and S. These affected bacteria were also shown to be strongly associated with certain metabolites. Selenomonas_1 and Succinivibrionaceae_UCG-002 may contribute to the higher propionate production in diets G and S by enhancing the succinate pathway. The results indicated that the two glucogenic diets had a greater extent of gas production, a higher dry matter digestibility, and produced more propionate than diet L. Steam-flaked corn did not perform better on fermentation end-products than ground corn. This study offers a deeper understanding of ruminal microbial functions, which could assist in improving rumen function and thereby ruminant production.
Keywords: gas production, metabolome, microbiota, rumen fermentation
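Cumulative gas production readings at fixed time points, like those recorded above, are commonly summarized by fitting the exponential model y(t) = b(1 - e^(-ct)) (a standard choice for in vitro gas curves, though not necessarily the one used in this study). A minimal sketch, with hypothetical readings: for each candidate rate c the asymptote b has a closed-form least-squares solution, so a coarse grid search over c suffices.

```python
import numpy as np

def fit_gas_curve(t, y, c_grid=np.linspace(0.005, 0.5, 1000)):
    """Fit y(t) = b * (1 - exp(-c*t)) by grid search over the rate c.

    For fixed c the model is linear in b, giving the closed-form
    least-squares solution b = sum(x*y) / sum(x*x) with x = 1 - exp(-c*t).
    """
    t, y = np.asarray(t, float), np.asarray(y, float)
    best = (np.inf, None, None)
    for c in c_grid:
        x = 1.0 - np.exp(-c * t)
        b = (x @ y) / (x @ x)
        sse = ((y - b * x) ** 2).sum()
        if sse < best[0]:
            best = (sse, b, c)
    return best[1], best[2]  # (asymptotic gas volume b, fractional rate c)

# Hypothetical readings at the incubation times used in the study (6, 12, 24, 48 h)
t = np.array([6.0, 12.0, 24.0, 48.0])
y = 60.0 * (1.0 - np.exp(-0.08 * t))  # synthetic curve: b = 60 ml/g, c = 0.08/h
b_hat, c_hat = fit_gas_curve(t, y)
```

The fitted asymptote and rate give a compact, diet-by-diet comparison of fermentation extent and speed.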
Procedia PDF Downloads 153
897 A Brazilian Study Applied to the Regulatory Environmental Issues of Nanomaterials
Authors: Luciana S. Almeida
Abstract:
Nanotechnology has revolutionized the world of science and technology, bringing great expectations due to its potential for application in the most varied industrial sectors. Yet the same characteristics that make nanoparticles interesting from the point of view of technological application may be undesirable when they are released into the environment. The small size of nanoparticles facilitates their diffusion and transport in the atmosphere, water, and soil, and eases their entry into and accumulation in living cells. The main objective of this study is to evaluate the environmental regulatory process for nanomaterials in the Brazilian scenario. Three specific objectives were outlined. The first is to carry out a global scientometric study on a research platform, with the purpose of identifying the main lines of study of nanomaterials in the environmental area. The second is to verify, by means of a bibliographic review, how environmental agencies in other countries have been working on this issue. The third is to carry out an assessment of the Brazilian Nanotechnology Draft Law 6741/2013 with the state environmental agencies, with the aim of identifying the agencies' knowledge of the subject and the resources available in the country for implementing the policy. A questionnaire will be used as the evaluation tool to identify the operational elements and build indicators, administered through the Environment of Evaluation Application, a computational application developed for building questionnaires. Finally, the need to propose changes to the Draft Law of the National Nanotechnology Policy will be assessed. Initial studies related to the first specific objective have already shown that Brazil stands out in the production of scientific publications in the area of nanotechnology, although only a minority are studies focused on environmental impact.
Regarding the general panorama of other countries, some findings have also emerged. The United States has included the nanoforms of substances in an existing EPA (Environmental Protection Agency) program, the TSCA (Toxic Substances Control Act). The European Union issued a draft document amending Regulation 1907/2006 of the European Parliament and Council to cover the nanoforms of substances. Both programs are based on the study and identification of environmental risks associated with nanomaterials, taking the product life cycle into consideration. Regarding Brazil and the third specific objective, it is notable that the country does not yet have any regulations applicable to nanostructures, although a Draft Law is in progress. In this document, it is possible to identify some requirements related to the environment, such as environmental inspection and licensing, industrial waste management, notification of accidents, and application of sanctions. However, it is not known whether these requirements are sufficient to prevent environmental impacts, nor whether national environmental agencies will know how to apply them correctly. This study is intended to serve as a basis for future actions regarding environmental management applied to the use of nanotechnology in Brazil.
Keywords: environment, management, nanotechnology, policy
Procedia PDF Downloads 122
896 A Data-Driven Compartmental Model for Dengue Forecasting and Covariate Inference
Authors: Yichao Liu, Peter Fransson, Julian Heidecke, Jonas Wallin, Joacim Rockloev
Abstract:
Dengue, a mosquito-borne viral disease, poses a significant public health challenge in endemic tropical and subtropical countries, including Sri Lanka. To reveal insights into the complexity of this disease's dynamics and to study its drivers, a comprehensive model capable of both robust forecasting and insightful inference of drivers, while capturing the co-circulation of several virus strains, is essential. However, existing studies mostly focus on only one aspect at a time and do not integrate insights across these siloed approaches. While mechanistic models have been developed to capture immunity dynamics, they are often oversimplified and fail to integrate the diverse drivers of disease transmission. On the other hand, purely data-driven methods lack the constraints imposed by immuno-epidemiological processes, making them prone to overfitting and inference bias. This research presents a hybrid model that combines machine learning techniques with mechanistic modelling to overcome the limitations of existing approaches. Leveraging eight years of newly reported dengue case data, along with socioeconomic factors such as human mobility, weekly climate data from 2011 to 2018, genetic data detecting the introduction and presence of new strains, and estimates of seropositivity for different districts in Sri Lanka, we derive a data-driven vector (SEI) to human (SEIR) model across 16 regions of Sri Lanka at the weekly time scale. Through ablation studies, the lag effects of the time-varying climate factors, allowing delays of up to 12 weeks, were determined. The model demonstrates superior predictive performance over a pure machine learning approach at lead times of 5 and 10 weeks on data withheld from model fitting. It further reveals several interesting, interpretable findings about the drivers while adjusting for the dynamics and influences of immunity and the introduction of a new strain.
The study uncovers strong influences of socioeconomic variables: population density, mobility, household income, and rural versus urban population. It reveals substantial sensitivity to the diurnal temperature range and precipitation, while mean temperature and humidity appear less important in the study location. Additionally, the model indicated sensitivity to the vegetation index, both maximum and average. Predictions on test data show high model accuracy. Overall, this study advances the knowledge of dengue transmission in Sri Lanka and demonstrates the importance of hybrid modelling techniques that pair biologically informed model structures with flexible, data-driven estimates of model parameters. The findings show the potential both for inference of drivers in situations of complex disease dynamics and for robust forecasting models.
Keywords: compartmental model, climate, dengue, machine learning, socioeconomic
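The vector (SEI) to human (SEIR) structure described above can be sketched as a minimal discrete-time step at the weekly time scale. All rates and population sizes below are hypothetical placeholders, not the study's fitted values; the point is only the compartmental bookkeeping that a data-driven layer would parameterize.

```python
import numpy as np

def step_week(h, v, beta_hv, beta_vh, sigma_h, gamma_h, sigma_v, mu_v):
    """Advance human SEIR and vector SEI compartments by one week (Euler step).

    h = [S, E, I, R] humans; v = [S, E, I] vectors.
    beta_hv: vector-to-human transmission rate, beta_vh: human-to-vector,
    sigma_*: incubation rates, gamma_h: human recovery, mu_v: vector turnover.
    """
    Sh, Eh, Ih, Rh = h
    Sv, Ev, Iv = v
    Nh, Nv = h.sum(), v.sum()
    new_inf_h = beta_hv * Sh * Iv / Nv  # humans exposed by infectious vectors
    new_inf_v = beta_vh * Sv * Ih / Nh  # vectors exposed by infectious humans
    h_next = np.array([
        Sh - new_inf_h,
        Eh + new_inf_h - sigma_h * Eh,
        Ih + sigma_h * Eh - gamma_h * Ih,
        Rh + gamma_h * Ih,
    ])
    # vector births balance deaths, so the vector population stays constant
    v_next = np.array([
        Sv - new_inf_v + mu_v * Nv - mu_v * Sv,
        Ev + new_inf_v - sigma_v * Ev - mu_v * Ev,
        Iv + sigma_v * Ev - mu_v * Iv,
    ])
    return h_next, v_next

h = np.array([990_000.0, 5_000.0, 3_000.0, 2_000.0])   # hypothetical district
v = np.array([1_900_000.0, 60_000.0, 40_000.0])
for _ in range(10):  # ten weeks
    h, v = step_week(h, v, 0.3, 0.3, 0.5, 0.7, 0.6, 0.25)
```

In the hybrid approach the transmission rates would be time-varying outputs of a machine-learning component driven by climate and socioeconomic covariates, rather than the fixed constants used here.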
Procedia PDF Downloads 84
895 Accelerating Malaysian Technology Startups: Case Study of Malaysian Technology Development Corporation as the Innovator
Authors: Norhalim Yunus, Mohamad Husaini Dahalan, Nor Halina Ghazali
Abstract:
Building technology start-ups from ground zero into world-class companies in form and substance presents a rare opportunity for government-affiliated institutions in Malaysia. The challenge of building such start-ups becomes tougher when their core businesses involve commercialization of unproven technologies for the mass market. These simple truths, while difficult to execute, go a long way toward getting a business off the ground and flying high. Malaysian Technology Development Corporation (MTDC), a company founded to facilitate the commercial exploitation of R&D findings from research institutions and universities, and eventually to help translate these findings into applications in the marketplace, is an excellent case in point. The purpose of this paper is to examine MTDC as an institution as it explores the concept of 'it takes a village to raise a child' in an effort to create and nurture start-ups into established, world-class Malaysian technology companies. With MTDC at the centre of Malaysia's innovative start-ups, the analysis seeks to answer two specific questions: How has the concept been applied in MTDC? And what can we learn from this successful case? A key aim is to elucidate how MTDC's journey as a private limited company can help leverage reforms and achieve transformation, a process that might be suitable for other small, open, developing countries. This paper employs a single case study, designed to acquire an in-depth understanding of how MTDC has developed and grown technology start-ups into world-class technology companies. The case study methodology is employed because the focus is on a contemporary phenomenon within a real business context; it also explains causal links in real-life situations that a single survey or experiment is unable to unearth.
The findings show that MTDC applies the concept of 'it takes a village to raise a child' in its totality, as MTDC itself assumes the role of the innovator to 'raise' start-up companies into world-class stature. As the innovator, MTDC creates shared value and leadership, introduces innovative programmes ahead of the curve, mobilises talent for optimum results, and aggregates knowledge for personnel advancement. The success of the company's efforts is attributed largely to visionary leadership, adaptability, commitment to innovation, partnership and networking, and entrepreneurial drive. The findings of this paper are, however, limited by the single case study of MTDC. Future research is required to study more cases of success and/or failure where the concept of 'it takes a village to raise a child' has been explored and applied.
Keywords: start-ups, technology transfer, commercialization, technology incubator
Procedia PDF Downloads 150
894 Improvement of Activity of β-galactosidase from Kluyveromyces lactis via Immobilization on Polyethylenimine-Chitosan
Authors: Carlos A. C. G. Neto, Natan C. G. e Silva, Thaís de O. Costa, Luciana R. B. Gonçalves, Maria V. P. Rocha
Abstract:
β-galactosidases (E.C. 3.2.1.23) are enzymes that have attracted interest for catalyzing the hydrolysis of lactose and for producing galacto-oligosaccharides by favoring transgalactosylation reactions. When immobilized, these enzymes can have some of their characteristics substantially improved, and coating supports with multifunctional polymers is a promising alternative for enhancing the stability of the biocatalysts; among such polymers, polyethylenimine (PEI) stands out. PEI is a flexible polymer that conforms to the structure of the enzyme, conferring greater stability, especially on multimeric enzymes such as β-galactosidases, and protecting them from environmental variations. Using a chitosan support coated with PEI could improve the catalytic efficiency of β-galactosidase from Kluyveromyces lactis in the transgalactosylation reaction for the production of prebiotics such as lactulose, since this strain is more effective in the hydrolysis reaction. In this context, the aim of the present work was first to develop biocatalysts of β-galactosidase from K. lactis immobilized on PEI-coated chitosan, determining the immobilization parameters and the operational and thermal stability, and then to apply them in hydrolysis and transgalactosylation reactions to produce lactulose using whey as the substrate. Immobilization of β-galactosidase on chitosan previously functionalized with 0.8% (v/v) glutaraldehyde and then coated with 10% (w/v) PEI solution was evaluated using an enzymatic load of 10 mg protein per gram of support. Subsequently, the hydrolysis and transgalactosylation reactions were conducted at 50 °C and 120 RPM for 20 minutes, using whey supplemented with fructose at a 1:2 lactose/fructose ratio, totaling 200 g/L. Operational stability studies were performed under the same conditions for 10 cycles. Thermal stability studies of the biocatalysts were conducted at 50 °C in 50 mM phosphate buffer, pH 6.6, with 0.1 mM MnCl2.
The biocatalyst whose support was coated was named CHI_GLU_PEI_GAL, and the uncoated one CHI_GLU_GAL. Coating the support with PEI considerably improved the immobilization parameters: the immobilization yield increased from 56.53% to 97.45%, the biocatalyst activity from 38.93 U/g to 95.26 U/g, and the efficiency from 3.51% to 6.0% for the uncoated and coated support, respectively. The biocatalyst CHI_GLU_PEI_GAL outperformed CHI_GLU_GAL in the hydrolysis of lactose and production of lactulose, converting 97.05% of the lactose within 5 min of reaction and producing 7.60 g/L lactulose in the same time interval. The CHI_GLU_PEI_GAL biocatalyst was stable in the lactose hydrolysis reactions during the 10 cycles evaluated, still converting 73.45% of the lactose after the tenth cycle; in lactulose production it was stable up to the fifth cycle evaluated, producing 10.95 g/L lactulose. However, the thermal stability of the CHI_GLU_GAL biocatalyst was superior, with a half-life 6 times higher, probably because the enzyme was immobilized by covalent bonding, which is stronger than the adsorption used for CHI_GLU_PEI_GAL. Therefore, the strategy of coating supports with PEI has proven effective for the immobilization of β-galactosidase from K. lactis, considerably improving the immobilization parameters as well as the catalytic action of the enzyme. Moreover, the process can be economically viable due to the use of an industrial residue as the substrate.
Keywords: β-galactosidase, immobilization, Kluyveromyces lactis, lactulose, polyethylenimine, transgalactosylation reaction, whey
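Immobilization parameters like those reported above are conventionally computed from an activity balance: yield is the fraction of offered activity that disappears from the supernatant, and efficiency compares the activity actually expressed by the support with the activity theoretically immobilized. A minimal sketch using the standard definitions (the input activities are hypothetical, chosen only to reproduce a 97.45% yield for illustration):

```python
def immobilization_parameters(offered_u, residual_u, observed_u_per_g, load_g):
    """Standard activity-balance definitions for enzyme immobilization.

    offered_u:         units of activity offered to the support
    residual_u:        units remaining in supernatant + washes afterwards
    observed_u_per_g:  activity measured on the final biocatalyst (U/g)
    load_g:            grams of support used
    """
    immobilized_u = offered_u - residual_u
    yield_pct = 100.0 * immobilized_u / offered_u
    theoretical_u_per_g = immobilized_u / load_g
    efficiency_pct = 100.0 * observed_u_per_g / theoretical_u_per_g
    return yield_pct, efficiency_pct

# Hypothetical activity balance consistent with the coated support's figures
y, e = immobilization_parameters(
    offered_u=1600.0, residual_u=40.8, observed_u_per_g=95.26, load_g=1.0)
```

With these placeholder inputs the yield comes out at 97.45% and the efficiency near 6%, echoing the coated-support values in the abstract; the real balance would use the measured activities.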
Procedia PDF Downloads 111
893 Development of Bilayer Coating System for Mitigating Corrosion of Offshore Wind Turbines
Authors: Adamantini Loukodimou, David Weston, Shiladitya Paul
Abstract:
Offshore structures are subjected to harsh environments, and it is well documented that carbon steel needs protection from corrosion. The combined effect of UV radiation, seawater splash, and fluctuating temperatures diminishes the integrity of these structures. In addition, possible damage from floating ice, seaborne debris, and maintenance boats makes them even more vulnerable. Their inspection and maintenance far out at sea are difficult, risky, and expensive. The best-known method of mitigating corrosion of offshore structures is cathodic protection. There are several zones in an offshore wind turbine; in the atmospheric zone, the lack of a continuous electrolyte (seawater) layer between the structure and the anode renders this method inefficient, so the use of protective coatings becomes indispensable. This research focuses on the atmospheric zone. The conversion of a commercially available, conventional (epoxy) paint system into an autonomous self-healing paint system via the addition of suitably encapsulated healing agents and catalyst is investigated in this work. These coating systems, which can self-heal when damaged, can provide a cost-effective engineering solution to corrosion and related problems. When damage to the paint coating occurs, the microcapsules are designed to rupture and release the self-healing liquid (monomer), which then reacts in the presence of the catalyst and solidifies (polymerization), resulting in healing. The catalyst must be compatible with the system, because otherwise the self-healing process will not occur. Because the carbon steel substrate will be exposed to a corrosive environment, the use of a sacrificial layer of Zn is also investigated.
More specifically, the first layer of this new coating system is TSZA (thermally sprayed Zn85/Al15), applied to carbon steel samples of 100 x 150 mm after blasting with alumina (size F24) as part of the surface preparation. Based on the literature, this layer corrodes readily, so one additional paint layer enriched with microcapsules is added. The reaction and curing times are also of high importance for this bilayer coating system to work successfully. In the first experiments, polystyrene microcapsules loaded with 3-octanoylthio-1-propyltriethoxysilane were produced. Electrochemical experiments such as Electrochemical Impedance Spectroscopy (EIS) confirmed the corrosion-inhibiting properties of the silane. The diameter of these microcapsules was about 150-200 microns. Further experiments were conducted with different reagents and methods in order to obtain diameters of about 50 microns, and the self-healing properties were tested in synthetic seawater using electrochemical techniques. The use of combined paint/electrodeposited coatings allows further novel development of composite coating systems. The potential for applying these coatings to offshore structures will be discussed.
Keywords: corrosion mitigation, microcapsules, offshore wind turbines, self-healing
Procedia PDF Downloads 114
892 Polarimetric Study of System Gelatin / Carboxymethylcellulose in the Food Field
Authors: Sihem Bazid, Meriem El Kolli, Aicha Medjahed
Abstract:
Proteins and polysaccharides are the two types of biopolymers most frequently used in the food industry to control the mechanical properties, structural stability, and organoleptic properties of products. The textural and structural properties of blends of these two types of polymers depend on their interactions and their ability to form organized structures. From an industrial point of view, a better understanding of protein/polysaccharide mixtures is an important issue, since they are already heavily involved in processed food. It is in this context that we have chosen to work on a model system composed of a fibrous protein (gelatin) mixed with an anionic polysaccharide (sodium carboxymethylcellulose). Gelatin, one of the most popular biopolymers, is widely used in food, pharmaceutical, cosmetic, and photographic applications because of its unique functional and technological properties. Sodium carboxymethylcellulose (NaCMC) is an anionic linear polysaccharide derived from cellulose. It is an important industrial polymer with a wide range of applications. The functional properties of this anionic polysaccharide can be modified by the presence of proteins with which it might interact. Another factor that may govern the interactions in protein-polysaccharide mixtures is the triple helix of gelatin. The complex synthesis of collagen results in an extracellular assembly organized on several levels: collagen can be in a soluble state or associate into fibrils, which can in turn associate into fibers, and each level corresponds to an organization recognized by the cellular and metabolic system. A gelatin gel forms through triple-helical refolding of denatured collagen chains; this gel has been the subject of numerous studies, and it is now known that its properties depend only on the fraction of triple helices forming the network. Chemical modification of this system is fairly well controlled.
Observing the dynamics of the triple helix may be relevant to understanding the interactions involved in protein-polysaccharide mixtures. Gelatin is central to many industrial processes; understanding and analyzing the molecular dynamics induced by the triple helix during gelatin's transitions can have great economic importance in all fields, especially food. The goal is to understand the possible mechanisms involved, depending on the nature of the mixtures obtained. From a fundamental point of view, it is clear that the protective effect of NaCMC on gelatin and the conformational changes of the helix are strongly influenced by the nature of the medium. Our goal is to minimize changes to the helical structure as much as possible, in order to keep gelatin more stable and protect it against the denaturation that occurs during conversion processes in the food industry. In order to study the nature of the interactions and assess the properties of the mixtures, polarimetry was used to monitor the optical parameters and to evaluate the helicity rate of gelatin.
Keywords: gelatin, sodium carboxymethylcellulose, gelatin-NaCMC interaction, helicity rate, polarimetry
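Polarimetric estimates of helicity of the kind mentioned above are typically obtained by linear interpolation of the measured specific rotation between all-coil and all-helix reference values. A minimal sketch (the reference rotations below are hypothetical placeholders, not measured values from this work):

```python
def helicity_fraction(alpha_obs, alpha_coil, alpha_helix):
    """Fraction of chains in the triple-helical conformation, assuming the
    specific rotation varies linearly between the coil and helix limits."""
    return (alpha_obs - alpha_coil) / (alpha_helix - alpha_coil)

# Hypothetical specific rotations (degrees, sodium D line):
# a measured value halfway-ish between the coil and helix references
chi = helicity_fraction(alpha_obs=-600.0, alpha_coil=-350.0, alpha_helix=-800.0)
```

Tracking chi over time or across NaCMC concentrations is one way to quantify the protective effect of the polysaccharide on the helix.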
Procedia PDF Downloads 312
891 Service Quality, Skier Satisfaction, and Behavioral Intentions in Leisure Skiing: The Case of Beijing
Authors: Shunhong Qi, Hui Tian
Abstract:
Triggered by the forthcoming 2022 Winter Olympics, ski centers are blossoming in China, numbering 742 in 2018. Although skier visits to ski resorts soared to 19.7 million in 2018, one-time skiers account for a considerable portion of them. In light of the extremely low return rates and skiing penetration level (0.5%) of leisure skiing in China, this study proposes and tests a leisure ski service performance framework that assesses ski resorts' service quality and skier satisfaction, as well as their impact on skiers' behavioral intentions, with the aim of assessing the success of ski resorts and providing suggestions for improvement. Three self-administered surveys and 16 interviews were conducted on a convenience sample of leisure skiers in two major ski destinations within two hours' drive from Beijing: the Nanshan and Jundushan ski resorts. Of the 680 questionnaires distributed, 416 usable copies were returned, a response rate of 61.2%. The questionnaire was developed from the existing literature on skiers' 'push' factors (intrinsic desire) and 'pull' factors (attractiveness of a destination), as well as leisure sport satisfaction. The scale comprises four parts: skiers' demographic profiles; their perceived service quality (the ski resort's infrastructure, expense, safety and comfort, convenience, daily needs support, skill development support, and accessibility); their overall levels of satisfaction (satisfaction with the service and with the experience); and their behavioral intentions (loyalty, future visitation, and greater tolerance of price increases). The demographic profiles show that among the 220 males and 196 females surveyed, a vast majority of skiers are aged 17-39 (87.2%), 64.7% are not married, nearly half (48.3%) have a monthly family income exceeding 10,000 yuan (USD 1,424), and 80% are beginners or intermediate skiers.
The regression examining the influence of service quality on skier satisfaction reveals that service quality accounts for 44.4% of the variance in skier satisfaction, with the variables of safety and comfort, expense, skill development support, and accessibility contributing significantly, in descending order. Another regression, analyzing the influence of service quality and skier satisfaction on behavioral intentions, shows that the two together account for 39.1% of the variance in skiers' behavioral intentions, the significant predictors being skier satisfaction, safety and comfort, expense, and accessibility, in descending order; a comparison between groups, however, indicates that for expert skiers the significant variables are skier satisfaction, skill development support, and safety and comfort. Suggestions are thus made for ski resorts and other stakeholders to improve skier satisfaction and increase visitation: developing diversified ski courses to meet the demands of skiers of different skill levels and to reduce crowding; installing sufficient chairlifts and magic carpets; reinforcing safety measures and medical staffing; further exploiting their various resources to lower the expense of ski passes, equipment rental, accommodation and dining; adding more bus lines and/or developing platforms for skiers' car-pooling; and offering diversified skiing activities with local flavors for better entertainment.
Keywords: behavioral intentions, leisure skiing, service quality, skier satisfaction
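Variance-explained figures like the 44.4% and 39.1% above are the R² of ordinary least squares fits. As a hedged illustration of how such a figure is computed, here is a minimal sketch with synthetic data; the predictor count, coefficients and sample values are invented for demonstration and are not the study's data:

```python
import numpy as np

def ols_r_squared(X, y):
    """Fit y = Xb + e by ordinary least squares and return the R^2
    (share of variance in y explained by the predictors in X)."""
    X1 = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)
    ss_tot = float((y - y.mean()) @ (y - y.mean()))
    return 1.0 - ss_res / ss_tot

# Synthetic illustration: 4 service-quality predictors, one satisfaction score,
# 416 respondents (matching the sample size reported in the abstract).
rng = np.random.default_rng(0)
X = rng.normal(size=(416, 4))  # e.g. safety/comfort, expense, skill support, access
y = X @ np.array([0.5, 0.3, 0.2, 0.1]) + rng.normal(scale=1.0, size=416)
r2 = ols_r_squared(X, y)
```

A significance test for each coefficient (as implied by "contributing significantly") would additionally require standard errors, which statistical packages report alongside R².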
Procedia PDF Downloads 89
890 Analysis of Shrinkage Effect during Mercerization on Himalayan Nettle, Cotton and Cotton/Nettle Yarn Blends
Authors: Reena Aggarwal, Neha Kestwal
Abstract:
The Himalayan Nettle (Girardinia diversifolia) has been used for centuries as a fibre and food source by Himalayan communities. Himalayan Nettle is a natural cellulosic fibre that can be handled in the same way as other cellulosic fibres. The Uttarakhand Bamboo and Fibre Development Board, based in Uttarakhand, India, is working extensively with nettle fibre to explore its potential for textile production in the region. The fibre is a potential resource for rural enterprise development in some high-altitude pockets of the state, and traditionally the plant fibre is used for making domestic products like ropes and sacks. Himalayan Nettle is an unconventional natural fibre with functional characteristics of shrink resistance and a degree of pathogen and fire resistance, and it blends well with other fibres. Most importantly, it generates mainly organic waste and leaves residues that are 100% biodegradable. The fabrics may potentially be reused or re-manufactured and can also serve as a source of cellulose feedstock for regenerated cellulosic products. Being naturally biodegradable, the fibre can be composted if required. Though many research activities and training efforts are directed towards fibre extraction and processing techniques, such as retting and degumming, in different craft clusters in the villages of the Uttarkashi, Chamoli and Bageshwar districts of Uttarakhand, very little has been done to analyse crucial properties of nettle fibre such as shrinkage and wash fastness. These properties are vital for obtaining the desired fibre quality for further yarn making and weaving and for developing these fibres into fine saleable products. This research therefore focuses on on-field shrinkage experiments conducted on cotton, nettle and cotton/nettle blended yarn samples. The objective of the study was to analyze the scope of the blended fibre for development into wearable fabrics.
For the study, after initial fibre length and fineness testing, cotton and nettle fibres were mixed in a 60:40 ratio and five varieties of yarn were spun in an open-end spinning mill, with yarn counts of 3s, 5s, 6s, 7s and 8s. Samples of 100% nettle and 100% cotton fibres in 8s count were also developed for the study. All six yarn varieties were subjected to shrinkage testing and the results critically analyzed as per ASTM method D2259. It was observed that 100% nettle had the least shrinkage, 3.36%, while pure cotton shrank by approximately 13.6%; yarns made of 100% cotton thus exhibit about four times more shrinkage than 100% nettle. The results also show that cotton/nettle blended yarns exhibit lower shrinkage than 100% cotton yarn. It was thus concluded that as the ratio of nettle in the samples increases, shrinkage decreases. These results are crucial for the people of Uttarakhand who want to commercially exploit the abundant nettle fibre to generate sustainable employment.
Keywords: Himalayan nettle, sustainable, shrinkage, blending
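Shrinkage percentages of the kind reported above are computed from marked specimen lengths before and after treatment. A minimal sketch of the generic calculation follows; the specimen lengths are hypothetical, chosen only to echo the abstract's 13.6% and 3.36% figures, and the full ASTM D2259 procedure involves additional conditioning steps not shown here:

```python
def shrinkage_percent(length_before, length_after):
    """Percentage shrinkage of a yarn specimen after treatment,
    from marked lengths measured before and after."""
    return (length_before - length_after) / length_before * 100.0

# Hypothetical 50 cm marked lengths, chosen to reproduce the reported values:
cotton = shrinkage_percent(50.0, 43.2)   # 100% cotton, ~13.6%
nettle = shrinkage_percent(50.0, 48.32)  # 100% nettle, ~3.36%
ratio = cotton / nettle                  # ~4x, matching the abstract's claim
```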
Procedia PDF Downloads 240
889 Edible Active Antimicrobial Coatings onto Plastic-Based Laminates and Its Performance Assessment on the Shelf Life of Vacuum Packaged Beef Steaks
Authors: Andrey A. Tyuftin, David Clarke, Malco C. Cruz-Romero, Declan Bolton, Seamus Fanning, Shashi K. Pankaj, Carmen Bueno-Ferrer, Patrick J. Cullen, Joe P. Kerry
Abstract:
Prolonging shelf-life is essential in order to address issues such as supplier demands across continents, economic profit, customer satisfaction, and the reduction of food waste. Smart packaging solutions, in the form of naturally derived antimicrobially active packaging, may be a solution to these and other issues. A gelatin film-forming solution with the addition of naturally sourced antimicrobials is a promising tool for active smart packaging. The objective of this study was to coat conventional hydrophobic plastic packaging material with a hydrophilic antimicrobial active beef gelatin coating and conduct shelf-life trials on beef sub-primal cuts. The minimum inhibitory concentrations (MIC) of caprylic acid sodium salt (SO) and the commercially available Auranta FV (AFV) (a bitter orange extract with a mixture of nutritive organic acids) were found to be 1% and 1.5%, respectively, against the bacterial strains Bacillus cereus, Pseudomonas fluorescens, Escherichia coli, Staphylococcus aureus and aerobic and anaerobic beef microflora. Therefore, SO or AFV was incorporated into the beef gelatin film-forming solution at twice the MIC, and this was coated onto a conventional LDPE/PA plastic film on the inner, cold-plasma-treated polyethylene surface. Beef samples were vacuum packed in this material, stored under chilled conditions, and sampled at weekly intervals during a 42-day shelf-life study. No significant differences (p < 0.05) in cook loss were observed among the different treatments compared to control samples until day 29; only for the AFV-coated beef samples was it 3% higher (37.3%) than the control (34.4%) on day 36. The antimicrobial films did not protect beef against discoloration. SO-containing packages significantly (p < 0.05) reduced total viable bacterial counts (TVC) compared to the control and AFV samples until day 35.
No significant reduction in TVC was observed between SO and AFV films on day 42, but a significant difference was observed compared to control samples, with a 1.40 log reduction in bacteria on day 42. AFV films significantly (p < 0.05) reduced TVC compared to control samples from day 14 until day 42. Control samples reached the set value of 7 log CFU/g on day 27 of testing; AFV films did not reach this limit until day 35, and SO films until day 42. The antimicrobial AFV and SO coated films thus significantly prolonged the shelf-life of beef steaks by 33% or 55% (7 and 14 days, respectively) compared to control film samples. It is concluded that antimicrobial coated films were successfully developed by coating the inner polyethylene layer of conventional LDPE/PA laminated films after plasma surface treatment. The results indicated that the use of antimicrobial active packaging coated with SO or AFV significantly (p < 0.05) increased the shelf life of the beef sub-primal cuts. Overall, AFV- or SO-containing gelatin coatings have the potential to be used as effective antimicrobials in active packaging applications for muscle-based food products.
Keywords: active packaging, antimicrobials, edible coatings, food packaging, gelatin films, meat science
Procedia PDF Downloads 303
888 On-Farm Mechanized Conservation Agriculture: Preliminary Agro-Economic Performance Difference between Disc Harrowing, Ripping and No-Till
Authors: Godfrey Omulo, Regina Birner, Karlheinz Koller, Thomas Daum
Abstract:
Conservation agriculture (CA), a climate-resilient and sustainable practice, has been carried out for over three decades in Zambia. However, its continued promotion and adoption have been predominantly small-scale. Despite the plethora of scholarship pointing to the positive benefits of CA in regard to enhanced yield, profitability, carbon sequestration and minimal environmental degradation, these have not stimulated the commensurate agricultural extensification desired for Zambia. The objective of this study was to investigate potential differences between mechanized conventional and conservation tillage practices in operation time, fuel consumption, labor costs, soil moisture retention, soil temperature and crop yield. An on-farm mechanized conservation agriculture (MCA) experiment arranged in a randomized complete block design with four replications was used. The research was conducted on 15 ha of rainfed sandy loam land: soybeans on 7 ha with plot dimensions of 24 m by 210 m and maize on 8 ha with plot dimensions of 24 m by 250 m. The three tillage treatments were residue burning followed by disc harrowing, ripping tillage, and no-till. The crops were rotated in two subsequent seasons. All operations were done using a 60 hp 2-wheel tractor, a disc harrow, a two-tine ripper and a two-row planter. Soil measurements and agro-economic factors were recorded for two farming seasons. The seasonal results showed that the yields of maize and soybeans under the no-till and ripping tillage practices were not significantly different from those under conventional burning and discing. However, there was a significant difference in soil moisture content between no-till (25.31 SFU ± 2.77) and disced (11.91 SFU ± 0.59) plots at depths from 10-60 cm. Soil temperature in no-till plots (24.59 °C ± 0.91) was significantly lower than in the disced plots (26.20 °C ± 1.75) at depths of 15 cm and 45 cm.
For maize, there was a significant difference in operation time between disc-harrowed (3.68 hr/ha ± 1.27) and no-till (1.85 hr/ha ± 0.04) plots, and a significant difference in labor cost between disc-harrowed ($45.45/ha ± 19.56) and no-till ($21.76/ha) plots. There was no significant difference in fuel consumption between ripping, disc harrowing and direct seeding. For soybeans, there was a significant difference in operation time between no-till (1.96 hr/ha ± 0.31) and both ripping (3.34 hr/ha ± 0.53) and disc harrowing (3.30 hr/ha ± 0.16). Further, fuel consumption and labor on no-till plots were significantly different from both the ripped and disc-harrowed plots. The high seed emergence percentage on the maize disc-harrowed plot (93.75% ± 5.87) was not significantly different from the ripping and no-till plots. Likewise, the high seed emergence percentage on the soybean ripped plot (93.75% ± 13.03) showed no significant difference from the other treatments. The results show that it is economically sound and time-saving to practice MCA and obtain viable yields compared to conventional farming. This research fills a gap on the potential of MCA in the context of Zambia and its profitability, incentivizing policymakers to invest in appropriate and sustainable machinery and implements for extensive agricultural production.
Keywords: climate-smart agriculture, labor cost, mechanized conservation agriculture, soil moisture, Zambia
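Treatment comparisons like those above are typically pairwise two-sample tests on per-replicate measurements. As a hedged sketch (the per-replicate values below are invented to roughly match the reported means and are not the study's data), Welch's t statistic for disc harrowing versus no-till operation time could be computed as:

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom for two
    independent samples with unequal variances (Welch-Satterthwaite)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# Hypothetical per-replicate operation times (hr/ha), four replications each,
# echoing the reported means of ~3.68 (disc harrowing) and ~1.85 (no-till):
disc = [3.5, 2.9, 4.8, 3.5]
no_till = [1.82, 1.88, 1.84, 1.86]
t, df = welch_t(disc, no_till)
```

The t statistic would then be compared against the t distribution with `df` degrees of freedom to obtain a p-value; `scipy.stats.ttest_ind(..., equal_var=False)` bundles both steps.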
Procedia PDF Downloads 148
887 Variation of Lexical Choice and Changing Need of Identity Expression
Authors: Thapasya J., Rajesh Kumar
Abstract:
Language plays complex roles in society. Previous studies on language and society explain their interconnected, complementary and complex interactions, and those studies focused primarily on variation in language. Variation being the fundamental nature of languages, questions of personal and social identity have been navigated through language variation, establishing an interconnection between language variation and identity. This paper analyses sociolinguistic variation in language at the lexical level and how the lexical choices of speakers shape their identity. It obtains primary data from the lexicon of the Mappila dialect of Malayalam spoken by members of the Mappila (Muslim) community of Kerala. The variation in lexical choice is analysed by collecting 15-minute speech samples from four different age groups of Mappila dialect speakers. Various contexts were analysed, and the frequency of borrowed words in each instance was calculated to reach a conclusion on how the variation occurs in the speech community. The paper shows how the lexical choices of speakers can be socially motivated and involved in shaping and changing identities. Lexical items and vocabulary clearly signal group identity and personal identity. The Mappila dialect of Malayalam was rich in frequently used borrowed words from Arabic, Persian and Urdu. There was a deliberate attempt to show identity as a Mappila community member, derived from the socio-political situation of those days. This made a clear variation between the Mappila dialect and other dialects of Malayalam at the surface level, motivated by the need to create and establish the identity of a person as a member of the Mappila community.
Historically, these kinds of linguistic variation were highly motivated by socio-political factors intertwined with the historical facts about the origin and spread of Islam in the region; people from the Mappila community were highly motivated to project their identity as Mappilas because of the social insecurities they had faced before accepting the religion. Thus the deliberate inclusion of Arabic, Persian and Urdu words in their speech helped in showing their identity. However, the socio-political situations and factors present at the origin of the Mappila community have changed over time. The social motivation for indicating their identity as Mappilas no longer exists, and thus the frequency of borrowed words from Arabic, Persian and Urdu in their speech has been reduced. Apart from religious terms, borrowed words from these languages are now very few. The analysis traced changes in the language of the people according to their age and found significant variation between generations, with literacy playing a major role in this variation process. The need to project a specific identity varies according to changes in the socio-political scenario, and variation in language can shape identity to fit the changing socio-political situation in any language.
Keywords: borrowings, dialect, identity, lexical choice, literacy, variation
Procedia PDF Downloads 237
886 The Influence of Argumentation Strategy on Student’s Web-Based Argumentation in Different Scientific Concepts
Authors: Xinyue Jiao, Yu-Ren Lin
Abstract:
Argumentation is an essential aspect of scientific thinking which has received wide attention in recent reforms of science education. The purpose of the present study was to explore the influences of two variables, termed 'argumentation strategy' and 'kind of science concept', on students' web-based argumentation. The first variable was divided into either monological (referring to an individual's internal discourse and inner chain reasoning) or dialectical (referring to dialogue interaction between or among people). The second was divided into either descriptive (i.e., macro-level concepts, such as phenomena that can be observed and tested directly) or theoretical (i.e., micro-level concepts which are abstract and cannot be tested directly in nature). The study applied a quasi-experimental design in which 138 7th-grade students were invited and then randomly assigned to either a monological group (N=70) or a dialectical group (N=68). An argumentation learning program called the PWAL was developed to improve their scientific argumentation abilities, such as arguing from multiple perspectives and based on scientific evidence. Two versions of the PWAL were created. In the individual version, students could propose arguments only through knowledge recall and self-reflection; in the collaborative version, students were allowed to construct arguments through peer communication. The PWAL involved three descriptive concept-based topics (units 1, 3 and 5) and three theoretical concept-based topics (units 2, 4 and 6). Three kinds of scaffolding were embedded in the PWAL: a) an argument template, used for constructing evidence-based arguments; b) the model of Toulmin's TAP, which shows the structure and elements of a sound argument; and c) a discussion block, which enabled students to review what had been proposed during the argumentation. Both quantitative and qualitative data were collected and analyzed.
An analytical framework for coding students' arguments proposed in the PWAL was constructed. The results showed that the argumentation approach had a significant effect on argumentation only in theoretical topics (F(1, 136) = 48.2, p < .001, η² = .262). Post-hoc analysis showed the students in the collaborative group performed significantly better than the students in the individual group (mean difference = 2.27). However, there was no significant difference between the two groups regarding their argumentation in descriptive topics. Secondly, the students made significant progress in the PWAL from the earlier descriptive or theoretical topics to the later ones. The results enabled us to conclude that the PWAL was effective for students' argumentation, and that peer interaction was essential for students to argue scientifically, especially on theoretical topics. The follow-up qualitative analysis showed students tended to generate arguments through critical dialogue interactions in the theoretical topics, which prompted them to use more critiques and to evaluate and co-construct each other's arguments. Further explanations regarding the students' web-based argumentation and suggestions for the development of web-based science learning are given in our discussion.
Keywords: argumentation, collaborative learning, scientific concepts, web-based learning
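For a one-way design, the effect size η² can be recovered directly from the F statistic and its degrees of freedom. A small sketch using the reported F(1, 136) = 48.2 (the identity below is standard; the only inputs assumed are the values stated in the abstract):

```python
def eta_squared_from_f(f, df_between, df_within):
    """Recover eta^2 from a one-way ANOVA F statistic via
    eta^2 = (F * df_b) / (F * df_b + df_w)."""
    return (f * df_between) / (f * df_between + df_within)

# Values reported in the abstract: F(1, 136) = 48.2
eta2 = eta_squared_from_f(48.2, 1, 136)  # ~0.26, conventionally a large effect
```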
Procedia PDF Downloads 104
885 Integration of Gravity and Seismic Methods in the Geometric Characterization of a Dune Reservoir: Case of the Zouaraa Basin, NW Tunisia
Authors: Marwa Djebbi, Hakim Gabtni
Abstract:
Gravity is a continuously advancing method that has become a mature technology for geological studies. Increasingly, it has been used to complement and constrain traditional seismic data, and even as the only tool for obtaining information about the subsurface. In fact, in some regions the seismic data, if available, are of poor quality and hard to interpret. Such is the case for the current study area. The Nefza zone is part of the Tellian fold and thrust belt domain in the northwest of Tunisia. It is essentially made of a pile of allochthonous units resulting from a major Neogene tectonic event, and its tectonic and stratigraphic development has always been subject to controversy. Considering the geological and hydrogeological importance of this area, a detailed interdisciplinary study was conducted integrating geological, seismic and gravity techniques. The interpretation of gravity data allowed the delimitation of the dune reservoir and the identification of the regional lineaments contouring the area. It revealed the presence of three gravity lows corresponding to the Zouara and Ouchtata dunes, separated by a positive gravity axis following the Ain Allega-Aroub Er Roumane axis. The Bouguer gravity map illustrated the compartmentalization of the Zouara dune into two depressions separated by a NW-SE anomaly trend. This constitution was confirmed by the vertical derivative map, which showed the individualization of two depressions with slightly different anomaly values. The horizontal gravity gradient magnitude was computed in order to determine the different geological features present in the study area. The latter indicated the presence of NE-SW parallel folds following the major Atlasic direction; NW-SE and E-W trends were also identified. Maxima tracing confirmed this picture through the presence of NE-SW faults, mainly the Ghardimaou-Cap Serrat accident.
The quality of the available seismic sections and the absence of borehole data in the region, except for a few hydraulic wells that were drilled and show the heterogeneity of the dune's substratum, required gravity modeling of this challenging area for the geometrical characterization of the dune reservoir and the determination of the different stratigraphic series underneath these deposits. For more detailed and accurate results, the scale of study will be reduced in coming research and a more concise method elaborated: the 4D microgravity survey. This approach is considered an expansion of the gravity method, with time as its fourth dimension. It will allow continuous, repeated monitoring of fluid movement in the subsurface at the microgal (μGal) scale. The gravity effect results from the monthly variation of the dynamic groundwater level, which correlates with rainfall across different periods.
Keywords: 3D gravity modeling, dune reservoir, heterogeneous substratum, seismic interpretation
Procedia PDF Downloads 298
884 De-Pigmentary Effect of Ayurvedic Treatment on Hyper-Pigmentation of Skin Due to Chloroquine: A Case Report
Authors: Sunil Kumar, Rajesh Sharma
Abstract:
Toxic epidermal necrolysis, pruritus, rashes, lichen planus-like eruptions and hyperpigmentation of the skin are rare toxic effects of chloroquine used over a long time. Skin and mucous membrane hyperpigmentation is generally of a bluish-black or grayish color and irreversible after discontinuation of the drug. According to Ayurveda, Dushivisha is the name given to any poisonous substance which is not fully endowed with the qualities of poison by nature (i.e., it acts as an impoverished or weak poison); because of its mild potency, it remains in the body for many years, causing various symptoms, one among them being discoloration of the skin. The objective of this case report is to investigate the effect of Ayurvedic management of chloroquine-induced hyperpigmentation along the line of treatment of Dushivisha. Case Report: A 26-year-old female suffering from hyperpigmentation of the skin over the neck, forehead, temporomandibular joints, upper back and posterior aspect of both arms for 8 years, with a history of taking chloroquine, presented to the Out Patient Department of the National Institute of Ayurveda, Jaipur, India in January 2015. Routine investigations (CBC, ESR, eosinophil count) were within normal limits. A punch biopsy of the skin studied for histopathology under hematoxylin and eosin staining showed epidermis with hyperpigmentation of the basal layer. In the papillary dermis as well as the deep dermis there were scattered melanophages along with infiltration by mononuclear cells. There was no deposition of amyloid-like substances. These histopathological findings were suggestive of chloroquine-induced hyperpigmentation. The case was treated along the line of treatment of Dushivisha and was given Vamana and Virechana (therapeutic emesis and purgation) every six months, followed by Snehana karma (oleation therapy) with Panchatikta Ghrit and Swedana (sudation).
Arogyavardhini Vati 1 g, Dushivishari Vati 500 mg and Mahamanjisthadi Quath 20 ml were given twelve-hourly, and Aragwadhadi Quath 25 ml at bedtime, orally. The patient started showing lightening of the pigmentation after six months and almost complete remission after 12 months of treatment. Conclusion: This patient presented with the Dushivisha effect of chloroquine and was administered two relevant procedures from Panchakarma, viz. Vamana and Virechana. Both Vamana and Virechana karma, referred to here as Shodhana karma (purification procedures), eliminate accumulated toxins from the body. In this process, oleation dislodges the toxins from the tissues and sudation helps bring them to the alimentary tract. The line of treatment did not target direct hypo-pigmentary effects; rather, it aimed to eliminate the Dushivisha. This gave promising results in this condition.
Keywords: Ayurveda, chloroquine, Dushivisha, hyper-pigmentation
Procedia PDF Downloads 234
883 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective
Authors: Pardis Moslemzadeh Tehrani
Abstract:
Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives and is being embraced by a wide range of sectors, and smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as 'smart' because the underlying technology allows contracting parties to agree on terms expressed in computer code that defines machine-readable instructions for computers to follow under specific conditions; execution happens automatically if the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to include the financial, insurance, and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Many of the issues linked with managing supply chains from the planning and coordination stages can, despite their complexity, be implemented in a smart contract on a blockchain. Manufacturing delays and limited third-party supplies of product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation conditions (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). Products are handled by several providers before reaching customers in modern economic systems, and information is sent between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process. Information travels more effectively when intermediaries are removed from the equation.
The usage of blockchain technology could be a viable solution to these coordination issues: in blockchains, smart contracts allow for the rapid transmission of production data, logistical data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to the use of AI-blockchain technology in supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because little research has been done on this topic, it is difficult for industry stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research is on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover unforeseen supply chain challenges.
Keywords: blockchain, supply chain, IoT, smart contract
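As background to how a blockchain ledger supports this kind of data sharing, the core mechanism can be sketched as a minimal hash-linked ledger. This is purely illustrative (it is not the system proposed in this research, and the event fields are invented): each block commits to its predecessor's hash, so tampering with any earlier supply-chain record invalidates the chain.

```python
import hashlib
import json

def add_block(chain, event):
    """Append a supply-chain event (e.g. a shipment record) to a minimal
    hash-linked ledger. Each block stores the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev": prev_hash}
    block_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": block_hash})
    return chain

def verify(chain):
    """Recompute every block hash and check each link to its predecessor."""
    prev = "0" * 64
    for block in chain:
        body = {"event": block["event"], "prev": block["prev"]}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or recomputed != block["hash"]:
            return False
        prev = block["hash"]
    return True

# Hypothetical events moving through a supply chain:
ledger = []
add_block(ledger, {"stage": "supplier", "item": "raw material", "temp_ok": True})
add_block(ledger, {"stage": "shipper", "item": "raw material", "temp_ok": True})
```

A real blockchain adds consensus, replication and smart-contract execution on top of this structure, but the tamper-evidence property shown here is what underpins the integrity and accountability claims discussed above.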
Procedia PDF Downloads 126
882 The Lighthouse Project: Recent Initiatives to Navigate Australian Families Safely Through Parental Separation
Authors: Kathryn McMillan
Abstract:
A recent study of 8500 adult Australians aged 16 and over revealed that 62% had experienced childhood maltreatment. In response to multiple recommendations by bodies such as the Australian Law Reform Commission, parliamentary reports and stakeholder input, a number of key initiatives have been developed to grapple with the difficulties of a federal-state system and to screen and triage high-risk families navigating their way through the court system. The Lighthouse Project (LHP) is a world-first initiative of the Federal Circuit and Family Court of Australia (FCFCOA) to screen family law litigants for major risk factors, including family violence, child abuse, alcohol or substance abuse and mental ill-health, at the point of filing in all applications that seek parenting orders. It commenced on 7 December 2020 on a pilot basis but has now been expanded to 15 registries across the country. A specialist risk screen, Family DOORS Triage, has been developed, focused on improving the safety and wellbeing of families involved in the family law system through safety planning and service referral, and on differentiated case management based on risk level, with the Evatt List specifically designed to manage the highest-risk cases. Early signs are that this approach is meeting the needs of families with multiple risks moving through the court system. Before the LHP, there was no data available about the prevalence of risk factors experienced by litigants entering the family courts, and it was often assumed that it was the litigation process itself that was fuelling family violence and other risks such as suicidality.
Data from the 2022 FCFCOA annual report indicated that in parenting proceedings, 70% of matters alleged a child had been abused or was at risk of abuse, 80% alleged a party had experienced family violence, 74% alleged children had been exposed to family violence, 53% alleged that substance misuse by a party had caused or risked causing harm to children, and 58% alleged that mental health issues of a party had caused harm or placed a child at risk of harm. Those figures reveal the significant overlap between child protection and family violence, both of which are the responsibility of state and territory governments. Since 2020, a further key initiative has been the co-location of child protection and police officials in a number of registries of the FCFCOA. The ability to access, in a time-effective way, details of family violence or child protection orders, weapons licences, and criminal convictions or proceedings is key to managing issues across the state and federal divide. It ensures a more cohesive and effective response across the family law, family violence and child protection systems.
Keywords: child protection, family violence, parenting, risk screening, triage
Procedia PDF Downloads 77
881 A Comparative Study on South-East Asian Leading Container Ports: Jawaharlal Nehru Port Trust, Chennai, Singapore, Dubai, and Colombo Ports
Authors: Jonardan Koner, Avinash Purandare
Abstract:
In today's globalized world, international business is a key area for a country's growth. Among the strategic enablers of a country's international business are its ports, road network, and rail network. India's international business is booming in both exports and imports. Ports play a central part in the growth of international trade, and ensuring competitive ports is of critical importance. India has a long coastline, a big asset that has given it the opportunity to develop a large number of major and minor ports which contribute to the development of maritime trade. The national economic development of India requires a well-functioning seaport system. To gauge the comparative strength of Indian ports against similar South-East Asian ports, the study considers the objectives of (i) identifying the key parameters of an international mega container port, (ii) comparing the five selected container ports (JNPT, Chennai, Singapore, Dubai, and Colombo) according to users of the ports, and (iii) measuring the growth of the five selected container ports' throughput over time and comparing them. The study is based on both primary and secondary databases. Linear time trend analysis is done to show the trend in the quantum of exports, imports and total goods/services handled by individual ports over the years. Comparative trend analysis is done for the five selected ports of cargo traffic handled in terms of tonnage (weight) and number of containers (TEUs), and between containerized and non-containerized cargo traffic in the five selected ports.
The primary data analysis comprises a comparative analysis of factor ratings through bar diagrams, statistical inference of factor ratings for the five selected ports, consolidated comparative line and bar charts of factor ratings for the five selected ports, and the distribution of ratings (in frequency terms). A linear regression model is used to forecast the container capacities required for JNPT Port and Chennai Port by the year 2030. Multiple regression analysis is carried out to measure the impact of 34 selected explanatory variables on the ‘Overall Performance of the Port’ for each of the five selected ports. The research outcome is of high significance to the stakeholders of Indian container handling ports. The Indian container ports of JNPT and Chennai are benchmarked against international ports such as Singapore, Dubai, and Colombo, which are the competing ports in the neighbouring region. The study has analysed the feedback ratings for the 35 selected factors regarding physical infrastructure and services rendered to port users. This feedback provides valuable data for improving the facilities offered to port users and would help them carry out their work more efficiently.
Keywords: throughput, twenty-foot equivalent units (TEUs), cargo traffic, shipping lines, freight forwarders
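The linear time-trend forecast to 2030 described above can be sketched as follows. The throughput figures here are illustrative placeholders, not the study's observed JNPT or Chennai data, which the abstract does not report.

```python
import numpy as np

# Hypothetical annual container throughput (million TEUs) for one port;
# the study fits the same kind of linear trend to observed port data.
years = np.array([2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021])
teus = np.array([4.45, 4.49, 4.50, 4.83, 5.05, 5.10, 4.47, 5.63])

# Fit the linear time trend: teus = a * year + b
a, b = np.polyfit(years, teus, deg=1)

# Extrapolate to 2030, as the study does for JNPT and Chennai
forecast_2030 = a * 2030 + b
print(f"trend slope: {a:.3f} MTEU/year, 2030 forecast: {forecast_2030:.2f} MTEU")
```

A multiple-regression step, as used for the 34 explanatory variables, would replace the single `years` regressor with a matrix of factor ratings.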
Procedia PDF Downloads 131
880 The Effects of Periostin in a Rat Model of Isoproterenol-Mediated Cardiotoxicity
Authors: Mahmut Sozmen, Alparslan Kadir Devrim, Yonca Betil Kabak, Tuba Devrim
Abstract:
Acute myocardial infarction is the leading cause of death worldwide. Mature cardiomyocytes do not have the ability to regenerate; instead, fibrous and granulation tissue proliferate to fill the lesion. Periostin is an extracellular matrix protein from the fasciclin family, and it plays an important role in cell adhesion, migration, and growth of the organism. Periostin prevents apoptosis while stimulating cardiomyocytes. The main objective of this project is to investigate the effects of recombinant murine periostin peptide administration on cardiomyocyte regeneration in a rat model of acute myocardial infarction. The experiment was performed on 84 male rats (6 months old) in 4 groups, each containing 21 rats. Saline was applied subcutaneously (1 ml/kg) twice, at 24-hour intervals, to the rats in the control group (group 1). Recombinant periostin peptide (1 μg/kg) dissolved in saline was applied intraperitoneally in group 2 on days 1, 3, 7, 14, and 21, and on the same dates in group 4. Isoproterenol dissolved in saline was applied intraperitoneally (85 mg/kg/day) twice, at 24-hour intervals, to groups 3 and 4. Rats in group 4 thus received recombinant periostin peptide (1 μg/kg) dissolved in saline intraperitoneally starting one day after the final isoproterenol administration, on days 1, 3, 7, 14, and 21. Following the final application of periostin, the rats continued to be fed routinely with pelleted chow and water ad libitum for a further seven days. At the end of the 7th day, the rats were sacrificed, and blood and heart tissue samples were collected for immunohistochemical and biochemical analyses. Angiogenesis in response to tissue damage is a highly dynamic process regulated by signals from the surrounding extracellular matrix and blood serum. In this project, VEGF, ANGPT, bFGF, and TGFβ, key factors that contribute to cardiomyocyte regeneration, were investigated.
Additionally, the relationship between mitosis and apoptosis (Bcl-2, Bax, PCNA, Ki-67, Phospho-Histone H3), cell cycle activators and inhibitors (Cyclin D1, D2, A2, Cdc2), and the origin of the regenerating cells (cKit and CD45) were examined. The present results revealed that periostin stimulated cardiomyocyte cell-cycle re-entry in both normal and damaged cardiomyocytes and increased angiogenesis. Thus, periostin contributes to cardiomyocyte regeneration during the healing period following myocardial infarction; this provides a better understanding of its role in this mechanism, may improve recovery rates, and is expected to address the lack of literature on this subject. Acknowledgement: This project was financially supported by the Turkish Scientific Research Council - Agriculture, Forestry and Veterinary Research Support Group (TUBİTAK-TOVAG; Project No: 114O734), Ankara, TURKEY.
Keywords: cardiotoxicity, immunohistochemistry, isoproterenol, periostin
Procedia PDF Downloads 234
879 Phenolic Acids of Plant Origin as Promising Compounds for Elaboration of Antiviral Drugs against Influenza
Authors: Vladimir Berezin, Aizhan Turmagambetova, Andrey Bogoyavlenskiy, Pavel Alexyuk, Madina Alexyuk, Irina Zaitceva, Nadezhda Sokolova
Abstract:
Introduction: Influenza viruses infect approximately 5% to 10% of the global human population annually, resulting in serious social and economic damage. Vaccination and etiotropic antiviral drugs are used for the prevention and treatment of influenza. Vaccination is important; however, antiviral drugs represent the second line of defense against new emerging influenza virus strains for which vaccines may be unsuccessful. A significant drawback of commercial synthetic anti-flu drugs is the appearance of drug-resistant influenza virus strains. Therefore, the search for and development of new anti-flu drugs efficient against drug-resistant strains is an important medical problem today. The aim of this work was to study four phenolic acids of plant origin (gallic, syringic, vanillic, and protocatechuic acids) as possible tools for treatment against the influenza virus. Methods: The phenolic acids gallic, syringic, vanillic, and protocatechuic were prepared by extraction from plant tissues and purified using high-performance liquid chromatography fractionation. An avian influenza virus, strain A/Tern/South Africa/1/1961 (H5N3), and a human epidemic influenza virus, strain A/Almaty/8/98 (H3N2), resistant to the commercial anti-flu drugs rimantadine and oseltamivir, were used for testing antiviral activity. Viruses were grown in the allantoic cavity of 10-day-old chicken embryos. The chemotherapeutic index (CTI), determined as the ratio of the average toxic concentration of the tested compound (TC₅₀) to the average effective virus-inhibiting concentration (EC₅₀), was used as the criterion of specific antiviral action. Results: The results of the study showed that the structure of the phenolic acids significantly affected their ability to suppress the reproduction of the tested influenza virus strains. The highest antiviral activity among the tested phenolic acids was detected for gallic acid, which contains three hydroxyl groups in the molecule, at the C3, C4, and C5 positions.
The antiviral activity of gallic acid against the A/H5N3 and A/H3N2 influenza virus strains was higher than that of oseltamivir and rimantadine; gallic acid inhibited almost 100% of the infection activity of both tested viruses. Protocatechuic acid, which possesses two hydroxyl groups (C3 and C4), showed weaker antiviral activity in comparison with gallic acid and inhibited less than 10% of virus infection activity. Syringic acid, which contains two hydroxyl groups (C3 and C5), was able to suppress up to 12% of infection activity. Substitution of two hydroxyl groups by methoxy groups resulted in the complete loss of antiviral activity. Vanillic acid, which differs from protocatechuic acid by replacement of the C3 hydroxyl group with a methoxy group, was able to suppress about 30% of the infection activity of the tested influenza viruses. Conclusion: For pronounced antiviral activity, the phenolic acid molecule must have at least two hydroxyl groups. Replacement of hydroxyl groups by methoxy groups leads to a reduction of antiviral properties. Gallic acid demonstrated high antiviral activity against influenza viruses, including rimantadine- and oseltamivir-resistant strains, and could be used as a potential candidate for the development of an antiviral drug against the influenza virus.
Keywords: antiviral activity, influenza virus, drug resistance, phenolic acids
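The chemotherapeutic index defined in the Methods is a simple ratio, which can be sketched as follows. The TC₅₀/EC₅₀ values below are hypothetical, since the abstract does not report the measured concentrations.

```python
def chemotherapeutic_index(tc50: float, ec50: float) -> float:
    """CTI = TC50 / EC50: a larger value means a wider margin between
    the toxic concentration and the effective virus-inhibiting one."""
    return tc50 / ec50

# Hypothetical concentrations (same units for TC50 and EC50, e.g. ug/mL)
gallic = chemotherapeutic_index(tc50=500.0, ec50=5.0)
vanillic = chemotherapeutic_index(tc50=500.0, ec50=50.0)
print(f"CTI gallic = {gallic:.0f}, CTI vanillic = {vanillic:.0f}")
```

A compound with the higher CTI (here, the gallic placeholder) would be ranked as having stronger specific antiviral action.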
Procedia PDF Downloads 141
878 Reactivities of Turkish Lignites during Oxygen Enriched Combustion
Authors: Ozlem Uguz, Ali Demirci, Hanzade Haykiri-Acma, Serdar Yaman
Abstract:
Lignitic coal holds its position as Turkey’s most important indigenous energy source for generating energy in thermal power plants. Hence, efficient and environmentally friendly use of lignite in electricity generation is of great importance, and clean coal technologies have been planned to mitigate emissions and provide more efficient burning in power plants. In this context, oxygen enriched combustion (oxy-combustion), which is based on burning with oxygen concentrations higher than that in air, is regarded as one of the clean coal technologies. Since most Turkish coals are low rank with high mineral matter content, the unburnt carbon trapped in ash is unfortunately high, and it leads to significant losses in the overall efficiencies of the thermal plants. Besides, the necessity of burning huge amounts of these low calorific value lignites to obtain the desired amount of energy also results in the formation of large amounts of ash that is rich in unburnt carbon. Oxygen enriched combustion technology makes it possible to increase the burning efficiency through the complete burning of almost all of the carbon content of the fuel. This also contributes to the protection of air quality, and emission levels drop considerably. The aim of this study is to investigate the unburnt carbon content and the burning reactivities of several different lignite samples under oxygen enriched conditions. For this reason, the combined effects of temperature and the oxygen/nitrogen ratio in the burning atmosphere were investigated and interpreted. To do this, Turkish lignite samples from the Adıyaman-Gölbaşı and Kütahya-Tunçbilek regions were characterized first by proximate and ultimate analyses, and their burning profiles were derived using DTA (Differential Thermal Analysis) curves.
Then, these lignites were subjected to a slow burning process in a horizontal tube furnace at different temperatures (200ºC, 400ºC, and 600ºC for the Adıyaman-Gölbaşı lignite and 200ºC, 450ºC, and 800ºC for the Kütahya-Tunçbilek lignite) under atmospheres having O₂+N₂ proportions of 21%O₂+79%N₂, 30%O₂+70%N₂, 40%O₂+60%N₂, and 50%O₂+50%N₂. These burning temperatures were specified based on the burning profiles derived from the DTA curves. The residues obtained from these burning tests were also analyzed by proximate and ultimate analyses to detect the unburnt carbon content along with the unused energy potential. The reactivity of these lignites was calculated using several methodologies, and the burning yield under the air condition (21%O₂+79%N₂) was used as a benchmark to compare the effectiveness of the oxygen enriched conditions. It was concluded that the oxygen enriched combustion method enhanced the combustion efficiency and lowered the unburnt carbon content of the ash. Combustion of low-rank coals under oxygen enriched conditions was found to be a promising way to improve the efficiency of lignite-firing energy systems. However, a cost-benefit analysis should be considered for a better justification of this method, since the use of more oxygen brings a non-negligible additional cost.
Keywords: coal, energy, oxygen enriched combustion, reactivity
Procedia PDF Downloads 274
877 Tuberculosis (TB) and Lung Cancer
Authors: Asghar Arif
Abstract:
Lung cancer has been recognized as one of the most common cancers, causing an annual mortality of about 1.2 million people in the world. Lung cancer is the most prevalent cancer in men and the third-most common cancer among women (after breast and digestive cancers). Recent evidence has pointed to the inflammatory process as one of the potential factors in cancer. Tuberculosis (TB), pneumonia, and chronic bronchitis are among the most important inflammation-inducing factors in the lungs, among which TB has the most profound role in the emergence of cancer. TB is one of the important mortality factors throughout the world, and 205,000 death cases are reported annually due to this disease. Chronic inflammation and fibrosis due to TB can induce genetic mutations and alterations. The lung parenchyma is involved in both TB and lung cancer, and continuous cough in lung cancer, morphological vascular variations, lymphocytosis processes, and the generation of immune system mediators such as interleukins are all among the factors leading to the hypothesis regarding the role of TB in lung cancer. Some reports have shown that the induction of necrosis and apoptosis or TB reactivation, especially in patients with immune deficiency, may increase IL-17 and TNF-α, which will either decrease P53 activity or increase the expression of Bcl-2, decrease Bax-T, and inhibit caspase-3 expression by decreasing the expression of mitochondrial cytochrome oxidase. It has also been indicated that following the injection of the BCG vaccine, the host immune system is reinforced, and in particular, the levels of gamma interferon, nitric oxide, and interleukin-2 are increased.
Therefore, CD4+ lymphocyte function will be improved, and the person will become immune against cancer. Numerous prospective studies have so far been conducted on the role of TB in lung cancer, and it seems that this disease contributes to that particular cancer. One of the main challenges of lung cancer is its correct and timely diagnosis. Unfortunately, clinical symptoms (such as continuous cough, hemoptysis, weight loss, fever, chest pain, dyspnea, and loss of appetite) and radiological images are similar in TB and lung cancer. Therefore, anti-TB drugs are routinely prescribed for patients in countries with a high prevalence of TB, like Pakistan. Given the similarity in clinical symptoms and radiological findings with lung cancer, proper diagnosis is necessary for TB and for respiratory infections due to nontuberculous mycobacteria (NTM). Some of the drug-resistant TB cases are, in fact, lung cancer or NTM lung infections. Acid-fast staining and histological study of sputum and bronchial washings, culturing, and polymerase chain reaction for TB are among the most important tools for the differential diagnosis of these diseases. Briefly, it is assumed that TB is one of the risk factors for cancer. Numerous studies have been conducted in this regard throughout the world, and a significant relationship has been observed between previous TB infection and lung cancer. However, to prove this hypothesis, further and more extensive studies are required. In addition, as the clinical symptoms and radiological findings of TB, lung cancer, and non-TB mycobacterial lung infections are similar, the latter can be misdiagnosed as TB.
Keywords: TB and lung cancer, TB patients, TB survivors, TB and HIV/AIDS
Procedia PDF Downloads 73
876 The Digital Microscopy in Organ Transplantation: Ergonomics of the Tele-Pathological Evaluation of Renal, Liver, and Pancreatic Grafts
Authors: Constantinos S. Mammas, Andreas Lazaris, Adamantia S. Mamma-Graham, Georgia Kostopanagiotou, Chryssa Lemonidou, John Mantas, Eustratios Patsouris
Abstract:
The process of building a better safety culture, methods of error analysis, and preventive measures starts with an understanding of the effects of human factors engineering on remote microscopic diagnosis in surgery, and especially in organ transplantation for the evaluation of grafts. A high percentage of solid organs arrive at recipient hospitals in the UK and are considered injured or improper for transplantation. Digital microscopy adds information on a microscopic level about the grafts (G) in organ transplant (OT) and may lead to a change in their management. Such a method would reduce the possibility that a diseased graft arrives at the recipient hospital for implantation. Aim: The aim of this study is to analyze the ergonomics of digital microscopy (DM), based on virtual slides, on telemedicine systems (TS) for tele-pathological evaluation (TPE) of the grafts in organ transplantation. Material and Methods: By experimental simulation, the ergonomics of DM for microscopic TPE of renal graft (RG), liver graft (LG), and pancreatic graft (PG) tissues was analyzed. In fact, this corresponded to the ergonomics of digital microscopy for TPE in OT applying a virtual slide (VS) system for graft tissue image capture, for remote diagnosis of possible microscopic inflammatory and/or neoplastic lesions. Experimentation included the development of an experimental telemedicine system (Exp.-TS), similar to an OTE-TS, for simulating the integrated VS-based microscopic TPE of RG, LG, and PG. Simulation of DM on TS-based TPE was performed by 2 specialists on a total of 238 human renal graft, 172 liver graft, and 108 pancreatic graft digital microscopic tissue images, for inflammatory and neoplastic lesions, on the electronic spaces of the four TS used.
Results: Statistical analysis of the specialists' answers about the ability to accurately diagnose the diseased RG, LG, and PG tissues on the electronic space (ES) of the four TS (A, B, C, D) showed that DM on TS for TPE in OT performs best on the ES of a desktop, followed by the ES of the applied Exp.-TS. Tablet and mobile-phone ES seem significantly risky for the application of DM in OT (p<.001). Conclusion: To achieve the largest reduction in errors and adverse events affecting the quality of the grafts, it will take the application of human factors engineering to procurement, design, audit, and awareness-raising activities. Consequently, it will take an investment in new training, people, and other changes to management activities for DM in OT. The simulated VS-based TPE with DM of RG, LG, and PG tissues after retrieval seems feasible and reliable, and depends on the size of the electronic space of the applied TS, for remote prevention of diseased grafts from being retrieved and/or sent to the recipient hospital, and for post-grafting and pre-transplant planning.
Keywords: digital microscopy, organ transplantation, tele-pathology, virtual slides
Procedia PDF Downloads 280
875 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology
Authors: Sanjeev Kumar Appicharla
Abstract:
This paper presents the results of the modelling and analysis of a European Railway Traffic Management System (ERTMS) safety-critical incident on the Cambrian Railway in the UK, using Report RAIB 17/2019 as a primary input, to raise awareness of biases in the systems engineering process. The RAIB, the UK's independent accident investigator, published Report RAIB 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors, underlying factors, and recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. Systems for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyze the safety-critical incident. The SIRI methodology uses the Swiss cheese model to model the incident and identifies latent failure conditions (potentially less-than-adequate conditions) by means of the management oversight and risk tree technique. The benefits of the SIRI methodology are threefold. First, it incorporates the “heuristics and biases” approach, advanced by the 2002 Nobel laureate in Economic Sciences, Prof Daniel Kahneman, into the management oversight and risk tree technique to identify systematic errors. Civil engineering and programme management railway professionals are aware of the role “optimism bias” plays in programme cost overruns and of bow-tie (fault and event tree) model-based safety risk modelling techniques; however, the role of systematic errors due to heuristics and biases is not yet appreciated. This overcomes the problem of the omission of human and organizational factors from accident analysis.
Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulators, railway safety bodies, duty holders, signalling firms, transport planners, and front-line staff, such that lessons are learned at the decision-making and implementation levels as well. Third, the author's past accident case studies are supplemented with research evidence drawn from practitioners' and academic researchers' publications. This serves to discuss the role of systems thinking in improving the decision-making and risk management processes and practices in the IEC 15288 systems engineering standard and in industrial contexts such as GB railways and artificial intelligence (AI).
Keywords: accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach
Procedia PDF Downloads 188
874 Role of Lipid-Lowering Treatment in the Monocyte Phenotype and Chemokine Receptor Levels after Acute Myocardial Infarction
Authors: Carolina N. França, Jônatas B. do Amaral, Maria C.O. Izar, Ighor L. Teixeira, Francisco A. Fonseca
Abstract:
Introduction: Atherosclerosis is a progressive disease characterized by lipid and fibrotic element deposition in large-caliber arteries. Conditions related to the development of atherosclerosis, such as dyslipidemia, hypertension, diabetes, and smoking, are associated with endothelial dysfunction. There is frequent recurrence of cardiovascular outcomes after acute myocardial infarction and, in this sense, cycles of mobilization of monocyte subtypes (classical, intermediate, and nonclassical) secondary to myocardial infarction may determine the colonization of atherosclerotic plaques at different stages of development, contributing to early recurrence of ischemic events. The recruitment of different monocyte subsets during the inflammatory process requires the expression of the chemokine receptors CCR2, CCR5, and CX3CR1 to promote the migration of monocytes to the inflammatory site. The aim of this study was to evaluate the effect of six months of lipid-lowering treatment on the monocyte phenotype and chemokine receptor levels of patients after acute myocardial infarction (AMI). Methods: This is a PROBE (prospective, randomized, open-label trial with blinded endpoints) study (ClinicalTrials.gov Identifier: NCT02428374). Adult patients (n=147) of both genders, aged 18-75 years, were randomized in a 2x2 factorial design to treatment with rosuvastatin 20 mg/day or simvastatin 40 mg/day plus ezetimibe 10 mg/day, as well as ticagrelor 90 mg twice daily or clopidogrel 75 mg/day, in addition to conventional AMI therapy. Blood samples were collected at baseline and after one month and six months of treatment. Monocyte subtypes (classical - inflammatory, intermediate - phagocytic, and nonclassical - anti-inflammatory) were identified, quantified, and characterized by flow cytometry, and the expression of the chemokine receptors (CCR2, CCR5, and CX3CR1) was also evaluated in the mononuclear cells.
Results: After six months of treatment, there was an increase in the percentage of classical monocytes and a reduction in nonclassical monocytes (p=0.038 and p<0.0001, Friedman test), without differences for intermediate monocytes. In addition, classical monocytes had higher expression of CCR5 and CX3CR1 after treatment, without differences in CCR2 (p<0.0001 for CCR5 and CX3CR1; p=0.175 for CCR2). Intermediate monocytes had higher expression of CCR5 and CX3CR1 and lower expression of CCR2 (p=0.003, p<0.0001, and p=0.011, respectively). Nonclassical monocytes had lower expression of CCR2 and CCR5, without differences for CX3CR1 (p<0.0001, p=0.009, and p=0.138, respectively). There were no differences in the comparison between the four treatment arms. Conclusion: The data suggest a time-dependent modulation of classical and nonclassical monocytes and chemokine receptor levels. The higher percentage of classical monocytes (inflammatory cells) suggests a residual inflammatory risk, even under the treatments recommended for AMI. Indeed, these changes do not seem to be affected by the choice of lipid-lowering strategy.
Keywords: acute myocardial infarction, chemokine receptors, lipid-lowering treatment, monocyte subtypes
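The repeated-measures comparison across the three time points (baseline, one month, six months) uses the Friedman test. A minimal sketch with simulated monocyte percentages (the real cohort data are not given in the abstract, and the test statistic is implemented directly from its rank-sum formula) is:

```python
import numpy as np

def friedman_chi2(data: np.ndarray) -> float:
    """Friedman test statistic for an (n subjects x k conditions) array.
    Assumes no tied values within a subject (continuous measurements)."""
    n, k = data.shape
    # Rank each subject's k measurements (1 = smallest); double argsort
    # yields within-row ranks when there are no ties
    ranks = data.argsort(axis=1).argsort(axis=1) + 1
    col_rank_sums = ranks.sum(axis=0)
    return 12.0 / (n * k * (k + 1)) * np.sum(col_rank_sums ** 2) - 3.0 * n * (k + 1)

rng = np.random.default_rng(1)
# Hypothetical % of classical monocytes for 30 patients at three time points
baseline = rng.normal(78.0, 5.0, size=30)
month1 = baseline + rng.normal(1.0, 1.5, size=30)   # 1 month after AMI
month6 = baseline + rng.normal(5.0, 1.5, size=30)   # 6 months: simulated increase

chi2 = friedman_chi2(np.column_stack([baseline, month1, month6]))
print(f"Friedman chi-square = {chi2:.2f}")  # compare to chi2(df=2) critical value 5.99
```

A statistic exceeding the chi-square critical value for k-1 degrees of freedom rejects the hypothesis that the three time points share the same distribution, which is the pattern the study reports for classical and nonclassical monocytes.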
Procedia PDF Downloads 119
873 Sculpted Forms and Sensitive Spaces: Walking through the Underground in Naples
Authors: Chiara Barone
Abstract:
In Naples, the visible architecture is only what emerges from the underground. Caves and tunnels cross it in every direction, intertwining with each other. They are not natural caves but spaces built by removing what is superfluous in order to dig a form out of the material. Architects, as sculptors of space, do not shape the exterior, what surrounds the volume and in which forms live, but an interior underground space, perceptive and sensitive, able to generate new emotions each time. It is an intracorporeal architecture linked to the body, concerned not with its external relationships but with what happens inside. The proposed research aims to reflect on the design of underground spaces in the Neapolitan city. The idea is to treat the underground as a spectacular museum of the city, an opportunity to learn in situ the history of the place along an unpredictable itinerary that crosses the caves and, at certain points, emerges, escaping from the world of shadows. Starting from the analysis and study of the many overlapping elements, the archaeological one, the geological layer, and the contemporary city above, it is possible to develop realistic alternatives for underground itineraries. The objective is to define minor paths to ensure continuity between the touristic flows and entire underground segments already investigated but now disconnected: open-air paths, which plunge into the earth, retracing historical and preserved fragments. The visitor, in this way, passes from real spaces to sensitive spaces, in which the imaginary replaces real experience, moving towards exciting and secret knowledge. To safeguard the complex framework of historical-artistic values, it is essential to use a multidisciplinary methodology based on a global approach.
Moreover, it is essential to refer to similar design projects for the archaeological underground, capable of guiding action strategies, looking at similar conditions in other cities where the project has led to an enhancement of heritage in the city. The research limits the field of investigation by choosing the historic center of Naples, applying bibliographic and theoretical research to a real place. First of all, it is necessary to deepen knowledge of the places, understanding the potential of the project as a link between what is below and what is above. Starting from a scientific approach in which theory and practice are constantly intertwined through the architectural project, the major contribution is to provide possible alternative configurations for the underground space and its relationship with the city above, understanding how the condition of transition, as a passage between the below and the above, becomes structuring in the design process. Starting from the consideration of the underground as both a real physical place and a sensitive place, which engages the memory, imagination, and sensitivity of man, the research aims at identifying possible configurations and actions useful for future urban programs, to make the underground once again a central part of the lived city.
Keywords: underground paths, invisible ruins, imaginary, sculpted forms, sensitive spaces, Naples
Procedia PDF Downloads 103
872 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms
Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee
Abstract:
Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate with time because of outdated materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess the condition of the composites and prevent continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by detecting damage or defects from static or dynamic responses induced by external loading. A variety of techniques based on detecting changes in the static or dynamic behavior of isotropic structures has been developed over the last two decades. These methods, based on analytical approaches, are limited in dealing with complex systems, primarily because of difficulties in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristic techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA) methods, and neural networks (NN), and have promisingly applied these methods to the field of structural identification.
Among them, GAs attract attention because they do not require a considerable amount of data in advance when dealing with complex problems and make a global solution search possible, as opposed to classical gradient-based optimization techniques. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of glass fiber-reinforced polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to detect degraded stiffness characteristics. In addition, this study presents a method to detect fiber property variation of laminated composite plates from the micromechanical point of view. A finite element model is used to study the free vibrations of laminated composite plates with fiber stiffness degradation. In order to solve the inverse problem using the combined method, this study uses only the first mode shapes of a structure for the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences
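The combined finite-element/GA inverse approach can be illustrated on a toy problem: a genetic algorithm searches for the spring stiffnesses of a 2-DOF spring-mass chain (standing in for the ABAQUS plate model) that reproduce "measured" natural frequencies generated from a hypothetically degraded state. All values and GA settings below are illustrative, not the study's; note that such inverse problems can admit more than one stiffness set matching the same frequencies, so the check is on the frequency misfit.

```python
import numpy as np

rng = np.random.default_rng(42)

def natural_freqs(k):
    """Natural frequencies (rad/s) of a 2-DOF spring-mass chain, unit masses."""
    K = np.array([[k[0] + k[1], -k[1]],
                  [-k[1],        k[1]]])
    return np.sqrt(np.sort(np.linalg.eigvalsh(K)))

# "Measured" data from a hypothetical damaged state: second stiffness
# degraded to 60% of the nominal value of 100
true_k = np.array([100.0, 60.0])
f_measured = natural_freqs(true_k)

def fitness(k):
    # Negative misfit between model and measured frequencies
    return -np.linalg.norm(natural_freqs(k) - f_measured)

# Minimal GA: tournament selection, blend crossover, Gaussian mutation, elitism
pop = rng.uniform(10.0, 150.0, size=(40, 2))
best, best_fit = pop[0].copy(), fitness(pop[0])
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    gen_best = int(np.argmax(scores))
    if scores[gen_best] > best_fit:            # keep the best-ever individual
        best, best_fit = pop[gen_best].copy(), scores[gen_best]
    # tournament selection: each slot filled by the fitter of two random picks
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # blend crossover against a reversed copy of the parent pool, then mutate
    alpha = rng.uniform(size=(len(pop), 1))
    pop = alpha * parents + (1.0 - alpha) * parents[::-1]
    pop += rng.normal(0.0, 1.0, size=pop.shape)
    pop = np.clip(pop, 1.0, 200.0)

print("recovered stiffnesses:", best.round(1))
```

In the study's setting, the forward model would be the ABAQUS free-vibration analysis of the laminated plate, the design variables the degraded fiber-stiffness distribution, and the misfit built from the first mode shapes and frequencies.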
Procedia PDF Downloads 272
871 Dynamic EEG Desynchronization in Response to Vicarious Pain
Authors: Justin Durham, Chanda Rooney, Robert Mather, Mickie Vanhoy
Abstract:
The psychological construct of empathy is to understand a person’s cognitive perspective and to experience that person’s emotional state. Deciphering emotional states is conducive to interpreting vicarious pain. Observing others' physical pain activates neural networks related to the actual experience of pain itself. The study addresses empathy as a nonlinear dynamic process of simulation by which individuals understand the mental states of others and experience vicarious pain, exhibiting self-organized criticality. Such criticality follows from a combination of neural networks with an excitatory feedback loop generating the bistability needed to resonate permutated empathy. Cortical networks exhibit diverse patterns of activity, including oscillations, synchrony, and waves; however, the temporal dynamics of the neurophysiological activities underlying empathic processes remain poorly understood. Mu rhythms are EEG oscillations with dominant frequencies of 8-13 Hz that become synchronized when the body is relaxed with eyes open and the sensorimotor system is idle; thus, mu rhythm synchrony is expected to be highest in baseline conditions. When the sensorimotor system is activated, either by performing or simulating action, mu rhythms become suppressed or desynchronized; thus, they should be suppressed while observing video clips of painful injuries if previous research on mirror system activation holds. Twelve undergraduates contributed EEG data and survey responses to empathy and psychopathy scales, in addition to watching consecutive video clips of sports injuries. Participants watched a blank, black image on a computer monitor before and after observing a video of consecutive sports injury incidents. Each video condition lasted five minutes. A BIOPAC MP150 recorded EEG signals from sensorimotor and thalamocortical regions related to a complex neural network called the ‘pain matrix’.
Physical and social pain are activated in this network to resonate vicarious pain responses for processing empathy. Five single EEG electrode locations were applied over regions measuring sensorimotor electrical activity in microvolts (μV) to monitor mu rhythms. EEG signals were sampled at a rate of 200 Hz. Mu rhythm desynchronization was measured in the 8-13 Hz band at electrode sites F3 and F4. Data for each participant’s mu rhythms were analyzed via fast Fourier transformation (FFT) and multifractal time series analysis.
Keywords: desynchronization, dynamical systems theory, electroencephalography (EEG), empathy, multifractal time series analysis, mu waveform, neurophysiology, pain simulation, social cognition
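The FFT step for quantifying mu-band (8-13 Hz) power at the study's 200 Hz sampling rate can be sketched as follows. The signal is synthetic (a 10 Hz oscillation plus noise standing in for a recorded channel); desynchronization would appear as a drop in the mu-band power relative to baseline.

```python
import numpy as np

fs = 200.0                       # sampling rate used in the study (Hz)
t = np.arange(0, 5.0, 1.0 / fs)  # a 5-second epoch

rng = np.random.default_rng(7)
# Hypothetical single-channel signal: a 10 Hz mu rhythm plus broadband noise
signal = 4.0 * np.sin(2 * np.pi * 10.0 * t) + rng.normal(0.0, 1.0, t.size)

def band_power(x, fs, lo, hi):
    """Mean FFT power of x within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (freqs >= lo) & (freqs <= hi)
    return power[band].mean()

mu = band_power(signal, fs, 8.0, 13.0)    # mu band (8-13 Hz)
ref = band_power(signal, fs, 30.0, 45.0)  # a control band outside mu
print(f"mu/control power ratio = {mu / ref:.1f}")
```

Comparing this mu-band power between the baseline (blank screen) and injury-video epochs at F3 and F4 would quantify the desynchronization the study looks for; the multifractal analysis operates on the same time series.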
Procedia PDF Downloads 283