Search results for: genetic testing
291 Robotics Education Continuity from Diaper Age to Doctorate
Authors: Vesa Salminen, Esa Santakallio, Heikki Ruohomaa
Abstract:
Introduction: The city of Riihimäki has chosen robotics in well-being, services and industry as the main focus area of its ecosystem strategy. Robotics will be an important part of the everyday life of citizens and present in the working day of the average citizen and employee in the future. For that reason, the education system and education programs at all levels, from diaper age to doctorate, have been directed to support this ecosystem strategy. Goal: The objective of this activity has been to develop education continuity from diaper age to doctorate. The main target of the development activity is to create a unique robotics study entity that enables ongoing robotics studies from preprimary education to university. The aim is also to attract students internationally and to supply the private sector with a skilled workforce capable of meeting the challenges of the future. Methodology: Educational institutions (comprehensive schools, upper secondary level and universities at all levels) across Tavastia Province have gradually directed their education programs to support this goal. In parallel, applied research projects have been created to run proof-of-concept phases in real-environment field labs in the region, testing technology opportunities and the use of digitalization to change business processes by applying robotic solutions. Customer-oriented applied research projects offer students in robotics education learning environments in which to acquire new knowledge and content; they are also environments in which the education programs themselves can adapt and co-evolve. New content and problem-based learning are used in future education modules. Major findings: A joint robotics education entity is being developed in cooperation with the city of Riihimäki (primary education), Hyria Education (secondary education) and HAMK (bachelor and master education). The education modules have been developed to enable smooth transitioning from one institute to another.
This article introduces a case study of the change in well-being education driven by digitalization and robotics. Riihimäki's service house for elderly citizens, Riihikoti, has been working as a field lab for proof-of-concept phases in testing technology opportunities. Following successful case studies, education programs at various levels have been changing as well. Riihikoti has been developed as a physical learning environment for home care and robotics: investigating and developing a variety of digital devices and service opportunities and experimenting with and learning the use of the equipment. The environment enables the co-development of digital service capabilities in an authentic environment for all interested groups in transdisciplinary cooperation.
Keywords: ecosystem strategy, digitalization and robotics, education continuity, learning environment, transdisciplinary co-operation
Procedia PDF Downloads 176
290 Female Autism Spectrum Disorder and Understanding Rigid Repetitive Behaviors
Authors: Erin Micali, Katerina Tolstikova, Cheryl Maykel, Elizabeth Harwood
Abstract:
Female ASD is seldom studied separately from male ASD. Further, females with ASD are disproportionately underrepresented in the research, at a rate of 3:1 (male to female). As such, much of the current understanding of female rigid repetitive behaviors (RRBs) stems from research on male RRBs. This can be detrimental to understanding female ASD, because it largely discounts female camouflaging and the possibility that females present their autistic symptoms differently. The current literature suggests that females with ASD engage in fewer RRBs than males with ASD, and that when females do engage in RRBs, they are likely to show more subtle, less overt obsessions and repetitive behaviors than males. Method: The current study used a mixed-methods, cross-sectional design to identify the type and frequency of RRBs that females with ASD engaged in. The researcher recruited only females, with the criteria that they be at least age six and not have a co-occurring cognitive impairment. Results: The researcher collected previous testing data (Autism Diagnostic Interview-Revised (ADI-R), Child or Adolescent/Adult Sensory Profile-2, Autism/Empathy Quotient, Yale-Brown Obsessive Compulsive Checklist, Rigid Repetitive Behavior Checklist (evaluator-created list), and a demographic questionnaire) from 25 total participants. The participants' ages ranged from six to 52. The participants were 96% Caucasian and 4% Latin American. Qualitative analysis found that the current participant pool engaged in six RRB themes: repetitive behaviors, socially restrictive behaviors, repetitive speech, difficulty with transition, obsessive behaviors, and restricted interests. Participants engaged in socially restrictive behaviors and restricted interests most frequently. Within the main themes, 40 subthemes were isolated, defined, and analyzed.
Further, preliminary quantitative analysis was run to determine whether age impacted camouflaging behaviors and the overall presentation of RRBs; within this dataset, no such effect was found. Further qualitative analysis will be run to determine whether this dataset engaged in more overt or more subtle RRBs, to confirm or refute previous research. The researcher intends to run SPSS analyses to determine whether there are statistically significant differences between the RRB themes and overall presentation. Secondly, each participant will be analyzed for presentation of RRBs, age, and previous diagnoses. Conclusion: The present study aimed to assist in diagnostic clarity. This was achieved by collecting data from a female-only participant pool across the lifespan. The current data clarified the types of RRBs engaged in. A limited sample size was a barrier in this study.
Keywords: autism spectrum disorder, camouflaging, rigid repetitive behaviors, gender disparity
Procedia PDF Downloads 142
289 Development and Validation of a Turbidimetric Bioassay to Determine the Potency of Ertapenem Sodium
Authors: Tahisa M. Pedroso, Hérida R. N. Salgado
Abstract:
The microbiological turbidimetric assay allows the determination of the potency of a drug by measuring the turbidity (absorbance) caused by the inhibition of microorganisms by ertapenem sodium. Ertapenem sodium (ERTM), a synthetic antimicrobial agent of the carbapenem class, shows action against Gram-negative, Gram-positive, aerobic and anaerobic microorganisms. Turbidimetric assays are described in the literature for some antibiotics, but no such method is described for ertapenem. The objective of the present study was to develop and validate a simple, sensitive, precise and accurate microbiological assay by turbidimetry to quantify injectable ertapenem sodium as an alternative to the physicochemical methods described in the literature. Several preliminary tests were performed to choose the following parameters: Staphylococcus aureus ATCC 25923, IAL 1851, 8% inoculum, BHI culture medium, and an aqueous solution of ertapenem sodium. 10.0 mL of sterile BHI culture medium was distributed into 20 tubes. 0.2 mL of the standard and test solutions was added to tubes S1, S2 and S3, and T1, T2 and T3, respectively, and 0.8 mL of inoculated culture medium was transferred to each tube, according to the 3 x 3 parallel-lines assay design. The tubes were incubated in a Marconi MA 420 shaker at a temperature of 35.0 °C ± 2.0 °C for 4 hours. After this period, the growth of the microorganisms was inhibited by the addition of 0.5 mL of 12% formaldehyde solution to each tube. The absorbance was determined in a Quimis Q-798DRM spectrophotometer at a wavelength of 530 nm. An analytical curve was constructed to obtain the equation of the line by the least-squares method, and linearity and parallelism were verified by ANOVA. The specificity of the method was proven by comparing the responses obtained for the standard and the finished product. The precision was checked by repeating the determination of ertapenem sodium on three days. The accuracy was determined by a recovery test.
The robustness was determined by comparing the results obtained when varying the wavelength, the brand of culture medium and the volume of culture medium in the tubes. Statistical analysis showed no deviation from linearity in the analytical curves of the standard and test samples. The correlation coefficients were 0.9996 and 0.9998 for the standard and test samples, respectively. The specificity was confirmed by comparing the absorbance of the reference substance and the test samples. The values obtained for intraday, interday and between-analyst precision were 1.25%, 0.26% and 0.15%, respectively. The amount of ertapenem sodium found in the analyzed samples, 99.87%, is consistent. The accuracy was proven by the recovery test, with a value of 98.20%. The varied parameters did not affect the analysis of ertapenem sodium, confirming the robustness of the method. The turbidimetric assay is more versatile, faster and easier to apply than the agar diffusion assay. The method is simple, rapid and accurate and can be used in the routine quality control analysis of formulations containing ertapenem sodium.
Keywords: ertapenem sodium, turbidimetric assay, quality control, validation
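The least-squares analytical curve and the interpolation of a test sample, as described above, can be sketched as follows. The concentration and absorbance values below are purely illustrative, not the study's data:

```python
# Sketch of fitting a turbidimetric analytical curve by ordinary least squares
# and reading a test sample's potency off the curve. All numbers are invented
# for illustration; absorbance (turbidity) falls as antibiotic dose rises.

def fit_line(x, y):
    """Ordinary least-squares fit y = a*x + b; returns (a, b, r)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    a = sxy / sxx
    b = my - a * mx
    r = sxy / (sxx * syy) ** 0.5      # correlation coefficient
    return a, b, r

# Hypothetical standard curve: dose levels S1..S3 (ug/mL) vs. absorbance at 530 nm
conc = [2.0, 4.0, 8.0]
absb = [0.62, 0.48, 0.20]

a, b, r = fit_line(conc, absb)
test_absorbance = 0.41
potency_conc = (test_absorbance - b) / a   # interpolate the test sample
```

With these illustrative points, the fit is exactly linear (r = -1) and the test sample interpolates to 5.0 ug/mL; real assay data would of course scatter around the fitted line.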
Procedia PDF Downloads 393
288 Towards an Environmental Knowledge System in Water Management
Authors: Mareike Dornhoefer, Madjid Fathi
Abstract:
Water supply and water quality are key problems of mankind now and, due to the increasing population, in the future. Management disciplines like water, environment and quality management therefore need to interact closely to establish a high level of water quality and to guarantee water supply in all parts of the world. Groundwater remediation is one aspect of this process. From a knowledge management perspective, complex ecological or environmental problems can only be solved if the different factors, the expert knowledge of the various stakeholders and the formal regulations regarding water, waste or chemical management are interconnected in the form of a knowledge base. In general, knowledge management focuses on the processes of gathering and representing existing and new knowledge in a way that allows for the inference or deduction of knowledge, e.g. in situations where a problem solution or decision support is required. A knowledge base is not a mere data repository but a key element of a knowledge-based system, providing inference mechanisms to deduce further knowledge from existing facts. In consequence, this knowledge provides decision support. The given paper introduces an environmental knowledge system for water management. The proposed environmental knowledge system is part of a research concept called Green Knowledge Management. It applies semantic technologies and concepts such as ontologies and linked open data to interconnect different data and information sources about environmental aspects, in this case water quality, as well as background material enriching an established knowledge base. Examples of the aforementioned ecological or environmental factors threatening water quality are, among others, industrial pollution (e.g. leakage of chemicals), environmental changes (e.g. rise in temperature) or floods, where all kinds of waste are merged and transferred into natural water environments.
Water quality is usually determined by measuring different indicators (e.g. chemical or biological), which are gathered through laboratory testing, continuous monitoring equipment or other measuring processes. During all of these processes, data are gathered and stored in different databases. The knowledge base is then established by interconnecting the data of these different sources and enriching their semantics. Experts may add their knowledge or their experiences of previous incidents or influencing factors. In consequence, querying and inference mechanisms are applied to deduce coherence between indicators, predictive developments or environmental threats. Relevant processes or steps of action may be modeled in the form of a rule-based approach. Overall, the environmental knowledge system supports the interconnection of information and the addition of semantics to create environmental knowledge about the water environment, supply chain and quality. The proposed concept itself is a holistic approach that links to associated disciplines like environmental and quality management. Quality indicators and quality management steps need to be considered, e.g., in the process and inference layers of the environmental knowledge system, thus integrating the aforementioned management disciplines in one water management application.
Keywords: water quality, environmental knowledge system, green knowledge management, semantic technologies, quality management
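The rule-based layer described above can be sketched as facts (indicator readings) plus rules that deduce environmental threats from them. The indicator names, thresholds and threat labels below are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of a rule-based inference step over water-quality indicators.
# Facts are measured indicators; rules deduce threats. All names and
# thresholds are invented for illustration.

facts = {"nitrate_mg_l": 62.0, "temperature_c": 24.5, "ph": 7.1}

rules = [
    # (rule name, condition over facts, deduced knowledge)
    ("nitrate_limit", lambda f: f["nitrate_mg_l"] > 50.0, "agricultural_pollution_suspected"),
    ("thermal_change", lambda f: f["temperature_c"] > 28.0, "thermal_stress_risk"),
    ("acidification", lambda f: f["ph"] < 6.5, "acidification_risk"),
]

# Forward pass: every rule whose condition holds contributes deduced knowledge
deduced = [threat for name, cond, threat in rules if cond(facts)]
```

In a full system, the facts would come from the interconnected monitoring databases and the deduced statements would feed the decision-support layer; a production implementation would typically use an ontology and a semantic rule language rather than Python lambdas.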
Procedia PDF Downloads 220
287 Analysis of Shrinkage Effect during Mercerization on Himalayan Nettle, Cotton and Cotton/Nettle Yarn Blends
Authors: Reena Aggarwal, Neha Kestwal
Abstract:
The Himalayan nettle (Girardinia diversifolia) has been used for centuries as a fibre and food source by Himalayan communities. Himalayan nettle is a natural cellulosic fibre that can be handled in the same way as other cellulosic fibres. The Uttarakhand Bamboo and Fibre Development Board, based in Uttarakhand, India, is working extensively with nettle fibre to explore its potential for textile production in the region. The fibre is a potential resource for rural enterprise development in some high-altitude pockets of the state, and traditionally the plant fibre is used for making domestic products like ropes and sacks. Himalayan nettle is an unconventional natural fibre with functional characteristics of shrink resistance and a degree of pathogen and fire resistance, and it blends well with other fibres. Most importantly, it generates mainly organic waste and leaves residues that are 100% biodegradable. The fabrics may potentially be reused or re-manufactured and can also be used as a source of cellulose feedstock for regenerated cellulosic products. Being naturally biodegradable, the fibre can be composted if required. Though many research activities and trainings are directed towards fibre extraction and processing techniques, such as retting and degumming, in the craft clusters of the Uttarkashi, Chamoli and Bageshwar districts of Uttarakhand, very little has been done to analyse crucial properties of nettle fibre such as shrinkage and wash fastness. These properties are crucial to obtaining the desired fibre quality for the further processing of yarn making and weaving and for developing these fibres into fine saleable products. This research is therefore focused on field experiments on the shrinkage properties of cotton, nettle and cotton/nettle blended yarn samples. The objective of the study was to analyze the scope of the blended fibre for development into wearable fabrics.
For the study, after conducting the initial fibre length and fineness testing, cotton and nettle fibres were mixed in a 60:40 ratio and five varieties of yarn were spun in an open-end spinning mill, with yarn counts of 3s, 5s, 6s, 7s and 8s. Samples of 100% nettle and 100% cotton fibres in 8s count were also developed for the study. All six varieties of yarn were subjected to the shrinkage test, and the results were critically analyzed as per ASTM method D2259. It was observed that 100% nettle has the least shrinkage, 3.36%, while pure cotton shrinks approximately 13.6%; yarn made of 100% cotton thus exhibits four times more shrinkage than 100% nettle. The results also show that cotton/nettle blended yarns exhibit lower shrinkage than 100% cotton yarn. It was thus concluded that as the ratio of nettle in the samples increases, the shrinkage decreases. These results are very important for the people of Uttarakhand who want to commercially exploit the abundant nettle fibre to generate sustainable employment.
Keywords: Himalayan nettle, sustainable, shrinkage, blending
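The shrinkage figures quoted above are percentage losses in yarn length after treatment (cf. ASTM D2259). A minimal sketch of the calculation, with illustrative before/after lengths chosen to reproduce the abstract's 3.36% and 13.6% values:

```python
# Yarn shrinkage as percentage loss in length after treatment.
# The lengths below are illustrative, not the study's raw measurements.

def shrinkage_percent(length_before, length_after):
    """Shrinkage (%) from skein length before and after treatment."""
    return 100.0 * (length_before - length_after) / length_before

nettle = shrinkage_percent(500.0, 483.2)   # 100% nettle yarn
cotton = shrinkage_percent(500.0, 432.0)   # 100% cotton yarn
```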
Procedia PDF Downloads 240
286 Alternative Fuel Production from Sewage Sludge
Authors: Jaroslav Knapek, Kamila Vavrova, Tomas Kralik, Tereza Humesova
Abstract:
The treatment and disposal of sewage sludge is one of the most important and critical problems of wastewater treatment plants. Currently, 180 thousand tonnes of sludge dry matter are produced in the Czech Republic, which corresponds to approximately 17.8 kg of stabilized sludge dry matter per year per inhabitant. Because sewage sludge contains a large number of substances that are harmful to human health, the conditions for sludge management will be significantly tightened in the Czech Republic from 2023. One of the tested methods of sludge disposal is the production of alternative fuel from the sludge of sewage treatment plants and paper production. The paper presents an analysis of the economic efficiency of producing alternative fuel from sludge and of its use in a fluidized bed boiler with a nominal consumption of 5 t of fuel per hour. The evaluation methodology covers the entire logistics chain, from sludge extraction through mechanical moisture reduction to about 40%, transport to the pelletizing line, and drying for pelleting, to the pelleting itself. For the economic analysis of sludge pellet production, a time horizon of 10 years is chosen, corresponding to the expected lifetime of the critical components of the pelletizing line. The economic analysis of pelleting projects is based on a detailed analysis of reference pelleting technologies suitable for sludge. The analysis of the economic efficiency of the pellets is based on a simulation of the cash flows associated with the implementation of the project over its lifetime. For a given return on the invested capital, the price of the resulting product (in EUR/GJ or EUR/t) is sought that makes the net present value of the project zero over the project lifetime. The investor then realizes a return on the investment equal to the discount rate used to calculate the net present value.
The calculations take place in a real business environment (taxes, tax depreciation, inflation, etc.), and the inputs work with market prices. At the same time, the opportunity cost principle is respected; waste disposal in the form of alternative fuels includes the saved costs of waste disposal. The methodology also accounts for the emission allowances saved by displacing coal with the alternative (bio)fuel. Preliminary results of testing pellet production from sludge show that, after suitable modifications of the pelletizer, it is possible to produce pellets of sufficiently high quality from sludge. A mixture of sludge and paper waste has proved to be the more suitable material for pelleting. At the same time, preliminary results of the economic analysis show that, despite the relatively low calorific value of the fuel produced (about 10-11 MJ/kg), this sludge disposal method is economically competitive. This work has been supported by the Czech Technology Agency within the project TN01000048 Biorefining as circulation technology.
Keywords: alternative fuel, economic analysis, pelleting, sewage sludge
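The pricing logic described above, finding the product price at which the project NPV over a 10-year horizon is zero so that the investor earns exactly the discount rate, can be sketched as a root search. The capex, output and cost figures below are invented for illustration and are not the study's inputs:

```python
# Sketch: find the pellet price (EUR/GJ) that makes the 10-year project NPV
# zero. All cash-flow figures are illustrative assumptions, not study data.

def npv(price_eur_gj, rate, years=10, output_gj=40_000,
        annual_cost=1_200_000, capex=3_000_000):
    """Net present value: upfront capex plus discounted annual net cash flows."""
    flows = [(price_eur_gj * output_gj - annual_cost) / (1 + rate) ** t
             for t in range(1, years + 1)]
    return -capex + sum(flows)

def breakeven_price(rate, lo=0.0, hi=100.0, tol=1e-6):
    """Bisection on price; NPV is monotonically increasing in price."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, rate) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

price = breakeven_price(rate=0.07)   # required return of 7% assumed
```

At this price the NPV is zero by construction, so the realized return equals the chosen discount rate, which mirrors the methodology stated in the abstract.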
Procedia PDF Downloads 135
285 Applying Miniaturized near Infrared Technology for Commingled and Microplastic Waste Analysis
Authors: Monika Rani, Claudio Marchesi, Stefania Federici, Laura E. Depero
Abstract:
Degradation of the aquatic environment by plastic litter, especially microplastics (MPs), i.e., any water-insoluble solid plastic particle with its longest dimension in the range of 1 µm to 1000 µm (= 1 mm), is an unfortunate indication of the advancement of the Anthropocene age on Earth. Microplastics formed by natural weathering processes are termed secondary microplastics, while those synthesized in industry are called primary microplastics. Their presence from the highest peaks to the deepest explored points of the oceans and their resistance to biological and chemical decay have adversely affected the environment, especially marine life. Even though the presence of MPs in the marine environment is well reported, a robust and validated analytical technique to sample, analyze, and quantify MPs is still under development and testing. Among the characterization techniques, vibrational spectroscopic techniques are widely adopted in the field of polymers, and their ongoing miniaturization is on the way to revolutionizing the plastic recycling industry. In this scenario, the capability and feasibility of miniaturized near-infrared (MicroNIR) spectroscopy combined with chemometric tools were investigated for the qualitative and quantitative analysis of urban plastic waste collected from a recycling plant and of microplastic mixtures fragmented in the lab. Based on the Resin Identification Code, 250 plastic samples were used for the macroplastic analysis and to set up a library of polymers. Subsequently, the MicroNIR spectra were analysed through the application of multivariate modelling. Principal Component Analysis (PCA) was used as an unsupervised tool to find trends within the data. After the exploratory PCA analysis, a supervised classification tool was applied to distinguish the different plastic classes, and a database containing the NIR spectra of the polymers was compiled.
For the microplastic analysis, the three most abundant polymers in plastic litter, PE, PP and PS, were mechanically fragmented in the laboratory to micron size. Blends of these three microplastics were prepared in distinct proportions in line with a designed ternary composition plot. After the exploratory PCA analysis, a quantitative Partial Least Squares Regression (PLSR) model allowed prediction of the percentage of microplastics in the mixtures. From a complete dataset of 63 compositions, the PLS model was calibrated with 42 data points and then used to predict the composition of the 21 unknown mixtures of the test set. The advantage of the consolidated NIR chemometric approach lies in the quick evaluation of whether a sample is macro- or microplastic, contaminated or not, and coloured or not, with no sample pre-treatment. The technique can be used with larger sample volumes and even allows on-site evaluation, and in this manner it satisfies the need for a high-throughput strategy.
Keywords: chemometrics, microNIR, microplastics, urban plastic waste
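The idea of predicting mixture composition from spectra can be illustrated with a simplified stand-in for the PLSR model: if mixing is linear, a mixture spectrum can be unmixed into component fractions by least squares against the pure-component spectra. The "spectra" below are synthetic random vectors, not MicroNIR data, and real overlapping NIR bands would need the full PLS treatment:

```python
import numpy as np

# Simplified linear-unmixing stand-in for the PLSR composition model:
# synthetic "pure" spectra for PE, PP and PS, and one mixture point from the
# ternary composition plot. All spectra here are invented.

rng = np.random.default_rng(0)
pure = rng.random((3, 125))             # rows: PE, PP, PS; 125 spectral channels

true_frac = np.array([0.5, 0.3, 0.2])   # one point on the ternary plot
mixture = true_frac @ pure              # linear mixing assumption, no noise

# Least-squares unmixing: solve pure.T @ est ~= mixture for the fractions
est, *_ = np.linalg.lstsq(pure.T, mixture, rcond=None)
est_percent = 100 * est / est.sum()
```

With noiseless synthetic data the fractions are recovered exactly; the value of PLSR in the real study is precisely that it remains predictive when spectra are noisy, collinear and baseline-shifted.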
Procedia PDF Downloads 165
284 Psoriasis Diagnostic Test Development: Exploratory Study
Authors: Salam N. Abdo, Orien L. Tulp, George P. Einstein
Abstract:
The purpose of this exploratory study was to gather insights into psoriasis etiology, treatment, and patient experience for the development of a psoriasis and psoriatic arthritis diagnostic test. Data collection methods consisted of a comprehensive meta-analysis of relevant studies and a psoriasis patient survey. Established meta-analysis guidelines were used for the selection and qualitative comparative analysis of psoriasis and psoriatic arthritis research studies. Only studies that clearly discussed psoriasis etiology, treatment, and patient experience were reviewed and analyzed, to establish a qualitative database for the study. Using the insights gained from the meta-analysis, an existing psoriasis patient survey was modified and administered to collect additional data as well as to triangulate the results. The hypothesis is that specific types of psoriatic disease have specific etiologies and pathophysiologic patterns. The following etiology categories were identified: bacterial, environmental/microbial, genetic, immune, infectious, trauma/stress, and viral. Additional results, obtained from the meta-analysis and confirmed by the patient survey, were the common age of onset (early to mid-20s) and type of psoriasis (plaque; mild; symmetrical; scalp, chest, and extremities, specifically elbows and knees). Almost 70% of patients reported no prescription drug use due to severe side effects and prohibitive cost. These results will guide the development of a psoriasis and psoriatic arthritis diagnostic test. A significant number of medical publications classify psoriatic arthritis as an inflammatory disease of unknown etiology; thus, numerous meta-analyses struggle to report meaningful conclusions, since no definitive results have been reported to date. A return to the basics is therefore an essential step toward any future meaningful results.
To date, the medical literature supports the view that psoriatic disease in its current classification may contain misidentified subcategories, which in turn hinders the success of the studies conducted so far. Moreover, there has been enormous commercial support for pursuing various immune-modulation therapies, following a narrow hypothesis/mechanism of action that has yet to yield resolution of the disease state. Recurrence and complications may be considered unacceptable in a significant number of these studies. The aim of the ongoing study is to focus on a narrow subgroup of the patient population, as identified by this exploratory study via meta-analysis and patient survey, and to conduct an exhaustive workup aimed at mechanism of action and causality before proposing a cure or therapeutic modality. Remission in psoriasis has been achieved and documented in the medical literature through, for example, immune modulation, phototherapy, and various over-the-counter agents, including salts and tar. However, there is no psoriasis and psoriatic arthritis diagnostic test to date to guide the diagnosis and treatment of this debilitating and, thus far, incurable disease. Because psoriasis affects approximately 2% of the population, the results of this study may affect the treatment and improve the quality of life of a significant number of psoriasis patients, potentially millions in the United States alone and many more millions worldwide.
Keywords: biologics, early diagnosis, etiology, immune disease, immune modulation therapy, inflammation skin disorder, phototherapy, plaque psoriasis, psoriasis, psoriasis classification, psoriasis disease marker, psoriasis diagnostic test, psoriasis marker, psoriasis mechanism of action, psoriasis treatment, psoriatic arthritis, psoriatic disease, psoriatic disease marker, psoriatic patient experience, psoriatic patient quality of life, remission, salt therapy, targeted immune therapy
Procedia PDF Downloads 118
283 Use of Progressive Feedback for Improving Team Skills and Fair Marking of Group Tasks
Authors: Shaleeza Sohail
Abstract:
Self- and peer evaluations are among the main components of almost all group assignments and projects in higher education institutes. These evaluations give students an opportunity to better understand the learning outcomes of the assignment and/or project. A number of online systems have been developed for this purpose that provide automated assessment and feedback on students' contributions in a group environment based on self- and peer evaluations. All these systems lack a progressive dimension to assessment and feedback, which is the most crucial factor for ongoing improvement and lifelong learning. In addition, many assignments and projects are designed so that smaller or initial assessment components lead to a final assignment or project. In such cases, the evaluation and feedback may give students insight into their performance as a group member for a particular component after submission; ideally, it should also create an opportunity to improve in the next assessment component. The Self and Peer Progressive Assessment and Feedback System encourages students to perform better in the next assessment by providing a comparative analysis of the individual's contribution score on an ongoing basis. Hence, students see the change in their own contribution scores over the complete project, based on the smaller assessment components. A Self-Assessment Factor is calculated as an indicator of how close a student's perception of their own contribution is to that student's contribution as perceived by the other members of the group. A Peer-Assessment Factor is calculated to compare the perceived contribution of one student with the average value of the group. The system also provides a Group Coherence Factor, which shows collectively how the group members contribute to the final submission. This feedback lets students and teachers visualize the consistency of members' contributions as perceived within the group.
Teachers can use these factors to judge the individual contributions of the group members to the combined tasks and to allocate marks/grades accordingly. This factor is shown to students in all groups undertaking the same assessment, so group members can comparatively analyze the efficiency of their group against other groups. The system gives instructors the flexibility to generate their own customized criteria for self- and peer evaluations based on the requirements of the assignment. Students evaluate their own and other group members' contributions on a scale from significantly higher to significantly lower. Preliminary testing of the prototype system was done with a set of predefined cases to explicitly show the relation of the system's feedback factors to the case studies. The results show that such progressive feedback can be used to motivate self-improvement and enhanced team skills. The comparative group coherence can promote a better understanding of group dynamics, improving team unity and the fair division of team tasks.
Keywords: effective group work, improvement of team skills, progressive feedback, self and peer assessment system
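The abstract does not give the exact formulas behind the feedback factors, but a plausible sketch is to compare a student's self-score with the mean score their peers gave them (Self-Assessment Factor) and their peer-received mean with the group average (Peer-Assessment Factor). Both formulas and all scores below are assumptions for illustration:

```python
# Hypothetical sketch of the two feedback factors; the real system's formulas
# are not given in the abstract. Scores are on an arbitrary contribution scale.

def self_assessment_factor(self_score, peer_scores):
    """Ratio of self-perceived contribution to peers' perception.
    ~1.0 means the self-view matches the peers' view."""
    peer_mean = sum(peer_scores) / len(peer_scores)
    return self_score / peer_mean

def peer_assessment_factor(peer_scores, group_mean):
    """Ratio of this student's peer-received mean to the group average.
    >1.0 indicates an above-average contributor."""
    peer_mean = sum(peer_scores) / len(peer_scores)
    return peer_mean / group_mean

# Example: a student who rates themselves higher than their peers rate them
saf = self_assessment_factor(4.5, [3.0, 3.5, 3.1])
paf = peer_assessment_factor([3.0, 3.5, 3.1], group_mean=3.4)
```

Tracking these ratios per assessment component, rather than once at the end, is what gives the system its progressive character.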
Procedia PDF Downloads 187
282 Model Tests on Geogrid-Reinforced Sand-Filled Embankments with a Cover Layer under Cyclic Loading
Authors: Ma Yuan, Zhang Mengxi, Akbar Javadi, Chen Longqing
Abstract:
In a sand-filled embankment with a cover layer, the outside of the fill is covered with tipping clay modified with lime, and a geotextile is placed between the fill and the clay. The fill is usually river sand, and the improved clay protects the sand core against rainwater erosion. The sand-filled embankment with a cover layer poses practical problems such as high fill heights, construction restrictions, and steep slopes. Reinforcement can be applied to the sand-filled embankment with a cover layer to address complications such as the irregular settlement caused by the poor stability of the embankment. At present, research on sand-filled embankments with cover layers focuses mainly on sand properties, construction technology, and slope stability; experimental studies are few, and the deformation characteristics and stability of reinforced sand-filled embankments need further study. In addition, experimental research that considers cyclic loading is relatively rare. A subgrade structure of a geogrid-reinforced sand-filled embankment with a cover layer was therefore proposed, and the mechanical characteristics, deformation properties, reinforcement behavior, and ultimate bearing capacity of the embankment structure under cyclic loading were studied. In this structure, the geogrids in the sand and in the tipping soil pass through the geotextile, which is arranged continuously in sections, so that the geogrids can cross horizontally. The Unsaturated/Saturated Soil Triaxial Test System of Geotechnical Consulting and Testing Systems (GCTS), USA was modified to form the loading device for this test, and a strain collector was used to measure the deformation and earth pressure of the embankment.
A series of cyclic loading model tests were conducted on the geogrid-reinforced sand-filled embankment with a cover layer under different numbers of reinforcement layers, lengths of reinforcement, and thicknesses of the cover layer. The settlement of the embankment, the normal cumulative deformation of the slope, and the earth pressure were studied under these different conditions. Besides the cyclic loading model tests, model experiments on embankments subjected to cyclic-static loading were carried out to analyze the ultimate bearing capacity under different loadings. The experimental results showed that the vertical cumulative settlement under long-term cyclic loading increases with a decrease in the number of reinforcement layers, the length of the reinforcement arrangement, and the thickness of the tipping soil. These three factors also influence the decrease in the normal deformation of the embankment slope. The earth pressure around the loading point is significantly affected by placing geogrid in the model embankment. After cyclic loading, the decline in the ultimate bearing capacity of the reinforced embankment is effectively reduced, in contrast to the unreinforced embankment. Keywords: cyclic load, geogrid, reinforcement behavior, cumulative deformation, earth pressure
Procedia PDF Downloads 122
281 Mechanical Testing of Composite Materials for Monocoque Design in Formula Student Car
Authors: Erik Vassøy Olsen, Hirpa G. Lemu
Abstract:
Inspired by the Formula-1 competition, IMechE (Institution of Mechanical Engineers) and Formula SAE (Society of Automotive Engineers) organize annual competitions for university and college students worldwide to compete with a single-seat race car they have designed and built. The design of the chassis or frame is a key component of the competition because the weight and stiffness properties are directly related to the performance of the car and the safety of the driver. In addition, a reduced chassis weight has a direct influence on the design of other components in the car; among others, it improves the power-to-weight ratio and the aerodynamic performance. As the power output of the engine or battery installed in the car is limited to 80 kW, increasing the power-to-weight ratio demands reducing the weight of the chassis, which represents the major part of the weight of the car. In order to reduce the weight of the car, the ION Racing team from the University of Stavanger, Norway, opted for a monocoque design. To fulfil the above-mentioned requirements of the chassis, the monocoque design should provide sufficient torsional stiffness and absorb the impact energy in case of a possible collision. The study reported in this article is based on the requirements of the Formula Student competition. As part of this study, diverse mechanical tests were conducted to determine the mechanical properties and performance of the monocoque design. Following a comprehensive theoretical study of the mechanical properties of sandwich composite materials and the requirements of monocoque design in the competition rules, diverse tests were conducted, including a 3-point bending test, a perimeter shear test, and a test for absorbed energy. The test panels were made in-house with a size equivalent to the side impact zone of the monocoque, i.e., 275 mm x 500 mm, so that the results obtained from the tests would be representative.
Different layups of the test panels with identical core material and the same number of layers of carbon fibre were tested and compared. The influence of the core material thickness was also studied. Furthermore, analytical calculations and numerical analysis were conducted to check compliance with the stated rules for Structural Equivalency with steel grade SAE/AISI 1010. The test results were also compared with calculated results with respect to bending and torsional stiffness, energy absorption, buckling, etc. The obtained results demonstrate that the material composition and strength of the composite material selected for the monocoque design have structural properties equivalent to a welded frame and thus comply with the competition requirements. The developed analytical calculation algorithms and relations will be useful for future monocoque designs with different layups and compositions. Keywords: composite material, Formula Student, ION Racing, monocoque design, structural equivalence
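As a hedged illustration of the kind of analytical check the abstract describes, a panel's effective bending stiffness can be back-calculated from 3-point bending data. This minimal sketch assumes a simply supported span with a central point load and ignores core shear deformation, which matters in real sandwich panels; the load, span, and deflection values below are hypothetical, not measured values from the study.

```python
# Illustrative back-calculation of bending stiffness EI from 3-point
# bending data, using the simply-supported central-load relation
# delta = F * L^3 / (48 * EI). Input values are hypothetical.

def bending_stiffness(load_n: float, span_m: float, deflection_m: float) -> float:
    """Return EI (N*m^2) implied by a measured midspan deflection."""
    return load_n * span_m**3 / (48.0 * deflection_m)

EI = bending_stiffness(load_n=2000.0, span_m=0.5, deflection_m=0.004)
print(f"EI = {EI:.1f} N*m^2")  # → EI = 1302.1 N*m^2
```

Comparing such an EI value for a candidate layup against that of the reference steel frame is one simple way to argue structural equivalency for bending.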
Procedia PDF Downloads 501
280 Phenotype and Psychometric Characterization of Phelan-Mcdermid Syndrome Patients
Authors: C. Bel, J. Nevado, F. Ciceri, M. Ropacki, T. Hoffmann, P. Lapunzina, C. Buesa
Abstract:
Background: Phelan-McDermid syndrome (PMS) is a genetic disorder caused by deletion of the terminal region of chromosome 22 or mutation of the SHANK3 gene. SHANK3 disruption in mice leads to dysfunction of synaptic transmission, which can be restored by epigenetic regulation with Lysine Specific Demethylase 1 (LSD1) inhibitors. PMS presents with a variable degree of intellectual disability, delayed or absent speech, autism spectrum disorder symptoms, low muscle tone, motor delays, and epilepsy. Vafidemstat is an LSD1 inhibitor in Phase II clinical development with a well-established and favorable safety profile, and with data supporting the restoration of memory and cognition deficits as well as the reduction of agitation and aggression in several animal models and clinical studies. Therefore, vafidemstat has the potential to become a first-in-class precision medicine approach to treat PMS patients. Aims: The goal of this research is to perform an observational trial to psychometrically characterize individuals carrying deletions in SHANK3 and build a foundation for subsequent precision psychiatry clinical trials with vafidemstat. Methodology: This study is characterizing the clinical profile of 20 to 40 subjects, >16 years old, with a genotypically confirmed PMS diagnosis. Subjects complete a battery of neuropsychological scales, including the Repetitive Behavior Questionnaire (RBQ), the Vineland Adaptive Behavior Scales, the Escala de Observación para el Diagnóstico del Autismo (Autism Diagnostic Observational Scale, ADOS-2), the Battelle Developmental Inventory, and the Behavior Problems Inventory (BPI). Results: By March 2021, 19 patients had been enrolled. Unsupervised hierarchical clustering of the results obtained so far identifies three groups of patients, characterized by different profiles of cognitive and behavioral scores. The first cluster is characterized by low Battelle age, high ADOS, and low Vineland, RBQ, and BPI scores.
Low Vineland, RBQ, and BPI scores are also detected in the second cluster, which in contrast has a high Battelle age and low ADOS scores. The third cluster is intermediate for the Battelle, Vineland, and ADOS scores while displaying the highest levels of aggression (high BPI) and repetitive behaviors (high RBQ). In line with the observation that female patients are generally affected by milder forms of autistic symptoms, no male patients are present in the second cluster. Dividing the results by gender highlights that male patients in the third cluster are characterized by a higher frequency of aggression, whereas female patients from the same cluster display a tendency toward higher repetitive behavior. Finally, statistically significant differences in deletion sizes are detected when comparing the three clusters (also after correcting for gender), and deletion size appears to be positively correlated with ADOS and negatively correlated with Vineland A and C scores. No correlation is detected between deletion size and the BPI or RBQ scores. Conclusions: Precision medicine may open a new way to understand and treat central nervous system disorders. Epigenetic dysregulation has been proposed as an important mechanism in the pathogenesis of schizophrenia and autism. Vafidemstat holds exciting therapeutic potential in PMS, and this study will provide data regarding the optimal endpoints for a future clinical study to explore vafidemstat's ability to treat SHANK3-associated psychiatric disorders. Keywords: autism, epigenetics, LSD1, personalized medicine
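The unsupervised hierarchical clustering step the abstract describes can be sketched as follows. The data here are synthetic stand-ins for the psychometric scores (Battelle age, ADOS, Vineland, RBQ, BPI) — the real study data are not public — and Ward linkage with standardization is an assumed, typical choice; the authors do not state their exact linkage or preprocessing.

```python
# Minimal sketch of hierarchical clustering of patient score profiles.
# 19 hypothetical patients x 5 scores; Ward linkage on z-scored data,
# cut into three clusters, mirroring the three-group result reported.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(0)
scores = rng.normal(size=(19, 5))   # synthetic score matrix
scores[:6] += 2.0                   # shift one subgroup so clusters are separable

# Standardize each scale so no score dominates the distance metric
Z = linkage(zscore(scores, axis=0), method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
print(sorted(set(labels.tolist())))  # three cluster labels
```

Cluster profiles (e.g., mean ADOS or BPI per cluster) would then be inspected to characterize each group, as done in the abstract.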
Procedia PDF Downloads 164
279 Analysis of Aspergillus fumigatus IgG Serologic Cut-Off Values to Increase Diagnostic Specificity of Allergic Bronchopulmonary Aspergillosis
Authors: Sushmita Roy Chowdhury, Steve Holding, Sujoy Khan
Abstract:
The immunogenic responses of the lung to the fungus Aspergillus fumigatus range from invasive aspergillosis in the immunocompromised, to a fungal ball or infection within a lung cavity in those with structural lung lesions, to allergic bronchopulmonary aspergillosis (ABPA). Patients with asthma or cystic fibrosis are particularly predisposed to ABPA. Consensus guidelines have established criteria for the diagnosis of ABPA, but uncertainty remains over the serologic cut-off values that would increase the diagnostic specificity of ABPA. We retrospectively analyzed 80 patients with severe asthma and evidence of peripheral blood eosinophilia (>500) over the last 3 years who underwent all serologic tests to exclude ABPA. Total IgE, specific IgE, and specific IgG levels against Aspergillus fumigatus were measured using the ImmunoCAP Phadia-100 (Thermo Fisher Scientific, Sweden). The Modified ISHAM working group 2013 criteria (obligate criteria: asthma or cystic fibrosis, total IgE > 1000 IU/ml or > 417 kU/L, and positive specific IgE to Aspergillus fumigatus or skin test positivity; with ≥ 2 of peripheral eosinophilia, positive specific IgG to Aspergillus fumigatus, and consistent radiographic opacities) were used in the clinical workup for the final diagnosis of ABPA. Patients were divided into three groups: definite, possible, and no evidence of ABPA. Specific IgG Aspergillus fumigatus levels were not used to assign patients to any of the groups. Of the 80 patients (48 males, 32 females; mean age 53.9 years ± SD 15.8) selected for the analysis, 30 had positive specific IgE against Aspergillus fumigatus (37.5%). Thirteen patients fulfilled the Modified ISHAM working group 2013 criteria for ABPA ('definite'), while 15 patients had 'possible' ABPA and 52 did not fulfill the criteria (not ABPA). As IgE levels were not normally distributed, median levels were used in the analysis.
Median total IgE levels of patients with definite and possible ABPA were 2144 kU/L and 2597 kU/L, respectively (non-significant), while median specific IgE to Aspergillus fumigatus, at 4.35 kUA/L and 1.47 kUA/L respectively, differed significantly (comparison of standard deviations, F-statistic 3.2267, significance level p=0.040). Mean levels of IgG anti-Aspergillus fumigatus in the three groups (definite, possible, and no evidence of ABPA) were compared using ANOVA (Statgraphics Centurion Professional XV, Statpoint Inc). The mean level of IgG anti-Aspergillus fumigatus (Gm3) in definite ABPA was 125.17 mgA/L (±SD 54.84, 95% CI 92.03-158.32), while mean Gm3 levels in possible and no ABPA were 18.61 mgA/L and 30.05 mgA/L, respectively. ANOVA showed a significant difference between the definite group and the other groups (p < 0.001). This was confirmed using multiple range tests (Fisher's least significant difference procedure). There was no significant difference between the possible ABPA and not ABPA groups (p > 0.05). The study showed that a sizeable proportion of patients with asthma are sensitized to Aspergillus fumigatus in this part of India. A higher cut-off value of Gm3 ≥ 80 mgA/L provides higher serologic specificity for definite ABPA. Long-term studies would provide more information on whether patients with 'possible' ABPA and positive Gm3 later develop clear ABPA and differ from the Gm3-negative group in this respect. Serologic testing with clearly defined cut-offs is a valuable adjunct in the diagnosis of ABPA. Keywords: allergic bronchopulmonary aspergillosis, Aspergillus fumigatus, asthma, IgE level
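The group comparison and cut-off logic above can be sketched numerically. The Gm3 values here are synthetic draws roughly matching the reported group means (definite ~125, possible ~19, not-ABPA ~30 mgA/L); the standard deviations of the two non-definite groups are assumptions, since only the definite group's SD is reported.

```python
# Hedged sketch: one-way ANOVA across the three diagnostic groups,
# then the specificity of a Gm3 >= 80 mgA/L cut-off among patients
# without definite ABPA. All values are synthetic, not study data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
definite = rng.normal(125.2, 54.8, size=13)   # reported mean and SD
possible = rng.normal(18.6, 10.0, size=15)    # SD assumed
not_abpa = rng.normal(30.1, 15.0, size=52)    # SD assumed

stat, p = f_oneway(definite, possible, not_abpa)
print(f"F = {stat:.1f}, p = {p:.2g}")  # a large F / tiny p is expected here

# Specificity of the cut-off: fraction of non-definite patients below 80 mgA/L
non_definite = np.concatenate([possible, not_abpa])
specificity = float(np.mean(non_definite < 80.0))
print(f"specificity = {specificity:.2f}")
```

With group means this far apart relative to their spread, the ANOVA rejects strongly and the 80 mgA/L threshold excludes nearly all non-definite patients, matching the abstract's argument for a higher cut-off.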
Procedia PDF Downloads 211
278 ChatGPT Performs at the Level of a Third-Year Orthopaedic Surgery Resident on the Orthopaedic In-training Examination
Authors: Diane Ghanem, Oscar Covarrubias, Michael Raad, Dawn LaPorte, Babar Shafiq
Abstract:
Introduction: Standardized exams have long been considered a cornerstone in measuring cognitive competency and academic achievement. Their fixed nature and predetermined scoring methods offer a consistent yardstick for gauging intellectual acumen across diverse demographics. Consequently, the performance of artificial intelligence (AI) in this context presents a rich yet unexplored terrain for quantifying AI's understanding of complex cognitive tasks and its simulation of human-like problem-solving skills. Publicly available AI language models such as ChatGPT have demonstrated utility in text generation and even problem-solving when provided with clear instructions. Amidst this transformative shift, the aim of this study is to assess ChatGPT's performance on the orthopaedic surgery in-training examination (OITE). Methods: All 213 OITE 2021 web-based questions were retrieved from the AAOS-ResStudy website. Two independent reviewers copied and pasted the questions and response options into ChatGPT Plus (version 4.0) and recorded the generated answers. All media-containing questions were flagged and carefully examined. Twelve OITE media-containing questions that relied purely on images (clinical pictures, radiographs, MRIs, CT scans) and could not be rationalized from the clinical presentation were excluded. Cohen's Kappa coefficient was used to examine the agreement between reviewers on the ChatGPT-generated responses. Descriptive statistics were used to summarize the performance (% correct) of ChatGPT Plus. The 2021 norm table was used to compare ChatGPT Plus' performance on the OITE with that of national orthopaedic surgery residents in the same year. Results: A total of 201 questions were evaluated by ChatGPT Plus. Excellent agreement was observed between raters for the 201 ChatGPT-generated responses, with a Cohen's Kappa coefficient of 0.947. Of these, 45.8% (92/201) were media-containing questions. ChatGPT had an average overall score of 61.2% (123/201).
Its score was 64.2% (70/109) on non-media questions. When compared with the performance of all national orthopaedic surgery residents in 2021, ChatGPT Plus performed at the level of an average PGY3. Discussion: ChatGPT Plus is able to pass the OITE with a satisfactory overall score of 61.2%, ranking at the level of a third-year orthopaedic surgery resident. More importantly, it provided logical reasoning and justifications that may help residents grasp evidence-based information and improve their understanding of OITE cases and general orthopaedic principles. With further improvements, AI language models such as ChatGPT may become valuable interactive learning tools in resident education, although further studies are still needed to examine their efficacy and impact on long-term learning and OITE/ABOS performance. Keywords: artificial intelligence, ChatGPT, orthopaedic in-training examination, OITE, orthopedic surgery, standardized testing
Procedia PDF Downloads 89
277 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube
Authors: Nirjhar Dhang, S. Vinay Kumar
Abstract:
Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids, and the interfacial transition zone (ITZ) around aggregates. Adopting these complex structures and material properties in numerical simulation would lead to a better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of the concrete cube consists of a series of two-dimensional slices; 49 slices in total are obtained from a 150 mm cube, at an interval of approximately 3 mm. Since CT scanning is non-destructive, the same cube can be scanned and later tested in compression in a universal testing machine (UTM) to find its strength. The image processing and extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. The digital colour image consists of red, green, and blue (RGB) pixels. The RGB image is converted to a black and white (BW) image, and the mesoscale constituents are identified by assigning values between 0 and 255. A pixel matrix is created for modeling the mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale according to relative strength: zero is assigned to voids, 4-6 to mortar, and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated, depending on the option given.
Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacement, stresses, and damage are evaluated by importing the input file into ABAQUS. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids, and the variation of ITZ layer thickness on the load carrying capacity, stress-strain response, and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of aggregates in concrete. This may further be compared with test results of concrete cores and can serve as an important tool for strength evaluation of concrete. Keywords: concrete, image processing, plane strain, interfacial transition zone
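The pixel-classification step described above can be sketched in the same language the authors used for their image processing. The grayscale-to-scale mapping below is a plain proportional rescaling of 0-255 to 0-9; the actual thresholds separating voids, ITZ, mortar, and aggregates are not published, so this mapping is illustrative only.

```python
# Minimal sketch of the 0-9 relative-strength classification:
# 0 = void, 1-3 = aggregate-mortar boundary (ITZ), 4-6 = mortar,
# 7-9 = aggregate. Thresholds are illustrative, not the paper's.
import numpy as np

def classify_pixels(gray: np.ndarray) -> np.ndarray:
    """Map a 0-255 grayscale slice to integer levels 0..9."""
    return np.floor(gray / 256.0 * 10).astype(int)

def phase_of(level: int) -> str:
    if level == 0:
        return "void"
    if level <= 3:
        return "ITZ/boundary"
    if level <= 6:
        return "mortar"
    return "aggregate"

# A tiny 2x3 hypothetical grayscale slice
slice_ = np.array([[0, 40, 120], [150, 200, 255]], dtype=float)
levels = classify_pixels(slice_)
print([[phase_of(v) for v in row] for row in levels.tolist()])
```

The resulting integer matrix is the pixel matrix from which finite elements would then be generated, one element per pixel region.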
Procedia PDF Downloads 239
276 Ways to Prevent Increased Wear of the Drive Box Parts and the Central Drive of the Civil Aviation Turbo Engine Based on Tribology
Authors: Liudmila Shabalinskaya, Victor Golovanov, Liudmila Milinis, Sergey Loponos, Alexander Maslov, D. O. Frolov
Abstract:
The work is devoted to rapid laboratory diagnosis of the condition of aircraft friction units, based on the nondestructive testing method of analyzing the parameters of wear particles, or tribodiagnostics. The most important task of tribodiagnostics is to develop recommendations for the selection of more advanced designs, materials, and lubricants, based on data on wear processes, in order to increase the life and ensure the safe operation of machines and mechanisms. The objects of tribodiagnostics in this work are the tooth gears of the central drive and the gearboxes of the PS-90A civil aviation gas turbine engine, in which rolling friction and sliding friction with slip occur. The main criterion for evaluating the technical state of lubricated friction units of a gas turbine engine is the intensity and rate of wear of the friction surfaces of the friction unit parts. While the engine is running, oil samples are taken, and the state of the friction surfaces is evaluated from the parameters of the wear particles contained in the oil sample, which carry important and detailed information about the wear processes in the engine transmission units. The parameters carrying this information include the concentration of wear particles and metals in the oil, the dispersion composition, the shape, size ratio, and number of particles, the state of their surfaces, and the presence in the oil of various mechanical impurities of non-metallic origin.
Such morphological analysis of wear particles has been introduced into the routine of condition monitoring and diagnostics of various aircraft engines, including gas turbine engines, since the type of wear characteristic of the central drive and the drive box is surface fatigue wear; the beginning of its development, accompanied by the formation of microcracks, leads to the formation of spherical particles up to 10 μm in size and, subsequently, of flocculent particles measuring 20-200 μm. Tribodiagnostics using the morphological analysis of wear particles includes the following techniques: ferrography, filtering, and computer-aided classification and counting of wear particles. Based on the analysis of several series of oil samples taken from the drive box of the engine over its operating time, the kinetics of the wear processes were studied. From the results of the study, and by comparing the series of tribodiagnostic criteria, wear state ratings, and the statistics of the morphological analysis, norms for the normal operating regime were developed. The study allowed the development of wear-state levels for the friction surfaces of the gearing and a 10-point rating system for estimating the likelihood of the occurrence of an increased-wear mode and, accordingly, for preventing engine failures in flight. Keywords: aviation, box of drives, morphological analysis, tribodiagnostics, tribology, ferrography, filtering, wear particle
Procedia PDF Downloads 259
275 The Photovoltaic Panel at End of Life: Experimental Study of Metals Release
Authors: M. Tammaro, S. Manzo, J. Rimauro, A. Salluzzo, S. Schiavo
Abstract:
Solar photovoltaic (PV) modules are considered to have a negligible environmental impact compared to fossil energy; nevertheless, their waste management and the corresponding potential environmental hazard also need to be considered. The case of the photovoltaic panel is unique because the time lag from manufacturing to decommissioning as waste is usually 25-30 years. The environmental hazard associated with the end of life of PV panels has been largely related to their metal content. The principal concern regards the presence of heavy metals such as Cd in thin film (TF) modules or Pb and Cr in crystalline silicon (c-Si) panels. At the end of life of PV panels, these dangerous substances could be released into the environment if special requirements for their disposal are not adopted. Nevertheless, only a few experimental studies about metal emissions from silicon crystalline/thin film panels and the corresponding environmental effects are present in the literature. As part of a study funded by the Italian national consortium for waste collection and recycling (COBAT), the present work aimed to analyze experimentally the potential release into the environment of hazardous elements, particularly metals, from PV waste. In this paper, for the first time, eighteen releasable metals from a large number of photovoltaic panels, both c-Si and TF, manufactured in the last 30 years were investigated, together with the environmental effects assessed by a battery of ecotoxicological tests. Leaching tests were conducted on crushed samples of PV modules, according to the Italian and European standard procedures for hazard assessment of granular waste and of sludge. The sample material is shaken for 24 hours in HDPE bottles with an overhead mixer (Rotax 6.8, VELP) at indoor temperature, using pure water (18 MΩ resistivity) as the leaching solution. The liquid-to-solid ratio was 10 (L/S=10, i.e., 10 liters of water per kg of solid).
The ecotoxicological tests were performed in the subsequent 24 hours. A battery of toxicity tests with bacteria (Vibrio fischeri), algae (Pseudokirchneriella subcapitata), and crustaceans (Daphnia magna) was carried out on PV panel leachates obtained as previously described and immediately stored in the dark at 4°C until testing (within the next 24 hours). To gauge the actual pollution load, a comparison with the current European and Italian benchmark limits was performed. The trend of leachable metal amounts from panels in relation to manufacturing year was then highlighted in order to assess the environmental sustainability of PV technology over time. The experimental results were very heterogeneous and show that photovoltaic panels could represent an environmental hazard: the amounts of some hazardous metals (Pb, Cr, Cd, Ni), for both c-Si and TF, exceed the legal limits and are a clear indication of the potential environmental risk of photovoltaic panels "as a waste" without proper management. Keywords: photovoltaic panel, environment, ecotoxicity, metals emission
Procedia PDF Downloads 260
274 Facial Recognition and Landmark Detection in Fitness Assessment and Performance Improvement
Authors: Brittany Richardson, Ying Wang
Abstract:
For physical therapy, exercise prescription, athlete training, and regular fitness training, it is crucial to perform health or fitness assessments periodically. An accurate assessment is valuable for tracking recovery progress, preventing potential injury, and making long-range training plans. Assessments include basic measurements (height, weight, blood pressure, heart rate, body fat, etc.) and advanced evaluations (muscle group strength, stability-mobility, movement evaluation, etc.). In the current standard assessment procedures, the accuracy of assessments, especially advanced evaluations, largely depends on the experience of physicians, coaches, and personal trainers, and it is challenging to track clients' progress. Unlike the traditional assessment, in this paper we present a deep learning based facial recognition algorithm for accurate, comprehensive, and trackable assessment. Based on the results from our assessment, physicians, coaches, and personal trainers are able to adjust the training targets and methods. The system categorizes the difficulty level of the current activity for the client or user and, furthermore, makes more comprehensive assessments by tracking muscle groups over time using a designed landmark detection method. The system also includes the function of grading and correcting the clients' form during exercise. Experienced coaches and personal trainers can tell a client's limit based on their facial expression and muscle group movements, even during the first several sessions. Similarly, using a convolutional neural network, the system is trained on people's facial expressions to differentiate challenge levels for clients, and it uses landmark detection to capture subtle changes in muscle group movements.
It measures the proximal mobility of the hips and thoracic spine, the proximal stability of the scapulothoracic region, and the distal mobility of the glenohumeral joint, as well as distal mobility and its effect on the kinetic chain. This system integrates data from other fitness assistant devices, including but not limited to the Apple Watch and Fitbit, for improved training and testing performance. The system itself does not require historical data for an individual client, but a client's history can be used to create a more effective exercise plan. In order to validate the performance of the proposed work, an experimental design is presented. The results show that the proposed work contributes towards improving the quality of exercise planning, execution, progress tracking, and performance. Keywords: exercise prescription, facial recognition, landmark detection, fitness assessments
Procedia PDF Downloads 134
273 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models
Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg
Abstract:
Storm surge is an abnormal water level caused by a storm. Accurate prediction of a storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques that combine several individual forecasts to produce an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature; for instance, Model Output Statistics (MOS) and running mean-bias removal are widely used in the storm surge prediction domain. However, these methods have drawbacks: MOS, for instance, is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced ones. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that creates a better forecast using a combination of several instances of Bayesian Model Averaging (BMA). An ensemble dressing method, in turn, is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether ensemble models perform better than any single forecast; to do so, we need to identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights when combining different forecast models. Third, we use these ensembles and compare them with several existing models in the literature for forecasting storm surge levels, and we then investigate whether developing a complex ensemble model is indeed needed. To achieve this goal, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark.
Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial; thus, we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its time. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, in this study we consider them as a single contiguous hurricane event. The data set used for this study is generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique. Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction
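The correlation-weighted ensemble idea described above can be sketched as follows. The data are synthetic (NYHOPS surge series are not reproduced here), and clipping negative correlations to zero is an assumed design choice; the paper's exact weighting scheme may differ.

```python
# Sketch of a correlation-weighted ensemble vs. the simple-average
# benchmark: each model forecast is weighted by its correlation with
# observations, then both combinations are scored by RMSE. Synthetic data.
import numpy as np

rng = np.random.default_rng(2)
obs = np.sin(np.linspace(0, 6, 200))          # stand-in "observed surge"
models = np.stack([
    obs + rng.normal(0, 0.1, 200),            # accurate model
    obs + rng.normal(0, 0.5, 200),            # noisy model
    -0.2 * obs + rng.normal(0, 0.5, 200),     # poor, anti-correlated model
])

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

# Correlation-based weights, clipped at zero so bad members drop out
corr = np.array([np.corrcoef(m, obs)[0, 1] for m in models])
w = np.clip(corr, 0, None)
w = w / w.sum()

weighted = w @ models
simple = models.mean(axis=0)
print(rmse(weighted, obs), rmse(simple, obs))  # weighted should score lower here
```

The simple average is dragged down by the poor member, which is exactly the benchmark comparison the abstract's third contribution sets up.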
Procedia PDF Downloads 309
272 Multi-Label Approach to Facilitate Test Automation Based on Historical Data
Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally
Abstract:
The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, at odds with the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning to the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data from test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small real-world systems.
The most prominent EC is ‘Subset Accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (Multi-Class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still at 60%, which is better than current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique facilitates a time reduction for establishing test automation and is thereby independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
Keywords: machine learning, multi-class, multi-label, supervised learning, test automation
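The prediction task described above can be pictured as multi-label prediction over historical (specification, components) pairs. The sketch below is a much-simplified, stdlib-only stand-in (token-overlap nearest neighbour rather than the trained models the paper uses), and all step texts and component names are invented for illustration:

```python
# Hedged sketch: predicting automation components for new test steps
# from historical (step, components) pairs via token-overlap similarity.
# All step texts and component names below are hypothetical.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

# Historical data: test step specification -> set of automation components
history = [
    ("open the login page", {"Browser.Open"}),
    ("enter user credentials", {"Input.SetText"}),
    ("press the submit button", {"Button.Click"}),
    ("verify the welcome message", {"Assert.TextVisible"}),
    ("enter credentials and submit", {"Input.SetText", "Button.Click"}),
]

def predict(step):
    """Return the component set of the most similar historical step."""
    best = max(history, key=lambda h: jaccard(tokens(step), tokens(h[0])))
    return best[1]

def subset_accuracy(pairs):
    """Fraction of cases whose predicted label *set* matches exactly."""
    hits = sum(predict(step) == expected for step, expected in pairs)
    return hits / len(pairs)

unseen = [
    ("open the page", {"Browser.Open"}),
    ("press submit button", {"Button.Click"}),
]
print(subset_accuracy(unseen))  # prints: 1.0
```

‘Subset Accuracy’ is the strictest of the multi-label criteria: a prediction only counts if the whole component set matches, which is why it tends to be lower for multi-label than for multi-class cases.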
Procedia PDF Downloads 132
271 Using Balanced Scorecard Performance Metrics in Gauging the Delivery of Stakeholder Value in Higher Education: The Assimilation of Industry Certifications within a Business Program Curriculum
Authors: Thomas J. Bell III
Abstract:
This paper explores the value of assimilating certification training within a traditional course curriculum. This innovative approach is believed to increase stakeholder value within the Computer Information System program at Texas Wesleyan University. Stakeholder value is obtained from increased job marketability and critical thinking skills that create employment-ready graduates. This paper views value as first developing the capability to earn an industry-recognized certification, which provides the student with greater job placement compatibility while exercising critical thinking skills in a liberal arts business program. Graduates with industry-based credentials are often given preference in the hiring process, particularly in the information technology sector, and without a pioneering curriculum that better prepares students for an ever-changing employment market, a program's educational value is open to question. Since certifications are trending in the hiring process, academic programs should explore the viability of incorporating certification training into teaching pedagogy and course curricula. This study will examine the use of the balanced scorecard across four performance dimensions (financial, customer, internal process, and innovation) to measure the stakeholder value of certification training within a traditional course curriculum. The balanced scorecard as a strategic management tool may provide insight for leveraging resource prioritization and decisions needed to achieve various curriculum objectives and long-term value while meeting the needs of multiple stakeholders, such as students, universities, faculty, and administrators.
The research methodology will consist of quantitative analysis that includes (1) surveying over one hundred students in the CIS program to learn what factor(s) contributed to their certification exam success or failure, (2) interviewing representatives from the Texas Workforce Commission to identify the employment needs and trends in the North Texas (Dallas/Fort Worth) area, (3) reviewing notable Workforce Innovation and Opportunity Act publications on training trends across several local business sectors, and (4) analyzing control variables to determine whether a correlation exists between industry alignment and job placement. These findings may provide helpful insight into impactful pedagogical techniques and curricula that positively contribute to certification credentialing success. Should these industry-certified students land industry-related jobs whose requirements correlate with their certification credentials, stakeholder value has arguably been realized.
Keywords: certification exam teaching pedagogy, exam preparation, testing techniques, exam study tips, passing certification exams, embedding industry certification and curriculum alignment, balanced scorecard performance evaluation
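The four balanced scorecard perspectives named above lend themselves to a simple weighted roll-up. A minimal sketch, in which the perspective names come from the abstract but every weight, example metric, and score is hypothetical:

```python
# Hedged sketch of a balanced-scorecard roll-up. The four perspectives come
# from the abstract; all weights and scores below are hypothetical.

scorecard = {  # perspective -> (weight, score on a 0-100 scale)
    "financial":        (0.25, 70),   # e.g. cost per certified graduate
    "customer":         (0.30, 85),   # e.g. student/employer satisfaction
    "internal_process": (0.25, 60),   # e.g. exam pass rate in the curriculum
    "innovation":       (0.20, 75),   # e.g. new certification tracks added
}

def weighted_score(card):
    # Weights are relative priorities and must sum to 1.
    assert abs(sum(w for w, _ in card.values()) - 1.0) < 1e-9
    return sum(w * s for w, s in card.values())

print(weighted_score(scorecard))  # prints: 73.0
```

The weights are the strategic lever: shifting weight between, say, the financial and customer perspectives is how resource-prioritization decisions would show up in the aggregate value measure.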
Procedia PDF Downloads 108
270 Microbial Contamination of Cell Phones of Health Care Workers: Case Study in Mampong Municipal Government Hospital, Ghana
Authors: Francis Gyapong, Denis Yar
Abstract:
The use of cell phones has become indispensable in hospital settings, and cell phones are used in hospitals without restrictions regardless of their unknown microbial load. However, the indiscriminate use of mobile devices, especially at health facilities, can act as a vehicle for transmitting pathogenic bacteria and other microorganisms. These potential pathogens become exogenous sources of infection for patients and are also a potential health hazard for the users themselves as well as their family members, a growing problem in many health care institutions. Innovations in mobile communication have led to better patient care in diabetes and asthma and to increased vaccine uptake via SMS. Notwithstanding, cell phones can be a significant potential source of nosocomial infections. Many studies have reported heavy microbial contamination of cell phones among healthcare workers and communities; however, few studies from our region have reported on bacterial contamination of cell phones among healthcare workers. This study assessed microbial contamination of cell phones of health care workers (HCWs) at the Mampong Municipal Government Hospital (MMGH), Ghana. A cross-sectional design was used to characterize bacterial microflora on cell phones of HCWs at the MMGH. A total of thirty-five (35) swab samples of cell phones of HCWs at the Laboratory, Dental Unit, Children’s Ward, Theater and Male Ward were randomly collected for laboratory examination. A suspension of each swab sample was streaked on blood and MacConkey agar and incubated at 37℃ for 48 hours. Bacterial isolates were identified using appropriate laboratory and biochemical tests. The Kirby-Bauer disc diffusion method was used to determine the antimicrobial sensitivity of the isolates. Data analysis was performed using SPSS version 16. All mobile phones sampled were contaminated with one or more bacterial isolates.
Cell phones from the Male Ward, Dental Unit, Laboratory, Theatre and Children’s Ward had at least three different bacterial isolates: 85.7%, 71.4%, 57.1%, and 28.6% for both the Theater and Children’s Ward, respectively. Bacterial contaminants identified were Staphylococcus epidermidis (37%), Staphylococcus aureus (26%), E. coli (20%), Bacillus spp. (11%) and Klebsiella spp. (6%). Except for the Children’s Ward, E. coli was isolated at all study sites and was predominant (42.9%) at the Dental Unit, while Klebsiella spp. (28.6%) was isolated only at the Children’s Ward. Antibiotic sensitivity testing of Staphylococcus aureus indicated that the isolates were highly sensitive to cephalexin (89%), tetracycline (80%), gentamycin (75%), lincomycin (70%) and ciprofloxacin (67%), and highly resistant to ampicillin (75%). Some of the bacteria isolated are potential pathogens, and their presence on cell phones of HCWs could lead to transmission to patients and their families. Hence, strict hand washing before and after every contact with a patient or phone should be enforced to reduce the risk of nosocomial infections.
Keywords: mobile phones, bacterial contamination, patients, MMGH
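The species breakdown above is a prevalence calculation over the identified isolates. A minimal sketch, assuming hypothetical isolate counts chosen only to illustrate the percentage computation (the study's raw counts are not given in the abstract):

```python
# Hedged sketch: converting isolate counts into a percentage breakdown of the
# shape reported in the abstract. The counts below are hypothetical round
# numbers (summing to 35) chosen purely for illustration.
from collections import Counter

isolates = Counter({
    "Staphylococcus epidermidis": 13,
    "Staphylococcus aureus": 9,
    "E. coli": 7,
    "Bacillus spp.": 4,
    "Klebsiella spp.": 2,
})

def percentages(counts):
    """Round each species' share of all isolates to a whole percent."""
    total = sum(counts.values())
    return {name: round(100 * n / total) for name, n in counts.items()}

print(percentages(isolates))
```

With these counts the rounded shares come out to 37%, 26%, 20%, 11% and 6%, matching the figures reported in the text.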
Procedia PDF Downloads 103
269 Testing Nitrogen and Iron Based Compounds as an Environmentally Safer Alternative to Control Broadleaf Weeds in Turf
Authors: Simran Gill, Samuel Bartels
Abstract:
Turfgrass is an important component of urban and rural lawns and landscapes. However, broadleaf weeds such as dandelions (Taraxacum officinale) and white clovers (Trifolium repens) pose major challenges to the health and aesthetics of turfgrass fields. Chemical weed control methods, such as 2,4-D weedicides, have been widely deployed; however, their safety and environmental impacts are often debated. Alternative, environmentally friendly control methods have been considered, but experimental tests of their effectiveness have been limited. This study investigates the use and effectiveness of nitrogen and iron compounds as nutrient management methods of weed control. The first phase of a two-phase experiment was conducted under controlled greenhouse conditions on a blend of cool-season turfgrasses (Perennial ryegrass (Lolium perenne), Kentucky bluegrass (Poa pratensis) and Creeping red fescue (Festuca rubra)) grown in plastic containers. It involved the application of nitrogen (urea and ammonium sulphate) and iron (chelated iron and iron sulphate) compounds and their combinations (urea × chelated iron, urea × iron sulphate, ammonium sulphate × chelated iron, ammonium sulphate × iron sulphate), contrasted with a chemical 2,4-D weedicide and a control (no application) treatment. There were three replicates of each treatment, resulting in a total of 30 experimental units. The parameters assessed during weekly data collection included a visual quality rating of weeds (nominal scale of 0-9), number of leaves, longest leaf span, number of weeds, chlorophyll fluorescence of grass, a visual quality rating of grass (0-9), and the weight of dried grass clippings.
The results of the experiment, conducted over a period of 12 weeks with three applications at intervals of four weeks, showed that the combination of ammonium sulphate and iron sulphate was the most effective in halting the growth and establishment of dandelions and clovers while also improving turf health. The second phase of the experiment, which involved the ammonium sulphate × iron sulphate, weedicide, and control treatments, was conducted outdoors on already established perennial turf with weeds under natural field conditions. After 12 weeks of observation, the results were comparable among the treatments in terms of weed control, but the ammonium sulphate × iron sulphate treatment fared much better in terms of the visual quality of the turf and other quality ratings. Preliminary results from these experiments thus suggest that nutrient management based on nitrogen and iron compounds could be a useful, environmentally friendly alternative for controlling broadleaf weeds and improving the health and quality of turfgrass.
Keywords: broadleaf weeds, nitrogen, iron, turfgrass
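The treatment layout described above can be enumerated directly. A minimal sketch, assuming one plausible reading of the design: the four nitrogen × iron combinations, the four single-compound applications, the 2,4-D weedicide and the control, each replicated three times:

```python
# Hedged sketch of the treatment layout described in the abstract, under the
# assumption that the four single compounds were also applied alone.
from itertools import product

nitrogen = ["urea", "ammonium sulphate"]
iron = ["chelated iron", "iron sulphate"]

treatments = ([f"{n} x {fe}" for n, fe in product(nitrogen, iron)]  # 4 combos
              + nitrogen + iron                 # 4 single-compound treatments
              + ["2,4-D weedicide", "control"])

# Three replicates of each treatment -> the 30 experimental units.
replicates = [(t, r) for t in treatments for r in (1, 2, 3)]
print(len(treatments), len(replicates))  # prints: 10 30
```

This enumeration reproduces the 30 units stated in the abstract, which is consistent with 10 treatments at three replicates each.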
Procedia PDF Downloads 72
268 Inhibition of Food Borne Pathogens by Bacteriocinogenic Enterococcus Strains
Authors: Neha Farid
Abstract:
Due to the abuse of antimicrobial medications in animal feed, the occurrence of multi-drug resistant (MDR) pathogens in foods is currently a growing public health concern on a global scale. MDR pathogens can penetrate the food chain, posing a serious risk to both consumers and animals. Food pathogens are biological agents that can cause disease in the host upon ingestion. The major reservoirs of foodborne pathogens are food-producing fauna such as cows, pigs, goats, sheep, and deer, whose intestines harbor many different types of food pathogens. Bacterial food pathogens are the main cause of foodborne disease in humans; almost 66% of the reported cases of food illness in a year are caused by bacterial food pathogens. When ingested, these pathogens reproduce and survive or form different kinds of toxins inside host cells, causing severe infections. The genus Listeria consists of gram-positive, rod-shaped, non-spore-forming bacteria. The disease caused by Listeria monocytogenes is listeriosis, a gastroenteritis which induces fever, vomiting, and severe diarrhea in the affected person. Campylobacter jejuni is a gram-negative, curved-rod-shaped bacterium causing foodborne illness. The major sources of Campylobacter jejuni are livestock and poultry; chicken in particular is highly colonized with Campylobacter jejuni. The widespread growth of antibiotic-resistant bacteria and the slowdown in the discovery of new classes of medicines are serious public health concerns. The objective of this study is to evaluate the antibacterial activity of certain broad-range antibiotics and of bacteriocins from specific Enterococcus faecium strains, with a view to preventing microbial contamination pathways and safeguarding food by lowering food deterioration, contamination, and foodborne illness.
The food pathogens were isolated from various dairy products and meat samples. The isolates were tested for the presence of Listeria and Campylobacter by Gram staining and biochemical testing. They were further sub-cultured on selective media enriched with growth supplements for Listeria and Campylobacter. All six strains of Listeria and Campylobacter were tested against ten antibiotics. Campylobacter strains showed resistance against all the antibiotics, whereas Listeria was found to be resistant only against nalidixic acid and erythromycin. Further, the strains were tested against the two bacteriocins isolated from Enterococcus faecium. The bacteriocins showed better antimicrobial activity against the food pathogens and can be used as potential antimicrobials for food preservation. Thus, the study concluded that natural antimicrobials could be used as alternatives to synthetic antimicrobials to overcome the problem of food spoilage and severe food diseases.
Keywords: food pathogens, listeria, campylobacter, antibiotics, bacteriocins
Procedia PDF Downloads 71
267 Combination of Unmanned Aerial Vehicle and Terrestrial Laser Scanner Data for Citrus Yield Estimation
Authors: Mohammed Hmimou, Khalid Amediaz, Imane Sebari, Nabil Bounajma
Abstract:
Annual crop production is one of the most important macroeconomic indicators for the majority of countries around the world. This information is valuable, especially for exporting countries, which need a yield estimate before harvest in order to correctly plan the supply chain. When it comes to estimating agricultural yield, especially in arboriculture, conventional methods are mostly applied. In the citrus industry, sale before harvest is widely practiced, which requires an estimate of production while the fruit is on the tree. However, the conventional method based on sampling surveys of some trees within the field is still used to perform yield estimation, and the success of this process mainly depends on the expertise of the ‘estimator agent’. The present study aims to propose a methodology based on the combination of unmanned aerial vehicle (UAV) images and terrestrial laser scanner (TLS) point clouds to estimate citrus production. During data acquisition, fixed-wing and rotary-wing drones, as well as a terrestrial laser scanner, were tested. A pre-processing step was then performed to generate a point cloud and a digital surface model. At the processing stage, a machine vision workflow was implemented to extract points corresponding to fruits from the whole tree point cloud, cluster them into fruits, and model them geometrically in 3D space. By linking the resulting geometric properties to fruit weight, the yield can be estimated, and the statistical distribution of fruit sizes can be generated. This latter property, which is information required by citrus-importing countries, cannot be estimated before harvest using the conventional method. Since the terrestrial laser scanner is static, data gathering using this technology can be performed over only some trees, so integration of drone data was considered in order to estimate the yield over a whole orchard.
To achieve that, features derived from the drone digital surface model were linked to the laser scanner yield estimates of some trees to build a regression model that predicts the yield of a tree given its features. Several missions were carried out to collect drone and laser scanner data within citrus orchards of different varieties, testing several data acquisition parameters (flight height, image overlap, flight mission plan). The accuracy of the results obtained by the proposed methodology, in comparison to yield estimation by the conventional method, varies from 65% to 94%, depending mainly on the phenological stage of the studied citrus variety during the data acquisition mission. The proposed approach demonstrates strong potential for early estimation of citrus production and the possibility of extension to other fruit trees.
Keywords: citrus, digital surface model, point cloud, terrestrial laser scanner, UAV, yield estimation, 3D modeling
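The final step, linking drone-derived features to the TLS-based yields of the sampled trees, is a regression problem. A minimal one-feature least-squares sketch with invented canopy and yield values (the abstract does not specify which features were extracted from the digital surface model):

```python
# Hedged sketch of the extrapolation step: fit a regression mapping a
# drone-derived tree feature (here a hypothetical canopy volume from the
# digital surface model) to the TLS-based yield of sampled trees, then
# predict yield for unsampled trees. All data values are invented.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (one feature, closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Trees scanned with the TLS: (canopy volume m^3 from DSM, yield kg from TLS)
canopy = [12.0, 15.0, 18.0, 22.0]
yield_kg = [40.0, 52.0, 64.0, 80.0]

a, b = fit_line(canopy, yield_kg)
# Extrapolate to the rest of the orchard using drone data only.
orchard_canopy = [14.0, 20.0]
print([round(a * x + b, 1) for x in orchard_canopy])  # prints: [48.0, 72.0]
```

In practice the model would use several DSM features and many sampled trees, but the structure is the same: the static TLS provides ground-truth yields for a subset, and the drone features carry the prediction across the whole orchard.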
Procedia PDF Downloads 142
266 Technology Management for Early Stage Technologies
Authors: Ming Zhou, Taeho Park
Abstract:
Early stage technologies are particularly challenging to manage due to the high degree of uncertainty surrounding them. Most results coming directly out of a research lab tend to be at an early, if not infant, stage, and a long and uncertain commercialization process awaits them. The majority of such lab technologies go nowhere and never get commercialized for various reasons, and any efforts or financial resources put into managing them prove fruitless. High stakes naturally call for better results, which makes a patenting decision harder to make. A good and well-protected patent goes a long way toward commercialization of the technology. Our preliminary research showed that there was no simple yet productive procedure for such valuation: most studies to date have been theoretical and overly comprehensive, with practical suggestions non-existent. Hence, we attempted to develop a simple and highly implementable procedure for efficient and scalable valuation. We thoroughly reviewed existing research, interviewed practitioners in the Silicon Valley area, and surveyed university technology offices. Instead of presenting another theoretical and exhaustive study, we aimed at developing practical guidance that a government agency and/or university office could easily deploy to move lab results to the later steps of managing early stage technologies. We provide a procedure to thriftily value a technology and make the patenting decision. A patenting index was developed using survey data and expert opinions. We identified the most important factors to be used in the patenting decision using survey ratings, and the ratings then assisted us in generating relative weights for the later scoring and weighted-averaging step. More importantly, we validated our procedure by testing it with our practitioner contacts, whose inputs produced a general yet highly practical cut schedule.
Such a schedule of realistic practices had yet to be seen in the research literature. Although a technology office may choose to deviate from our cuts, what we offer here at least provides a simple and meaningful starting point. This procedure was welcomed by practitioners in our expert panel and by university officers in our interview group. This research contributes to the current understanding and practice of managing early stage technologies by instating a heuristically simple yet theoretically solid method for the patenting decision. Our findings generated top decision factors, decision processes and decision thresholds for key parameters, offering a more practical perspective that complements extant knowledge. Our results could be affected by our sample size and may be somewhat biased by our focus on the Silicon Valley area. Future research, blessed with a bigger data size and more insights, may want to further train and validate our parameter values in order to obtain more consistent results and to analyze the decision factors for different industries.
Keywords: technology management, early stage technology, patent, decision
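The scoring and weighted-averaging step described above can be sketched as a weighted patenting index compared against a cut threshold. All factor names, weights, ratings and the cut value below are hypothetical placeholders for the survey-derived quantities:

```python
# Hedged sketch of a patenting index: survey-derived relative weights combine
# expert ratings (1-5) into a single score, and a cut threshold turns the
# score into a go/no-go recommendation. Every value here is hypothetical.

weights = {  # relative importance from survey ratings (sums to 1.0)
    "market_size": 0.30,
    "technical_maturity": 0.25,
    "claim_strength": 0.25,
    "licensing_interest": 0.20,
}

def patent_index(ratings):
    return sum(weights[f] * r for f, r in ratings.items())

def decide(ratings, cut=3.5):
    """Apply the cut schedule: file only if the index clears the threshold."""
    return "file patent" if patent_index(ratings) >= cut else "hold"

tech = {"market_size": 4, "technical_maturity": 3,
        "claim_strength": 5, "licensing_interest": 2}
print(patent_index(tech), decide(tech))
```

A fuller cut schedule would likely use several thresholds (e.g. file, defer, abandon), but the weighted-average core stays the same.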
Procedia PDF Downloads 342
265 Possibilities of Psychodiagnostics in the Context of Highly Challenging Situations in Military Leadership
Authors: Markéta Chmelíková, David Ullrich, Iva Burešová
Abstract:
The paper maps the possibilities and limits of diagnosing selected personality and performance characteristics of military leadership and psychology students in the context of coping with challenging situations. Individuals vary greatly in their ability to effectively manage extreme situations, yet existing diagnostic tools are often criticized, mainly for their low predictive power. Nowadays, every modern army focuses primarily on the systematic minimization of potential risks, including the prediction of desirable forms of behavior and of the performance of military commanders. The context of military leadership is well known for its life-threatening nature; it is therefore crucial to research stress load in this specific context for the purpose of anticipating human failure in managing extreme situations of military leadership. The aim of the submitted pilot study, using an experiment of 24 hours' duration, is to verify whether a specific combination of psychodiagnostic methods can predict which people are suitably equipped for coping with increased stress load. In our pilot study, we conducted a 24-hour experiment with an experimental group (N=13) in a bomb shelter and a control group (N=11) in a classroom. Both groups consisted of military leadership students (N=11) and psychology students (N=13) and were equalized in terms of study type and gender. Participants were administered the following test battery of personality characteristics: Big Five Inventory 2 (BFI-2), Short Dark Triad (SD-3), Emotion Regulation Questionnaire (ERQ), Fatigue Severity Scale (FSS), and Impulsive Behavior Scale (UPPS-P). This test battery was administered only once, at the beginning of the experiment. Along with this, participants were administered a test battery consisting of the Test of Attention (d2) and the Bourdon test four times in total, at 6-hour intervals.
To better simulate an extreme situation, we tried to induce sleep deprivation: participants were required to try not to fall asleep throughout the experiment. Despite the assumption that a stay in an underground bomb shelter would manifest in impaired cognitive performance, this expectation was significantly confirmed in only one measurement, which can be interpreted as marginal in the context of multiple testing. This finding is a fundamental insight into the issue of stress management in extreme situations, which is crucial for effective military leadership. The results suggest that a 24-hour stay in a shelter, together with sleep deprivation, does not simulate sufficient stress for an individual to be reflected in the level of cognitive performance. In the context of these findings, it would be interesting in future work to extend the diagnostic battery with physiological indicators of stress, such as heart rate, stress score, physical stress, and mental stress.
Keywords: bomb shelter, extreme situation, military leadership, psychodiagnostic
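The multiple-testing caveat above can be made concrete with a Bonferroni correction across the four measurement rounds: a single raw p-value below 0.05 need not survive division of the alpha level by the number of comparisons. The p-values below are hypothetical:

```python
# Hedged sketch of the multiple-testing caveat: with four measurement points,
# one nominally significant result is judged against a Bonferroni-corrected
# threshold of alpha / 4. The p-values here are hypothetical.

def bonferroni_significant(p_values, alpha=0.05):
    """Flag tests that survive correction for the number of comparisons."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]

# One raw p-value below 0.05 across the four testing rounds, as in the study.
p_values = [0.40, 0.03, 0.22, 0.61]
print(bonferroni_significant(p_values))  # prints: [False, False, False, False]
```

With four comparisons the corrected threshold is 0.0125, so a lone p = 0.03 does not survive, which is exactly the sense in which the study's single significant measurement reads as marginal.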
Procedia PDF Downloads 91
264 Utilization of Fly Ash Amended Sewage Sludge as Sustainable Building Material
Authors: Kaling Taki, Rohit Gahlot, Manish Kumar
Abstract:
Disposal of Sewage Sludge (SS) is a big issue, especially in a developing nation like India, where the quantity of SS produced is uncontrolled and highly variable. The present research work demonstrates the potential application of SS amended with varying percentages (0-100%) of Fly Ash (FA) for brick manufacturing as an alternative form of SS management. SS samples were collected from the Jaspur sewage treatment plant (Ahmedabad, India) and subjected to different preconditioning treatments: (i) atmospheric drying, (ii) pulverization, and (iii) heat treatment in an oven (110°C, moisture removal) and a muffle furnace (440°C, organic content removal). Geotechnical parameters of the SS were obtained as liquid limit (52%), plastic limit (24%), shrinkage limit (10%), plasticity index (28%), differential free swell index (DFSI, 47%), silt (68%), clay (27%), organic content (5%), optimum moisture content (OMC, 20%), maximum dry density (MDD, 1.55 gm/cc), specific gravity (2.66), swell pressure (57 kPa) and unconfined compressive strength (UCS, 207 kPa). For FA, the liquid limit, plastic limit and specific gravity were 44%, 0% and 2.2, respectively. For brick casting, the pulverized SS sample was first heat-treated in a muffle furnace at around 440℃ (5 hours) for removal of organic matter; SS, FA and water were then mixed by weight at OMC. A 7×7×7 cm³ sample mold was used for casting bricks at MDD. Brick samples were first dried at room temperature for 24 hours, then in an oven at 100℃ (24 hours), and finally fired in a muffle furnace at 1000℃ (10 hours). The fired brick samples were then cured for 3 days according to the Indian Standards (IS) specification for common burnt clay building bricks (5th revision). The compressive strengths of brick samples containing 0, 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100% FA were 0.45, 0.76, 1.89, 1.83, 4.02, 3.74, 3.42, 3.19, 2.87, 0.78 and 4.95 MPa, respectively, when evaluated through a compressive testing machine (CTM) at a stress rate of 14 MPa/min.
Among the SS-FA mixtures, the highest strength was obtained at 40% FA, i.e., 4.02 MPa, which is much higher than that of the pure SS brick sample. According to IS 1077:1992, this combination gives a strength of more than 3.5 MPa and can be utilized for common building bricks. The loss in weight after firing was much higher than after the oven treatment; this might be due to degradation temperatures higher than 100℃. The thermal conductivity of the fired brick was obtained as 0.44 W m⁻¹ K⁻¹, indicating better insulation properties than other reported studies. TCLP (toxicity characteristic leaching procedure) tests of Cr, Cu, Co, Fe and Ni in raw SS gave 69, 70, 21, 39502 and 47 mg/kg, respectively. The study positively concludes that SS and FA at an optimum ratio can be utilized for common building bricks, such as partition walls and other low-strength applications. The uniqueness of the work is that it emphasizes the utilization of FA for stabilizing SS as a construction material, replacing the natural clay reported in existing studies.
Keywords: compressive strength, curing, fly ash, sewage sludge
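The strength series reported above can be tabulated directly against the 3.5 MPa requirement that IS 1077:1992 sets for common burnt clay bricks. A minimal sketch using only the values stated in the abstract:

```python
# Sketch: pair the fly-ash percentages with the measured compressive
# strengths from the abstract and check them against the 3.5 MPa
# requirement of IS 1077:1992 for common burnt clay bricks.

fa_percent = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
strength_mpa = [0.45, 0.76, 1.89, 1.83, 4.02, 3.74, 3.42, 3.19, 2.87, 0.78, 4.95]

results = dict(zip(fa_percent, strength_mpa))
passing = [fa for fa, s in results.items() if s >= 3.5]
print(passing)      # FA percentages meeting the common-brick criterion
print(results[40])  # strength of the 40% FA mix highlighted in the text
```

Note that the 40% and 50% mixes clear the threshold, as does the 100% FA sample, which contains no sewage sludge at all.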
Procedia PDF Downloads 111
263 Religiosity and Involvement in Purchasing Convenience Foods: Using Two-Step Cluster Analysis to Identify Heterogenous Muslim Consumers in the UK
Authors: Aisha Ijaz
Abstract:
The paper focuses on the impact of Muslim religiosity on convenience food purchases and on the involvement experienced in a non-Muslim culture. There is a scarcity of research on the purchasing patterns of Muslim diaspora communities residing in risk societies, particularly in contexts where an increasing inclination toward industrialized food items coexists with a renewed interest in natural foods. The United Kingdom serves as an appropriate setting for this study due to its increasing Muslim population, paralleled by the expanding halal food market. A multi-dimensional framework is proposed, testing for five forms of involvement: Purchase Decision Involvement, Product Involvement, Behavioural Involvement, Intrinsic Risk and Extrinsic Risk. Quantitative cross-sectional consumer data were collected through a face-to-face survey of 141 Muslims during the summer of 2020 in Liverpool, in the Northwest of England. A proportion formula was utilised, and the population of interest was stratified by gender and age before recruitment took place through local mosques and community centers. Six input variables were used (intrinsic religiosity and the involvement dimensions), dividing the sample into four clusters using the Two-Step Cluster Analysis procedure in SPSS. Nuanced variances were observed in the type of involvement experienced by religiosity group, which influences behaviour when purchasing convenience food. Four distinct market segments were identified: highly religious ego-involving (39.7%), less religious active (26.2%), highly religious unaware (16.3%), and less religious concerned (17.7%). These segments differ significantly with respect to their involvement, behavioural variables (place of purchase and information sources used), socio-cultural characteristics (acculturation and social class), and individual characteristics.
Choosing the appropriate convenience food is centrally related to the value system of highly religious ego-involving first-generation Muslims, which explains their preference for shopping at ethnic food stores. Less religious active consumers are older and highly alert in information processing to make the optimal food choice, relying heavily on product label sources. Highly religious unaware Muslims are less dietarily acculturated to the UK diet and tend to rely on digital and expert advice sources. The less religious concerned segment, typified by younger age and third-generation status, is engaged with the purchase process out of worry about making unsuitable food choices. Research implications are outlined, and potential avenues for further exploration are identified.
Keywords: consumer behaviour, consumption, convenience food, religion, Muslims, UK
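The segmentation itself was done with the SPSS Two-Step Cluster procedure; as a rough stdlib stand-in, a small k-means with k=4 over two standardized inputs illustrates the idea. All data points below are synthetic, and k-means is not equivalent to Two-Step, which combines pre-clustering with hierarchical merging:

```python
# Hedged stand-in for the Two-Step clustering: plain k-means (Lloyd's
# algorithm) on synthetic (religiosity, involvement) scores, with k fixed
# at 4 to mirror the four segments reported in the abstract.
import math
import random

def kmeans(points, centers, iters=20):
    """Lloyd's algorithm starting from the given initial centers."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda j: math.dist(p, centers[j]))
            groups[nearest].append(p)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

# Synthetic standardized scores scattered tightly around four profiles.
random.seed(1)
profiles = [(2.0, 2.0), (2.0, -2.0), (-2.0, 2.0), (-2.0, -2.0)]
data = [(x + random.gauss(0, 0.3), y + random.gauss(0, 0.3))
        for x, y in profiles for _ in range(30)]

segments = kmeans(data, centers=profiles)
print(sorted(len(g) for g in segments))  # prints: [30, 30, 30, 30]
```

In the study the six inputs (intrinsic religiosity plus five involvement dimensions) would replace these two synthetic axes, and segment sizes would be uneven, as in the reported 39.7/26.2/16.3/17.7% split.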
Procedia PDF Downloads 56
262 Hybrid Living: Emerging Out of the Crises and Divisions
Authors: Yiorgos Hadjichristou
Abstract:
The paper will focus on the hybrid living typologies brought about by the global crisis. Mixing generations and groups of people, mingling the functions of living with working and socializing, and merging the act of living in synergy with the urban realm and its constituent elements will be the springboard for proposing an essential sustainable housing approach and the respective urban development. The thematic will be based on methodologies developed both in the academic, educational environment, including students’ research, and in the practice of architecture, including case studies executed by the author on the island of Cyprus. Both paths of the research deal with an explorative understanding of hybrid ways of living, testing the limits of their autonomy. The evolution of living typologies into substantial hybrid entities involves understanding new ways of living which include, among others: re-introduction of natural phenomena, accommodation of the activity of work and services in the living realm, interchange of public and private, and injections of communal events into individual living territories. The issues and binary questions raised by what is natural and what artificial, what is private and what public, what is ephemeral and what permanent, and all the in-between conditions are eloquently traced in everyday life on the island. Additionally, given the situation of Cyprus, with the prominent scar of the dividing ‘Green Line’ and the ‘ghost city’ of Famagusta waiting to be resurrected, the conventional way of understanding the limits and definitions of properties is irreversibly shaken. The situation is further aggravated by the unprecedented phenomenon of the crisis on the island.
All these observations set the premises for reexamining urban development and the respective sustainable housing in a synergy in which their characteristics start exchanging positions, merge into each other, contemporarily emerge and vanish, and change from permanent to ephemeral. This fluidity of conditions will attempt to render a future of the built and unbuilt realms in which the main focus is redirected to the human and the social. Weather and social ritual scenographies, together with ‘spontaneous urban landscapes’ of ‘momentary relationships’, will suggest a recipe for emerging urban environments and sustainable living. Thus, the paper will aim at opening a discourse on the future of sustainable living merged with a sustainable urban development, in relation to the imminent solution of the division of the island, where the issue of property has become the main obstacle to be overcome. At the same time, it will attempt to link this approach to the global need for a sustainable evolution of the urban and living realms.
Keywords: social ritual scenographies, spontaneous urban landscapes, substantial hybrid entities, re-introduction of natural phenomena
Procedia PDF Downloads 263