Search results for: physical biometric
167 Usability Assessment of a Bluetooth-Enabled Resistance Exercise Band among Young Adults
Authors: Lillian M. Seo, Curtis L. Petersen, Ryan J. Halter, David Kotz, John A. Batsis
Abstract:
Background: Resistance-based exercises effectively enhance muscle strength, which is especially important in older populations as it reduces the risk of disability. Our group developed a Bluetooth-enabled handle for resistance exercise bands that wirelessly transmits relative force data through low-energy Bluetooth to a local smartphone or similar device. The system has the potential to measure home-based exercise interventions, allowing health professionals to monitor compliance. Its feasibility has already been demonstrated in both clinical and field-based settings, but it remained unclear whether the system’s usability persisted upon repeated use. The current study sought to assess the usability of this system and its users’ satisfaction with repeated use by deploying the device among younger adults to gather formative information that can ultimately improve the device’s design for older adults. Methods: A usability study was conducted in which 32 participants used the above system. Participants executed 10 repetitions of four commonly performed exercises: bicep flexion, shoulder abduction, elbow extension, and triceps extension. Each participant completed three exercise sessions, separated by at least 24 hours to minimize muscle fatigue. At the study’s conclusion, subjects completed an adapted version of the usefulness, satisfaction, and ease of use (USE) questionnaire, assessing the system across four domains: usefulness, satisfaction, ease of use, and ease of learning. The 20-item questionnaire examined how strongly a participant agrees with positive statements about the device on a seven-point Likert scale, with one representing ‘strongly disagree’ and seven representing ‘strongly agree.’ Participants’ data were aggregated to calculate mean response values for each question and domain, effectively assessing the device’s performance across different facets of the user experience. Summary force data were visualized using a custom web application.
Finally, an optional prompt at the end of the questionnaire allowed for written comments and feedback from participants to elicit qualitative indicators of usability. Results: Of the n=32 participants, 13 (41%) were female; their mean age was 32.4 ± 11.8 years, and no participants had a physical impairment. No question received a mean score below five out of seven. The four domains’ mean scores were: usefulness 5.66 ± 0.35; satisfaction 6.23 ± 0.06; ease of use 6.25 ± 0.43; and ease of learning 6.50 ± 0.19. Representative quotes from the open-ended feedback include: ‘A non-rigid strap-style handle might be useful for some exercises,’ and, ‘Would need different bands for each exercise as they use different muscle groups with different strength levels.’ General impressions were favorable, supporting the expectation that the device would be a useful tool in exercise interventions. Conclusions: A simple usability assessment of a Bluetooth-enabled resistance exercise band supports a consistent and positive user experience among young adults. This study provides adequate formative data, supporting the next steps of continued testing and development for the target population of older adults.
Keywords: Bluetooth, exercise, mobile health, mHealth, usability
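The aggregation step described above (mean Likert response per domain) can be sketched as follows. The question-to-domain grouping and the response values here are invented for illustration; they are not the study's data:

```python
# Sketch of aggregating 1-7 Likert responses into per-domain means.
# Domain grouping and scores below are hypothetical examples only.
from statistics import mean

# Each inner list holds one participant's responses to that domain's items.
domains = {
    "usefulness":       [[6, 5, 6], [5, 6, 6]],
    "satisfaction":     [[6, 7, 6], [6, 6, 6]],
    "ease_of_use":      [[7, 6, 6], [6, 6, 7]],
    "ease_of_learning": [[7, 7, 6], [6, 7, 7]],
}

def domain_means(domains):
    """Mean response value per domain, aggregated over all participants."""
    return {d: round(mean(score for participant in answers for score in participant), 2)
            for d, answers in domains.items()}

print(domain_means(domains))
```

The same flattening-then-averaging step, applied per question instead of per domain, yields the per-item means reported in the results.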
Procedia PDF Downloads 117
166 Investigation of Attitude of Production Workers towards Job Rotation in Automotive Industry against the Background of Demographic Change
Authors: Franciska Weise, Ralph Bruder
Abstract:
Due to the demographic change in Germany, with its declining birth rate and aging population, the share of older people in society is rising. This development is also reflected in the work force of German companies. Therefore, companies should focus on improving ergonomics, especially in the area of age-related work design. Literature shows that studies on age-related work design have been carried out in the past, some of whose results have been put into practice. However, there is still a need for further research. One of the most important methods for taking into account the needs of an aging workforce is job rotation. This method aims at preventing or reducing health risks and inappropriate physical strain. It is conceived as a systematic change of workplaces within a group. Existing literature does not cover any methods for investigating the attitudes of employees towards job rotation. However, in order to evaluate job rotation, it is essential to know the views of workers towards rotation. In addition to an investigation of attitudes, the design of rotation plays a crucial role. The sequence of activities and the rotation frequency influence the worker as well as the work result. The evaluation of preliminary talks on the shop floor showed that team speakers and foremen share a common understanding of job rotation. In practice, different varieties of job rotation exist. One important aspect is the frequency of rotation: workers may never rotate, rotate a few times per shift, rotate at every break, or rotate even more often than that. This depends on whether workers have the opportunity to rotate whenever they want to. From the preliminary talks, some challenges can be derived. For example, rotation across the whole team is not possible if a team member still needs to be trained for a new task.
In order to determine the relation between the design of job rotation and the attitude towards it, a questionnaire survey is carried out in vehicle manufacturing. The questionnaire is used to determine the different varieties of job rotation that exist in production, as well as the attitudes of workers towards those different frequencies of job rotation. In addition, younger and older employees are compared with regard to their rotation frequency and their attitudes towards rotation, using three age groups. Three questions are under examination. The first is whether older employees rotate less frequently than younger employees. The second is whether the frequency of job rotation and the attitude towards that frequency are interconnected. The third concerns how the attitudes of the different age groups towards the frequency of rotation differ. So far, 144 employees, all working in production, have taken part in the survey: 36.8% were younger than thirty, 37.5% were between thirty and forty-four, and 25.7% were forty-five or older. The data show no difference between the three age groups in the frequency of job rotation (N=139, median=4, Chi²=.859, df=2, p=.651). Most employees rotate between six and seven workplaces per day. There is, however, a statistically significant correlation between the frequency of job rotation and the attitude towards that frequency (Spearman's rho=.223, 2-sided p=.008). Fewer than four workplaces per day are not enough for the employees. The third question, concerning the differences between older and younger people who rotate in different ways and hold different attitudes towards job rotation, cannot yet be answered. So far, the data show that younger people would like to rotate very often, while for older people no correlation with acceptable significance has been found.
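The rank correlation reported above can be reproduced in outline as follows. This is a plain-Python sketch of Spearman's rho (Pearson correlation of tie-adjusted ranks); the sample data are invented, since the study's raw responses are not published:

```python
# Sketch of Spearman's rank correlation between rotation frequency
# and attitude towards that frequency. All data values are invented.

def ranks(values):
    """Average ranks (1-based), assigning tied values their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend over the run of tied values
        avg = (i + j) / 2 + 1           # mean of the tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Pearson correlation computed on the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

freq     = [2, 4, 5, 6, 6, 7, 3, 7]    # workplaces rotated per day (invented)
attitude = [2, 4, 4, 5, 6, 6, 3, 7]    # Likert attitude score (invented)
print(round(spearman_rho(freq, attitude), 3))
```

A positive rho on data like this mirrors the study's finding that workers who rotate more often also rate rotation more favorably; the significance test (the 2-sided p-value) would be an additional step.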
The results of the survey will be used to improve the current practice of job rotation. In addition, the discussions during the survey are expected to help sensitize the employees to rotation issues and to contribute to optimizing rotation by means of qualification and an improved design of job rotation. Together with the employees, and based on the results of the survey, standards must be developed that show how to rotate in an ergonomic way while considering attitudes towards job rotation.
Keywords: job rotation, age-related work design, questionnaire, automotive industry
Procedia PDF Downloads 303
165 Development of a Context Specific Planning Model for Achieving a Sustainable Urban City
Authors: Jothilakshmy Nagammal
Abstract:
This research paper deals with different case studies in which Form-Based Codes are adopted, and discusses the different implementation methods in particular in order to develop a method for formulating a new planning model. The organizing principle of Form-Based Codes, the transect, is used to zone the city into various context-specific transects. An approach is adopted to develop the new planning model, the City Specific Planning Model (CSPM), as a tool to achieve sustainability for any city in general. A case study comparison in terms of the planning tools used, the code process adopted, and the various control regulations implemented in thirty-two different cities is carried out. The analysis shows that there are a variety of ways to implement form-based zoning concepts: specific plans, a parallel or optional form-based code, a transect-based code / smart code, required form-based standards, or design guidelines. The case studies describe the positive and negative results from form-based zoning where it is implemented. From the different case studies on the method of the FBC, it is understood that the scale for formulating the Form-Based Code varies from parts of the city to the whole city. The regulating plan is prepared with the transect as the organizing principle in most of the cases. The various implementation methods adopted in these case studies for the formulation of Form-Based Codes are special districts like Transit Oriented Development (TOD), Traditional Neighbourhood Development (TND), specific plans, and street-based codes. The implementation methods vary from mandatory to integrated and floating. To attain sustainability, the research takes the approach of developing a regulating plan, using the transect as the organizing principle, for the entire area of the city in general, and formulating street-based Form-Based Codes for selected special districts in the study area in particular.
Planning is most powerful when it is embedded in the broader context of systemic change and improvement. Systemic is best thought of as holistic, contextualized, and stakeholder-owned, while systematic can be thought of as more linear, generalisable, and typically top-down or expert-driven. The systemic approach is a process based on system theory and system design principles, which are too often ill understood by the general population and policy makers. System theory embraces the importance of a global perspective, multiple components, interdependencies, and interconnections in any system. In addition, the recognition that a change in one part of a system necessarily alters the rest of the system is a cornerstone of system theory. The proposed regulating plan, taking the transect as an organizing principle and using Form-Based Codes to achieve sustainability of the city, has to be a hybrid code, which is to be integrated within the existing system: a systemic approach with a systematic process. This approach of introducing a few form-based zones into a conventional code could be effective in the phased replacement of an existing code. It could also be an effective way of responding to the near-term pressure of physical change in “sensitive” areas of the community. With this approach and method, the creation of the new Context Specific Planning Model towards achieving sustainability is explained in detail in this research paper.
Keywords: context based planning model, form based code, transect, systemic approach
Procedia PDF Downloads 335
164 Low- and High-Temperature Methods of CNTs Synthesis for Medicine
Authors: Grzegorz Raniszewski, Zbigniew Kolacinski, Lukasz Szymanski, Slawomir Wiak, Lukasz Pietrzak, Dariusz Koza
Abstract:
One of the most promising areas for carbon nanotube (CNT) application is medicine. One of the most devastating diseases is cancer. Carbon nanotubes may be used as carriers of a slowly released drug, and electromagnetic waves can be used to destroy cancer cells via the CNTs. In our research we focused on thermal ablation by ferromagnetic carbon nanotubes (Fe-CNTs). In cancer cell hyperthermia, functionalized carbon nanotubes are exposed to a radio-frequency electromagnetic field. Properly functionalized Fe-CNTs attach to the cancer cells. Heat generated in the nanoparticles connected to the nanotubes warms up the nanotubes and then the target tissue. When the temperature in the tumor tissue exceeds 316 K, necrosis of the cancer cells may be observed. Several techniques can be used for Fe-CNT synthesis. In our work, we use high-temperature methods where an arc discharge is applied. Low-temperature systems are microwave plasma-assisted chemical vapor deposition (MPCVD) and hybrid physical-chemical vapor deposition (HPCVD). In the arc discharge system, the plasma reactor works with a He pressure of up to 0.5 atm. The electric arc burns between two graphite rods. Carbon vapor moves from the anode through a short arc column and forms CNTs, which can be collected either from the reactor walls or from the cathode deposit. This method is suitable for the production of multi-wall and single-wall CNTs. Disadvantages of high-temperature methods are low purity, short length, random size, and multi-directional distribution. In the MPCVD system, plasma is generated in a waveguide connected to the microwave generator. The plasma flux, containing carbon and ferromagnetic elements, then flows into a quartz tube. Additional resistance heating can be applied to increase the reaction effectiveness and efficiency. CNT nucleation occurs on the quartz tube walls. It is also possible to use substrates to improve carbon nanotube growth.
The HPCVD system involves both chemical decomposition of carbon-containing gases and vaporization of a solid or liquid source of catalyst. In this system, a tube furnace is applied. A mixture of working and carbon-containing gases goes through the quartz tube placed inside the furnace. Ferrocene vapor can be used as a catalyst. Fe-CNTs may then be collected either from the quartz tube walls or from the substrates. Low-temperature methods are characterized by a higher-purity product. Moreover, carbon nanotubes from the tested CVD systems were partially filled with iron. Regardless of the method of Fe-CNT synthesis, the final product always needs to be purified for applications in medicine. The simplest method of purification is oxidation of the amorphous carbon. Carbon nanotubes dedicated to cancer cell thermal ablation additionally need to be treated with acids to amplify defects on the CNT surface, which facilitates biofunctionalization. Application of ferromagnetic nanotubes for cancer treatment is a promising method of fighting cancer in the next decade. Acknowledgment: The research work has been financed from the budget of science as research project No. PBS2/A5/31/2013.
Keywords: arc discharge, cancer, carbon nanotubes, CVD, thermal ablation
Procedia PDF Downloads 449
163 The Achievements and Challenges of Physics Teachers When Implementing Problem-Based Learning: An Exploratory Study Applied to Rural High Schools
Authors: Osman Ali, Jeanne Kriek
Abstract:
Introduction: The current instructional approach, entrenched in memorization, does not assist conceptual understanding in science. Instructional approaches that encourage research, investigation, and experimentation, which depict how scientists work, should be encouraged. One such teaching strategy is problem-based learning (PBL). PBL has many advantages: enhanced self-directed learning and improved problem-solving and critical thinking skills. However, despite many advantages, PBL has challenges. Research confirms it is time-consuming and that ill-structured questions are difficult to formulate. Professional development interventions are needed for in-service educators to adopt the PBL strategy. The purposively selected educators had to implement PBL in their classrooms after the intervention to develop their practice and then reflect on the implementation. They had to indicate their achievements and challenges. This study differs from previous studies in that the rural educators implemented PBL in their own classrooms and reflected on their experiences, beliefs, and attitudes regarding PBL. Theoretical Framework: The study was grounded in Vygotskian sociocultural theory. According to Vygotsky, a child's cognitive development is sustained by the interaction between the child and more able peers in the immediate environment. The theory suggests that social interactions in small groups create an opportunity for learners to form concepts and skills better than working individually. PBL emphasizes learning in small groups. Research Methodology: An exploratory case study was employed, as the study did not seek conclusive evidence. Non-probability purposive sampling was adopted to choose eight schools from 89 rural public schools. In each school, two educators teaching physical sciences in grades 10 and 11 were approached (N = 16). The research instruments were questionnaires, interviews, and a lesson observation protocol.
Two open-ended questionnaires were developed, administered before and after the intervention, and analyzed thematically; three themes were identified. The semi-structured interview responses were likewise coded and transcribed into three themes. Subsequently, the Reformed Teaching Observation Protocol (RTOP) was adopted for lesson observation and analyzed using five constructs. Results: Evidence from the questionnaires before and after the intervention shows that participants knew better what was required to develop an ill-structured problem during the implementation. Furthermore, the interviews indicate that participants had positive views about the PBL strategy. They stated that they only act as facilitators and that learners’ problem-solving and critical thinking skills are enhanced. They suggested a change in the curriculum to adopt the PBL strategy. However, most participants may not continue to apply the PBL strategy, stating that it is time-consuming and makes it difficult to complete the Annual Teaching Plan (ATP). They complained about materials and equipment and about learners' readiness to work. Evidence from the RTOP shows that after the intervention, participants learned to encourage exploration and to use learners' questions and comments to determine the direction and focus of classroom discussions.
Keywords: problem-solving, self-directed, critical thinking, intervention
Procedia PDF Downloads 119
162 Method of Nursing Education: History Review
Authors: Cristina Maria Mendoza Sanchez, Maria Angeles Navarro Perán
Abstract:
Introduction: Nursing as a profession, from its initial formation and its subsequent development in practice, has been built and identified mainly through its technical competence and professionalization within the positivist approach of the XIX century, which provides a conception of disease built on the biomedical paradigm, where the care provided is focused more on physiological processes and the disease than on the suffering person understood as a whole. The main issue in need of study here is a review of the nursing profession's history, to learn what the nursing profession was like before the XIX century. It is unclear whether there were organizations or people with knowledge about looking after others, or whether many people survived by chance. Holistic care, in which the appearance of disease directly affects all of a person's dimensions (physical, emotional, cognitive, social, and spiritual), is not a concept from the 21st century: it has been common practice, most probably since established life in this world, with the final purpose of covering all these perspectives through quality care. Objective: In this paper, we describe and analyze the history of nursing education, reviewing and analyzing the theoretical foundations of clinical teaching and learning in nursing, with the final purpose of describing the development of the nursing profession throughout history. Method: We have carried out a descriptive systematic review, systematically searching for manuscripts and articles in the following health science databases: PubMed, Scopus, Web of Science, Temperamentvm, and CINAHL. The selection of articles was made according to PRISMA criteria, with a critical reading of the full texts using the CASPe method.
To complement this, we have read a range of historical and contemporary sources to support the review, such as the manuals of Florence Nightingale and John of God, as primary manuscripts to establish the origin of modern nursing and its professionalization. We have considered and applied ethical considerations of data processing. Results: After applying the inclusion and exclusion criteria in our search of PubMed, Scopus, Web of Science, Temperamentvm, and CINAHL, we obtained 51 research articles. We analyzed them by year of publication and type of study. The articles obtained show the importance of our background as a profession in public health before modern times, and of reviewing our past to face challenges in the near future. Discussion: The important influence of key figures other than Nightingale has been overlooked, and it emerges that nursing management and the development of the professional body have a longer and more complex history than is generally accepted. Conclusions: There is a paucity of studies on the subject of the review, so very precise evidence and recommendations about nursing before modern times cannot be extracted. Even so, an increase in research on nursing history has been observed. In light of the aspects analyzed, the need emerges for new research on the history of nursing from this perspective, in order to germinate studies of the historical construction of care before the XIX century and of the theories created then. We can assert that knowledge and ways of care were taught before the XIX century, but they were not called theories, as these concepts were created in modern times.
Keywords: nursing history, nursing theory, Saint John of God, Florence Nightingale, learning, nursing education
Procedia PDF Downloads 112
161 A Critical Analysis of How the Role of the Imam Can Best Meet the Changing Social, Cultural, and Faith-Based Needs of Muslim Families in 21st Century Britain
Authors: Christine Hough, Eddie Abbott-Halpin, Tariq Mahmood, Jessica Giles
Abstract:
This paper draws together the findings from two research studies, each undertaken with cohorts of South Asian Muslim respondents located in the North of England between 2017 and 2019. The first study, entitled Faith Family and Crime (FFC), investigated the extent to which a Muslim family’s social and health well-being is affected by a family member’s involvement in the Criminal Justice System (CJS). This study captured a range of data through a detailed questionnaire and structured interviews. The data from the interview transcripts were analysed using open coding and an application of aspects of the grounded theory approach. The findings provide clear evidence that the respondents were neither well-informed nor supported throughout the processes of the CJS, from arrest to post-sentencing. These experiences gave rise to mental and physical stress, potentially unfair sentencing, and a significant breakdown in communication within the respondents’ families. They serve to highlight a particular aspect of complexity in the current needs of those South Asian Muslim families who find themselves involved in the CJS, an aspect closely connected to family structure, culture, and faith. The second study, referred to throughout this paper as #ImamsBritain (which provides the majority of the content for this paper), explores how Imams, in their role as community faith leaders, can best address the complex and changing needs of South Asian Muslim families, such as those that emerged in the findings from FFC. The changing socio-economic and political climates of the last thirty or so years have brought about significant changes to the lives of Muslim families, and these have created more complex levels of social, cultural, and faith-based needs for families and individuals. As a consequence, Imams now have much greater demands made of them, and so their role has undergone far-reaching changes in response.
The #ImamsBritain respondents identified a pressing need to develop a wider range of pastoral and counseling skills, which they saw as extending far beyond the traditional role of the Imam as a religious teacher and spiritual guide. The #ImamsBritain project was conducted with a cohort of British Imams in the North of England. Data were collected firstly through a questionnaire relating to the respondents’ training and development needs and then analysed using the Delphi approach. Through Delphi, the data were scrutinized in depth using interpretative content analysis. The findings from this project reflect the respondents’ individual perceptions of the kind of training and development they need to fulfill their role in 21st Century Britain. They also provide a unique framework for constructing a professional guide for Imams in Great Britain. The discussions and critical analyses in this paper draw on the discourses of professionalization and pastoral care, as well as relevant reports and reviews on Imam training in Europe and Canada.
Keywords: criminal justice system, faith and culture, Imams, Muslim community leadership, professionalization, South Asian family structure
Procedia PDF Downloads 138
160 Quality Assessment of Pedestrian Streets in Iran: Case Study of Saf, Tehran
Authors: Fstemeh Rais Esmaili, Ehsan Ranjbar
Abstract:
Pedestrian streets, as one type of urban public space, have an important role in improving the quality of urban life. In Iran, the planning and design of pedestrian streets is in its early stages. Despite the start of this approach in Iran and the design of several pedestrian streets, there are still no organized studies on the quality assessment of pedestrian streets. As a result, the strengths and weaknesses of the initial experiences have not been utilized. This inattention to quality assessment has caused the design of pedestrian streets to be limited to vehicle traffic control and preliminary actions like paving, so that the special potential of pedestrian streets for creating social, livable, and dynamic public spaces has not been used. This article, as an organized study of the quality assessment of pedestrian streets in Iran, pursues two main goals: first, introducing a framework for the quality assessment of pedestrian streets in Iran, and second, creating a context for improving the quality of pedestrian streets, especially for further experiences. The main research methods are description and context analysis. Based on a comparative analysis of ideas about quality, a consideration of international and local case studies, and an analysis of the existing condition of Saf Pedestrian Street, a particular model for quality assessment has been introduced. In this model, the main components and assessment criteria are presented. On the basis of this model, a questionnaire and a checklist for assessment have been prepared. The questionnaire and interviews were used to assess qualities that are in direct contact with people, and the checklist was used by the authors to analyze visual qualities through observation. The results of the questionnaire and checklist show that 7 of the 11 primary components (diversity, flexibility, cleanness, legibility and imageability, identity, livability, and form and physical setting) are rated low or very low in quality degree.
Three components (efficiency, comfort, and distinctiveness) have medium and low quality degrees, and one component (access, linkage, and permeability) has a high quality degree. Therefore, based on the implemented analysis, Saf Pedestrian Street needs to be improved, and its quality improvement priorities are determined based on the presented criteria. Comparing the final results with the existing condition reveals: a shortage of services for satisfying users' needs; inflexibility and the impossibility of using spaces at various times; a lack of facilities for different climatic conditions; a lack of amenities such as drinking fountains; inappropriate design of existing urban furniture, such as garbage cans, creating pollution and an unsuitable view; a lack of visual attractions; the neglect of disabled persons in the design of entrances; a shortage of benches and their undesirable design; a lack of vegetation; the absence of special characters that would distinguish the street from other streets; barriers to people taking part in the space, causing a lack of affiliation; a lack of appropriate elements for leisure time; and a lack of exhilaration in the space. On the other hand, the results show high access and permeability, high safety, less noise pollution and more calm, comfortable movement along the way due to suitable pavement, and economic efficiency as the strengths of Saf Pedestrian Street.
Keywords: pedestrian streets, quality assessment, quality criteria, Saf Pedestrian Street
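The grading step described above (mapping each component's aggregated questionnaire score onto a quality degree) can be sketched as follows. The thresholds, the five-point scale, and the component scores are invented for illustration; the study does not publish its scoring rule:

```python
# Hypothetical sketch: classify a component's mean score into a quality degree.
# Scale, thresholds, and the example scores below are assumptions, not study data.

def quality_degree(mean_score, scale_max=5):
    """Map a mean questionnaire score to a coarse quality degree."""
    ratio = mean_score / scale_max
    if ratio >= 0.8:
        return "high"
    if ratio >= 0.6:
        return "medium"
    if ratio >= 0.4:
        return "low"
    return "very low"

components = {                           # invented mean scores per component
    "access, linkage and permeability": 4.3,
    "comfort": 3.1,
    "diversity": 1.8,
    "legibility and imageability": 1.5,
}

for name, score in components.items():
    print(f"{name}: {quality_degree(score)}")
```

Ranking components by their degree (and, within a degree, by score) is one straightforward way to derive the improvement priorities mentioned above.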
Procedia PDF Downloads 255
159 Epidemiological Patterns of Pediatric Fever of Unknown Origin
Authors: Arup Dutta, Badrul Alam, Sayed M. Wazed, Taslima Newaz, Srobonti Dutta
Abstract:
Background: In today's world, with modern science and contemporary technology, many diseases can be quickly identified or ruled out, but fever of unknown origin (FUO) in children still presents diagnostic difficulties in clinical settings. Any fever that reaches 38 °C and lasts for more than seven days without a known cause is classified as a fever of unknown origin (FUO). Despite tremendous progress in the medical sector, FUO persists as a major health issue and a major contributor to morbidity and mortality, particularly in children, and its spectrum is sometimes unpredictable. The etiology is influenced by geographic location, age, socioeconomic level, frequency of antibiotic resistance, and genetic vulnerability. Since there are no established diagnostic algorithms, doctors must evaluate each patient individually with extreme caution. A persistent fever poses difficulties for both the patient and the doctor. This prospective observational study was carried out in a Bangladeshi tertiary care hospital from June 2018 to May 2019 with the goal of identifying the epidemiological patterns of fever of unknown origin in pediatric patients. Methods: It was a hospital-based prospective observational study carried out on 106 children (between 2 months and 12 years of age) with a prolonged fever of >38.0 °C lasting for more than 7 days without a clear source. Children with other chronic diseases or known immunodeficiency disorders were excluded. Clinical practices that helped determine the definitive etiology were assessed. Initial testing included a complete blood count, a routine urine examination, PBF, a chest X-ray, CRP measurement, blood cultures, serology, and additional pertinent investigations. The analysis focused mainly on the etiological results. All study data were analyzed using SPSS 21.
Findings: A total of 106 patients identified as having FUO were assessed; over half (57.5%) were female, and the largest group (40.6%) fell within the 1 to 3-year age range. The study categorized the etiological outcomes into five groups: infections, malignancies, connective tissue disorders, miscellaneous, and undiagnosed. Infections were found to be the main cause, at 44.3% of cases, followed by undiagnosed cases at 31.1%, malignancies at 10.4%, miscellaneous causes at 8.5%, and connective tissue disorders at 4.7%. Hepato-splenomegaly was seen in patients with enteric fever, malaria, acute lymphoid leukemia, lymphoma, and hepatic abscesses, either by itself or in combination with other findings. About 53% of undiagnosed patients also had hepato-splenomegaly. Conclusion: Infections are the primary cause of pyrexia of unknown origin (PUO) in children, with undiagnosed cases being the second most common category. An incremental approach is beneficial in the diagnostic process: non-invasive examinations are used to diagnose infections and connective tissue disorders, while invasive investigations are used to diagnose cancer and other ailments. According to this study, the prevalence of undiagnosed disease remains remarkable, so a thorough history and physical examination are necessary in order to reach a precise diagnosis.
Keywords: children, diagnostic challenges, fever of unknown origin, pediatric fever, undiagnosed diseases
Procedia PDF Downloads 271
158 Development of Adaptive Proportional-Integral-Derivative Feeding Mechanism for Robotic Additive Manufacturing System
Authors: Andy Alubaidy
Abstract:
In this work, a robotic additive manufacturing system (RAMS) that is capable of three-dimensional (3D) printing in six degrees of freedom (DOF) with very high accuracy and virtually on any surface has been designed and built. One of the major shortcomings of existing 3D printer technology is the limitation to three DOF, which results in prolonged fabrication time. Depending on the techniques used, it usually takes at least two hours to print small objects and several hours for larger objects. Another drawback is the size of the printed objects, which is constrained by the physical dimensions of most low-cost 3D printers, which are typically small. In such cases, large objects are produced by dividing them into smaller components that fit the printer’s workable area; they are then glued, bonded, or otherwise attached to create the required object. Another shortcoming is material constraints and the need to fabricate a single part using different materials. With the flexibility of a six-DOF robot, the RAMS has been designed to overcome these problems. A feeding mechanism using an adaptive Proportional-Integral-Derivative (PID) controller is utilized along with a National Instruments CompactRIO (NI cRIO), an ABB robot, and off-the-shelf sensors. The RAMS has the ability to 3D print virtually anywhere in six degrees of freedom with very high accuracy; it is equipped with an ABB IRB 120 robot to achieve this level of accuracy. In order to convert computer-aided design (CAD) files to a digital format that is acceptable to the robot, Hypertherm Robotic Software Inc.’s state-of-the-art slicing software, “ADDMAN,” is used. ADDMAN is capable of converting any CAD file into RAPID code (the programming language for ABB robots), and the robot uses the generated code to perform the 3D printing. To control the entire process, a National Instruments CompactRIO (cRIO-9074) is connected to, and communicates with, the robot and a purpose-built feeding mechanism.
The feeding mechanism consists of two major parts: the cold-end and the hot-end. The cold-end consists of what is conventionally known as an extruder. Typically, a stepper motor is used to control the push on the material; however, for optimum control, a DC motor is used instead. The hot-end consists of a melt-zone, nozzle, and heat-brake. The melt-zone ensures a thorough melting effect and consistent output from the nozzle. Nozzles are made of brass for thermal conductivity, while the melt-zone comprises a heating block and a ceramic heating cartridge that transfers heat to the block. The heat-brake ensures that there is no heat creep-up effect, as this would swell the material and prevent consistent extrusion. A control system embedded in the cRIO is developed using NI LabVIEW, which utilizes adaptive PID to govern the heating cartridge in conjunction with a thermistor. The thermistor sends temperature feedback to the cRIO, which increases or decreases heating power based on the system output. Since different materials have different melting points, the system allows the temperature to be adjusted as the material is varied.
Keywords: robotic, additive manufacturing, PID controller, cRIO, 3D printing
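The hot-end control loop described above can be sketched in simulation. This is a minimal illustration, not the authors' LabVIEW implementation: the first-order thermal plant model, the PID gains, the power limit, and the per-material setpoint table are all assumptions introduced here.

```python
# Minimal sketch of PID temperature control for a hot-end heater.
# The thermal plant (heat capacity C, loss coefficient k_loss) and all
# gains/setpoints are illustrative assumptions, not measured values.

# Per-material setpoints: a simple stand-in for adapting the controller
# to materials with different melting points.
MATERIAL_SETPOINTS_C = {"PLA": 200.0, "ABS": 240.0}

def simulate_pid(setpoint_c, t_end_s=200.0, dt=0.1):
    C = 10.0        # heat capacity of the heating block, J/degC (assumed)
    k_loss = 0.5    # heat loss to ambient, W/degC (assumed)
    t_amb = 25.0
    kp, ki, kd = 20.0, 2.0, 1.0   # PID gains (assumed)
    p_max = 500.0                 # heater power limit, W (assumed)

    temp, integral, prev_err = t_amb, 0.0, setpoint_c - t_amb
    for _ in range(int(t_end_s / dt)):
        err = setpoint_c - temp
        deriv = (err - prev_err) / dt
        power = kp * err + ki * integral + kd * deriv
        if 0.0 <= power <= p_max:
            integral += err * dt      # integrate only while unsaturated (anti-windup)
        power = min(max(power, 0.0), p_max)
        # Plant: energy balance on the heating block.
        temp += dt * (power - k_loss * (temp - t_amb)) / C
        prev_err = err
    return temp

final_temp = simulate_pid(MATERIAL_SETPOINTS_C["PLA"])
```

With these assumed gains the loop is overdamped and settles at the setpoint well within the simulated 200 s; swapping the material key selects a different target temperature, mimicking the temperature adjustment described in the abstract.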
Procedia PDF Downloads 217
157 Entrepreneurial Venture Creation through Anchor Event Activities: Pop-Up Stores as On-Site Arenas
Authors: Birgit A. A. Solem, Kristin Bentsen
Abstract:
Scholarly attention in entrepreneurship is currently directed towards understanding entrepreneurial venture creation as a process: the journey of new economic activities from nonexistence to existence, often studied through flow or network models. To complement existing research on entrepreneurial venture creation with more interactivity-based research of organized activities, this study examines two pop-up stores as anchor events involving the on-site activities of fifteen participating entrepreneurs launching their new ventures. The pop-up stores were arranged in two middle-sized Norwegian cities and contained different brand stores that brought together actors of sub-networks and communities executing venture creation activities. The pop-up stores became on-site arenas for the entrepreneurs to create, maintain, and rejuvenate their networks, while also becoming venues for the temporal coordination of activities involving existing and potential customers in their venture creation. In this work, we apply a conceptual framework based on frequently addressed dilemmas within entrepreneurship theory (discovery/creation, causation/effectuation) to further shed light on the broad range of on-site anchor event activities and their venture creation outcomes. The dilemma-based concepts are applied as an analytic toolkit to pursue answers regarding the nature of anchor event activities typically found within entrepreneurial venture creation and how these activities affect entrepreneurial venture creation outcomes. Our study combines researcher participation with 200 hours of observation and twenty in-depth interviews. Data analysis followed established guidelines for hermeneutic analysis and was intimately intertwined with ongoing data collection. Data were coded and categorized in NVivo 12 software and iterated several times as patterns steadily developed.
Our findings suggest that the core anchor event activities typically found within entrepreneurial venture creation are: concept and product experimentation with visitors, arrangements to socialize (evening specials, auctions, and exhibitions), store-in-store concepts, arranged meeting places for peers, and close connection with the municipality and property owners. Further, this work points to four main entrepreneurial venture creation outcomes derived from the core anchor event activities: (1) venture attention, (2) venture idea-realization, (3) venture collaboration, and (4) venture extension. Our findings show that the outcomes vary depending on which anchor event activities are applied. Theoretically, this study offers two main implications. First, anchor event activities are both discovered and created, following the logic of causation, while at the same time being experimental, based on the “learning by doing” principles of effectuation during execution. Second, our research enriches prior studies on venture creation as a process. In this work, entrepreneurial venture creation activities and outcomes are understood through pop-up stores as on-site anchor event arenas, particularly suitable for the interactivity-based research requested by the entrepreneurship field. This study also reveals important managerial implications, such as that entrepreneurs should allow themselves to find creative physical venture creation arenas (e.g., pop-up stores, showrooms), as well as collaborate with partners when discovering and creating concepts and activities based on new ideas. In this way, they allow themselves both to strategically plan for and to continually experiment with their venture.
Keywords: anchor event, interactivity-based research, pop-up store, entrepreneurial venture creation
Procedia PDF Downloads 91
156 Readout Development of a LGAD-based Hybrid Detector for Microdosimetry (HDM)
Authors: Pierobon Enrico, Missiaggia Marta, Castelluzzo Michele, Tommasino Francesco, Ricci Leonardo, Scifoni Emanuele, Vincenzo Monaco, Boscardin Maurizio, La Tessa Chiara
Abstract:
Clinical outcomes collected over the past three decades have suggested that ion therapy has the potential to be a treatment modality superior to conventional radiation for several types of cancer, including recurrences, as well as for other diseases. Although the results have been encouraging, numerous treatment uncertainties remain a major obstacle to the full exploitation of particle radiotherapy. To overcome these uncertainties and optimize treatment outcome, an accurate description of radiation quality, linking the physical dose to its biological effects, is of paramount importance. Microdosimetry was developed as a tool to improve the description of radiation quality. By recording the energy deposition at the micrometric scale (the typical size of a cell nucleus), this approach takes into account the non-deterministic nature of atomic and nuclear processes and creates a direct link between the dose deposited by radiation and the biological effect induced. Microdosimeters measure the spectrum of lineal energy y, defined as the energy deposited in the detector divided by the most probable track length travelled by the radiation. The latter is provided by the so-called “Mean Chord Length” (MCL) approximation and is related to the detector geometry. To improve the characterization of the radiation field quality, we define a new quantity that replaces the MCL with the actual particle track length inside the microdosimeter. In order to measure this new quantity, we propose a two-stage detector consisting of a commercial Tissue Equivalent Proportional Counter (TEPC) and 4 layers of Low Gain Avalanche Detector (LGAD) strips. The TEPC records the energy deposition in a region equivalent to 2 µm of tissue, while the LGADs are very suitable for particle tracking because they can be thinned down to tens of micrometers and respond quickly to ionizing radiation. The concept of HDM has been investigated and validated with Monte Carlo simulations.
Currently, a dedicated readout is under development. This two-stage detector requires two different systems whose complementary information must be joined for each event: the energy deposition in the TEPC and the respective track length recorded by the LGAD tracker. This challenge is being addressed by implementing System-on-Chip (SoC) technology, relying on Field Programmable Gate Arrays (FPGAs) based on the Zynq architecture. The TEPC readout consists of three different signal amplification legs and is carried out by three ADCs mounted on an FPGA board. The LGAD strip signals are processed by dedicated chips, and the activated strips are finally stored, again relying on FPGA-based solutions. In this work, we will provide a detailed description of the HDM geometry and the SoC solutions that we are implementing for the readout.
Keywords: particle tracking, ion therapy, low gain avalanche diode, tissue equivalent proportional counter, microdosimetry
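For a convex sensitive volume under uniform isotropic irradiation, the mean chord length in the MCL approximation is given by Cauchy's formula, MCL = 4V/S (which reduces to 2d/3 for a sphere of diameter d). The sketch below applies it to a spherical 2 µm tissue-equivalent site, the size quoted in the abstract; the 1 keV energy deposition is an assumed illustrative value, not a measured one.

```python
import math

def mean_chord_length_sphere(diameter_um):
    """Cauchy formula for a convex body: MCL = 4V/S; for a sphere this is 2d/3."""
    r = diameter_um / 2.0
    volume = (4.0 / 3.0) * math.pi * r ** 3
    surface = 4.0 * math.pi * r ** 2
    return 4.0 * volume / surface

def lineal_energy_keV_per_um(energy_deposited_keV, diameter_um):
    """Lineal energy y = energy imparted / mean chord length."""
    return energy_deposited_keV / mean_chord_length_sphere(diameter_um)

mcl = mean_chord_length_sphere(2.0)     # 2 um tissue-equivalent site
y = lineal_energy_keV_per_um(1.0, 2.0)  # assumed 1 keV energy deposition
```

Replacing `mean_chord_length_sphere` with an event-by-event track length from the LGAD tracker is precisely the substitution the proposed new quantity makes.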
Procedia PDF Downloads 175
155 Varieties of Capitalism and Small Business CSR: A Comparative Overview
Authors: Stéphanie Looser, Walter Wehrmeyer
Abstract:
Given the limited research on Small and Medium-sized Enterprises’ (SMEs) contribution to Corporate Social Responsibility (CSR), and the even scarcer research on Swiss SMEs, this paper helps to fill these gaps by enabling the identification of supranational SME parameters and contributing to this evolving field. Thus, the paper investigates the current state of SME practices in Switzerland and across 15 other countries. The framework of this comparative analysis combines the degree to which SMEs demonstrate an explicit (or business case) approach, or see CSR as an implicit moral activity, with an assessment of their attributes under the “varieties of capitalism” perspective. According to previous studies, liberal market economies, e.g. in the United States (US) or United Kingdom (UK), are aligned with extrinsic CSR, while coordinated market systems (in Central European or Asian countries) evolve implicit CSR agendas. To outline Swiss small business CSR patterns in particular, 40 SME owner-managers were interviewed. The transcribed interviews were coded utilising MAXQDA for qualitative content analysis. A secondary data analysis of results from different countries (i.e., Australia, Austria, Chile, Cameroon, Catalonia (notably a part of Spain that seeks autonomy), China, Finland, Germany, Hong Kong (a special administrative region of China), Italy, Netherlands, Singapore, Spain, Taiwan, UK, US) lays the groundwork for this comparative study on small business CSR. Applying the same coding categories (in MAXQDA) to the interview analysis and the secondary data research, while following grounded theory rules to refine and keep track of ideas, generated testable hypotheses and retrospective comparative power regarding implicit (and the lower likelihood of explicit) CSR in SMEs. The paper identifies Swiss small business CSR as deep, profound, “soul”, and an implicit part of day-to-day business.
Similar to most Central European, Mediterranean, Nordic, and Asian countries, explicit CSR is still very rare in Swiss SMEs. Astonishingly, UK and US SMEs also follow this pattern in spite of their strong and distinct liberal market economies. Though other findings show that nationality matters, this research concludes that SME culture and its informal CSR agenda are strongly formative, superseding even the forces of market economies, national cultural patterns, and language. In a world of “big business”, explicit “business case” CSR, and the mantra that “CSR must pay”, this study points to a distinctly implicit small business CSR model built on trust, physical closeness, and virtues that is largely detached from the bottom line. This pattern holds across different cultural contexts, and it is concluded that SME culture is stronger than nationality, leading to a supra-national, monolithic SME CSR approach. Hence, classifications of countries by their market system or type of capitalism, as found in the comparative capitalism literature, do not match the CSR practices of SMEs, as they do not mirror the peculiarities of their business. This raises questions about the universality and generalisability of management concepts.
Keywords: CSR, comparative study, cultures of capitalism, small, medium-sized enterprises
Procedia PDF Downloads 433
154 Heat Transfer Modeling of 'Carabao' Mango (Mangifera indica L.) during Postharvest Hot Water Treatments
Authors: Hazel James P. Agngarayngay, Arnold R. Elepaño
Abstract:
Mango is the third most important export fruit in the Philippines. Despite the expanding mango trade in the world market, postharvest losses caused by pests and diseases are still prevalent. Many disease control and pest disinfestation methods have been studied and adopted. Heat treatment is necessary to eliminate pests and diseases in order to pass the quarantine requirements of importing countries. During heat treatments, temperature and time are critical because fruits can easily be damaged by over-exposure to heat. Modeling the process enables researchers and engineers to study the behaviour of the temperature distribution within the fruit over time. Understanding physical processes through modeling and simulation also saves time and resources because of reduced experimentation. This research aimed to simulate the heat transfer mechanism and predict the temperature distribution in ‘Carabao’ mangoes during hot water treatment (HWT) and extended hot water treatment (EHWT). The simulation was performed in ANSYS CFD software, using the ANSYS CFX solver. The simulation process involved model creation, mesh generation, defining the physics of the model, solving the problem, and visualizing the results. Boundary conditions consisted of the convective heat transfer coefficient and a constant free-stream temperature. The three-dimensional energy equation for transient conditions was numerically solved to obtain heat flux and transient temperature values. The solver utilized the finite volume method of discretization. To validate the simulation, actual data were obtained through experiment. The goodness of fit was evaluated using the mean temperature difference (MTD). Also, a t-test was used to detect significant differences between the data sets. Results showed that the simulations were able to estimate temperatures accurately, with MTDs of 0.50 and 0.69 °C for the HWT and EHWT, respectively. This indicates good agreement between the simulated and actual temperature values.
The data included in the analysis were taken at different locations of probe punctures within the fruit. Moreover, t-tests showed no significant differences between the two data sets. Maximum heat fluxes obtained at the beginning of the treatments were 394.15 and 262.77 J·s⁻¹ for HWT and EHWT, respectively. These values decreased abruptly during the first 10 seconds, and a gradual decrease was observed thereafter. Data on heat flux are necessary in the design of heaters: if underestimated, the heating component of a machine will not be able to provide the heat required by certain operations; if overestimated, energy and resources are wasted. This study demonstrated that the simulation was able to estimate temperatures accurately. Thus, it can be used to evaluate the influence of various treatment conditions on the temperature-time history in mangoes. When combined with information on insect mortality and quality degradation kinetics, it could predict the efficacy of a particular treatment and guide the appropriate selection of treatment conditions. The effect of various parameters on heat transfer rates, such as the boundary and initial conditions as well as the thermal properties of the material, can be systematically studied without performing experiments. Furthermore, the use of ANSYS software in modeling and simulation can be explored for modeling various systems and processes.
Keywords: heat transfer, heat treatment, mango, modeling and simulation
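The transient heating problem can be illustrated with a much simpler model than the paper's 3D finite-volume simulation: a one-dimensional explicit finite-difference slab with a convective surface boundary. All property values below (thermal diffusivity, conductivity, heat transfer coefficient, dimensions, temperatures) are rough assumptions chosen only to make the sketch run, not the study's parameters.

```python
# 1D transient conduction in a slab with convection at the surface
# (explicit FTCS scheme). A crude stand-in for the 3D mango model;
# every physical parameter here is assumed, not taken from the study.

alpha = 1.5e-7        # thermal diffusivity, m^2/s (assumed, fruit-like)
k = 0.5               # thermal conductivity, W/(m K) (assumed)
h = 500.0             # convective coefficient in hot water, W/(m^2 K) (assumed)
half_thickness = 0.05 # m, center to surface (assumed)
t_init, t_water = 25.0, 48.0  # degC (assumed)

n = 11                        # nodes from center (0) to surface (n-1)
dx = half_thickness / (n - 1)
dt = 1.0                      # s; stable since alpha*dt/dx**2 << 0.5
r = alpha * dt / dx ** 2
bi = h * dx / k               # grid Biot number for the surface node

T = [t_init] * n
for _ in range(3600):         # one hour of treatment
    Tn = T[:]
    Tn[0] = T[0] + 2 * r * (T[1] - T[0])                 # symmetry at center
    for i in range(1, n - 1):
        Tn[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
    # Surface node: conduction inward plus convection from the hot water.
    Tn[-1] = T[-1] + 2 * r * (T[-2] - T[-1]) + 2 * r * bi * (t_water - T[-1])
    T = Tn

center_temp, surface_temp = T[0], T[-1]
```

Even this toy model reproduces the qualitative behaviour the abstract describes: the surface approaches the water temperature quickly while the center lags well behind, which is why treatment time is as critical as temperature.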
Procedia PDF Downloads 247
153 Health Risk Assessment from Potable Water Containing Tritium and Heavy Metals
Authors: Olga A. Momot, Boris I. Synzynys, Alla A. Oudalova
Abstract:
Obninsk is situated in the Kaluga region, 100 km southwest of Moscow, on the left bank of the Protva River. Several enterprises utilizing nuclear energy operate in the town. In regions where radiation-hazardous facilities are located, special attention has traditionally been paid to radioactive gas and aerosol releases into the atmosphere, liquid waste discharges into the Protva river, and groundwater pollution. Municipal intakes involve 34 wells arranged over 15 km in a north-south sequence along the foot of the left slope of the Protva river valley. The northern and southern water intakes are upstream and downstream of the town, respectively. They belong to river valley intakes with mixed feeding, i.e. precipitation infiltration is responsible for a smaller part of the groundwater, while a greater amount is formed by overflow from the Protva. The water intakes are maintained by the Protva river runoff, the volume of which depends on the precipitation and the watershed area. Groundwater contamination with tritium was first detected in the sanitary-protective zone of the Institute of Physics and Power Engineering (SRC-IPPE) by Roshydromet researchers during the “Program of radiological monitoring in the territory of nuclear industry enterprises”. A comprehensive survey of the SRC-IPPE’s industrial site and adjacent territories revealed that research nuclear reactors and accelerators where tritium targets are applied, as well as radioactive waste storages, could be considered potential sources of technogenic tritium. All the above sources are located within the sanitary controlled area of the intakes. Tritium activity in the water of springs and wells near the SRC-IPPE is about 17.4 – 3200 Bq/l. The observed values of tritium activity are below the intervention levels (7600 Bq/l for inorganic compounds and 3300 Bq/l for organically bound tritium). The risk has been assessed to estimate the possible effect of these tritium concentrations on human health.
Data on tritium concentrations in pipeline drinking water were used for the calculations. The activity of ³H amounted to 10.6 Bq/l and corresponded to a risk from such water consumption of ~3·10⁻⁷ year⁻¹. This risk value is close in magnitude to the individual annual death risk for a population living near an NPP (1.6·10⁻⁸ year⁻¹) and at the same time corresponds to the level of tolerable risk (10⁻⁶), falling within the “risk optimization” region, i.e. the sphere for planning economically sound measures on exposure risk reduction. To estimate the chemical risk, physical and chemical analyses were made of the waters from all springs and wells near the SRC-IPPE. The chemical risk from groundwater contamination was estimated according to US EPA guidance. The risk of carcinogenic diseases from drinking water consumption amounts to 5·10⁻⁵. According to the accepted classification, the health risk in the case of spring water consumption is inadmissible. The compared assessments of the risk associated with tritium exposure, on the one hand, and with dangerous chemical (e.g. heavy metal) contamination of Obninsk drinking water, on the other, have confirmed that it is the chemical pollutants that are responsible for the health risk.
Keywords: radiation-hazardous facilities, water intakes, tritium, heavy metal, health risk
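The radiological part of such an assessment follows a simple chain: annual intake (Bq) → committed dose (Sv) → annual risk. The sketch below uses generic ICRP-style coefficients as assumptions; the study's own dose and risk factors evidently differ (its quoted result is ~3·10⁻⁷ year⁻¹), so the numerical output here is illustrative only.

```python
# Illustrative drinking-water risk chain for tritiated water (HTO).
# The activity is taken from the abstract; all other coefficients are
# assumed generic values, not those used in the study.

activity_bq_per_l = 10.6        # measured 3H activity in pipeline water
intake_l_per_year = 730.0       # ~2 l/day drinking-water intake (assumption)
dose_coeff_sv_per_bq = 1.8e-11  # HTO ingestion dose coefficient (ICRP-type value)
risk_per_sv = 5.5e-2            # nominal cancer risk coefficient (assumption)

annual_intake_bq = activity_bq_per_l * intake_l_per_year
annual_dose_sv = annual_intake_bq * dose_coeff_sv_per_bq  # committed dose
annual_risk = annual_dose_sv * risk_per_sv                # per year of consumption
```

With these assumed coefficients the annual risk comes out on the order of 10⁻⁸, i.e. well inside the tolerable-risk region the abstract discusses, and far below the chemical carcinogenic risk of 5·10⁻⁵.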
Procedia PDF Downloads 240
152 A Clinico-Bacteriological Study and Their Risk Factors for Diabetic Foot Ulcer with Multidrug-Resistant Microorganisms in Eastern India
Authors: Pampita Chakraborty, Sukumar Mukherjee
Abstract:
This study was done to determine the bacteriological profile and antibiotic resistance of the isolates and to find out the potential risk factors for infection with multidrug-resistant organisms. Diabetic foot ulcer is a major medical, social, and economic problem and a leading cause of morbidity and mortality, especially in developing countries like India. Twenty-five percent of all diabetic patients develop a foot ulcer at some point in their lives, which is highly susceptible to infection that spreads rapidly, leading to overwhelming tissue destruction and subsequent amputation. Infection with multidrug-resistant organisms (MDROs) may increase the cost of management and may cause additional morbidity and mortality. Proper management of these infections requires appropriate antibiotic selection based on culture and antimicrobial susceptibility testing. Early diagnosis of microbial infections aims to institute appropriate antibacterial therapy and avoid further complications. A total of 200 Type 2 Diabetes Mellitus patients with infection were admitted to the GD Hospital and Diabetes Institute, Kolkata; 60 of them, who developed ulcers during the year 2013, were included in this study. A detailed clinical history and physical examination were carried out for every subject. Specimens for microbiological studies were obtained from the ulcer region. Gram-negative bacilli were tested for extended-spectrum beta-lactamase (ESBL) production by the double disc diffusion method. Staphylococcal isolates were tested for susceptibility to oxacillin by the screen agar method and disc diffusion. Potential risk factors for MDRO-positive samples were explored. Gram-negative aerobes were most frequently isolated, followed by gram-positive aerobes. Males were predominant in the study, and the majority of the patients were in the age group of 41-60 years. The presence of neuropathy was observed in 80% of cases, followed by peripheral vascular disease (73%). Proteus spp.
(22) was the most common pathogen isolated, followed by E. coli (17). Staphylococcus aureus was predominant amongst the gram-positive isolates. S. aureus showed a high rate of resistance to the antibiotics tested (63.6%). Other gram-positive isolates were found to be highly resistant to erythromycin, tetracycline, and ciprofloxacin (40% each). All isolates were found to be sensitive to vancomycin and linezolid. ESBL production was noted in Proteus spp. and E. coli. Approximately 70% of the patients were positive for MDROs. MDRO-infected patients had poor glycemic control (HbA1c 11 ± 2). Infection with MDROs is common in diabetic foot ulcers and is associated with risk factors such as inadequate glycemic control, the presence of neuropathy, osteomyelitis, ulcer size, and an increased requirement for surgical treatment. There is a need for continuous surveillance of resistant bacteria to provide the basis for empirical therapy and reduce the risk of complications.
Keywords: diabetic foot ulcer, bacterial infection, multidrug-resistant organism, extended spectrum beta-lactamase
Procedia PDF Downloads 337
151 Typology of Fake News Dissemination Strategies in Social Networks in Social Events
Authors: Mohadese Oghbaee, Borna Firouzi
Abstract:
The emergence of the Internet, and more specifically the formation of social media, has provided the ground for new types of content dissemination. In recent years, social media users have shared information, communicated with others, and exchanged opinions on social events in this space. Much of the information published there is suspicious and produced with the intention of deceiving others. Such content is often called "fake news". Fake news, by mixing with correct information, disrupting its circulation, and misleading public opinion, has the ability to endanger the security of countries and deprive the audience of the basic right of free access to real information. Competing governments, opposition elements, profit-seeking individuals, and even rival organizations, aware of this capacity, act on a large scale to distort and overturn the facts in the virtual space of target countries and communities and to steer public opinion towards their own goals. This process of extensive de-truthing of societies' information space has created a wave of harm and worry all over the world. These concerns have opened a new path of research for the timely containment and reduction of the destructive effects of fake news on public opinion. In addition, the expansion of this phenomenon has the potential to create serious problems for societies, and its impact on events such as the 2016 American elections, Brexit, the 2017 French elections, and the 2019 Indian elections has caused concern and prompted the adoption of countermeasures. A simple look at the growth trend of research in Scopus shows a steady increase in research with the keyword "false information", peaking in 2020 at 524 publications, whereas in 2015 only 30 scientific-research items were published in this field.
Considering that one of the capabilities of social media is to create a context for the dissemination of news and information, both true and false, this article investigates the classification of strategies for spreading fake news in social networks during social events. To achieve this goal, the thematic analysis research method was chosen. First, an extensive library study of global sources was conducted. Then, in-depth interviews were conducted with 18 well-known specialists and experts in the field of news and media in Iran, selected by purposeful sampling. By analyzing the data using the thematic analysis method, strategies were obtained. The strategies identified so far (the research is in progress) include: unrealistically strengthening/weakening the speed and content of the event, stimulating psycho-media movements, targeting emotional audiences such as women, teenagers, and young people, strengthening public hatred, framing reactions to events as legitimate/illegitimate, incitement to physical conflict, simplification of violent protests, and the targeted publication of images and interviews.
Keywords: fake news, social network, social events, thematic analysis
Procedia PDF Downloads 63
150 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters
Authors: Trevor C. Brown, David J. Miron
Abstract:
Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size, and porosity for applications such as heterogeneous catalysis and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants, and Gibbs free energy are dependent on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to extract adsorbate and adsorbent properties from the isotherm data, but these models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high surface area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers; these additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated adsorbent uptake at the monolayer and the equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants.
These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both the dynamic and the measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modeling provides information on the adsorbent (accessible surface area and micropore volume), the adsorbate (molecular areas and volumes), and the thermodynamic (Gibbs free energy) variations of the adsorption sites.
Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics
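As a baseline for the pressure-varying treatment, the constant-parameter Langmuir model q(P) = qₘKP/(1 + KP) can be fitted by ordinary least squares on its linearized form P/q = 1/(qₘK) + P/qₘ. The sketch below illustrates that constant-parameter baseline with invented qₘ and K values; it is not the FLS-PVLR algorithm itself, whose parameters would instead be allowed to drift smoothly with pressure.

```python
# Constant-parameter Langmuir fit via the linearized form
#   P/q = 1/(qm*K) + P/qm   (slope = 1/qm, intercept = 1/(qm*K)).
# qm and K below are invented illustrative values, not data from the paper.

qm_true, K_true = 3.5, 0.8  # monolayer capacity and equilibrium constant

pressures = [0.1 * i for i in range(1, 21)]
uptakes = [qm_true * K_true * p / (1 + K_true * p) for p in pressures]

# Ordinary least squares on (P, P/q).
ys = [p / q for p, q in zip(pressures, uptakes)]
n = len(pressures)
mean_x = sum(pressures) / n
mean_y = sum(ys) / n
sxx = sum((x - mean_x) ** 2 for x in pressures)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(pressures, ys))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

qm_fit = 1.0 / slope        # monolayer uptake capacity
K_fit = slope / intercept   # equilibrium constant
```

On noiseless synthetic data the fit recovers qₘ and K exactly; on real isotherms the residuals of this constant-parameter fit are precisely what motivates letting the parameters vary with pressure.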
Procedia PDF Downloads 233
149 Application of Self-Efficacy Theory in Counseling Deaf and Hard of Hearing Students
Authors: Nancy A. Delich, Stephen D. Roberts
Abstract:
This case study explores using self-efficacy theory in counseling deaf and hard of hearing students in one California school district. Self-efficacy is described as the confidence a student has for performing a set of skills required to succeed at a specific task. When students need to learn a skill, self-efficacy can be a major factor in influencing behavioral change. Self-efficacy is domain specific, meaning that students can have high confidence in their abilities to accomplish a task in one domain, while at the same time having low confidence in their abilities to accomplish another task in a different domain. The communication isolation experienced by deaf and hard of hearing children and adolescents can negatively impact their belief about their ability to navigate life challenges. There is a need to address issues that impact deaf and hard of hearing students’ social-emotional development. Failure to address these needs may result in depression, suicidal ideation, and anxiety among other mental health concerns. Self-efficacy training can be used to address these socio-emotional developmental issues with this population. Four sources of experiences are applied during an intervention: (a) enactive mastery experience, (b) vicarious experience, (c) verbal persuasion, and (d) physiological and affective states. This case study describes the use of self-efficacy training with a coed group of 12 deaf and hard of hearing high school students who experienced bullying at school. Beginning with enactive mastery experience, the counselor introduced the topic of bullying to the group. The counselor educated the students about the different types of bullying while teaching them the terminology, signs and their meanings. The most effective way to increase self-efficacy is through extensive practice. To better understand these concepts, the students practiced through role-playing with the goal of developing self-advocacy skills. 
Vicarious experience is the perception students form of their capabilities by observing others. Viewing other students advocating for themselves, cognitively rehearsing what actions they will and will not take, and teaching each other how to stand up against bullying can strengthen their belief in successfully overcoming bullying. The third source of self-efficacy beliefs is verbal persuasion, which occurs when others express belief in the capabilities of the student. Didactic training and pedagogic materials on bullying were employed as part of the group counseling sessions. The fourth source of self-efficacy appraisals is physiological and affective states. Students expect positive emotions to be associated with successful skilled performance. When students practice new skills, the counselor can apply several strategies to enhance self-efficacy while reducing and controlling emotional and physical states. The intervention plan incorporated all four sources of self-efficacy training during several interactive group sessions on bullying. The students gained an increased understanding of the issues of bullying, strengthening their belief in their ability to perform protective behaviors and deter future occurrences. The intervention plan resulted in a reduction of reported bullying incidents. In conclusion, self-efficacy training can be an effective counseling and teaching strategy for addressing and enhancing the social-emotional functioning of deaf and hard of hearing adolescents.
Keywords: counseling, self-efficacy, bullying, social-emotional development, mental health, deaf and hard of hearing students
Procedia PDF Downloads 351
148 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit
Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili
Abstract:
Metamaterials cross over classic physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations can be achieved with high accuracy. 3D imaging is one of the beneficiaries, in particular Dennis Gabor’s invention: holography. The major difficulty, however, is the lack of a suitable recording medium, so some enhancements were essential, and the 2D counterpart of bulk metamaterials, the so-called metasurface, was introduced. This new class of interfaces eases the recording-medium problem by enabling the phase, amplitude, and polarization of a wave to be tuned at a given frequency. To achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell’s equations. In this context, integral methods have emerged as an important tool for studying electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing the boundary conditions in the form of integral equations. Solving these equations, however, becomes more complicated and time-consuming as the structural complexity increases. Here, the equivalent-circuit method offers the most scalable route to developing an integral-method formulation. To ease the resolution of Maxwell’s equations, the method of Generalised Equivalent Circuit was proposed to transfer the problem from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electrical image of the studied structure using the discontinuity-plane paradigm while taking its environment into account, so that the electromagnetic state of the discontinuity plane is described by generalised test functions modelled by virtual sources that do not store energy.
The environmental effects are included through an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements that combines the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface’s building block consists of a thin gold film, a SiO₂ dielectric spacer, and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effect of varying graphene’s chemical potential on the unit-cell input impedance. We found that varying the complex conductivity of graphene allows the phase and amplitude of the reflection coefficient to be controlled at each element of the array. From the results obtained here, we determined that phase modulation can be realized by adjusting graphene’s complex conductivity. This modulation is a viable alternative to tuning the phase by varying the antenna length because it offers full 2π reflection-phase control.
Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain
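The phase control described in this abstract follows from the dependence of graphene's sheet conductivity on its chemical potential. The following is only a minimal numerical sketch, not the authors' MoM-GEC model: it keeps just the intraband (Drude-like) Kubo term and evaluates normal-incidence reflection from a free-standing conductive sheet; the operating frequency, temperature, and scattering time are assumed illustrative values.

```python
import numpy as np

# Illustrative constants (SI units)
E_CHARGE = 1.602e-19   # electron charge (C)
HBAR = 1.055e-34       # reduced Planck constant (J s)
KB = 1.381e-23         # Boltzmann constant (J/K)
ETA0 = 376.73          # free-space wave impedance (ohm)

def sigma_intraband(freq_hz, mu_c_ev, tau_s=1e-13, temp_k=300.0):
    """Intraband graphene sheet conductivity (S) from the Kubo formula."""
    omega = 2 * np.pi * freq_hz
    mu_c = mu_c_ev * E_CHARGE
    prefactor = 2 * E_CHARGE**2 * KB * temp_k / (np.pi * HBAR**2)
    thermal = np.log(2 * np.cosh(mu_c / (2 * KB * temp_k)))
    return prefactor * thermal * 1j / (omega + 1j / tau_s)

def reflection_coefficient(freq_hz, mu_c_ev):
    """Normal-incidence reflection from a free-standing conductive sheet:
    gamma = -eta0*sigma / (2 + eta0*sigma)."""
    sigma = sigma_intraband(freq_hz, mu_c_ev)
    return -ETA0 * sigma / (2 + ETA0 * sigma)

if __name__ == "__main__":
    f = 1.5e12  # assumed 1.5 THz operating point
    for mu in (0.1, 0.3, 0.6):  # chemical potentials in eV
        g = reflection_coefficient(f, mu)
        print(f"mu_c = {mu:.1f} eV -> |gamma| = {abs(g):.3f}, "
              f"phase = {np.degrees(np.angle(g)):7.2f} deg")
```

Even this crude single-sheet model shows the reflection phase shifting as the chemical potential changes; the full 2π phase range claimed in the abstract requires the resonant grounded-patch geometry analysed with MoM-GEC.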
Procedia PDF Downloads 176
147 Application of Pedicled Perforator Flaps in Large Cavities of the Breast
Authors: Neerja Gupta
Abstract:
Objective: Reconstruction of large cavities of the breast without contralateral symmetrisation. Background: Breast reconstruction encompasses a wide spectrum of procedures, from displacement to regional and distant flaps. Pedicled perforator flaps cover a wide spectrum of reconstructive surgery for all quadrants of the breast, especially in patients with comorbidities. These axial flaps, used singly or as an adjunct, are based on a near-constant perforator vessel; a 2:1 ratio at its entry into the flap is adequate to maintain vascularity. The perforators of the lateral chest wall, viz. LICAP and LTAP, have overlapping perforasomes without clear demarcation. The LTAP is localized in the narrow zone between the lateral breast fold and the anterior axillary line, 2.5-3.8 cm from the fold. MICAPs are localized 1-2 cm from the sternum. Being 1-2 mm in diameter, a single perforator is sufficient to sustain the flap. The LICAP has a dominant perforator in the 6th-11th spaces, while the LTAP has higher-placed dominant perforators in the 4th and 5th spaces. Methodology: Six consecutive patients who underwent breast reconstruction with pedicled perforator flaps were retrospectively analysed. Selection of the flap was based on the size and location of the tumour, anticipated volume loss, willingness to undergo contralateral symmetrisation, cosmetic expectations, and available finances. Three patients underwent vertical LTAP flaps, the distal limit of the flap being the inframammary crease. Three patients underwent MICAP flaps, oriented along the axis of the rib, the distal limit being the anterior axillary line. Preoperative identification was done using a unidirectional handheld Doppler. The flap was raised caudal to cranial, the pivot point of rotation being the vessel's entry into the skin. The donor area was determined by the skin pinch. Flap harvest time was 20-25 minutes. Intraoperative vascularity was assessed by dermal bleed. The patients' immediate pre-operative, post-operative, and follow-up photographs were compared independently by two breast surgeons.
Patients were given the licensed BREAST-Q questionnaire for scoring. Results: The median age of the six patients was 46. Each patient had a hospital stay of 24 hours. None of the patients was willing to undergo contralateral symmetrisation. The specimen dimensions ranged from 8 x 6.8 x 4 cm to 19 x 16 x 9 cm. The reconstructed breast volume ranged from 30 to 45 percent. All wide excisions had free margins on frozen section. The mean flap dimensions were 12 x 5 x 4.5 cm. One LTAP flap underwent marginal necrosis and delayed wound healing due to seroma. Three patients had phyllodes tumours, of which one was borderline and two were benign on final histopathology. The other three patients had invasive ductal cancer and have completed their radiation. At a median follow-up of 7 months, the satisfaction scores were 90 for physical wellbeing and 85 for surgical results. Surgeons scored the results fair to good on the Harvard scale. Conclusion: Pedicled perforator flaps are a valuable option for defects of up to three-eighths of the breast volume. The LTAP is preferred for tumours in the central, upper, and outer quadrants of the breast, and the MICAP for the inner and lower quadrants. The vascularity of the flap depends on the angiosomal territories and on adequate venous and cavity drainage.
Keywords: breast, oncoplasty, pedicled, perforator
Procedia PDF Downloads 187
146 Learning-Teaching Experience about the Design of Care Applications for Nursing Professionals
Authors: A. Gonzalez Aguna, J. M. Santamaria Garcia, J. L. Gomez Gonzalez, R. Barchino Plata, M. Fernandez Batalla, S. Herrero Jaen
Abstract:
Background: Computer science is a field that transcends other disciplines of knowledge because it can support all kinds of physical and mental tasks. Health centres have a growing number and complexity of technological devices, and the population consumes and demands services derived from technology. Nursing education plans have also included competencies related to new technologies, and courses on them are even offered to health professionals. However, nurses still limit their role to the use and evaluation of products built by others. Objective: Develop a teaching-learning methodology for acquiring skills in designing applications for care. Methodology: Blended learning with a group of graduate nurses through official training within a Master's degree. The study sample was selected by intentional sampling without exclusion criteria. The study covers 2015 to 2017. The teaching sessions included a four-hour face-to-face class and between one and three tutorials. The assessment was carried out by a written test consisting of the preparation of an IEEE 830 Standard specification document, where the subject chosen by the student had to be a problem in the area of care. Results: The sample is made up of 30 students: 10 men and 20 women. Nine students had a degree in nursing, 20 a diploma in nursing, and one a degree in computer engineering. Two students had obtained a nursing specialty through residency and two through equivalent recognition by the exceptional route. Except for the engineer, no subject had previously received training in this regard. All the students enrolled in the course received the classroom teaching session, had access to the teaching material through a virtual area, and attended at least one tutorial. The maximum number of tutorials was three, amounting to one hour in total. Among the material available for consultation was an example document drawn up to the IEEE standard on a topic not related to care.
The test to measure competence was completed by the whole group and evaluated by a multidisciplinary teaching team of two computer engineers and two nurses. The engineers evaluated the correctness of the document's characteristics and the degree of comprehension shown in elaborating the problem and its solution; the nurses assessed the relevance of the chosen problem statement, the foundation, originality, and correctness of the proposed solution, and the validity of the application for clinical practice in care. The average grade was 8.1 out of 10 points, with a range between 6 and 10. The selected topics rarely coincided among the students. Examples of care areas selected are care plans, family and community health, delivery care, administration, and even robotics for care. Conclusion: The applied learning-teaching methodology for the design of technologies demonstrates success in the training of nursing professionals. The role of the expert is essential to create applications that satisfy the needs of end users. Nursing has the possibility, the competence, and the duty to participate in the construction of technological tools that will impact the care of people, families, and communities.
Keywords: care, learning, nursing, technology
Procedia PDF Downloads 136
145 A Quasi-Systematic Review on Effectiveness of Social and Cultural Sustainability Practices in Built Environment
Authors: Asif Ali, Daud Salim Faruquie
Abstract:
With the advancement of knowledge about the utility and impact of sustainability, its feasibility has been explored in different walks of life. Researchers have established their knowledge in four areas, viz. environmental, economic, social, and cultural, popularly termed the four pillars of sustainability. The environmental and economic aspects of sustainability have been rigorously researched and practiced, and a large body of strong evidence of effectiveness has been found for these two sub-areas. For the social and cultural aspects of sustainability, dependable evidence of effectiveness is still to be established, as researchers and practitioners are developing and experimenting with methods across the globe. The present research therefore aimed to identify globally used practices of social and cultural sustainability and, through evidence synthesis, assess their outcomes to determine the effectiveness of those practices. A PICO format steered the methodology: it included all populations; popular sustainability practices including walkability/cycle tracks, social/recreational spaces, privacy, health and human services, and barrier-free built environments; comparators of 'before' and 'after', 'with' and 'without', and 'more' and 'less'; and outcomes including social well-being, cultural co-existence, quality of life, ethics and morality, social capital, sense of place, education, health, recreation and leisure, and holistic development. The literature search covered major electronic databases, search websites, organizational resources, the directory of open access journals, and subscribed journals. Grey literature, however, was not included. Inclusion criteria filtered studies on the basis of research design, such as total randomization, quasi-randomization, cluster randomization, observational or single studies, and certain types of analysis. Studies with combined outcomes were considered, but studies focusing only on environmental and/or economic outcomes were rejected.
Data extraction, critical appraisal, and evidence synthesis were carried out using customized tabulation, a reference manager, and the CASP tool. A partial meta-analysis was carried out, with calculation of pooled effects and forest plotting. The 13 studies finally included in the synthesis explained the impact of the targeted practices on health, behavioural, and social dimensions. Objectivity in the measurement of health outcomes facilitated quantitative synthesis of studies, which highlighted the impact of sustainability methods on physical activity, body mass index, perinatal outcomes, and child health. Studies synthesized qualitatively (and also quantitatively) showed outcomes such as routines, family relations, citizenship, trust in relationships, social inclusion, neighbourhood social capital, wellbeing, habitability, and families' social processes. The synthesized evidence indicates slight effectiveness and efficacy of social and cultural sustainability practices on the targeted outcomes. Further synthesis revealed that these results are due to weak research designs and disintegrated implementations. If architects and other practitioners deliver their interventions in collaboration with research bodies and policy makers, a stronger evidence base in this area could be generated.
Keywords: built environment, cultural sustainability, social sustainability, sustainable architecture
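The pooled-effect calculation that underlies a forest plot can be sketched with a minimal fixed-effect (inverse-variance) example. The effect sizes and standard errors below are invented placeholders for illustration, not data from the 13 studies in this review.

```python
import math

def pooled_effect(effects, std_errors):
    """Fixed-effect (inverse-variance) pooled estimate and its 95% CI.

    Each study is weighted by 1/SE^2, so precise studies dominate the pool.
    """
    weights = [1.0 / se**2 for se in std_errors]
    total_w = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total_w
    pooled_se = math.sqrt(1.0 / total_w)
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

if __name__ == "__main__":
    # Hypothetical standardised mean differences from three studies
    effects = [0.20, 0.35, 0.10]
    ses = [0.10, 0.15, 0.08]
    est, (lo, hi) = pooled_effect(effects, ses)
    print(f"pooled effect = {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A random-effects model (e.g. DerSimonian-Laird) would add a between-study variance term to the weights, which matters when, as here, study designs are heterogeneous.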
Procedia PDF Downloads 400
144 Treatment of Neuronal Defects by Bone Marrow Stem Cells Differentiation to Neuronal Cells Cultured on Gelatin-PLGA Scaffolds Coated with Nano-Particles
Authors: Alireza Shams, Ali Zamanian, Atefehe Shamosi, Farnaz Ghorbani
Abstract:
Introduction: Although applying new strategies to treat disabilities due to neuronal defects remains a remarkable challenge, progress in nanomedicine and tissue engineering suggests new medical methods. One promising strategy for the reconstruction and regeneration of nervous tissue is the replacement of lost or damaged cells by specific scaffolds after compressive, ischemic, and traumatic injuries of the central nervous system. Furthermore, the ultrastructure, composition, and arrangement of tissue scaffolds affect cell grafts. We followed the implantation and differentiation of mesenchymal stem cells into neural cells on gelatin poly(lactic-co-glycolic acid) (PLGA) scaffolds coated with iron nanoparticles. The aim of this study was to evaluate the capability of stem cells to differentiate into motor neuron-like cells under topographical cues and morphogenic factors. Methods and Materials: Bone marrow mesenchymal stem cells (BMMSCs) were obtained by primary cell culture of adult rat bone marrow flushed from the femur. BMMSCs were incubated in DMEM/F12 (Gibco) with 15% FBS and 100 U/ml pen/strep as media. BMMSCs were then seeded on Gel/PLGA scaffolds and tissue culture polystyrene (TCP) embedded with and incorporating Fe nanoparticles (FeNPs) (Fe₃O₄, Mw = 270.30 g/mol). For neuronal differentiation, 2×10⁵ BMMSCs were seeded on Gel/PLGA/FeNPs scaffolds and cultured for 7 days, and 0.5 µmol retinoic acid, 100 µmol ascorbic acid, 10 ng/ml basic fibroblast growth factor (Sigma, USA), 250 µM isobutylmethylxanthine, 100 µM 2-mercaptoethanol, and 0.2% B27 (Invitrogen, USA) were added to the media. Proliferation of BMMSCs was assessed using the MTT assay for cell survival. The morphology of BMMSCs and scaffolds was investigated by scanning electron microscopy. Expression of neuron-specific markers was studied by immunohistochemistry.
Data were analyzed by analysis of variance, and statistical significance was determined by Tukey's test. Results: Our results revealed that differentiation and survival of BMMSCs into motor neuron-like cells on the biocompatible and biodegradable Gel/PLGA/FeNPs scaffolds were better than on Gel/PLGA scaffolds without FeNPs and on TCP. FeNPs increased the physical strength of the scaffolds but decreased their absorption capacity. Well-defined, oriented pores in the scaffolds due to FeNPs may activate differentiation and synchronize cells as a mechanoreceptor. The inductive effect of magnetic FeNPs through the one-way flow of channels in the scaffolds helps guide the cells and can facilitate the direction of their growth processes. Discussion: The progression of the biological properties of BMMSCs and the effects of FeNPs spreading under a magnetic field were evaluated in this investigation. The in vitro study showed that the Gel/PLGA/FeNPs scaffold provided a suitable structure for motor neuron-like cell differentiation. This could be a promising candidate for enhancing repair and regeneration in neural defects. Dynamic and static magnetic fields for inducing and constructing cells may provide better results in further experimental studies.
Keywords: differentiation, mesenchymal stem cells, nano particles, neuronal defects, scaffolds
Procedia PDF Downloads 166
143 Automated Adaptions of Semantic User- and Service Profile Representations by Learning the User Context
Authors: Nicole Merkle, Stefan Zander
Abstract:
Ambient Assisted Living (AAL) describes a technological and methodological stack (e.g., formal model-theoretic semantics, rule-based reasoning, and machine learning) for capturing different aspects of the behavior, activities, and characteristics of humans. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his or her characteristics (e.g., physical and cognitive traits, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the user's context model. The correct interpretation of these context models depends highly on temporal and spatial circumstances as well as on individual user preferences. In most AAL approaches, model representations of real-world situations capture the current state of a universe of discourse at a given point in time while neglecting transitions between states. The AAL domain currently lacks sufficient approaches that contemplate the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g., user activities) help cognitive, rule-based agents to reason and make decisions in order to assist users in appropriate tasks and situations. However, rules and reasoning on semantic models are not sufficient for handling uncertainty and fuzzy situations. A given situation can require different (re)actions in order to achieve the best result with respect to the user and his or her needs. But what is the best result? To answer this question, we need to consider that every smart agent is required to achieve an objective, but this objective is mostly defined by domain experts, who can also fail in their estimation of what is and is not desired by the user.
Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions, as their actions influence the user's context and its constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning from sensor evidence and user actions using probabilistic reasoning, based on given expert knowledge. The semantic domain model consists basically of device, service, and user profile representations. In this paper, we present how this semantic domain model can be used to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment, which enhances the provision of adequate assistance and positively affects user satisfaction.
Keywords: ambient intelligence, machine learning, semantic web, software agents
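The idea of revising pre-defined expert rules against observed evidence can be sketched with a single Bayesian update step. This is only an illustration of the principle: the rule, the sensor observations, and the prior below are invented placeholders, while the paper's actual approach operates on a semantic device/service/user profile model with probabilistic reasoning.

```python
def update_rule_probability(prior, likelihood_given_rule, likelihood_given_not):
    """One Bayesian update of P(rule applies) given a single observation.

    prior                  -- current belief that the rule matches the context
    likelihood_given_rule  -- P(observation | rule applies)
    likelihood_given_not   -- P(observation | rule does not apply)
    """
    numerator = likelihood_given_rule * prior
    evidence = numerator + likelihood_given_not * (1.0 - prior)
    return numerator / evidence

if __name__ == "__main__":
    # Hypothetical expert prior: the rule "dim lights when the user is asleep"
    # applies with probability 0.7.
    p = 0.7
    # Three observations in which the user overrode the agent's action,
    # an outcome that is unlikely (0.2) if the rule truly matched the context.
    for _ in range(3):
        p = update_rule_probability(p, likelihood_given_rule=0.2,
                                    likelihood_given_not=0.8)
        print(f"P(rule applies) = {p:.3f}")
```

Repeated contradicting evidence drives the rule's probability down, which is the mechanism by which expert knowledge can be revised toward the most probable context representation.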
Procedia PDF Downloads 281
142 A Rapid Assessment of the Impacts of COVID-19 on Overseas Labor Migration: Findings from Bangladesh
Authors: Vaiddehi Bansal, Ridhi Sahai, Kareem Kysia
Abstract:
Overseas labor migration is currently one of the most important contributors to the economy of Bangladesh and is a highly profitable form of labor for Gulf Cooperation Council (GCC) countries. In 2019, 700,159 migrant workers from Bangladesh traveled abroad for employment. GCC countries are a major destination for Bangladeshi migrant workers, with Saudi Arabia being the most common destination since 2016. Despite the high rate of migration between these countries every year, the OLR industry remains complex and often leaves migrants susceptible to human trafficking, forced labor, and modern slavery. While the prevalence of forced labor among Bangladeshi migrants in GCC countries is still unknown, the IOM estimates that international migrant workers comprise one-fourth of the victims of forced labor. Moreover, the onset of the global COVID-19 pandemic has exposed migrant workers to additional adverse situations, making them even more vulnerable to forced labor and health risks. This paper presents findings from a rapid assessment of the impacts of COVID-19 on OLR in Bangladesh, with an emphasis on the increased risk of forced labor among vulnerable migrant worker populations, particularly women. Rapid reviews are a useful approach to swiftly provide actionable evidence for informed decision-making during emergencies such as the COVID-19 pandemic. The research team conducted semi-structured key informant interviews (KIIs) with a range of stakeholders, including government officials, local NGOs, international organizations, migration researchers, and formal and informal recruiting agencies, to obtain insights on the multi-faceted impacts of COVID-19 on the OLR sector. The research team also conducted a comprehensive review of available resources, including media articles, blogs, policy briefs, reports, white papers, and other online content, to triangulate findings from the KIIs.
After screening for inclusion criteria, a total of 110 grey literature documents were included in the review. A total of 31 KIIs were conducted, the data from which were transcribed, translated from Bangla to English, and analyzed using a detailed codebook. Findings indicate that there was limited reintegration support for returnee migrants. Facing mounting debt, financial insecurity, and social discrimination, returnee migrants were extremely vulnerable to forced labor and exploitation. Growing financial debt and limited job opportunities in their home country will likely push migrants to resort to unsafe migration channels. Evidence suggests that women, who are primarily domestic workers in GCC countries, were exposed to an increased risk of forced labor and workplace violence. Due to stay-at-home measures, women migrant workers were tasked with additional housekeeping work and subjected to longer work hours, wage withholding, and physical abuse. In Bangladesh, returnee women migrant workers also faced an increased risk of domestic violence.
Keywords: forced labor, migration, gender, human trafficking
Procedia PDF Downloads 115
141 Exploring 3-D Virtual Art Spaces: Engaging Student Communities Through Feedback and Exhibitions
Authors: Zena Tredinnick-Kirby, Anna Divinsky, Brendan Berthold, Nicole Cingolani
Abstract:
Faculty members from The Pennsylvania State University, Zena Tredinnick-Kirby, Ph.D., and Anna Divinsky, are at the forefront of an innovative educational approach to improving access in asynchronous online art courses. Their pioneering work weaves virtual reality (VR) technologies into the curriculum to construct a more equitable educational experience for students by transforming their learning and engagement. The significance of their study lies in the need to bridge the digital divide in online art courses, making them more inclusive and interactive for all distance learners. In an era where conventional classroom settings are no longer the sole means of instruction, Tredinnick-Kirby and Divinsky harness the power of instructional technologies to break down geographical barriers by incorporating an interactive VR experience that facilitates community building within an online environment, transcending physical constraints. The methodology adopted by Tredinnick-Kirby and Divinsky is centered on integrating 3D virtual spaces into their art courses. Spatial.io, a virtual world platform, enables students to develop digital avatars and enter virtual art museums through a free browser-based program or an Oculus headset, where they can interact with other visitors and critique each other's artwork. The goal is not only to provide students with an engaging and immersive learning experience but also to give them a more profound understanding of the language of art criticism and technology. Furthermore, the study aims to cultivate critical thinking skills among students and foster a collaborative spirit. By leveraging cutting-edge VR technology, students are encouraged to explore the possibilities of their field, experimenting with innovative tools and techniques. This approach not only enriches their learning experience but also prepares them for a dynamic and ever-evolving art landscape in technology and education.
One of the fundamental objectives of Tredinnick-Kirby and Divinsky is to remodel how feedback is derived through peer-to-peer art critique. With 3D virtual spaces included in the curriculum, students now have the opportunity to install their final artwork in a virtual gallery space and incorporate peer feedback, enabling them to exhibit their work and opening the door to a collaborative and interactive process. Students can provide constructive suggestions, engage in discussions, and integrate peer commentary into developing their ideas and praxis. This approach not only accelerates the learning process but also promotes a sense of community and growth. In summary, the study conducted by the Penn State faculty members Zena Tredinnick-Kirby and Anna Divinsky represents an innovative use of technology in their courses. By incorporating 3D virtual spaces, they are enriching the learners' experience. Through this inventive pedagogical technique, they nurture critical thinking, collaboration, and the practical application of cutting-edge technology in art. This research holds great promise for the future of online art education, transforming it into a dynamic, inclusive, and interactive experience that transcends the confines of distance learning.
Keywords: art, community building, distance learning, virtual reality
Procedia PDF Downloads 71
140 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management
Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
Wildland fires, also known as forest fires or wildfires, have exhibited an alarming surge in frequency in recent times, deepening what is already a perennial global concern. Forest fires often lead to devastating consequences, ranging from loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and the ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas of forest fire research, meticulously categorizing them into distinct phases, namely the pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and mitigation of the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption in decision-making in diverse applications, including disaster management.
This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fire management. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across the more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.
Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities
Procedia PDF Downloads 72
139 The Relevance of Community Involvement in Flood Risk Governance Towards Resilience to Groundwater Flooding: A Case Study of Project Groundwater Buckinghamshire, UK
Authors: Claude Nsobya, Alice Moncaster, Karen Potter, Jed Ramsay
Abstract:
The shift in Flood Risk Governance (FRG) has moved away from traditional approaches that relied solely on centralized decision-making and structural flood defenses, towards integrated flood risk management measures that involve a variety of actors and stakeholders. This new approach emphasizes people-centered measures, including adaptation and learning. The shift to a diversity of FRG approaches has been identified as a significant factor in enhancing resilience, where resilience refers to a community's ability to withstand, absorb, recover, adapt, and potentially transform in the face of flood events. It is argued that if FRG merely focused on conventional 'fighting the water' flood defenses, communities would not be resilient. The move to people-centered approaches also implies that communities will be more involved in FRG. It is suggested that effective flood risk governance influences resilience through meaningful community involvement, and that effective community engagement is vital in shaping community resilience to floods. Successful community participation not only draws on context-specific indigenous knowledge but also develops a sense of ownership and responsibility; through capacity-development initiatives, it can also raise awareness, all of which helps in building resilience. Recent Flood Risk Management (FRM) projects have thus involved communities to an increasing degree, with varied conceptualizations of such engagement in the academic literature on FRM. In the context of overland floods, there is a substantial body of literature on flood risk governance and management. Yet groundwater flooding has received little attention despite its unique qualities, such as its persistence for weeks or months, slow onset, and near-invisibility. There has been little study of how successful community involvement in FRG may improve community resilience to groundwater flooding in particular. 
This paper focuses on a case study of a flood risk management project in the United Kingdom. Buckinghamshire Council is leading Project Groundwater, one of 25 significant initiatives sponsored by the Flood and Coastal Resilience Innovation Programme of England's Department for Environment, Food and Rural Affairs (DEFRA). DEFRA awarded Buckinghamshire Council and other councils £150 million to collaborate with communities and implement innovative methods to increase resilience to groundwater flooding. Based on a literature review, this paper proposes a new paradigm for effective community engagement in Flood Risk Governance (FRG). The study contends that effective community participation can influence the various resilience capacities identified in the literature, including social, institutional, physical, natural, human, and economic capital. In the case of social capital, for example, successful community engagement can act through social learning as well as through developing social networks and trust, which are vital in shaping communities' capacity to resist, absorb, recover, and adapt. The study examines community engagement in Project Groundwater using surveys of local communities and documentary analysis to test this notion. The outcomes of the study will inform community involvement activities in Project Groundwater and may shape DEFRA policies and guidelines for community engagement in FRM.
Keywords: flood risk governance, community, resilience, groundwater flooding
Procedia PDF Downloads 70
138 Using Low-Calorie Gas to Generate Heat and Electricity
Authors: Andrey Marchenko, Oleg Linkov, Alexander Osetrov, Sergiy Kravchenko
Abstract:
Low-calorie gases include biogas, coal gas, coke oven gas, associated petroleum gas, sewage gases, etc. These gases are usually released into the atmosphere or burned in flares, causing substantial damage to the environment. With the right approach, however, low-calorie gas fuel can become a valuable source of energy, which determines the relevance of work on low-calorific gas utilization technologies. As an example, this work considers one way of utilizing coal mine gas, because Ukraine ranks fourth in the world in coal mine gas emissions (4.7% of total global emissions, or 1.2 billion m³ per year). Experts estimate that coal mine gas is actively released at 70-80 percent of existing mines in Ukraine. The main component of coal mine gas is methane (25-60%). Methane has 21 times the greenhouse impact of carbon dioxide, and its disposal has become an increasingly important problem in the context of climate, ecology, and environmental protection, since these emissions cause negative effects of both local and global nature. The efforts of the United Nations and the World Bank led to the adoption of the 'Zero Routine Flaring by 2030' initiative, dedicated to ending the flaring of such gases and instead utilizing them to generate heat and electricity. This study proposes to use coal mine gas as a fuel for gas engines to generate heat and electricity. Analysis of the physical-chemical properties of low-calorie gas fuels allowed a suitable engine to be chosen and the influence of fuel composition on its techno-economic indicators to be estimated. The most suitable engine for low-calorie gas is one with pre-combustion-chamber jet ignition. Ukraine has accumulated extensive experience in the production and operation of gas engines of type GD100 (10GDN 207/2*254) with a capacity of 1100 kW fueled by natural gas. 
The pre-combustion-chamber jet ignition system and quality (air-fuel ratio) control allow GD100-type engines to burn lean fuel mixtures, which in turn decreases the concentration of harmful substances in the exhaust gases. The main problems with coal mine gas as a fuel for internal combustion engines (ICE) are its low calorific value, the presence of components that adversely affect combustion processes and engine operating conditions, the instability of its composition, and weak ignitability. In some cases, these problems can be solved by adapting the engine design to coal mine gas (changing the compression ratio, increasing the fuel supply per cycle, adjusting the ignition timing, increasing spark plug energy, etc.). It is shown that the use of coal mine gas in prechamber engines does not significantly change the indicated efficiency (ηi = 0.43-0.45). However, the volumetric fuel consumption increases significantly, which requires an increased fuel supply to maintain constant nominal engine power. Thus, the utilization of low-calorie gas fuels in stationary gas engines based on the GD100 type will significantly reduce emissions of harmful substances into the atmosphere while generating cheap electricity and heat.
Keywords: gas engine, low-calorie gas, methane, pre-combustion chamber, utilization
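The figures cited in this abstract (25-60% methane, a GWP factor of 21, 1.2 billion m³ of mine gas per year) lend themselves to a back-of-the-envelope sketch. The code below is illustrative only: it assumes a mid-range 40% methane composition and standard reference values for methane's lower heating value and density, none of which come from the paper itself:

```python
# Reference constants (standard approximate values, not from the paper):
LHV_CH4 = 35.8   # MJ/m^3, lower heating value of pure methane
RHO_CH4 = 0.716  # kg/m^3, density of methane at normal conditions
GWP_CH4 = 21     # CO2-equivalent factor cited in the abstract

def mixture_lhv(ch4_fraction):
    """LHV of the gas mixture, assuming methane is the only combustible."""
    return ch4_fraction * LHV_CH4

def vented_co2e_megatonnes(gas_volume_m3, ch4_fraction):
    """CO2-equivalent (Mt) of venting the methane share of the gas."""
    ch4_mass_t = gas_volume_m3 * ch4_fraction * RHO_CH4 / 1000.0  # tonnes
    return ch4_mass_t * GWP_CH4 / 1e6

# 1.2 billion m^3/year of mine gas at an assumed 40% methane:
print(f"mixture LHV:  {mixture_lhv(0.40):.1f} MJ/m^3")
print(f"vented CO2e:  {vented_co2e_megatonnes(1.2e9, 0.40):.1f} Mt/year")
```

At roughly 14 MJ/m³, such a mixture carries well under half the heating value of natural gas, consistent with the abstract's characterization of the fuel as low-calorie and with the reported rise in volumetric fuel consumption.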
Procedia PDF Downloads 264