Search results for: economic approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18635

95 Understanding the Impact of Spatial Light Distribution on Object Identification in Low Vision: A Pilot Psychophysical Study

Authors: Alexandre Faure, Yoko Mizokami, Éric Dinet

Abstract:

In recent years, several studies have demonstrated the potential of light to assist visually impaired people in their indoor mobility. Implementing smart lighting systems for selective visual enhancement, especially designed for low-vision people, is an approach that breaks with existing visual aids. The appearance of an object's surface is significantly influenced by the lighting conditions and by the object's constituent materials, and may therefore differ from expectation, so lighting conditions play an important part in accurate material recognition. The main objective of this work was to investigate the effect of the spatial distribution of light on object identification in the context of low vision. The purpose was to determine whether, and which, specific lighting approaches should be preferred for visually impaired people. A psychophysical experiment was designed to study the ability of individuals to identify the smaller cube of a pair under different lighting diffusion conditions. Participants were divided into two distinct groups: a reference group of observers with normal or corrected-to-normal visual acuity and a test group, in which observers were required to wear visual impairment simulation glasses. All participants were presented with pairs of cubes in a "miniature room" and were instructed to estimate the relative size of the two cubes. The miniature room replicates real-life settings, adorned with decorations and separated from external light sources by black curtains. The correlated color temperature was set to 6000 K, and the horizontal illuminance at the object level to approximately 240 lux. The objects presented for comparison consisted of 11 white cubes and 11 black cubes of different sizes, manufactured with a 3D printer. Participants were seated 60 cm away from the objects. Two different levels of light diffuseness were implemented. 
After receiving instructions, participants were asked to judge whether the two presented cubes were the same size or whether one was smaller. They provided one of five possible answers: "Left one is smaller," "Left one is smaller but unsure," "Same size," "Right one is smaller," or "Right one is smaller but unsure." The method of constant stimuli was used, presenting stimulus pairs in random order to prevent learning and expectation biases. Each pair consisted of a comparison stimulus and a reference cube. A psychometric function was constructed to link stimulus value with the frequency of correct detection, in order to determine the 50% correct detection threshold. Collected data were analyzed through graphs illustrating participants' responses to stimuli, with accuracy increasing as the size difference between cubes grew. Statistical analyses, including two-way ANOVA tests, showed that light diffuseness had no significant impact on the difference threshold, whereas object color had a significant influence in low-vision scenarios. The first results and trends derived from this pilot experiment suggest that future investigations could explore extreme diffusion conditions to comprehensively assess the impact of diffusion on object identification. For example, the first findings related to light diffuseness may be attributed to the range of manipulation, emphasizing the need to explore how other lighting-related factors interact with diffuseness.
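As a minimal sketch of the threshold estimation described above, the 50%-correct detection threshold can be read off proportion-correct data by linear interpolation between adjacent stimulus levels; the size differences and response proportions below are hypothetical, not the study's data:

```python
import math

def psychometric(x, mu, s):
    """Logistic psychometric function: probability of a correct
    'smaller' judgment for a size difference x, with midpoint mu
    and slope parameter s (illustrative form)."""
    return 1.0 / (1.0 + math.exp(-(x - mu) / s))

# Hypothetical proportions of correct responses at each size
# difference (mm), as collected with the method of constant stimuli.
diffs = [1, 2, 3, 4, 5, 6]
p_correct = [0.18, 0.31, 0.49, 0.66, 0.82, 0.93]

def threshold_50(xs, ps):
    """Linearly interpolate the 50%-correct detection threshold."""
    for (x0, p0), (x1, p1) in zip(zip(xs, ps), zip(xs[1:], ps[1:])):
        if p0 <= 0.5 <= p1:
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("0.5 not bracketed by the data")

t = threshold_50(diffs, p_correct)  # just above 3 mm for these values
```

In practice, a maximum-likelihood fit of the full psychometric function would replace the interpolation, but the interpolated value already shows how the threshold is defined.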

Keywords: Lighting, Low Vision, Visual Aid, Object Identification, Psychophysical Experiment

Procedia PDF Downloads 37
94 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by phenomena such as photon emission, absorption, and scattering. Solving the governing integro-differential equation of radiative transfer is a complex process, especially when the effects of a participating medium and of wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the problem. Recently, solutions of complicated mathematical problems with statistical methods based on the randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique for solving radiative transfer problems in complicated geometries with an arbitrary participating medium. The method increases the accuracy of estimation on the one hand, and the computational cost on the other. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling emission and absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better alternative to uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. 
They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The history of some randomly sampled photon bundles was recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the Line-by-Line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance, as well as a faster rate of convergence, was observed for the QMC method over the standard PMC method. However, the ANN method resulted in greater variance (around 25-28%) compared to the other cases. Machine learning models hold great promise for further reducing computational cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the problem environment can be fully represented to the ANN model. Better results can be achieved in this largely unexplored domain.
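The advantage of a low-discrepancy sequence over pseudorandom sampling can be sketched on a toy one-dimensional integral; the base-2 Halton (van der Corput) generator below is a minimal illustration under those assumptions, not the study's spectral PMC code:

```python
import math
import random

def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in
    `base`: the radical-inverse of i, a deterministic low-discrepancy
    point in [0, 1)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def estimate(points):
    """Monte Carlo estimate of the toy integral of e^x over [0, 1]
    (exact value: e - 1)."""
    return sum(math.exp(x) for x in points) / len(points)

n = 4096
qmc_est = estimate([halton(i, 2) for i in range(1, n + 1)])
random.seed(0)
pmc_est = estimate([random.random() for _ in range(n)])

exact = math.e - 1
# For a smooth integrand, the QMC error shrinks roughly like
# log(n)/n, versus 1/sqrt(n) for the pseudorandom estimator.
```

At the same sample count, the deterministic points fill the interval far more evenly, which is the stability and variance advantage the abstract attributes to Halton, Sobol, and Faure sequences.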

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 186
93 Assessing Organizational Resilience Capacity to Flooding: Index Development and Application to Greek Small & Medium-Sized Enterprises

Authors: Antonis Skouloudis, Konstantinos Evangelinos, Walter Leal-Filho, Panagiotis Vouros, Ioannis Nikolaou

Abstract:

Organizational resilience capacity to extreme weather events (EWEs) has attracted growing scholarly attention over the past decade as an essential aspect of business continuity management, with supporting evidence suggesting that it plays a key role in successful responses to adverse situations, crises, and shocks. Small and medium-sized enterprises (SMEs) are more vulnerable to floods than their larger counterparts and are therefore disproportionately affected by such extreme weather events. The limited resources at their disposal and the lack of time and skills all contribute to inadequate preparedness for the challenges posed by floods. SMEs tend to plan in the short term, reacting to circumstances as they arise and focusing on their very survival. Likewise, they have less formalised structures and codified policies, and they are most usually owner-managed, resulting in a command-and-control management culture. Such characteristics leave them with limited opportunities to recover from flooding and to quickly turn their operation around from a loss-making to a profit-making one. Scholars frame the capacity of business entities to be resilient to an EWE disturbance (such as a flash flood) in terms of the rate of recovery and restoration of organizational performance to pre-disturbance conditions; the amount of disturbance (i.e., threshold level) a business can absorb before losing structural and/or functional components that will alter or cease operation; and the extent to which the organization maintains its function (i.e., impact resistance) before performance levels are driven to zero. Nevertheless, while resilience capacity seems to be accepted as an essential trait of firms that effectively transcend uncertain conditions, research deconstructing the enabling conditions and/or inhibitory factors of SMEs' resilience capacity to natural hazards is still sparse, fragmentary, and mostly fuelled by anecdotal evidence or normative assumptions. 
Focusing on the individual level of analysis, i.e., the individual enterprise and its endeavours to succeed, the emergent picture from this relatively new research strand delineates the specification of variables, conceptual relationships, and dynamic boundaries of resilience capacity components, in an attempt to provide prescriptions for policy-making as well as business management. This study presents the development of a flood resilience capacity index (FRCI) and its application to Greek SMEs. The proposed composite indicator pertains to cognitive, behavioral/managerial, and contextual factors that influence an enterprise’s ability to shape effective responses to flood challenges. Through the proposed indicator-based approach, an analytical framework is set forth that will help standardize such assessments, with the overarching aim of reducing the vulnerability of SMEs to flooding. This will be achieved by identifying the major internal and external attributes explaining resilience capacity, which is particularly important given the limited resources these enterprises have and the fact that they tend to be primary sources of vulnerability in supply chain networks, generating Single Points of Failure (SPOF).
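A composite indicator of this kind is typically built as a weighted aggregation of normalized factor scores. The sketch below is purely illustrative: the factor names echo the three groups named above, but the scores, weights, and aggregation rule are hypothetical assumptions, not the FRCI specification:

```python
def composite_index(scores, weights):
    """Illustrative composite index: weighted mean of factor scores
    normalized to [0, 1]. Both dicts are keyed by factor group."""
    total_w = sum(weights.values())
    return sum(weights[k] * scores[k] for k in scores) / total_w

# Hypothetical factor-level scores for one SME (0 = no capacity,
# 1 = full capacity); the weights are illustrative, not estimated.
scores  = {"cognitive": 0.6, "behavioral": 0.4, "contextual": 0.7}
weights = {"cognitive": 0.3, "behavioral": 0.4, "contextual": 0.3}
frci = composite_index(scores, weights)  # ≈ 0.55
```

Higher values would indicate an enterprise better placed to absorb and recover from a flood; in the actual index, the factor scores would themselves be aggregated from survey items.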

Keywords: Floods, Small & Medium-Sized Enterprises, organizational resilience capacity, index development

Procedia PDF Downloads 157
92 Shakespeare's Hamlet in Ballet: Transformation of an Archival Recording of a Neoclassical Ballet Performance into a Contemporary Transmodern Dance Video Applying Postmodern Concepts and Techniques

Authors: Svebor Secak

Abstract:

This four-year artistic research project, hosted by the University of New England, Australia, set out to experiment with non-conventional ways of presenting a language-based narrative in dance, using insights from recent theoretical writing on performance and addressing the research question: how can an archival recording of a neoclassical ballet performance be transformed into a new artistic dance video by implementing postmodern philosophical concepts? The Creative Practice component takes the form of a dance video, Hamlet Revisited, a reworking of the archival recording of the neoclassical ballet Hamlet, augmented by new material and produced using the resources, technicians, and dancers of the Croatian National Theatre in Zagreb. The methodology for the creation of Hamlet Revisited consisted of extensive field and desk research, after which three dancers were shown the recording of the original Hamlet and created their artistic responses to it based on their reception and appreciation of it. The dancers responded differently, according to their diverse dancing backgrounds and life experiences. They began in the role of the audience, observing the video of the original ballet, and transformed into the role of choreographer-performer. Their newly recorded material was edited and juxtaposed with the archival recording of Hamlet and other relevant footage, allowing for postmodern features such as aleatoric content, synchronicity, eclecticism, and serendipity. In this way, communication was established on a receptive, reader-response basis, blending the roles of choreographer, performer, and spectator. The result is an original work of art whose significance lies in the relationship and communication between styles, between old and new choreographic approaches, and between artists and audiences, and in the transformation of their traditional roles and relationships. 
In editing and collating, the following techniques were used with the intention of avoiding a singular narrative: fragmentation, repetition, reverse motion, multiplication of images, split screen, overlaying X-rays, image scratching, slow motion, freeze-frame, and simultaneity. Key postmodern concepts considered were deconstruction, diffuse authorship, supplementation, simulacrum, self-reflexivity, questioning of the role of the author, intertextuality, and incredulity toward grand narratives, departing from the original story and thus personalising its ontological themes. From a broad brush of diverse concepts and techniques applied in an almost prescriptive manner, the project focuses on intertextuality, which proves to be valid on at least two levels. The first is the possibility of a more objective analysis in combination with a semiotic structuralist approach, moving from strict relationships between signs to a multiplication of signifiers and considering the dance text as an open construction containing the elusive and enigmatic quality of art that leaves the interpretive position open. The second is the creation of the new work, where the author functions as the editor, aware and conscious of the interplay of disparate texts and their sources, which co-act in the mind during the creative process. It is argued here that the eclectic combination of old and new material, through constant oscillations of different discourses upon the same topic, resulted in a recent transmodern, integrationist work of art that might serve as a model for reconsidering existing choreographic creations.

Keywords: Ballet Hamlet, intertextuality, transformation, transmodern dance video

Procedia PDF Downloads 222
91 Transitioning Towards a Circular Economy in the Textile Industry: Approaches to Address Environmental Challenges

Authors: Atefeh Salehipoor

Abstract:

Textiles play a vital role in human life, particularly in the form of clothing. However, the alarming rate at which textiles end up in landfills presents a significant environmental risk. With approximately one garbage truck of discarded textiles filled every second, urgent measures are required to mitigate this trend. Governments and responsible organizations are calling upon various stakeholders to shift from a linear economy to a circular economy model in the textile industry. The linear model, characterized by the creation, use, and disposal of textiles, is unsustainable in the long term. By adopting a circular economy approach, the industry can minimize waste, reduce environmental impact, and promote sustainable practices. This article outlines several key approaches that can be undertaken to drive this transition: the creation of renewable raw material sources, rethinking production processes, maximizing the use and reuse of textile products, implementing reproduction and recycling strategies, exploring redistribution to new markets, and finding innovative means to extend the lifespan of textiles. Approaches to Address Environmental Challenges: 1. Creation of Renewable Raw Material Sources: Exploring and promoting the use of renewable and sustainable raw materials, such as organic cotton, hemp, and recycled fibers, can significantly reduce the environmental footprint of textile production. 2. 
Rethinking Production Processes: Implementing cleaner production techniques, optimizing resource utilization, and minimizing waste generation are crucial steps in reducing the environmental impact of textile manufacturing. 3. Maximizing Use and Reuse of Textile Products: Encouraging consumers to prolong the lifespan of textile products through proper care, maintenance, and repair services can reduce the frequency of disposal and promote a culture of sustainability. 4. Reproduction and Recycling Strategies: Investing in innovative technologies and infrastructure to enable efficient reproduction and recycling of textiles can close the loop and minimize waste generation. 5. Redistribution of Textiles to New Markets: Exploring opportunities to redistribute textiles to new and parallel markets, such as resale platforms, can extend their lifecycle and prevent premature disposal. 6. Devising Means to Extend Textile Lifespan: Encouraging design practices that prioritize durability, versatility, and timeless aesthetics can contribute to prolonging the lifespan of textiles. Conclusion: The textile industry must urgently transition from a linear economy to a circular economy model to mitigate the adverse environmental impact of textile waste. By implementing the outlined approaches, such as sourcing renewable raw materials, rethinking production processes, promoting reuse and recycling, exploring new markets, and extending the lifespan of textiles, stakeholders can work together to create a more sustainable and environmentally friendly textile industry. These measures require collective action and collaboration among governments, organizations, manufacturers, and consumers to drive positive change and safeguard the planet for future generations.

Keywords: textiles, circular economy, environmental challenges, renewable raw materials, production processes, reuse, recycling, redistribution, textile lifespan extension

Procedia PDF Downloads 50
90 Simulation of Multistage Extraction Process of Co-Ni Separation Using Ionic Liquids

Authors: Hongyan Chen, Megan Jobson, Andrew J. Masters, Maria Gonzalez-Miquel, Simon Halstead, Mayri Diaz de Rienzo

Abstract:

Ionic liquids offer excellent advantages over conventional solvents for the industrial extraction of metals from aqueous solutions, where such extraction processes bring opportunities for the recovery, reuse, and recycling of valuable resources and for more sustainable production pathways. Recent research on the use of ionic liquids for extraction confirms their high selectivity and low volatility, but there is relatively little focus on how their properties can be best exploited in practice. This work addresses gaps in research on process modelling and simulation, to support the development, design, and optimisation of these processes, focusing on the separation of the highly similar transition metals cobalt and nickel. The study exploits published experimental results, as well as new experimental results, relating to the separation of Co and Ni using trihexyl(tetradecyl)phosphonium chloride. This extraction agent is attractive because it is cheaper, more stable, and less toxic than fluorinated hydrophobic ionic liquids. The process modelling work concerns the selection and/or development of suitable models for the physical properties, the distribution coefficients, the mass transfer phenomena, the extractor unit, and the multi-stage extraction flowsheet. The distribution coefficient model for cobalt and HCl represents an anion exchange mechanism, supported by the literature and by COSMO-RS calculations. Parameters of the distribution coefficient models are estimated by fitting the model to published experimental extraction equilibrium results. The mass transfer model applies Newman’s hard sphere model. Diffusion coefficients in the aqueous phase are obtained from the literature, while diffusion coefficients in the ionic liquid phase are fitted to dynamic experimental results. The mass transfer area is calculated from the surface-mean diameter of liquid droplets of the dispersed phase, estimated from the Weber number inside the extractor. 
New experiments measure the interfacial tension between the aqueous and ionic phases. The empirical models for predicting the density and viscosity of solutions under different metal loadings are also fitted to new experimental data. The extractor is modelled as a continuous stirred tank reactor with mass transfer between the two phases and perfect phase separation of the outlet flows. A multistage separation flowsheet simulation is set up to replicate a published experiment and compare model predictions with the experimental results. This simulation model is implemented in gPROMS software for dynamic process simulation. The results of single stage and multi-stage flowsheet simulations are shown to be in good agreement with the published experimental results. The estimated diffusion coefficient of cobalt in the ionic liquid phase is in reasonable agreement with published data for the diffusion coefficients of various metals in this ionic liquid. A sensitivity study with this simulation model demonstrates the usefulness of the models for process design. The simulation approach has potential to be extended to account for other metals, acids, and solvents for process development, design, and optimisation of extraction processes applying ionic liquids for metals separations, although a lack of experimental data is currently limiting the accuracy of models within the whole framework. Future work will focus on process development more generally and on extractive separation of rare earths using ionic liquids.
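The role of the distribution coefficient in a staged separation can be illustrated with a simple equilibrium-stage idealization: after each contact with fresh solvent, the fraction of metal left in the aqueous phase is 1/(1 + D·R), where R is the solvent-to-aqueous phase ratio. The sketch below is not the dynamic gPROMS model of the study; the D values and phase ratio are hypothetical, chosen only to show how a large Co/Ni selectivity plays out over stages:

```python
def fraction_remaining(D, phase_ratio, stages):
    """Fraction of metal remaining in the aqueous phase after
    `stages` successive equilibrium contacts with fresh ionic
    liquid. D = distribution coefficient (IL/aqueous on a volume
    basis), phase_ratio = V_IL / V_aq per stage. This crosscurrent
    idealization ignores mass transfer kinetics entirely."""
    per_stage = 1.0 / (1.0 + D * phase_ratio)
    return per_stage ** stages

# Hypothetical values for a selective system: Co distributes
# strongly into the ionic liquid, Ni barely at all.
co_left = fraction_remaining(D=20.0, phase_ratio=1.0, stages=3)
ni_left = fraction_remaining(D=0.05, phase_ratio=1.0, stages=3)
# Co is almost completely extracted while most Ni stays aqueous,
# which is the basis of the Co-Ni separation.
```

The full model in the abstract additionally resolves the rate of approach to this equilibrium via Newman's hard sphere model and the droplet interfacial area.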

Keywords: distribution coefficient, mass transfer, COSMO-RS, flowsheet simulation, phosphonium

Procedia PDF Downloads 154
89 Development of a Mixed-Reality Hands-Free Teleoperated Robotic Arm for Construction Applications

Authors: Damith Tennakoon, Mojgan Jadidi, Seyedreza Razavialavi

Abstract:

With recent advancements in robotic automation, from self-driving cars to autonomous quadrupeds, one industry that has remained stagnant is construction. The methodologies used on a modern-day construction site consist of arduous physical labor and the use of heavy machinery, and they have not changed over the past few decades. The dangers of a modern-day construction site affect the health and safety of workers, who perform tasks such as lifting and moving heavy objects and must maintain unhealthy postures to complete repetitive tasks such as painting, installing drywall, and laying bricks. Further, training for heavy machinery is costly and time-consuming due to the machines' complex control inputs. The main focus of this research is using immersive wearable technology and robotic arms to perform the complex and intricate skills of modern-day construction workers while alleviating the physical labor required for their day-to-day tasks. The methodology consists of mounting a stereo vision camera, the ZED Mini by Stereolabs, onto the end effector of an industrial-grade robotic arm and streaming the video feed into the Meta Quest 2 (Quest 2) virtual reality (VR) head-mounted display (HMD). Owing to the nature of stereo vision, and the similar fields of view of the stereo camera and the Quest 2, human vision can be replicated on the HMD. The main advantage this type of camera provides over a traditional monocular camera is that it gives the user wearing the HMD a sense of the depth of the camera scene, specifically, a first-person view of the robotic arm’s end effector. Utilizing the built-in cameras of the Quest 2 HMD, open-source hand-tracking libraries from OpenXR can be implemented to track the user’s hands in real time. A mixed-reality (XR) Unity application can be developed to localize the operator's physical hand motions with the end effector of the robotic arm. 
Implementing gesture controls enables the user to move the robotic arm and control its end effector by moving the operator’s arm and providing gesture inputs from a distant location. Given that the end effector of the robotic arm is a gripper tool, closing and opening the operator’s hand translates to the gripper grabbing or releasing an object. This human-robot interaction approach provides many benefits within the construction industry. First, the operator’s safety is increased substantially, as they can be away from the site while still being able to perform complex tasks such as moving heavy objects from place to place, or repetitive tasks such as painting walls and laying bricks. The immersive interface enables precise robotic arm control and requires minimal training and knowledge of robotic arm manipulation, which lowers the cost of operator training. This human-robot interface can be extended to many applications, such as nuclear accident and waste cleanup, underwater repairs, deep space missions, and manufacturing and fabrication within factories. Further, the robotic arm can be mounted onto existing mobile robots to provide access to hazardous environments, including power plants, burning buildings, and high-altitude repair sites.
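One plausible form of the open/close gesture mapping is a clamped linear map from tracked fingertip distance to a gripper command. The sketch below is an illustrative assumption, not the project's actual Unity/OpenXR code; the distance thresholds are hypothetical:

```python
import math

def pinch_to_gripper(thumb_tip, index_tip,
                     open_dist=0.10, closed_dist=0.02):
    """Map the distance between tracked thumb and index fingertip
    positions (metres, e.g. from a hand-tracking pose) to a gripper
    opening command in [0, 1]: 0 = fully closed, 1 = fully open.
    The 2 cm / 10 cm thresholds are illustrative choices."""
    d = math.dist(thumb_tip, index_tip)
    frac = (d - closed_dist) / (open_dist - closed_dist)
    return max(0.0, min(1.0, frac))  # clamp to the valid range

# Fingertips 6 cm apart map to a half-open gripper; pinching to
# 2 cm or less fully closes it, spreading to 10 cm fully opens it.
cmd = pinch_to_gripper((0.0, 0.0, 0.0), (0.06, 0.0, 0.0))
```

Clamping makes the command robust to tracking jitter outside the calibrated range, and the two thresholds give the operator a dead zone at each end of the pinch.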

Keywords: construction automation, human-robot interaction, hand-tracking, mixed reality

Procedia PDF Downloads 40
88 A Digital Clone of an Irrigation Network Based on Hardware/Software Simulation

Authors: Pierre-Andre Mudry, Jean Decaix, Jeremy Schmid, Cesar Papilloud, Cecile Munch-Alligne

Abstract:

In most Swiss Alpine regions, the availability of water resources is usually adequate even in times of drought, as evidenced by the summers of 2003 and 2018. Indeed, substantial natural stocks are currently available in the form of snow and ice, but the situation is likely to change in the future due to global and regional climate change. In addition, alpine mountain regions are areas where climate change will be felt very rapidly and with high intensity. For instance, the ice regime of these regions has already been affected in recent years, with changes in monthly availability and in extreme precipitation events. The current research, focusing on the municipality of Val de Bagnes in the canton of Valais, Switzerland, is part of a project led by the Altis company and carried out in collaboration with WSL, BlueArk Entremont, and HES-SO Valais-Wallis. In this region, water occupies a key position, notably for winter and summer tourism. Thus, multiple actors want to anticipate the future needs for and availability of water, on both the 2050 and 2100 horizons, in order to plan modifications to the water supply and distribution networks. For those changes to be salient and efficient, good knowledge of the current water distribution networks is of the utmost importance. In the present case, the drinking water network is well documented, but the irrigation network is not. Since water consumption for irrigation is ten times higher than for drinking water, data acquisition on the irrigation network is a major step in determining future scenarios. This paper first presents the instrumentation and simulation of the irrigation network using custom-designed IoT devices, which are coupled with a simulated digital clone to reduce the number of measuring locations required. The developed ad-hoc IoT devices are energy-autonomous and can measure flows and pressures using industrial sensors such as calorimetric water flow meters. 
Measurements are periodically transmitted using the LoRaWAN protocol over a dedicated infrastructure deployed in the municipality. The gathered values can then be visualized in real time on a dashboard, which also provides historical data for analysis. In a second phase, a digital clone of the irrigation network was modeled using EPANET, a software package for water distribution systems that performs extended-period simulations of flows and pressures in pressurized networks composed of reservoirs, pipes, junctions, and sinks. As preliminary work, only part of the irrigation network was modelled and validated by comparison with the measurements. The simulations are carried out by imposing the consumption of water at several locations. The validation is performed by comparing the simulated pressures at different nodes with the measured ones. An accuracy of +/- 15% is observed at most of the nodes, which is acceptable for the operator of the network and demonstrates the validity of the approach. Future steps will focus on deploying the measurement devices across the whole network and on modelling the network completely. Then, scenarios of future consumption will be investigated. Acknowledgment: The authors would like to thank the Swiss Federal Office for the Environment (FOEN) and the Swiss Federal Office for Agriculture (OFAG) for their financial support, and ALTIS for technical support; this project is part of the Swiss pilot program 'Adaptation aux changements climatiques'.
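The +/- 15% validation criterion described above amounts to a per-node relative-error check between simulated and measured pressures. The sketch below illustrates that check; the pressure values are hypothetical, not the measured Val de Bagnes data:

```python
def within_tolerance(simulated, measured, tol=0.15):
    """For each node, check |sim - meas| / meas <= tol (15% by
    default) and return the fraction of nodes passing."""
    ok = sum(abs(s - m) / m <= tol
             for s, m in zip(simulated, measured))
    return ok / len(measured)

# Hypothetical node pressures in bar: digital-clone output vs.
# the values reported by the LoRaWAN sensors.
sim  = [5.1, 4.3, 7.2, 3.9]
meas = [5.0, 4.0, 6.0, 4.5]
share_ok = within_tolerance(sim, meas)  # 3 of 4 nodes pass → 0.75
```

Nodes failing the check point to parts of the network where the model (pipe roughness, imposed consumption) needs refinement before the clone is extended to the whole network.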

Keywords: hydraulic digital clone, IoT water monitoring, LoRaWAN water measurements, EPANET, irrigation network

Procedia PDF Downloads 112
87 Enabling Rather Than Managing: Organizational and Cultural Innovation Mechanisms in a Heterarchical Organization

Authors: Sarah M. Schoellhammer, Stephen Gibb

Abstract:

Bureaucracy, in particular its core element, a formal and stable hierarchy of authority, is proving less and less appropriate under the conditions of today’s knowledge economy. Centralization and formalization have consistently been found to hinder innovation, undermining cross-functional collaboration, personal responsibility, and flexibility. With its focus on systematically planning, controlling, and monitoring the development of new or improved solutions for customers, even innovation management as a discipline is, to a significant extent, based on a mechanistic understanding of organizations. The most important drivers of innovation, human creativity and initiative, however, can be hindered more than supported by central elements of classic innovation management, such as predefined innovation strategies, rigid stage-gate processes, and decisions made in management gate meetings. Heterarchy, as an alternative network form of organization, is essentially characterized by its dynamic influence structures, whereby the greatest influence is allocated by the collective to the persons perceived as most competent on a given issue. Theoretical arguments that this non-hierarchical concept supports innovation better than bureaucracy have been backed by empirical research. Prior studies either focus on the structure and general functioning of non-hierarchical organizations or on their innovativeness, that is, innovation as an outcome. Complementing classic innovation management approaches, this work aims to shed light on how innovations are initiated and realized in heterarchies, in order to identify alternative solutions practiced under the conditions of the post-bureaucratic organization. Through an initial individual case study, which is part of a multiple-case project, the innovation practices of an innovative and highly heterarchical medium-sized company in the German fire engineering industry are investigated. 
In a pragmatic mixed-methods approach, media resonance, company documents, and workspace architecture are analyzed, in addition to qualitative interviews with the CEO and employees of the case company, as well as a quantitative survey aiming to characterize the company along five scaled dimensions of a heterarchy spectrum. The analysis reveals some similarities and striking differences to approaches suggested by classic innovation management. The studied heterarchy has no predefined innovation strategy guiding new product and service development. Instead, strategic direction is provided by the CEO, described as visionary and creative. Procedures for innovation are hardly formalized, with new product ideas being evaluated on the basis of gut feeling and flexible, rather general criteria. Because employees are still hesitant to take responsibility and make decisions, hierarchical influence remains prominent. Described as open-minded and collaborative, culture and leadership were found largely congruent with definitions of innovation culture. Overall, innovation efforts at the case company tend to be coordinated more through cultural than through formal organizational mechanisms. To better enable innovation in mainstream organizations, responsible practitioners are advised not to limit changes to reducing the central elements of the bureaucratic organization, formalization and centralization. The freedoms this entails need to be sustained through cultural coordination mechanisms, with personal initiative and responsibility by employees as well as common innovation-supportive norms and values. These allow diverse competencies, opinions, and activities to be integrated and, thus, guide innovation efforts.

Keywords: bureaucracy, heterarchy, innovation management, values

Procedia PDF Downloads 163
86 Familiarity with Intercultural Conflicts and Global Work Performance: Testing a Theory of Recognition Primed Decision-Making

Authors: Thomas Rockstuhl, Kok Yee Ng, Guido Gianasso, Soon Ang

Abstract:

Two meta-analyses show that intercultural experience is not related to intercultural adaptation or performance in international assignments. These findings have prompted calls for a deeper grounding of research on international experience in the phenomenon of global work. Two issues, in particular, may limit current understanding of the relationship between international experience and global work performance. First, intercultural experience is too broad a construct and may not sufficiently capture the essence of global work, which to a large part involves sensemaking and managing intercultural conflicts. Second, the psychological mechanisms through which intercultural experience affects performance remain under-explored, resulting in a poor understanding of how experience is translated into learning and performance outcomes. Drawing on recognition primed decision-making (RPD) research, the current study advances a cognitive processing model to highlight the importance of intercultural conflict familiarity. Compared to intercultural experience, intercultural conflict familiarity is a more targeted construct that captures individuals’ previous exposure to dealing with intercultural conflicts. Drawing on RPD theory, we argue that individuals’ intercultural conflict familiarity enhances their ability to make accurate judgments and generate effective responses when intercultural conflicts arise. In turn, the ability to make accurate situation judgments and effective situation responses is an important predictor of global work performance. A relocation program within a multinational enterprise provided the context to test these hypotheses using a time-lagged, multi-source field study. Participants were 165 employees (46% female; with an average of 5 years of global work experience) from 42 countries who relocated from country offices to regional offices as part of a global restructuring program.
Within the first two weeks of transfer to the regional office, employees completed measures of their familiarity with intercultural conflicts, cultural intelligence, cognitive ability, and demographic information. They also completed an intercultural situational judgment test (iSJT) to assess their situation judgment and situation response. The iSJT comprised four validated multimedia vignettes of challenging intercultural work conflicts and prompted employees to provide protocols of their situation judgment and situation response. Two research assistants, trained in intercultural management but blind to the study hypotheses, coded the quality of employees’ situation judgments and situation responses. Three months later, supervisors rated employees’ global work performance. Results using multilevel modeling (vignettes nested within employees) support the hypotheses that greater familiarity with intercultural conflicts is positively associated with better situation judgment, and that situation judgment mediates the effect of intercultural familiarity on situation response quality. Also, aggregated situation judgment and situation response quality both predicted supervisor-rated global work performance. Theoretically, our findings highlight the important but under-explored role of familiarity with intercultural conflicts, shifting attention from the general nature of international experience assessed in terms of the number and length of overseas assignments. Second, our cognitive approach premised on RPD theory offers a new theoretical lens for understanding the psychological mechanisms through which intercultural conflict familiarity affects global work performance. Third, and importantly, our study contributes to the global talent identification literature by demonstrating that the cognitive processes engaged in resolving intercultural conflicts predict actual performance in the global workplace.
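The mediation logic described above (familiarity → situation judgment → situation response quality) can be sketched as a regression-based indirect-effect estimate. The sketch below uses synthetic data; the variable names, effect sizes, and the simple OLS mediation approach (rather than the study's multilevel model) are illustrative assumptions, not the study's data or method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic illustration: familiarity (F) -> judgment quality (J) -> response quality (R)
F = rng.normal(size=n)
J = 0.6 * F + rng.normal(scale=0.5, size=n)            # a-path (hypothetical effect size)
R = 0.5 * J + 0.1 * F + rng.normal(scale=0.5, size=n)  # b-path plus a small direct effect

def ols(y, X):
    """Least-squares coefficients, with an intercept column appended."""
    X1 = np.column_stack([X, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

a = ols(J, F[:, None])[0]               # F -> J
b = ols(R, np.column_stack([J, F]))[0]  # J -> R, controlling for F
indirect = a * b                        # mediated (indirect) effect, approx. 0.6 * 0.5 = 0.3
```

In the actual study the corresponding estimates come from multilevel models with vignettes nested within employees; this flat-OLS version only illustrates how an indirect effect is composed from the two paths.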

Keywords: intercultural conflict familiarity, job performance, judgment and decision making, situational judgment test

Procedia PDF Downloads 145
85 Tailoring Workspaces for Generation Z: Harmonizing Teamwork, Privacy, and Connectivity

Authors: Maayan Nakash

Abstract:

The modern workplace is undergoing a revolution, with Generation Z (Gen-Z) at the forefront of this transformative shift. However, empirical investigations specifically targeting the workplace preferences of this generation remain limited. Through direct examination of their tendencies via a survey approach, this study offers vital insights for aligning organizational policies and practices. The results presented in this paper are part of a comprehensive study that explored Gen-Z's viewpoints on various aspects of the employment market that are likely to decisively influence the design of future work environments. Data were collected via an online survey distributed among a cohort of 461 individuals from Gen-Z, born between the mid-1990s and 2010, consisting of 241 males (52.28%) and 220 females (47.72%). Responses were gauged using Likert-scale statements that probed preferences for teamwork versus individual work, virtual versus personal interactions, and open versus private workspaces. Descriptive and inferential statistical analyses were conducted to pinpoint key patterns. We discovered that a high proportion of respondents (81.99%, n=378) preferred teamwork over individual work. Correspondingly, the data indicate strong support for recognizing team-based tasks as a tool contributing to personal and professional development. In terms of communication, the majority of respondents (61.38%) either disagreed (n=154) or only slightly agreed (n=129) with an exclusive reliance on virtual interactions with their organizational peers. This finding underscores that, despite technological progress, digital natives place significant value on physical interaction and non-mediated communication. Moreover, respondents also value a quiet and private work environment, clearly preferring it over open and shared workspaces.
Given that Gen-Z does not necessarily experience high levels of stress in workplace social settings, this preference can be attributed to a desire for a space that allows focused engagement with work tasks. A one-sample chi-square test was performed on the observed distribution of respondents' reactions to each examined statement. The results showed statistically significant deviations from a uniform distribution (p<.001), indicating that the response patterns did not occur by chance and that there were meaningful tendencies in the participants' responses. The findings expand the theoretical knowledge base on human resources in the dynamics of a multi-generational workforce, illuminating the values, approaches, and expectations of Gen-Z. Practically, the results may lead organizations to equip themselves with tools to create policies tailored to Gen-Z's workspace and social needs, which could foster a fertile environment and aid in attracting and retaining young talent. Future studies might investigate potential mitigating factors, such as cultural influences or individual personality traits, which could further clarify the nuances in Gen-Z's work-style preferences. Longitudinal studies tracking changes in these preferences as the generation matures may also yield valuable insights. Ultimately, as the landscape of the workforce continues to evolve, ongoing investigation into the unique characteristics and aspirations of emerging generations remains essential for nurturing harmonious, productive, and future-ready organizational environments.
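The one-sample chi-square test described above can be reproduced in a few lines. The sketch below uses the teamwork-preference split reported in the abstract (378 of 461 respondents) as a two-category example; treating it as a simple prefer/not-prefer dichotomy is an illustrative assumption about how the responses were binned.

```python
import math

def chi_square_uniform(observed):
    """One-sample chi-square statistic against a uniform expected distribution."""
    n = sum(observed)
    expected = n / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)

# Teamwork preference vs. not, from the abstract: 378 of 461 respondents
stat = chi_square_uniform([378, 83])

# For k - 1 = 1 degree of freedom, the p-value has the closed form erfc(sqrt(stat / 2))
p = math.erfc(math.sqrt(stat / 2))

# The critical value for df = 1 at alpha = .001 is 10.828; stat >> 10.828 implies p < .001
```

A deviation this large from the uniform expectation (230.5 per category) yields a statistic near 189, far beyond the .001 critical value, matching the reported p<.001 pattern.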

Keywords: workplace, future of work, generation Z, digital natives, human resources management

Procedia PDF Downloads 18
84 Hydroinformatics of Smart Cities: Real-Time Water Quality Prediction Model Using a Hybrid Approach

Authors: Elisa Coraggio, Dawei Han, Weiru Liu, Theo Tryfonas

Abstract:

Water is one of the most important resources for human society. The world is currently undergoing a wave of urban growth, and pollution problems have a great impact. Monitoring water quality is a key task for the future of the environment and the human species. In recent times, researchers using Smart City technologies have been trying to mitigate the problems generated by population growth in urban areas. The availability of huge amounts of data collected by a pervasive urban IoT can increase the transparency of decision making. Several services have already been implemented in Smart Cities, and more will be involved in the future. Water quality monitoring can successfully be implemented in the urban IoT. The combination of water quality sensors, cloud computing, smart city infrastructure, and IoT technology can lead to a bright future for environmental monitoring. In past decades, much effort was put into monitoring and predicting water quality using traditional approaches based on manual collection and laboratory-based analysis, which are slow and laborious. The present study proposes a methodology for implementing a water quality prediction model using artificial intelligence techniques and compares the results obtained with different algorithms. Furthermore, a 3D numerical model will be created using the software D-Water Quality, and simulation results will be used as a training dataset for the artificial intelligence algorithm. This study derives the methodology and demonstrates its implementation based on information and data collected at the floating harbour in the city of Bristol (UK). The city of Bristol is blessed with the Bristol-Is-Open infrastructure that includes a Wi-Fi network and virtual machines. It was also named the UK's smartest city in 2017.
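As a minimal illustration of the data-driven prediction step, the sketch below fits a least-squares model mapping sensor inputs to a water-quality variable on synthetic data. The variables, coefficients, and the use of plain linear regression (rather than the study's AI algorithms or the D-Water Quality simulation output) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Synthetic stand-in for sensor/simulation data: temperature (degC) and turbidity (NTU)
temp = rng.uniform(5, 25, n)
turbidity = rng.uniform(0, 10, n)

# Hypothetical target: dissolved oxygen (mg/L) falling with temperature and turbidity
do = 12.0 - 0.2 * temp - 0.3 * turbidity + rng.normal(scale=0.2, size=n)

# Fit a linear predictor by least squares (features plus an intercept column)
X = np.column_stack([temp, turbidity, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, do, rcond=None)

def predict(t, ntu):
    """Predict dissolved oxygen from new sensor readings."""
    return coef[0] * t + coef[1] * ntu + coef[2]
```

In a real deployment, the training pairs would come from the 3D numerical model and live IoT sensor feeds, and the linear model would be replaced by the compared AI algorithms.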

Keywords: artificial intelligence, hydroinformatics, numerical modelling, smart cities, water quality

Procedia PDF Downloads 155
83 Study of Objectivity, Reliability and Validity of Pedagogical Diagnostic Parameters Introduced in the Framework of a Specific Research

Authors: Emiliya Tsankova, Genoveva Zlateva, Violeta Kostadinova

Abstract:

The challenges modern education faces undoubtedly require reforms and innovations aimed at the reconceptualization of existing educational strategies, the introduction of new concepts and novel techniques and technologies related to recasting the aims of education, and the remodeling of the content and methodology of education that would guarantee the alignment of our education with basic European values. Aim: The aim of the current research is the development of a didactic technology for assessing the applicability and efficacy of game techniques in pedagogic practice, calibrated to specific content and the age specificity of learners, as well as for evaluating the efficacy of such approaches for facilitating the acquisition of biological knowledge at a higher theoretical level. Results: In this research, we examine the objectivity, reliability, and validity of two newly introduced diagnostic parameters for assessing the durability of acquired knowledge. A pedagogic experiment has been carried out to verify the hypothesis that the introduction of game techniques in biological education leads to an increase in the quantity, quality, and durability of the knowledge acquired by students. For the purposes of monitoring the effect of the game-based pedagogical technique on the durability of acquired knowledge, a test-based examination on the same content was administered to students from a control group (CG) and students from an experimental group (EG) after a six-month period. The analysis is based on: 1. A study of the statistical significance of the differences between the tests for the CG and the EG applied after a six-month period; this, however, is not indicative of the presence or absence of a marked effect of the applied pedagogic technique in cases where the entry levels of the two groups differ.
2. For a more reliable comparison, independent of the entry level of each group, a second parameter, an 'indicator of the efficacy of game techniques for the durability of knowledge', has been used to assess the achievement results and the durability of this methodology of education. The monitoring of the studied parameters in their dynamic unfolding across different age groups of learners unquestionably reveals a positive effect of the introduction of game techniques in education with respect to the durability and permanence of acquired knowledge. Methods: In the current research, the following battery of research and diagnostic methods has been employed: theoretical analysis and synthesis; an actual pedagogical experiment; a questionnaire; didactic testing; and mathematical and statistical methods. The data obtained have been used for the qualitative and quantitative analysis of the results, which reflect the efficacy of the applied methodology. Conclusion: The didactic model of the parameters researched in the framework of a specific study of pedagogic diagnostics is based on a general, interdisciplinary approach. The enhanced durability of acquired knowledge indicates the transition of that knowledge from short-term into long-term memory in pupils and students, which justifies the conclusion that didactic games have beneficial effects on learners' cognitive skills. The innovations in teaching enhance motivation, creativity, and independent cognitive activity in the process of acquiring the material taught. The innovative methods allow for untraditional means of assessing the level of knowledge acquisition. This makes possible the timely discovery of knowledge gaps and the introduction of compensatory techniques, which in turn leads to deeper and more durable acquisition of knowledge.

Keywords: objectivity, reliability and validity of pedagogical diagnostic parameters introduced in the framework of a specific research

Procedia PDF Downloads 365
82 Discovering Causal Structure from Observations: The Relationships between Technophile Attitude, Users Value and Use Intention of Mobility Management Travel App

Authors: Aliasghar Mehdizadeh Dastjerdi, Francisco Camara Pereira

Abstract:

The increasing complexity of and demand for transport services strains transportation systems, especially in urban areas with limited possibilities for building new infrastructure. The solution to this challenge requires changes in travel behavior. One of the proposed means to induce such change is multimodal travel apps. This paper describes a study of the intention to use a real-time multimodal travel app aimed at motivating travel behavior change in the Greater Copenhagen Region (Denmark) toward promoting sustainable transport options. The proposed app is a multi-faceted smartphone app including both travel information and persuasive strategies such as health and environmental feedback, tailored travel options, self-monitoring, tunneling users toward green behavior, social networking, nudging, and gamification elements. The prospect of mobility-management travel apps stimulating sustainable mobility rests not only on the original and proper employment of behavior change strategies, but also on explicitly anchoring them in established theoretical constructs from behavioral theories. The theoretical foundation is important because it positively and significantly influences the effectiveness of the system. However, there is a gap in current knowledge regarding the study of mobility-management travel apps with support in behavioral theories, which should be explored further. This study addresses this gap through a social cognitive theory-based examination. However, compared to conventional methods in technology adoption research, this study adopts a reverse approach in which the associations between theoretical constructs are explored by the Max-Min Hill-Climbing (MMHC) algorithm, a hybrid causal discovery method. A technology-use preference survey was designed to collect data.
The survey elicited different groups of variables, including (1) three groups of users' motives for using the app: gain motives (e.g., saving travel time and cost), hedonic motives (e.g., enjoyment), and normative motives (e.g., less travel-related CO2 production); (2) technology-related self-concept (i.e., technophile attitude); and (3) use intention of the travel app. The questionnaire items served as input for causal structure discovery, learning the causal structure of the data. Causal discovery from observational data is a critical challenge with applications in many research fields. The estimated causal structure shows that the two constructs of gain motives and technophilia have a causal effect on adoption intention. Likewise, there is a causal relationship from technophilia to both gain and hedonic motives. In line with the findings of prior studies, this highlights the importance of the functional value of the travel app as well as technology self-concept as two important variables for adoption intention. Furthermore, the results indicate the effect of technophile attitude on developing gain and hedonic motives. The causal structure shows hierarchical associations between the three groups of users' motives. They can be explained by the 'frustration-regression' principle of Alderfer's ERG (Existence, Relatedness and Growth) theory of needs, meaning that when a higher-level need remains unfulfilled, a person may regress to lower-level needs that appear easier to satisfy. To conclude, this study shows the capability of causal discovery methods to learn the causal structure of a theoretical model and, accordingly, to interpret established associations.
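The score-based hill-climbing search at the core of MMHC's second phase can be sketched as follows. This toy version uses a Gaussian BIC score and only single-edge additions and deletions (no constraint-based skeleton phase, no edge reversals), and the two-variable synthetic data are illustrative; it sketches the search idea, not the study's MMHC implementation.

```python
import numpy as np

def bic_node(data, node, parents):
    """Gaussian BIC of one node given its parent set, via least squares."""
    y = data[:, node]
    n = len(y)
    X = np.column_stack([data[:, sorted(parents)], np.ones(n)]) if parents else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = max(resid @ resid / n, 1e-12)
    ll = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return ll - 0.5 * X.shape[1] * np.log(n)

def creates_cycle(parents, child, new_parent):
    """Would adding new_parent -> child close a directed cycle?"""
    stack, seen = [new_parent], set()
    while stack:  # walk up through the ancestors of new_parent
        v = stack.pop()
        if v == child:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(parents[v])
    return False

def hill_climb(data):
    """Greedy DAG search over single-edge additions and deletions."""
    d = data.shape[1]
    parents = {v: set() for v in range(d)}
    scores = {v: bic_node(data, v, parents[v]) for v in range(d)}
    while True:
        best_gain, best_move = 1e-9, None
        for child in range(d):
            for p in range(d):
                if p == child:
                    continue
                if p in parents[child]:
                    cand = parents[child] - {p}   # try deleting p -> child
                elif not creates_cycle(parents, child, p):
                    cand = parents[child] | {p}   # try adding p -> child
                else:
                    continue
                gain = bic_node(data, child, cand) - scores[child]
                if gain > best_gain:
                    best_gain, best_move = gain, (child, cand)
        if best_move is None:
            return parents
        child, cand = best_move
        parents[child] = cand
        scores[child] = bic_node(data, child, parents[child])
```

Because the BIC score decomposes over nodes, only the moved edge's child needs rescoring at each step; full MMHC additionally restricts candidate parents using conditional-independence tests before this search runs.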

Keywords: travel app, behavior change, persuasive technology, travel information, causality

Procedia PDF Downloads 113
81 Implementation of Building Information Modelling to Monitor, Assess, and Control the Indoor Environmental Quality of Higher Education Buildings

Authors: Mukhtar Maigari

Abstract:

The landscape of Higher Education (HE) institutions, especially following the COVID-19 pandemic, necessitates advanced approaches to managing Indoor Environmental Quality (IEQ), which is crucial for the comfort, health, and productivity of students and staff. This study investigates the application of Building Information Modelling (BIM) as a multifaceted tool for monitoring, assessing, and controlling IEQ in HE buildings, aiming to bridge the gap between traditional management practices and the innovative capabilities of BIM. Central to the study is a comprehensive literature review, which lays the foundation by examining current knowledge and technological advancements in both IEQ and BIM. This review sets the stage for a deeper investigation into the practical application of BIM in IEQ management. The methodology consists of Post-Occupancy Evaluation (POE), which encompasses physical monitoring, questionnaire surveys, and interviews under the umbrella of case studies. The physical data collection focuses on vital IEQ parameters such as temperature, humidity, and CO2 levels, conducted using equipment including data loggers to ensure accurate data. Complementing this, questionnaire surveys gather perceptions and satisfaction levels from students, providing valuable insights into the subjective aspects of IEQ. The interview component, targeting facilities management teams, offers an in-depth perspective on IEQ management challenges and strategies. The research then develops a conceptual BIM-based framework, informed by the findings from the case studies and empirical data. This framework is designed to demonstrate the critical functions necessary for effective IEQ monitoring, assessment, control, and automation, with real-time data handling capabilities. The framework in turn informs the development and testing of a BIM-based prototype tool.
This prototype leverages software such as Autodesk Revit with its visual programming tool, Dynamo, and an Arduino-based sensor network, thereby allowing a real-time flow of IEQ data for monitoring, control, and even automation. By harnessing the capabilities of BIM technology, the study presents a forward-thinking approach that aligns with current sustainability and wellness goals, particularly vital in the post-COVID-19 era. The integration of BIM in IEQ management promises not only to enhance the health, comfort, and energy efficiency of educational environments but also to transform them into more conducive spaces for teaching and learning. Furthermore, this research could influence the future of HE buildings by prompting universities and government bodies to re-evaluate and improve teaching and learning environments. It demonstrates how the synergy between IEQ and BIM can empower stakeholders to monitor IEQ conditions more effectively and make informed decisions in real time. Moreover, the developed framework has broader applications as well; it can serve as a tool for other sustainability assessments, like energy analysis in HE buildings, leveraging measured data synchronized with the BIM model. In conclusion, this study bridges the gap between theoretical research and real-world application by demonstrating in practice how advanced technologies like BIM can be effectively integrated to enhance environmental quality in educational institutions.
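The monitoring-and-alert logic such a prototype implements can be sketched in a few lines. The thresholds below (e.g., 1000 ppm CO2, a 19-26 °C comfort band) are common indoor-air guideline values used here as illustrative assumptions, and the reading format is hypothetical rather than the study's Dynamo/Arduino interface.

```python
# Hypothetical IEQ thresholds: (lower bound, upper bound); None = unbounded on that side
THRESHOLDS = {
    "co2_ppm": (None, 1000.0),   # upper bound only
    "temp_c": (19.0, 26.0),      # comfort band
    "humidity_pct": (30.0, 60.0),
}

def check_reading(reading):
    """Return a list of (parameter, value, bound) alerts for out-of-range values."""
    alerts = []
    for key, (low, high) in THRESHOLDS.items():
        value = reading.get(key)
        if value is None:
            continue
        if low is not None and value < low:
            alerts.append((key, value, f"below {low}"))
        if high is not None and value > high:
            alerts.append((key, value, f"above {high}"))
    return alerts

# Example: a classroom reading with elevated CO2 but comfortable temperature and humidity
alerts = check_reading({"co2_ppm": 1450.0, "temp_c": 22.5, "humidity_pct": 45.0})
```

In the prototype described above, readings would arrive from the Arduino sensor network and alerts would be surfaced against the corresponding spaces in the Revit model via Dynamo.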

Keywords: BIM, POE, IEQ, HE-buildings

Procedia PDF Downloads 22
80 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure

Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer

Abstract:

The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed based on video data, aiming to uncover so-called 'multimodal gestalts', patterns of linguistic and embodied conduct that recur in specific sequential positions and are employed for specific purposes. Multimodal analyses (and other disciplines using video) have so far depended on time- and resource-intensive processes of manually transcribing each component from video materials. Automating these tasks requires advanced programming skills, which are often outside the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data, which are suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated in one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data.
The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the ongoing instance of VIAN-DH, we focus on gesture extraction (pointing gestures, in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format and enable the import of the main existing formats of annotated video data and the export to other formats used in the field, while integrating different data source formats so that they can be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable parallel search across many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (and, by extension, to other fields using video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented alongside the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken, and grammatical information from videos, to correlate those different levels, and to perform queries and analyses.
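The parallel, multi-level querying described above ultimately reduces to finding temporal overlaps between annotation tiers. A minimal sketch with hypothetical token and gesture tiers (time spans in seconds) might look like the following; the data model is an assumption for illustration, not VIAN-DH's actual format.

```python
# Hypothetical annotation tiers: lists of (start_s, end_s, label) spans
tokens = [(0.0, 0.4, "look"), (0.4, 0.9, "over"), (0.9, 1.5, "there")]
gestures = [(0.8, 1.6, "pointing")]

def overlapping(tier_a, tier_b):
    """Pairs of annotations from two tiers whose time spans overlap."""
    return [
        (a, b)
        for a in tier_a
        for b in tier_b
        if a[0] < b[1] and b[0] < a[1]  # standard interval-overlap test
    ]

# Which spoken tokens co-occur with a pointing gesture?
hits = overlapping(tokens, gestures)
```

Queries over many tiers (tokens, part-of-speech tags, gaze, gesture) compose the same overlap test pairwise, which is what makes combined token-level and chronological search possible.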

Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition

Procedia PDF Downloads 75
79 Determination of Aquifer Geometry Using Geophysical Methods: A Case Study from Sidi Bouzid Basin, Central Tunisia

Authors: Dhekra Khazri, Hakim Gabtni

Abstract:

Because of the overexploitation of the Sidi Bouzid water table, this study aims at integrating geophysical methods to determine aquifer geometry, assessing the aquifers' geological situation and geophysical characteristics. In highly tectonic zones controlled by Atlassic structural features with NE-SW major directions (central Tunisia), however, the Bouguer gravimetric responses of some areas can be so dominated by the regional structural tendency that they go unidentified or are defectively interpreted, as in the case of the Sidi Bouzid basin. This issue required the elaboration of a residual gravity anomaly isolating the Sidi Bouzid basin gravity response (ranging between -8 and -14 mGal), which is crucial for characterizing its aquifer geometry. Several gravity techniques helped construct the Sidi Bouzid basin's residual gravity anomaly, such as upward continuation compared with polynomial regression trends, and power spectrum analysis detecting deep basement sources (3 km), intermediate sources (2 km), and shallow sources (1 km). A 3D Euler deconvolution was also performed, detecting the deepest accidents trending NE-SW, N-S, and E-W, with depth values reaching 5500 m, and delineating the main outcropping structures of the study area. Further gravity treatments highlighted the subsurface geometry and structural features of the Sidi Bouzid basin via horizontal and vertical gradients, as well as filters based on them, such as the tilt angle and the source edge detector, which locate rooted edges or peaks in potential field data and detected a new E-W lineament compartmentalizing the Sidi Bouzid gutter into two residual-anomaly domains of unequal subsidence. This subsurface morphology is also detected by the available 2D seismic reflection sections, which define the Sidi Bouzid basin as a deep gutter within a tectonic set of negative flower structures and collapsed and tilted blocks.
Furthermore, these structural features were confirmed by a forward gravity modeling process over several modeled residual gravity profiles crossing the main area. The Sidi Bouzid basin (central Tunisia) is also of great interest because of the unknown total thickness and undefined substratum of its siliciclastic Tertiary package, and because of its aquifers' unbounded structural subsurface features and deep accidents. The combination of geological, hydrogeological, and geophysical methods is therefore essential. An integration of geophysical methods based on a gravity survey supporting the available seismic data through forward gravity modeling enhanced the definition of the lateral and vertical extent of the basin's complex sedimentary fill via 3D gravity models, improved depth estimation through a depth-to-basement modeling approach, and provided 3D isochronous seismic mapping of the basin's Tertiary complex, refining its geostructural schema. A subsurface basin geomorphology map, based on matching the basin's residual gravity map with the calculated theoretical signature map, was also displayed over the modeled residual gravity profiles. An ultimate multidisciplinary geophysical study of the Sidi Bouzid basin aquifers could be accomplished via an aeromagnetic survey and 4D microgravity reservoir monitoring, offering temporal tracking of the target aquifer's subsurface fluid dynamics and thereby enhancing and rationalizing future groundwater exploitation in this arid area of central Tunisia.
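The regional-residual separation at the heart of this workflow can be sketched as fitting a low-order polynomial trend to the Bouguer anomaly and subtracting it. The synthetic grid, the planar (first-order) trend, and the Gaussian basin-like low below are illustrative assumptions, not the survey data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic Bouguer anomaly (mGal): a regional planar trend plus a local basin-like low
x, y = np.meshgrid(np.linspace(0, 50, 60), np.linspace(0, 50, 60))  # km
regional = -0.3 * x + 0.1 * y - 5.0
local = -6.0 * np.exp(-((x - 25) ** 2 + (y - 25) ** 2) / 80.0)
bouguer = regional + local + rng.normal(scale=0.05, size=x.shape)

# Fit a first-order (planar) regional trend by least squares and subtract it
A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
coef, *_ = np.linalg.lstsq(A, bouguer.ravel(), rcond=None)
trend = (A @ coef).reshape(x.shape)
residual = bouguer - trend  # the residual anomaly isolates the local (basin) response
```

Higher polynomial orders, or upward continuation of the field, give alternative regional estimates; comparing them (as the study does) guards against the regional trend absorbing part of the basin signal.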

Keywords: aquifer geometry, geophysics, 3D gravity modeling, improved depths, source edge detector

Procedia PDF Downloads 256
78 Trajectory Optimization for Autonomous Deep Space Missions

Authors: Anne Schattel, Mitja Echim, Christof Büskens

Abstract:

Trajectory planning for deep space missions has recently become a topic of great interest. Flying to space objects such as asteroids serves two main goals: one is to find rare earth elements, the other to gain scientific knowledge about the origin of the world. Due to the enormous spatial distances, such explorer missions have to be performed unmanned and autonomously. The mathematical fields of optimization and optimal control can be used to realize autonomous missions while conserving resources and making them safer. The resulting algorithms may also be applied to other, earth-bound applications such as deep-sea navigation and autonomous driving. The project KaNaRiA ('Kognitionsbasierte, autonome Navigation am Beispiel des Ressourcenabbaus im All') investigates the possibilities of cognitive autonomous navigation using the example of an asteroid mining mission, including the cruise phase and approach as well as the asteroid rendezvous, landing, and surface exploration. To verify and test all methods, an interactive, real-time capable simulation using virtual reality is being developed within KaNaRiA. This paper focuses on the specific challenge of guidance during the cruise phase of the spacecraft, i.e., trajectory optimization and optimal control, including first solutions and results. In principle, there exist two ways to solve optimal control problems (OCPs): the so-called indirect and direct methods. Indirect methods have been studied for several decades, and their use requires advanced skills in optimal control theory. The main idea of direct approaches, also known as transcription techniques, is to transform the infinite-dimensional OCP into a finite-dimensional nonlinear optimization problem (NLP) via discretization of states and controls. These direct methods are applied in this paper. The resulting high-dimensional NLP with constraints can be solved efficiently by special NLP methods, e.g.,
sequential quadratic programming (SQP) or interior point (IP) methods. The movement of the spacecraft under the gravitational influences of the sun and other planets, as well as the thrust commands, is described through ordinary differential equations (ODEs). Competing mission aims, such as short flight times and low energy consumption, are considered by using a multi-criteria objective function. The resulting nonlinear high-dimensional optimization problems are solved using the software package WORHP ('We Optimize Really Huge Problems'), a routine combining SQP at the outer level with IP methods for the underlying quadratic subproblems. An application-adapted model of impulsive thrusting, as well as a model of an electrically powered spacecraft propulsion system, is introduced. Different priorities and trade-offs of a space mission regarding energy cost and flight time are investigated by choosing different weighting factors for the multi-criteria objective function. Varying mission trajectories are analyzed and compared, both aiming at different destination asteroids and using different propulsion systems. For the transcription, the robust method of full discretization is used. The results strengthen the need for trajectory optimization as a foundation for autonomous decision making during deep space missions. At the same time, they show the enormous increase in possible flight maneuvers gained by being able to consider different and opposing mission objectives.
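The core of the direct approach, namely discretizing states and controls, turning the ODE dynamics into equality (defect) constraints, and weighting competing objectives into one scalar function, can be sketched on a toy problem. The sketch below is not the KaNaRiA/WORHP setup: it applies SciPy's SLSQP solver to a one-dimensional double integrator with full discretization and explicit Euler defects, purely to illustrate the structure of the resulting NLP.

```python
import numpy as np
from scipy.optimize import minimize

# Full discretization of a toy 1-D rest-to-rest transfer (double integrator):
# decision vector z = [x_0..x_N, v_0..v_N, u_0..u_{N-1}]
N, T = 20, 1.0
h = T / N  # step size of the time grid

def unpack(z):
    return z[:N + 1], z[N + 1:2 * N + 2], z[2 * N + 2:]

def objective(z, w_energy=1.0):
    # Competing mission aims enter as a weighted sum; here only
    # control energy is penalized (flight time is fixed in this toy).
    _, _, u = unpack(z)
    return w_energy * h * np.sum(u ** 2)

def defects(z):
    # Euler-discretized dynamics x' = v, v' = u become equality constraints
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - h * v[:-1]
    dv = v[1:] - v[:-1] - h * u
    return np.concatenate([dx, dv])

def boundary(z):
    x, v, _ = unpack(z)
    return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])  # start/target states

z0 = np.zeros(3 * N + 2)  # trivial initial guess
sol = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}])
```

A WORHP-style solver would exploit the sparsity of the defect Jacobian, which is what makes full discretization tractable for "really huge" problems; SLSQP treats it densely and only scales to small instances like this one.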

Keywords: deep space navigation, guidance, multi-objective, non-linear optimization, optimal control, trajectory planning

Procedia PDF Downloads 383
77 Nonlinear Homogenized Continuum Approach for Determining Peak Horizontal Floor Acceleration of Old Masonry Buildings

Authors: Andreas Rudisch, Ralf Lampert, Andreas Kolbitsch

Abstract:

It is a well-known fact among the engineering community that earthquakes with comparatively low magnitudes can cause serious damage to nonstructural components (NSCs) of buildings, even when the supporting structure performs relatively well. Past research focused mainly on NSCs of nuclear power plants and industrial plants. Particular attention should also be given to architectural façade elements of old masonry buildings (e.g., ornamental figures, balustrades, vases), which are very vulnerable under seismic excitation. Large numbers of these historical nonstructural components (HiNSCs) can be found in highly frequented historical city centers, and in the event of failure, they pose a significant danger to persons. In order to estimate the vulnerability of acceleration-sensitive HiNSCs, the peak horizontal floor acceleration (PHFA) is used. The PHFA depends on the dynamic characteristics of the building, the ground excitation, and induced nonlinearities. Consequently, the PHFA cannot be generalized as a simple function of height. In the present research work, an extensive case study was conducted to investigate the influence of induced nonlinearity on the PHFA of old masonry buildings. Probabilistic nonlinear FE time-history analyses considering three different hazard levels were performed. A set of eighteen synthetically generated ground motions was used as input to the structure models. An elastoplastic macro-model (multiPlas) for nonlinear homogenized continuum FE calculation was calibrated on multiple scales and applied, taking specific failure mechanisms of masonry into account. The macro-model was calibrated according to the results of specific laboratory and cyclic in situ shear tests. The nonlinear macro-model is based on the concept of multi-surface rate-independent plasticity. Material damage or crack formation is detected by reducing the initial strength after failure due to shear or tensile stress.
As a result, shear forces can only be transmitted to a limited extent by friction once cracking begins. The tensile strength is reduced to zero. The first goal of the calibration was consistency of the load-displacement curves between experiment and simulation. The calibrated macro-model matches well with regard to the initial stiffness and the maximum horizontal load. Another goal was the correct reproduction of the observed crack pattern and the plastic strain activities. Again, the macro-model proved to work well and shows very good correlation. The results of the case study show that there is significant scatter in the absolute distribution of the PHFA between the applied ground excitations. An absolute distribution along the normalized building height was determined within the framework of probability theory. It can be observed that the extent of nonlinear behavior varies across the three hazard levels. Due to the detailed scope of the present research work, a robust comparison with code recommendations and simplified PHFA distributions is possible. The chosen methodology offers a way to determine the distribution of the PHFA along the building height of old masonry structures. This permits a proper hazard assessment of HiNSCs under seismic loads.
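The PHFA extraction itself is straightforward once floor acceleration histories are available from the time-history analyses. The sketch below uses plain Python with made-up numbers (in practice, the floor histories would come from the FE runs) to show how a PHFA profile along the normalized building height, and its mean over a set of ground motion records, can be computed.

```python
import statistics

def phfa_profile(floor_histories, floor_heights, building_height):
    """One analysis run: peak absolute horizontal acceleration per floor,
    reported against the normalized building height."""
    return [(h / building_height, max(abs(a) for a in acc))
            for h, acc in zip(floor_heights, floor_histories)]

def mean_phfa(profiles):
    """Average the PHFA per floor over several ground motion records,
    summarizing the scatter between excitations."""
    per_floor = zip(*[[phfa for _, phfa in p] for p in profiles])
    return [statistics.mean(values) for values in per_floor]
```

A probabilistic treatment like the one in the study would report percentiles rather than just the mean, but the per-record peak extraction is the same.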

Keywords: nonlinear macro-model, nonstructural components, time-history analysis, unreinforced masonry

Procedia PDF Downloads 140
76 Confirming the Factors of Professional Readiness in Athletic Training

Authors: Philip A. Szlosek, M. Susan Guyer, Mary G. Barnum, Elizabeth M. Mullin

Abstract:

In the United States, athletic training is a healthcare profession that encompasses the prevention, examination, diagnosis, treatment, and rehabilitation of injuries and medical conditions. Athletic trainers work under the direction of, or in collaboration with, a physician and are recognized by the American Medical Association as allied healthcare professionals. Internationally, this profession is often known as athletic therapy. As healthcare professionals, athletic trainers must be prepared for autonomous practice immediately after graduation. However, new athletic trainers have been shown to have clinical areas of strength and weakness. To better assess professional readiness and improve the preparedness of new athletic trainers, the factors of athletic training professional readiness must be defined. Limited research exists defining the holistic aspects of professional readiness needed by athletic trainers. Confirming the factors of professional readiness in athletic training could enhance the professional preparation of athletic trainers and result in more highly prepared new professionals. The objective of this study was to further explore and confirm the factors of professional readiness in athletic training. The authors used a qualitative design based in grounded theory. Participants included athletic trainers with more than 24 months of experience, drawn from a variety of work settings in each district of the National Athletic Trainers' Association. Participants completed the demographic questionnaire electronically using Qualtrics Survey Software (Provo, UT). After completing the demographic questionnaire, 20 participants were selected to complete one-on-one interviews using GoToMeeting audiovisual web conferencing software. IBM Statistical Package for the Social Sciences (SPSS, v. 21.0) was used to calculate descriptive statistics for participant demographics.
The first author transcribed all interviews verbatim and used a grounded theory approach during qualitative data analysis. Data were analyzed using constant comparative analysis and open and axial coding. Trustworthiness was established using reflexivity, member checks, and peer reviews. Analysis revealed four overarching themes: management, interpersonal relations, clinical decision-making, and confidence. Management was categorized as athletic training services not involving direct patient care and was divided into three subthemes: administration skills, advocacy, and time management. Interpersonal relations was categorized as the need and ability of the athletic trainer to interact properly with others and was divided into three subthemes: personality traits, communication, and collaborative practice. Clinical decision-making was categorized as the skills and attributes required by the athletic trainer when making clinical decisions related to patient care and was divided into three subthemes: clinical skills, continuing education, and reflective practice. The final theme was confidence. Participants discussed the importance of confidence regarding relationship building, clinical and administrative duties, and clinical decision-making. Overall, participants explained the value of a well-rounded athletic trainer and emphasized that athletic trainers need communication and organizational skills and the ability to collaborate, and must value self-reflection and continuing education in addition to having clinical expertise. Future research should finalize a comprehensive model of professional readiness for athletic training, develop a holistic assessment instrument for athletic training professional readiness, and explore the preparedness of new athletic trainers.

Keywords: autonomous practice, newly certified athletic trainer, preparedness for professional practice, transition to practice skills

Procedia PDF Downloads 114
75 Addressing Microbial Contamination in East Hararghe, Oromia, Ethiopia: Improving Water Sanitation Infrastructure and Promoting Safe Water Practices for Enhanced Food Safety

Authors: Tuji Jemal Ahmed, Hussen Beker Yusuf

Abstract:

Food safety is a major concern worldwide, with microbial contamination being one of the leading causes of foodborne illnesses. In Ethiopia, untreated groundwater used as drinking water is a primary source of microbial contamination, leading to significant health risks. East Hararghe, Oromia, is one of the regions in Ethiopia affected by this problem. This paper provides an overview of the impact of untreated groundwater on human health in Haramaya Rural District, East Hararghe, and highlights the urgent need for sustained efforts to address the water sanitation supply problem. The use of untreated groundwater for drinking and household purposes in Haramaya Rural District is prevalent, leading to high rates of waterborne illnesses such as diarrhea, typhoid fever, and cholera. These illnesses cause substantial morbidity and mortality, especially among vulnerable populations such as children and the elderly. In addition to these direct health impacts, waterborne illnesses also have indirect impacts, such as reduced productivity and increased healthcare costs. Groundwater sources are susceptible to microbial contamination due to the infiltration of surface water, human and animal waste, and agricultural runoff. In Haramaya Rural District, poor water management practices, inadequate sanitation facilities, and limited access to clean water sources contribute to the prevalence of untreated groundwater as a primary source of drinking water. These underlying causes of microbial contamination highlight the need for improved water sanitation infrastructure, including better access to safe drinking water sources and the implementation of effective treatment methods. The paper emphasizes the need for regular water quality monitoring, especially of untreated groundwater sources, to ensure safe drinking water for the population.
The implementation of effective preventive measures, such as the use of effective disinfectants, proper waste disposal methods, and regular water quality monitoring, is crucial to reducing the risk of contamination and improving public health outcomes in the region. Community education and awareness-raising campaigns can also play a critical role in promoting safe water practices and reducing the risk of contamination. Such campaigns can include educating the population on the importance of boiling water before drinking, the use of water filters, and proper sanitation practices. In conclusion, the use of untreated groundwater as a primary source of drinking water in East Hararghe, Oromia, Ethiopia, has significant impacts on human health, leading to widespread waterborne illnesses and posing a serious threat to public health. Sustained efforts are urgently needed to address the root causes of contamination, such as poor sanitation and hygiene practices, improper waste management, and the water sanitation supply problem. These efforts should include effective preventive measures and community-based education programs. A comprehensive approach that combines community-based water management systems, point-of-use water treatment methods, and awareness-raising campaigns can reduce the incidence of microbial contamination and ultimately improve public health outcomes in the region.

Keywords: food safety, health risks, microbial contamination, untreated groundwater

Procedia PDF Downloads 70
74 An E-Maintenance IoT Sensor Node Designed for Fleets of Diverse Heavy-Duty Vehicles

Authors: George Charkoftakis, Panagiotis Liosatos, Nicolas-Alexander Tatlas, Dimitrios Goustouridis, Stelios M. Potirakis

Abstract:

E-maintenance is a relatively new concept, generally referring to maintenance management by monitoring assets over the Internet. One of the key links in the chain of an e-maintenance system is data acquisition and transmission. In the case of a fleet of heavy-duty vehicles, where the main challenge is the diversity of the vehicles and of the vehicle-embedded self-diagnostic/reporting technologies, the design of the data acquisition and transmission unit is a demanding task. This becomes clear if one takes into account that a heavy-duty vehicle fleet may range from vehicles with only a limited number of analog sensors monitored by dashboard light indicators and gauges to vehicles with a plethora of sensors monitored by a vehicle computer producing digital reports. The present work proposes an adaptable Internet of Things (IoT) sensor node capable of addressing this challenge. The proposed sensor node architecture is based on the increasingly popular single-board computer plus expansion boards approach. In the proposed solution, the expansion boards undertake the tasks of position identification by means of a global navigation satellite system (GNSS) receiver, cellular connectivity by means of a 3G/long-term evolution (LTE) modem, connectivity to on-board diagnostics (OBD), and connectivity to analog and digital sensors by means of a novel expansion-board design. Specifically, the latter provides eight analog plus three digital sensor channels, as well as one on-board temperature/relative humidity sensor. The device offers a number of adaptability features based on appropriate zero-ohm resistor placement and appropriate value selection for a limited number of passive components.
For example, although in the standard configuration four analog voltage channels with constant voltage sources for powering the corresponding sensors are available, up to two of these channels can be converted to supply the connected sensors through constant current source circuits. All parameters of the analog sensor power supply and matching circuits are fully configurable, offering the advantage of covering a wide variety of industrial sensors. Note that a key feature of the proposed sensor node, ensuring the reliable operation of the connected sensors, is the appropriate supply of external power to the connected sensors and their proper matching to the IoT sensor node. In standard mode, the IoT sensor node communicates with the data center through 3G/LTE, transmitting all digital/digitized sensor data, the IoT device identity, and the position. Moreover, the proposed IoT sensor node offers WiFi connectivity to mobile devices (smartphones, tablets) equipped with an appropriate application for the manual registration of vehicle- and driver-specific information; these data are also forwarded to the data center. All control and communication tasks of the IoT sensor node are performed by dedicated firmware, programmed in a high-level language (Python) on top of a modern operating system (Linux). Acknowledgment: This research has been co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH-CREATE-INNOVATE (project code: T1EDK-01359, IntelligentLogger).
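The standard-mode transmission described above, digitized sensor channels plus device identity and GNSS position sent to the data center, can be sketched as a simple payload builder. The field names and frame structure here are illustrative assumptions, not the IntelligentLogger wire format.

```python
import json
import time

def build_payload(device_id, position, analog, digital, obd=None):
    """Assemble one transmission frame: device identity, timestamp, GNSS
    fix, and all digitized sensor channels (8 analog + 3 digital in the
    design described above). Returns a JSON string ready for upload."""
    return json.dumps({
        "device_id": device_id,
        "timestamp": time.time(),
        "position": {"lat": position[0], "lon": position[1]},
        "analog": analog,    # e.g. eight voltage/current channel readings
        "digital": digital,  # e.g. three digital channel states
        "obd": obd or {},    # OBD parameters, when the vehicle provides them
    })
```

On the firmware side, a frame like this would be queued and sent over the 3G/LTE link, with the WiFi-registered vehicle/driver information forwarded in the same manner.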

Keywords: IoT sensor nodes, e-maintenance, single-board computers, sensor expansion boards, on-board diagnostics

Procedia PDF Downloads 125
73 Phenotype and Psychometric Characterization of Phelan-Mcdermid Syndrome Patients

Authors: C. Bel, J. Nevado, F. Ciceri, M. Ropacki, T. Hoffmann, P. Lapunzina, C. Buesa

Abstract:

Background: Phelan-McDermid syndrome (PMS) is a genetic disorder caused by deletion of the terminal region of chromosome 22 or mutation of the SHANK3 gene. Shank3 disruption in mice leads to dysfunction of synaptic transmission, which can be restored by epigenetic regulation with Lysine Specific Demethylase 1 (LSD1) inhibitors. PMS presents with a variable degree of intellectual disability, delayed or absent speech, autism spectrum disorder symptoms, low muscle tone, motor delays, and epilepsy. Vafidemstat is an LSD1 inhibitor in Phase II clinical development with a well-established and favorable safety profile, and with data supporting the restoration of memory and cognition defects as well as the reduction of agitation and aggression in several animal models and clinical studies. Therefore, vafidemstat has the potential to become a first-in-class precision medicine approach to treating PMS patients. Aims: The goal of this research is to perform an observational trial to psychometrically characterize individuals carrying deletions in SHANK3 and build a foundation for subsequent precision psychiatry clinical trials with vafidemstat. Methodology: This study is characterizing the clinical profile of 20 to 40 subjects, >16 years old, with a genotypically confirmed PMS diagnosis. Subjects complete a battery of neuropsychological scales, including the Repetitive Behavior Questionnaire (RBQ), the Vineland Adaptive Behavior Scales, the Escala de Observación para el Diagnóstico del Autismo (Autism Diagnostic Observational Scale, ADOS-2), the Battelle Developmental Inventory, and the Behavior Problems Inventory (BPI). Results: By March 2021, 19 patients had been enrolled. Unsupervised hierarchical clustering of the results obtained so far identifies three groups of patients, characterized by different profiles of cognitive and behavioral scores. The first cluster is characterized by low Battelle age, high ADOS, and low Vineland, RBQ, and BPI scores.
Low Vineland, RBQ, and BPI scores are also detected in the second cluster, which in contrast has a high Battelle age and low ADOS scores. The third cluster lies somewhere in the middle for the Battelle, Vineland, and ADOS scores while displaying the highest levels of aggression (high BPI) and repetitive behaviors (high RBQ). In line with the observation that female patients are generally affected by milder forms of autistic symptoms, no male patients are present in the second cluster. Dividing the results by gender highlights that male patients in the third cluster are characterized by a higher frequency of aggression, whereas female patients from the same cluster display a tendency toward higher repetitive behavior. Finally, statistically significant differences in deletion sizes are detected when comparing the three clusters (also after correcting for gender), and deletion size appears to be positively correlated with ADOS and negatively correlated with Vineland A and C scores. No correlation is detected between deletion size and the BPI and RBQ scores. Conclusions: Precision medicine may open a new way to understand and treat central nervous system disorders. Epigenetic dysregulation has been proposed as an important mechanism in the pathogenesis of schizophrenia and autism. Vafidemstat holds exciting therapeutic potential in PMS, and this study will provide data regarding the optimal endpoints for a future clinical study to explore vafidemstat's ability to treat SHANK3-associated psychiatric disorders.
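For readers unfamiliar with the method, unsupervised hierarchical (agglomerative) clustering of score profiles works by repeatedly merging the closest clusters. The toy implementation below uses single linkage on Euclidean distance with hypothetical score vectors; the linkage rule and preprocessing actually used in the study are not specified here and would typically be handled by a statistics package.

```python
from itertools import combinations

def single_linkage(points, k):
    """Plain agglomerative clustering: start with one cluster per subject
    and repeatedly merge the two closest clusters until k remain.
    points: list of equal-length score vectors (e.g. scale scores).
    Returns a list of k clusters, each a list of point indices."""
    clusters = [[i] for i in range(len(points))]

    def dist(a, b):
        # Single linkage: distance between the closest pair of members
        return min(
            sum((points[i][d] - points[j][d]) ** 2
                for d in range(len(points[i]))) ** 0.5
            for i in a for j in b)

    while len(clusters) > k:
        a, b = min(combinations(clusters, 2), key=lambda pair: dist(*pair))
        clusters.remove(a)
        clusters.remove(b)
        clusters.append(a + b)
    return clusters
```

With heterogeneous scales such as ADOS, Vineland, and Battelle scores, the vectors would normally be standardized first so that no single scale dominates the distance.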

Keywords: autism, epigenetics, LSD1, personalized medicine

Procedia PDF Downloads 136
72 Residential Youth Care – Lessons Learned from a Cross-Country Comparison of Utilization Rates

Authors: Sigrid James

Abstract:

Purpose and Background: Despite a global policy push for deinstitutionalization, residential care for children and youth remains a relevant and highly utilized out-of-home care option in many countries, fulfilling functions of care and accommodation as well as education and treatment. While many youths are placed in residential care programs temporarily or during times of transition, some still spend years in programs that range from small group homes to large institutions. How residential care is used and what function it plays in child welfare systems is influenced by a range of factors, among them sociocultural and historical developments, available resources for child welfare, cultural notions about family, a lack of family-based placement alternatives, and a belief that residential care can be beneficial to children. As part of a larger study that examined differences in residential care across 16 countries along a range of dimensions, this paper reports findings on utilization rates of residential care, i.e., the proportion of out-of-home care dedicated to residential care relative to family-based foster care. Method: Using an embedded multiple-case design study approach in which a country represents a case, residential care in 16 countries was studied and compared. The comparison focused on countries with developed social welfare systems and included Spain, Denmark, Germany, Ireland, the Netherlands, England, Scotland, Australia, Italy, Israel, Argentina, Portugal, Finland, France, the United States, and Canada. Experts from each country systematically collected data on residential care based on a common matrix developed by the author. A range of sources was accessed depending on the information sought, including administrative data, government reports, and research studies. Utilization rates were mostly drawn from administrative data or government reports.
While denominators may differ slightly, the available data allowed for meaningful comparisons. Beyond descriptive data on utilization rates, the analysis also captured trends in utilization (increasing, decreasing, stable) as well as the rate of change. Results: Results indicate high variability in the utilization of residential care, covering the entire spectrum from a low of 7% to a high of 97%, with most countries falling somewhere in between. Three utilization categories were identified: high users of residential care (Portugal, Argentina, and Israel), medium users (Denmark, France, Italy, Finland, Spain, the Netherlands, Germany), and low users (England, Scotland, Ireland, Canada, Australia, the United States). A number of countries experienced drastic reductions in residential care during the past few years (e.g., the US), while others have seen stable rates (e.g., Portugal) or even increasing rates (e.g., Spain). Conclusions: Multiple contextual factors have to be considered when interpreting the findings. For instance, countries with low residential care rates have, in most cases, undergone recent legislative changes to drastically reduce residential care. In medium-utilization countries, residential care reforms seem to be focused primarily on improving standards and, thus, the quality of care. High-utilization countries generally face serious obstacles to implementing alternative family-based forms of out-of-home care. Cultural acceptance of residential or foster care and notions of professionalism also appear to play an important role in explaining the variability in utilization.

Keywords: residential youth care, child welfare, case study, cross-national comparative research

Procedia PDF Downloads 35
71 Understanding the Impact of Resilience Training on Cognitive Performance in Military Personnel

Authors: Haji Mohammad Zulfan Farhi Bin Haji Sulaini, Mohammad Azeezudde’en Bin Mohd Ismaon

Abstract:

The demands placed on military athletes extend beyond physical prowess to encompass cognitive resilience in high-stress environments. This study investigates the effects of resilience training on the cognitive performance of military athletes, shedding light on the potential benefits and implications for optimizing their overall readiness. In a rapidly evolving global landscape, armed forces worldwide are recognizing the importance of cognitive resilience alongside physical fitness. The study employs a mixed-methods approach, incorporating quantitative cognitive assessments and qualitative data from military athletes undergoing resilience training programs. Cognitive performance is evaluated through a battery of tests, including measures of memory, attention, decision-making, and reaction time. The participants, drawn from various branches of the military, are divided into experimental and control groups. The experimental group undergoes a comprehensive resilience training program, while the control group receives traditional physical training without a specific focus on resilience. The initial findings indicate a substantial improvement in cognitive performance among military athletes who have undergone resilience training. These improvements are particularly evident in domains such as attention and decision-making. The experimental group demonstrated enhanced situational awareness, quicker problem-solving abilities, and increased adaptability in high-stress scenarios. These results suggest that resilience training not only bolsters mental toughness but also positively impacts cognitive skills critical to military operations. In addition to quantitative assessments, qualitative data is collected through interviews and surveys to gain insights into the subjective experiences of military athletes. 
Preliminary analysis of these narratives reveals that participants in the resilience training program report higher levels of self-confidence, emotional regulation, and an improved ability to manage stress. These psychological attributes contribute to their enhanced cognitive performance and overall readiness. Moreover, this study explores the potential long-term benefits of resilience training. By tracking participants over an extended period, we aim to assess the durability of cognitive improvements and their effects on overall mission success. Early results suggest that resilience training may serve as a protective factor against the detrimental effects of prolonged exposure to stressors, potentially reducing the risk of burnout and psychological trauma among military athletes. This research has significant implications for military organizations seeking to optimize the performance and well-being of their personnel. The findings suggest that integrating resilience training into the training regimen of military athletes can lead to a more resilient and cognitively capable force. This, in turn, may enhance mission success, reduce the risk of injuries, and improve the overall effectiveness of military operations. In conclusion, this study provides compelling evidence that resilience training positively impacts the cognitive performance of military athletes. The preliminary results indicate improvements in attention, decision-making, and adaptability, as well as increased psychological resilience. As the study progresses and incorporates long-term follow-ups, it is expected to provide valuable insights into the enduring effects of resilience training on the cognitive readiness of military athletes, contributing to the ongoing efforts to optimize military personnel's physical and mental capabilities in the face of ever-evolving challenges.

Keywords: military athletes, cognitive performance, resilience training, cognitive enhancement program

Procedia PDF Downloads 48
70 TeleEmergency Medicine: Transforming Acute Care through Virtual Technology

Authors: Ashley L. Freeman, Jessica D. Watkins

Abstract:

TeleEmergency Medicine (TeleEM) is an innovative approach that leverages virtual technology to deliver specialized emergency medical care across diverse healthcare settings, including internal acute care and critical access hospitals, remote patient monitoring, and nurse triage escalation, in addition to external emergency departments, skilled nursing facilities, and community health centers. TeleEM represents a significant advancement in the delivery of emergency medical care, giving healthcare professionals the capability to deliver expertise that closely mirrors in-person emergency medicine while transcending geographical boundaries. Qualitative research has shown that this extension of timely, high-quality care addresses the critical needs of patients in remote and underserved areas. TeleEM's service design allows for the expansion of existing services and the establishment of new ones in diverse geographic locations. This ensures that healthcare institutions can readily scale and adapt services to evolving community requirements by leveraging on-demand (non-scheduled) telemedicine visits through the deployment of multiple video solutions. In terms of financial management, TeleEM currently employs billing suppression and subscription models to enhance accessibility for a wide range of healthcare facilities. Plans are in motion to transition to a billing system that routes charges through a third-party vendor, further enhancing financial management flexibility. To address state licensure concerns, a patient location verification process has been integrated under the guidance of legal counsel and compliance authorities. The TeleEM workflow is designed to terminate if the patient is not physically located within licensed regions at the time of the virtual connection, alleviating legal uncertainties. A distinctive and pivotal feature of TeleEM is the introduction of the TeleEmergency Medicine Care Team Assistant (TeleCTA) role.
TeleCTAs collaborate closely with TeleEM Physicians, leading to enhanced service activation, streamlined coordination, and workflow and data efficiencies. In the last year, more than 800 TeleEM sessions have been conducted, of which 680 were initiated by internal acute care and critical access hospitals, as evidenced by quantitative research. Without this service, many of these cases would have necessitated patient transfers. Barriers to success were examined through thorough medical record review and data analysis, which identified inaccuracies in documentation leading to activation delays, limitations in billing capabilities, and data distortion, as well as the intricacies of managing varying workflows and device setups. TeleEM represents a transformative advancement in emergency medical care that nurtures collaboration and innovation. Through focus group participation with key stakeholders, rigorous attention to legal and financial considerations, and the implementation of robust documentation tools and the TeleCTA role, TeleEM has not only advanced the delivery of emergency medical care through virtual technology but has also set the stage for overcoming geographic limitations. TeleEM assumes a notable position in the field of telemedicine by enhancing patient outcomes and expanding access to emergency medical care while mitigating licensure risks and ensuring compliant billing.

Keywords: emergency medicine, TeleEM, rural healthcare, telemedicine

Procedia PDF Downloads 48
69 Multiaxial Stress Based High Cycle Fatigue Model for Adhesive Joint Interfaces

Authors: Martin Alexander Eder, Sergei Semenov

Abstract:

Many glass-epoxy composite structures, such as large utility wind turbine rotor blades (WTBs), comprise adhesive joints with typically thick bond lines used to connect the different components during assembly. Performance optimization of rotor blades to increase power output while simultaneously maintaining high stiffness-to-low-mass ratios entails intricate geometries in conjunction with complex anisotropic material behavior. Consequently, adhesive joints in WTBs are subject to multiaxial stress states with significant stress gradients depending on the local joint geometry. Moreover, the dynamic aero-elastic interaction of the WTB with the airflow generates non-proportional, variable-amplitude stress histories in the material. Experience shows that a prominent failure type in WTBs is high cycle fatigue failure of adhesive bond line interfaces, which has in fact developed over time into a design driver as WTB sizes increase rapidly. Structural optimization employed at an early design stage therefore sets high demands on computationally efficient interface fatigue models capable of predicting the critical locations prone to interface failure. The numerical stress-based interface fatigue model presented in this work uses the Drucker-Prager criterion to compute three different damage indices corresponding to the two interface shear tractions and the outward normal traction. The two-parameter Drucker-Prager model was chosen because of its ability to account for shear strength enhancement under compression and shear strength reduction under tension. The governing interface damage index is taken as the maximum of the triple. The damage indices are computed through the well-known linear Palmgren-Miner rule after separate rainflow counting of the equivalent shear stress history and the equivalent pure normal stress history. 
The equivalent stress signals are obtained by self-similar scaling of the Drucker-Prager surface, whose shape is defined by the uniaxial tensile strength and the shear strength, such that it intersects the stress point at every time step. This approach implicitly assumes that the damage caused by the prevailing multiaxial stress state is the same as the damage caused by an amplified equivalent uniaxial stress state in the three interface directions. The model was implemented as a Python plug-in for the commercially available finite element code Abaqus for use with solid elements. The model was used to predict the interface damage of an adhesively bonded, tapered glass-epoxy composite cantilever I-beam tested by LM Wind Power under constant-amplitude compression-compression tip load in the high cycle fatigue regime. Results show that the model was able to predict the location of debonding in the adhesive interface between the webfoot and the cap. Moreover, with a set of two different constant life diagrams, namely in shear and in tension, it was possible to predict both the fatigue lifetime and the failure mode of the sub-component with reasonable accuracy. It can be concluded that the fidelity, robustness, and computational efficiency of the proposed model make it especially suitable for rapid fatigue damage screening of large 3D finite element models subject to complex dynamic load histories.
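The counting-and-accumulation step described above can be sketched in the model's own implementation language, Python. This is an illustrative simplification, not the authors' plug-in: the three-point rainflow counter, the single-slope S-N curve, and all function names and parameters are assumptions introduced here.

```python
def rainflow_ranges(signal):
    """Simplified three-point rainflow counting.

    Returns the list of closed-cycle stress ranges; residual
    excursions are appended at the end (counted as full cycles
    here for brevity).
    """
    # Reduce the signal to its turning points (peaks and valleys).
    tp = [signal[0]]
    for x in signal[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x         # same direction: extend the excursion
        elif x != tp[-1]:
            tp.append(x)       # direction change: new turning point
    # Three-point comparison: close a cycle whenever the previous
    # range is not larger than the current one.
    ranges, stack = [], []
    for point in tp:
        stack.append(point)
        while len(stack) >= 3:
            r_prev = abs(stack[-2] - stack[-3])
            r_curr = abs(stack[-1] - stack[-2])
            if r_prev <= r_curr:
                ranges.append(r_prev)   # closed cycle
                del stack[-3:-1]
            else:
                break
    ranges += [abs(b - a) for a, b in zip(stack, stack[1:])]  # residuals
    return ranges


def miner_damage(stress_ranges, sigma_ref, m, n_ref):
    """Linear Palmgren-Miner accumulation with a single-slope S-N
    curve N(sigma) = n_ref * (sigma_ref / sigma) ** m; failure is
    predicted when the returned damage index reaches 1."""
    return sum(1.0 / (n_ref * (sigma_ref / s) ** m)
               for s in stress_ranges if s > 0)
```

In the model above, this accumulation would be applied separately to the equivalent shear and equivalent normal stress histories obtained from the scaled Drucker-Prager surface, with the governing damage index taken as the maximum over the three interface directions.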

Keywords: adhesive, fatigue, interface, multiaxial stress

Procedia PDF Downloads 140
68 Simulation, Design, and 3D Print of Novel Highly Integrated TEG Device with Improved Thermal Energy Harvest Efficiency

Authors: Jaden Lu, Olivia Lu

Abstract:

Despite the remarkable advancement of solar cell technology, the challenge of optimizing total solar energy harvest efficiency persists, primarily due to significant heat loss. This excess heat not only diminishes solar panel output efficiency but also curtails its operational lifespan. A promising approach to address this issue is the conversion of surplus heat into electricity. In recent years, there has been growing interest in the use of thermoelectric generators (TEG) as a potential solution. The integration of efficient TEG devices holds the promise of augmenting overall energy harvest efficiency while prolonging the longevity of solar panels. While certain research groups have proposed the integration of solar cells and TEG devices, a substantial gap between conceptualization and practical implementation remains, largely attributable to the low thermal energy conversion efficiency of TEG devices. To bridge this gap and meet the requisites of practical application, a feasible strategy involves the incorporation of a substantial number of p-n junctions within a confined unit volume. However, the manufacturing of high-density TEG p-n junctions presents a formidable challenge. The prevalent solution often leads to large device sizes to accommodate enough p-n junctions, consequently complicating integration with solar cells. Recently, the adoption of 3D printing technology has emerged as a promising solution to this challenge by enabling the fabrication of high-density p-n arrays. Despite this, further developmental effort is necessary. Presently, the primary focus is on the 3D printing of vertically layered TEG devices, wherein p-n junction density remains constrained by spatial limitations and the constraints of 3D printing techniques. This study proposes a novel device configuration featuring horizontally arrayed p-n junctions of Bi2Te3. The structural design of the device is subjected to simulation through the Finite Element Method (FEM) within COMSOL Multiphysics software. 
Various device configurations are simulated to identify the optimal device structure. Based on the simulation results, a new TEG device is fabricated using selective laser melting (SLM) 3D printing technology. Fusion 360 facilitates the translation of the COMSOL device structure into a 3D print file. The horizontal design offers a unique advantage, enabling the fabrication of densely packed, three-dimensional p-n junction arrays. The fabrication process entails printing a single row of horizontal p-n junctions in one layer using the SLM technique. Subsequently, successive rows of p-n junction arrays are printed within the same layer, interconnected by thermally conductive copper. This sequence is replicated across multiple layers, separated by thermally insulating glass. This integration results in a highly compact three-dimensional TEG device with high-density p-n junctions. The fabricated TEG device is then attached to the bottom of the solar cell using thermal glue. The whole device is characterized, with output data closely matching the COMSOL simulation results. Future research endeavors will encompass the refinement of thermoelectric materials. This includes the advancement of high-resolution 3D printing techniques tailored to diverse thermoelectric materials, along with the optimization of material microstructures such as porosity and doping. The objective is to achieve an optimal and highly integrated PV-TEG device that can substantially increase solar energy harvest efficiency.
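As a point of reference for such characterization, the peak electrical output of a series-connected p-n junction array can be estimated from standard thermoelectric relations. The sketch below is a generic back-of-envelope model; the junction count, Seebeck coefficient, temperature difference, and internal resistance are illustrative placeholders, not the parameters of the fabricated device.

```python
def teg_matched_power(n_junctions, seebeck_pn, delta_t, r_internal):
    """Peak power delivered to a matched external load:
    P_max = V_oc**2 / (4 * R_int), with open-circuit voltage
    V_oc = N * S_pn * dT for N series-connected p-n couples."""
    v_oc = n_junctions * seebeck_pn * delta_t
    return v_oc ** 2 / (4.0 * r_internal)

# Illustrative values only: 200 couples, combined Seebeck coefficient
# of 400 uV/K (a typical order of magnitude for Bi2Te3 couples),
# 30 K across the module, 2 ohm internal resistance.
p_max = teg_matched_power(200, 400e-6, 30.0, 2.0)  # -> 0.72 W
```

Because the open-circuit voltage scales linearly with the number of couples, packing more p-n junctions into a fixed footprint directly raises the attainable power, which is the motivation for the high-density horizontal arrays described above.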

Keywords: thermoelectric, finite element method, 3d print, energy conversion

Procedia PDF Downloads 35
67 Effect of Black Cumin (Nigella sativa) Extract on Damaged Brain Cells

Authors: Batul Kagalwala

Abstract:

The nervous system is made up of complex, delicate structures such as the spinal cord, the peripheral nerves, and the brain. These are prone to various types of injury, ranging from neurodegenerative diseases to trauma, leading to diseases such as Parkinson's, Alzheimer's, multiple sclerosis, amyotrophic lateral sclerosis (ALS), multiple system atrophy, etc. Unfortunately, because of the complicated structure of the nervous system, spontaneous regeneration, repair, and healing are seldom seen, due to which brain damage, peripheral nerve damage, and paralysis from spinal cord injury are often permanent and incapacitating. Hence, an innovative and standardized approach is required for advanced treatment of neurological injury. Nigella sativa (N. sativa), an annual flowering plant native to regions of southern Europe and Asia, has been suggested to have neuroprotective and anti-seizure properties. Neuroregeneration is found to occur in damaged cells treated with extract of N. sativa. Because of its proven health benefits, many experiments are being conducted to extract all the benefits from the plant. The flowers are delicate and are usually pale blue and white in color, with small black seeds. These seeds are the source of active components such as 30–40% fixed oils, 0.5–1.5% essential oils, and pharmacologically active components including thymoquinone (TQ), dithymoquinone (DTQ), and nigellin. In traditional medicine, this herb was identified as having healing properties and was extensively used in the Middle East and Far East for treating diseases such as headache, back pain, asthma, infections, dysentery, hypertension, obesity, and gastrointestinal problems. Literature studies have confirmed that the extract of N. sativa seeds and TQ have inhibitory effects on inducible nitric oxide synthase and the production of nitric oxide, as well as anti-inflammatory and anticancer activities. Experimental investigation will be conducted to understand which ingredient of N. 
sativa causes neuroregeneration and is the root of its healing property. An aqueous/alcoholic extract of N. sativa will be made. Seed oil has also been used by researchers to prepare such extracts. For the alcoholic extracts, the seeds need to be powdered and soaked in alcohol for a period of time, and the alcohol must then be evaporated using a rotary evaporator. For aqueous extracts, the powder must be dissolved in distilled water to obtain a pure extract. The mobile phase will be the extract, while a suitable stationary phase (a substance that is a good adsorbent, e.g., silica gel, alumina, cellulose, etc.) will be selected. The different ingredients of N. sativa will be separated using High Performance Liquid Chromatography (HPLC) for treating damaged cells. Damaged brain cells will be treated with the compounds individually and in different combinations of 2 or 3 compounds for different intervals of time. The most suitable compound or combination of compounds for the regeneration of cells will be determined using DOE methodology. Later, the responsible gene will also be determined, and using the Polymerase Chain Reaction (PCR), it will be replicated in a plasmid vector. This plasmid vector shall be inserted into the brain of the organism used and replicated within it. The gene insertion can also be done by the gene gun method: the gene in question can be coated on a tungsten micro-bullet and bombarded into the area of interest, and gene replication and coding shall be studied. Whether or not the gene replicates in the organism will also be examined.

Keywords: black cumin, brain cells, damage, extract, neuroregeneration, PCR, plasmids, vectors

Procedia PDF Downloads 619
66 A Novel Paradigm in the Management of Pancreatic Trauma

Authors: E. Tan, O. McKay, T. Clarnette, D. Croagh

Abstract:

Background: Historically, with pancreatic trauma, complete disruption of the main pancreatic duct (MPD), classified as Grade IV-V by the American Association for the Surgery of Trauma (AAST), necessitated a damage-control laparotomy. This was to avoid mortality and to shorten the timeframe for diet upgrade, and hence the length of stay. However, acute pancreatic resection entailed complications of pancreatic fistulas and leaks. With the advance of imaging-guided interventions, non-operative management options such as percutaneous and transpapillary drainage of traumatic peripancreatic collections have been trialled favourably. The aim of this case series is to evaluate the efficacy of endoscopic ultrasound-guided (EUS) transmural drainage in managing traumatic peripancreatic collections as a less invasive alternative to traditional approaches. This study also highlights the importance of anatomical knowledge regarding the common location of peripancreatic collections in the lesser sac, the relationship of the pancreas to adjacent organs, and the formation of the main pancreatic duct with regard to the feasibility of therapeutic internal drainage. Methodology: A retrospective case series was conducted at a single tertiary endoscopy unit, analysing patient data over a 5-year period. Inclusion criteria were age 5 to 80 years, traumatic pancreatic injury of at least Grade IV, and haemodynamic stability. Exclusion criteria were previous episodes of pancreatitis or abdominal trauma. Patient demographics and clinicopathological characteristics were retrospectively collected. Results: The study identified 7 patients with traumatic pancreatic injuries managed from 2018-2022, with ages ranging from 5 to 34 years and the majority being female (n=5). The majority of the mechanisms of trauma were handlebar injuries (n=4). Diagnosis was confirmed with an elevated lipase and computed tomography (CT) confirmation of proximal pancreatic transection with MPD disruption. 
All patients sustained an isolated single-organ Grade IV pancreatic injury, except cases 4 and 5, who had additional Grade I injuries of other intra-abdominal viscera. Six patients underwent early ERCP-guided transpapillary drainage, with 1 unsuccessful pancreatic duct stent insertion (case 1) and 1 complication of stent migration (case 2). Surveillance imaging post ERCP showed that the stents were unable to bridge the disrupted duct, and symptomatic collections developed with an average size of 9.9 cm. Hence, all patients proceeded to EUS-guided transmural drainage, with 2/7 patients requiring repeat drainage (cases 6 and 7). The majority (n=6) had a cystogastrostomy, whilst 1 (case 6) had a cystoenterostomy because the peripancreatic collection lay adjacent to the duodenum rather than the stomach. However, case 6 subsequently required repeat EUS-guided drainage with cystogastrostomy for ongoing collections. Hence, all patients avoided an initial laparotomy, with an average index length of stay of 11.7 days. Successful transmural drainage was demonstrated, with no long-term complications of pancreatic insufficiency, except for 1 patient requiring a distal pancreatectomy at 2-year follow-up due to chronic pain. Conclusion: The early results of this series support EUS-guided transmural drainage as a viable management option for traumatic peripancreatic collections, showcasing successful outcomes, minimal complications, and long-term efficacy in avoiding surgical intervention. More studies are required before the adoption of this procedure as a less invasive, less complication-prone management approach for traumatic peripancreatic collections.

Keywords: endoscopic ultrasound, cystogastrostomy, pancreatic trauma, traumatic peripancreatic collection, transmural drainage

Procedia PDF Downloads 15