Search results for: unsupervised techniques;
329 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures
Authors: Francesca Marsili
Abstract:
The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on an exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, representing an obstacle to the dissemination of probabilistic methods. The framework within which probability distribution functions (PDFs) are established is Bayesian statistics, which applies Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and on qualitative judgments drawn from the engineer's past experience; the prior model is then updated with the results of investigations carried out on the considered structure, such as material testing and the determination of actions and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. The results of the updating depend on the engineer's previous experience; 2. The updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation, and, if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among these, one that attracts particular attention in relation to the object of this study is Case-Based Reasoning (CBR). In this application, cases are represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case is therefore composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system represents a good candidate for automating the modeling of variables because: 1. Engineers already estimate material properties based on the experience collected during the assessment of similar structures, or on similar cases collected in the literature or in databases; 2. Material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. The system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help spread the probabilistic reliability assessment of existing buildings in common engineering practice and target the best interventions and further tests on the structure; CBR is a technique which may help to achieve this. Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures
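As a concrete illustration of the Bayesian updating step described above, the following minimal sketch performs a conjugate normal-normal update of a prior PDF for a material strength parameter. The function name, the prior values, and the three test results are hypothetical placeholders, not data from the study.

```python
import numpy as np

def update_normal_prior(mu_prior, sd_prior, tests, sd_test):
    """Conjugate normal-normal update of a prior PDF for a material parameter.

    mu_prior, sd_prior : mean and std. dev. of the prior PDF (e.g. from design data)
    tests              : test results collected on the existing structure
    sd_test            : assumed (known) std. dev. of the test measurements
    Returns the posterior mean and standard deviation.
    """
    tests = np.asarray(tests, dtype=float)
    n = tests.size
    prec_prior = 1.0 / sd_prior**2          # prior precision
    prec_data = n / sd_test**2              # precision contributed by the data
    prec_post = prec_prior + prec_data
    mu_post = (prec_prior * mu_prior + prec_data * tests.mean()) / prec_post
    return mu_post, np.sqrt(1.0 / prec_post)

# Hypothetical case: prior for concrete compressive strength (MPa) and three core tests
mu_post, sd_post = update_normal_prior(mu_prior=30.0, sd_prior=5.0,
                                        tests=[27.5, 29.0, 31.2], sd_test=3.0)
print(f"Posterior PDF: N({mu_post:.2f}, {sd_post:.2f}^2) MPa")
```

In a CBR setting, posteriors of this kind, computed once for previously tested buildings, would form the case base retrieved for a new, untested structure.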
Procedia PDF Downloads 337
328 Computational Team Dynamics and Interaction Patterns in New Product Development Teams
Authors: Shankaran Sitarama
Abstract:
New Product Development (NPD) is invariably a team effort and involves effective teamwork. An NPD team has members from different disciplines coming together and working through the different phases, all the way from the conceptual design phase to production and product roll-out. Creativity and innovation are key factors in successful NPD. Team members going through the different phases of NPD interact and work closely, yet challenge each other during the design phases to brainstorm ideas and later converge to work together. These two traits require teams to exercise divergent and convergent thinking simultaneously, and a good balance is needed. The team dynamics invariably result in conflicts among team members. While some amount of conflict (ideational conflict) is desirable in NPD teams to be creative as a group, relational conflicts (or discords among members) can be detrimental to teamwork. Team communication truly reflects these tensions and team dynamics. In this research, team communication (emails) between the members of NPD teams is considered for analysis. The email communication is processed through a semantic analysis algorithm, latent semantic analysis (LSA), to analyze the content of communication, and a semantic similarity analysis is used to arrive at a social network graph that depicts the communication among team members based on the content of communication. The amount of communication (content, not frequency of communication) defines the interaction strength between the members. A social network adjacency matrix is thus obtained for the team. Standard social network analysis techniques based on the adjacency matrix (AM) and the dichotomized adjacency matrix (DAM), dichotomized according to network density, yield network graphs and network metrics such as centrality. The social network graphs are then rendered for visual representation using a metric multi-dimensional scaling (MMDS) algorithm for node placement, and arcs connecting the nodes (representing team members) are drawn. The distance between nodes in the placement represents the tie strength between members: stronger tie strengths render nodes closer. The overall visual representation of the social network graph provides a clear picture of the team's interactions. This research reveals four distinct patterns of team interaction that are clearly identifiable in the visual representation of the social network graph and have a clearly defined computational scheme. The four computational patterns of team interaction are the Central Member Pattern (CMP), the Subgroup and Aloof member Pattern (SAP), the Isolate Member Pattern (IMP), and the Pendant Member Pattern (PMP). Each of these patterns has a team dynamics implication in terms of the conflict level in the team. For instance, the isolate member pattern clearly points to a near breakdown in communication with the member and hence a possible high conflict level, whereas the subgroup or aloof member pattern points to non-uniform information flow in the team and a moderate level of conflict. These pattern classifications of teams are then compared and correlated with the real level of conflict in the teams, as indicated by the team members through an elaborate self-evaluation, team reflection, and feedback form, and the results show a good correlation. Keywords: team dynamics, team communication, team interactions, social network analysis, SNA, new product development, latent semantic analysis, LSA, NPD teams
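The pipeline described above (LSA on email content, a content-based adjacency matrix, dichotomization by network density, and centrality metrics) can be sketched roughly as follows. This is an illustrative approximation, assuming scikit-learn's TF-IDF and TruncatedSVD as the LSA step; the member documents, the similarity-based interaction strength, and the density threshold are assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: all email text written by each team member, concatenated
members = ["A", "B", "C", "D"]
docs = ["design review of the new concept sketches and tooling feedback",
        "concept sketches feedback, tooling cost estimate for the prototype",
        "prototype test schedule and tooling cost estimate",
        "marketing launch plan and packaging artwork"]

# LSA: TF-IDF followed by truncated SVD into a low-dimensional semantic space
tfidf = TfidfVectorizer().fit_transform(docs)
lsa_vectors = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Content-based interaction strength = cosine similarity in the LSA space
A = cosine_similarity(lsa_vectors)
np.fill_diagonal(A, 0.0)                       # adjacency matrix (AM)

density = A.mean()                             # simple density-based cut-off
DAM = (A >= density).astype(int)               # dichotomized adjacency matrix (DAM)

degree_centrality = DAM.sum(axis=1) / (len(members) - 1)
print(dict(zip(members, degree_centrality.round(2))))
```

A member whose centrality drops toward zero in the DAM would correspond to the isolate pattern described in the abstract, while a single dominant centrality value would suggest the central member pattern.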
Procedia PDF Downloads 70
327 Accelerating Personalization Using Digital Tools to Drive Circular Fashion
Authors: Shamini Dhana, G. Subrahmanya VRK Rao
Abstract:
The fashion industry is advancing towards a mindset of zero waste, personalization, creativity, and circularity. The trend of upcycling clothing and materials into personalized fashion is being demanded by the next generation. There is a need for a digital tool to accelerate the process towards mass customization. Dhana’s D/Sphere fashion technology platform uses digital tools to accelerate upcycling. In essence, advanced fashion garments can be designed and developed via reuse, repurposing, recreating activities, and using existing fabric and circulating materials. The D/Sphere platform has the following objectives: to provide (1) An opportunity to develop modern fashion using existing, finished materials and clothing without chemicals or water consumption; (2) The potential for an everyday customer and designer to use the medium of fashion for creative expression; (3) A solution to address the global textile waste generated by pre- and post-consumer fashion; (4) A solution to reduce carbon emissions, water, and energy consumption with the participation of all stakeholders; (5) An opportunity for brands, manufacturers, retailers to work towards zero-waste designs and as an alternative revenue stream. Other benefits of this alternative approach include sustainability metrics, trend prediction, facilitation of disassembly and remanufacture deep learning, and hyperheuristics for high accuracy. A design tool for mass personalization and customization utilizing existing circulating materials and deadstock, targeted to fashion stakeholders will lower environmental costs, increase revenues through up to date upcycled apparel, produce less textile waste during the cut-sew-stitch process, and provide a real design solution for the end customer to be part of circular fashion. The broader impact of this technology will result in a different mindset to circular fashion, increase the value of the product through multiple life cycles, find alternatives towards zero waste, and reduce the textile waste that ends up in landfills. This technology platform will be of interest to brands and companies that have the responsibility to reduce their environmental impact and contribution to climate change as it pertains to the fashion and apparel industry. Today, over 70% of the $3 trillion fashion and apparel industry ends up in landfills. To this extent, the industry needs such alternative techniques to both address global textile waste as well as provide an opportunity to include all stakeholders and drive circular fashion with new personalized products. This type of modern systems thinking is currently being explored around the world by the private sector, organizations, research institutions, and governments. This technological innovation using digital tools has the potential to revolutionize the way we look at communication, capabilities, and collaborative opportunities amongst stakeholders in the development of new personalized and customized products, as well as its positive impacts on society, our environment, and global climate change.Keywords: circular fashion, deep learning, digital technology platform, personalization
Procedia PDF Downloads 64
326 Monitoring and Improving Performance of Soil Aquifer Treatment System and Infiltration Basins Performance: North Gaza Emergency Sewage Treatment Plant as Case Study
Authors: Sadi Ali, Yaser Kishawi
Abstract:
As part of Palestine, the Gaza Strip (365 km2, 1.8 million inhabitants) is considered a semi-arid zone that relies solely on the Coastal Aquifer. The coastal aquifer is the only source of water, and only 5-10% of it is suitable for human use. This barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority's strategy is to develop a non-conventional water resource from treated wastewater to irrigate 1500 hectares and serve over 100,000 inhabitants. A new WWTP project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (pressure line and 9 infiltration basins - IBs), phase B (a new WWTP), and phase C (Recovery and Reuse Scheme – RRS – to capture the spreading plume). Phase A has been functioning since April 2009. Since then, a monitoring plan has been conducted to monitor the infiltration rate (I.R.) of the 9 basins. Nearly 23 million m3 of partially treated wastewater had been infiltrated up to June 2014. It is important to maintain an acceptable rate to allow the basins to handle the incoming quantities (currently 10,000 m3 are pumped and infiltrated daily). The methodology applied was to review and analyze the collected data, including the I.R.s, the wastewater quality, and the drying-wetting schedule of the basins. One of the main findings is the relation between the Total Suspended Solids (TSS) at BLWWTP and the I.R. at the basins. Since April 2009, the basins scored an average I.R. of about 2.5 m/day. The records then showed a decreasing pattern of the average rate until it reached its lowest value of 0.42 m/day in June 2013. This was accompanied by an increase in the TSS concentration (mg/L) at the source, reaching above 200 mg/L. Reducing the TSS concentration (by cleaning the wastewater source ponds at the Biet Lahia WWTP site) directly improved the I.R. This was reflected in an improvement in the I.R. over the following 6 months, from 0.42 m/day to 0.66 m/day and then to nearly 1.0 m/day as the average of the last 3 months of 2013. The wetting-drying scheme of the basins was observed (3 days wetting and 7 days drying), along with the rainfall rates. Despite the difficulty of applying this scheme accurately, the flow to each basin was controlled to improve the I.R. The drying-wetting system affected the I.R. of individual basins and thus the overall system rate, which was recorded and assessed. Ploughing activities at the infiltration basins were also recommended at certain times to maintain a given infiltration level, since ploughing breaks up the clogging layer that prevents infiltration. It is recommended to maintain a proper quality of infiltrated wastewater to ensure an acceptable performance of the IBs. Continual maintenance of the settling ponds at BLWWTP, continual ploughing of the basins, and the application of soil treatment techniques at the IBs will improve the I.R.s. When the new WWTP is functioning, a high-standard effluent (TSS 20 mg/l, BOD 20 mg/l and TN 15 mg/l) will be infiltrated, which will enhance the I.R.s of the IBs due to the lower organic load. Keywords: SAT, wastewater quality, soil remediation, North Gaza
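A minimal sketch of the kind of analysis described above, correlating source TSS with basin infiltration rate, is given below. The monthly values are invented for illustration and do not reproduce the North Gaza monitoring records.

```python
import numpy as np
from scipy import stats

# Hypothetical monthly records: source TSS (mg/L) and basin infiltration rate (m/day);
# the values below are illustrative, not the measured North Gaza data.
tss = np.array([ 80, 110, 150, 190, 210, 230, 180, 140, 120,  90])
ir  = np.array([2.4, 2.1, 1.6, 1.0, 0.6, 0.4, 0.7, 1.1, 1.5, 2.0])

r, p_value = stats.pearsonr(tss, ir)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")   # a strong negative correlation is expected

# Simple check of the wetting-drying cycle: fraction of time each basin is wetted
wet_days, dry_days = 3, 7
print(f"Hydraulic loading fraction per basin: {wet_days / (wet_days + dry_days):.0%}")
```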
Procedia PDF Downloads 234
325 Reliability and Availability Analysis of Satellite Data Reception System using Reliability Modeling
Authors: Ch. Sridevi, S. P. Shailender Kumar, B. Gurudayal, A. Chalapathi Rao, K. Koteswara Rao, P. Srinivasulu
Abstract:
System reliability and availability evaluation play a crucial role in ensuring the seamless operation of complex satellite data reception systems with consistent performance over long periods. This paper presents a novel approach to this evaluation, using a case study of one of the antenna systems at a satellite data reception ground station in India. The methodology involves analyzing the system's components and their failure rates, studying the system architecture, generating a logical reliability block diagram model, and estimating the reliability of the system from component-level mean times between failures, assuming an exponential distribution, to derive a baseline estimate of the system's reliability. The model is then validated with system-level field failure data collected from the operational satellite data reception systems, including the failures that occurred, failure times, failure criticality, and repair times, using statistical techniques such as median rank, regression, and Weibull analysis to extract meaningful insights into failure patterns and the practical reliability of the system and to assess the accuracy of the developed reliability model. The study focused mainly on the identification of critical units within the system, which are prone to failures and have a significant impact on overall performance, and developed a reliability model of the identified critical unit. This model takes into account the interdependencies among system components and their impact on overall system reliability, and it provides valuable insights into the performance of the system, allowing the improvement or degradation of the system over time to be understood; this will be a vital input for arriving at an optimized design for future development. It also provides a plug-and-play framework to understand the effect on system performance of any upgrades or new designs of the unit, and it helps in effective planning and in formulating contingency plans to address potential system failures, ensuring the continuity of operations. Furthermore, to instill confidence in system users, the duration for which the system can operate continuously with the desired level of 3-sigma reliability was estimated, which turned out to be a vital input to the maintenance plan. System availability and station availability were also assessed by considering clash and non-clash scenarios to determine the overall system performance and potential bottlenecks. Overall, this paper establishes a comprehensive methodology for the reliability and availability analysis of complex satellite data reception systems. The results derived from this approach facilitate the effective planning of contingency measures, provide users with confidence in system performance, and enable decision-makers to make informed choices about system maintenance, upgrades, and replacements. The approach also aids in identifying critical units, assessing system availability in various scenarios, minimizing downtime, and optimizing resource allocation. Keywords: exponential distribution, reliability modeling, reliability block diagram, satellite data reception system, system availability, Weibull analysis
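The baseline estimation step described above (component-level MTBFs combined through a series reliability block diagram under the exponential assumption) can be sketched as follows. The component names, MTBF figures, MTTR, and the 3-sigma target of 99.73% are assumptions for illustration, not the ground station's actual data.

```python
import numpy as np

# Hypothetical component MTBFs (hours) for a series reliability block diagram;
# the figures are illustrative, not the ground station's actual data.
mtbf = {"antenna_servo": 20_000, "feed_LNA": 50_000, "down_converter": 30_000,
        "demodulator": 40_000, "recording_server": 25_000}
mttr_hours = 4.0                                  # assumed mean time to repair

lam = {k: 1.0 / v for k, v in mtbf.items()}       # exponential failure rates
lam_sys = sum(lam.values())                       # series system: failure rates add
mtbf_sys = 1.0 / lam_sys

def reliability(t_hours):
    """R(t) of the series system under the exponential assumption."""
    return np.exp(-lam_sys * t_hours)

availability = mtbf_sys / (mtbf_sys + mttr_hours)     # steady-state (inherent) availability
t_3sigma = -np.log(0.9973) / lam_sys                  # longest run with 3-sigma reliability

print(f"System MTBF: {mtbf_sys:,.0f} h, availability: {availability:.5f}")
print(f"R(24 h) = {reliability(24):.4f}, 3-sigma continuous operation time ~ {t_3sigma:.1f} h")
```

The same structure can be extended with parallel (redundant) blocks or replaced by Weibull rates once field failure data are fitted, which mirrors the validation step described in the abstract.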
Procedia PDF Downloads 84
324 The Practical Application of Sensory Awareness in Developing Healthy Communication, Emotional Regulation, and Emotional Introspection
Authors: Node Smith
Abstract:
Developmental psychology has long focused on modeling consciousness, often neglecting practical application and clinical utility. This paper aims to bridge this gap by exploring the practical application of physical and sensory tracking and awareness in fostering essential skills for conscious development. Higher conscious development requires practical skills such as self-agency, the ability to hold multiple perspectives, and genuine altruism. These are not personality characteristics but areas of skillfulness that address many cultural deficiencies impacting our world. They are intertwined with individual as well as collective conscious development. Physical, sensory tracking and awareness are crucial for developing these skills and offer the added benefit of cultivating healthy communication, emotional regulation, and introspection. Unlike skills such as throwing a baseball, which can be developed through practice or innate ability, the ability to introspect, track physical sensations, and observe oneself objectively is essential for advancing consciousness. Lacking these skills leads to cultural and individual anxiety, helplessness, and a lack of agency, manifesting as blame-shifting and irresponsibility. The inability to hold multiple perspectives stifles altruism, as genuine consideration for a global community requires accepting other perspectives without conditions. Physical and sensory tracking enhances self-awareness by grounding individuals in their bodily experiences. This grounding is critical for emotional regulation, allowing individuals to identify and process emotions in real-time, preventing overwhelm and fostering balance. Techniques like mindfulness meditation and body scan exercises attune individuals to their physical sensations, providing insights into their emotional states. Sensory awareness also facilitates healthy communication by fostering empathy and active listening. When individuals are in tune with their physical sensations, they become more present in interactions, picking up on subtle cues and responding thoughtfully. This presence reduces misunderstandings and conflicts, promoting more effective communication. The ability to introspect and observe oneself objectively is key to emotional introspection. This skill allows individuals to reflect on their thoughts, feelings, and behaviors, identify patterns, recognize areas for growth, and make conscious choices aligned with their values and goals. In conclusion, physical and sensory tracking and awareness are vital for developing the skills necessary for higher consciousness development. By fostering self-agency, emotional regulation, and the ability to hold multiple perspectives, these practices contribute to healthier communication, deeper emotional introspection, and a more altruistic and connected global community. Integrating these practices into developmental psychology and therapeutic interventions holds significant promise for both individual and societal transformation.Keywords: conscious development, emotional introspection, emotional regulation, self-agency, stages of development
Procedia PDF Downloads 44
323 Global Service-Learning: Lessons Learned from Teacher Candidates
Authors: Miranda Lin
Abstract:
This project examined the impact of a globally focused service-learning project implemented in a multicultural education course at a Midwestern university. The project facilitated critical self-reflection and built cross-cultural competence while nurturing a partnership with two schools that serve students with disabilities in Vietnam. Through the service-learning project, pre-service teachers connected via Skype with the principals and teachers at the schools in Vietnam to identify and subsequently develop needed instructional materials for students with mild, moderate, and severe disabilities. Qualitative data sources included students' intercultural competence self-reflection surveys (pre-test and post-test), reflections, discussions, the service project, and lesson plans. Literature review: Global service-learning is a teaching strategy that encompasses service experiences both in the local community and abroad. Drawing on elements of global learning and international service-learning, global service-learning experiences are guided by a framework that is designed to support global learning outcomes and involve direct engagement with difference. By engaging in real-world challenges, global service-learning experiences can support the achievement of learning outcomes such as civic knowledge and intercultural knowledge and competence. Intercultural competence development is considered essential for cooperative and reciprocal engagement with community partners. Method: Participants (n=27) were mostly elementary and early childhood pre-service teachers enrolled in a multicultural education course. All but one were female. Among the pre-service teachers, one was Asian American, two were Latinas, and the rest were White. Two pre-service teachers identified themselves as coming from low socioeconomic-status families, and the rest were from middle- to upper-middle-class families. The global service-learning project was implemented in the spring of 2018. Two Vietnamese schools that serve students with disabilities agreed to be the global service-learning sites; both schools are located in an urban area. Systematic data collection coincided with the course schedule as follows: an initial intercultural competence self-reflection survey completed in week 1, guided reflections submitted in weeks 1, 9, and 16, written lesson plans and supporting materials for the service project submitted in week 16, and a final intercultural competence self-reflection survey completed in week 16. Significance: This global service-learning project helped participants meet Merryfield's goals to various degrees. They 1) learned knowledge and skills in the basics of instructional planning, 2) used a variety of instructional methods that encourage active learning, meet the different learning styles of students, and are congruent with content and educational goals, 3) gained awareness of and support for their students as individuals and as learners, 4) developed questioning techniques that build higher-level thinking skills, and 5) made progress in critically reflecting on and improving their own teaching and learning as professional educators as a result of this project. Keywords: global service-learning, teacher education, intercultural competence, diversity
Procedia PDF Downloads 117
322 Seafloor and Sea Surface Modelling in the East Coast Region of North America
Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk
Abstract:
Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships are used to emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides relevant accuracy and spatial resolution. Bathymetric data from ships surveys provides National Centers for Environmental Information – National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unidentified, as there are still many gaps to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. The solution is raster bathymetric models shared by The General Bathymetric Chart of the Oceans. The offered products are a compilation of different sets of data - raw or processed. Indirect data for the development of bathymetric models are also measurements of gravity anomalies. Some forms of seafloor relief (e.g. seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, Sea Surface Height and marine gravity anomalies can be estimated, and based on the anomalies, it’s possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America – a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms used, model thickening, and the creation of grid models. Obtained data are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis. Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of the knowledge of the quality and usefulness of the data used for seabed and sea surface modeling and knowledge of the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.). Its changes, along with knowledge of the topography of the ocean floor - inform us indirectly about the volume of the entire water ocean. The true shape of the ocean surface is further varied by such phenomena as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, or phases of ocean circulation. Depending on the location of the point, the higher the depth, the lower the trend of sea level change. Studies show that combining data sets, from different sources, with different accuracies can affect the quality of sea surface and seafloor topography models.Keywords: seafloor, sea surface height, bathymetry, satellite altimetry
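A minimal sketch of the gridding step, turning scattered soundings into a regular bathymetric grid with two common interpolators, is shown below. The survey points are synthetic and the grid spacing is an arbitrary choice; the sketch only illustrates the general workflow, not the specific GEBCO, NCEI, or altimetry processing used in the study.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical multibeam soundings: longitude, latitude (deg) and depth (m, negative down);
# the points below are synthetic, not actual survey data.
rng = np.random.default_rng(42)
lon = rng.uniform(-65.0, -60.0, 500)
lat = rng.uniform(38.0, 42.0, 500)
depth = -4000 + 1500 * np.exp(-((lon + 62.5) ** 2 + (lat - 40.0) ** 2))  # a seamount-like bump

# Regular grid at an assumed 0.05 degree spacing, filled by two common interpolators
grid_lon, grid_lat = np.meshgrid(np.arange(-65, -60, 0.05), np.arange(38, 42, 0.05))
bathy_linear = griddata((lon, lat), depth, (grid_lon, grid_lat), method="linear")
bathy_cubic = griddata((lon, lat), depth, (grid_lon, grid_lat), method="cubic")

# Compare interpolators where both are defined (they leave NaNs outside the data hull)
mask = ~np.isnan(bathy_linear) & ~np.isnan(bathy_cubic)
print("mean absolute difference between interpolators (m):",
      float(np.mean(np.abs(bathy_linear - bathy_cubic)[mask])))
```

Comparing interpolators in this way gives a rough indication of where model thickening with additional ship tracks or altimetry-derived gravity data would most improve the grid.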
Procedia PDF Downloads 79
321 Giving Children with Osteogenesis Imperfecta a Voice: Overview of a Participatory Approach for the Development of an Interactive Communication Tool
Authors: M. Siedlikowski, F. Rauch, A. Tsimicalis
Abstract:
Osteogenesis Imperfecta (OI) is a genetic disorder of childhood onset that causes frequent fractures after minimal physical stress. To date, OI research has focused on medically- and surgically-oriented outcomes with little attention on the perspective of the affected child. It is a challenge to elicit the child’s voice in health care, in other words, their own perspective on their symptoms, but software development offers a way forward. Sisom (Norwegian acronym derived from ‘Si det som det er’ meaning ‘Tell it as it is’) is an award-winning, rigorously tested, interactive, computerized tool that helps children with chronic illnesses express their symptoms to their clinicians. The successful Sisom software tool, that addresses the child directly, has not yet been adapted to attend to symptoms unique to children with OI. The purpose of this study was to develop a Sisom paper prototype for children with OI by seeking the perspectives of end users, particularly, children with OI and clinicians. Our descriptive qualitative study was conducted at Shriners Hospitals for Children® – Canada, which follows the largest cohort of children with OI in North America. Purposive sampling was used to recruit 12 children with OI over three cycles. Nine clinicians oversaw the development process, which involved determining the relevance of current Sisom symptoms, vignettes, and avatars, as well as generating new Sisom OI components. Data, including field notes, transcribed audio-recordings, and drawings, were deductively analyzed using content analysis techniques. Guided by the following framework, data pertaining to symptoms, vignettes, and avatars were coded into five categories: a) Relevant; b) Irrelevant; c) To modify; d) To add; e) Unsure. Overall, 70.8% of Sisom symptoms were deemed relevant for inclusion, with 49.4% directly incorporated, and 21.3% incorporated with changes to syntax, and/or vignette, and/or location. Three additions were made to the ‘Avatar’ island. This allowed children to celebrate their uniqueness: ‘Makes you feel like you’re not like everybody else.’ One new island, ‘About Me’, was added to capture children’s worldviews. One new sub-island, ‘Getting Around’, was added to reflect accessibility issues. These issues were related to the children’s independence, their social lives, as well as the perceptions of others. In being consulted as experts throughout the co-creation of the Sisom OI paper prototype, children coded the Sisom symptoms and provided sound rationales for their chosen codes. In rationalizing their codes, all children shared personal stories about themselves and their relationships, insights about their OI, and an understanding of the strengths and challenges they experience on a day-to-day basis. The child’s perspective on their health is a basic right, and allowing it to be heard is the next frontier in the care of children with genetic diseases. Sisom OI, a methodological breakthrough within OI research, will offer clinicians an innovative and child-centered approach to capture this neglected perspective. It will provide a tool for the delivery of health care in the center that established the worldwide standard of care for children with OI.Keywords: child health, interactive computerized communication tool, participatory approach, symptom management
Procedia PDF Downloads 157
320 Selfie: Redefining Culture of Narcissism
Authors: Junali Deka
Abstract:
“Pictures speak more than a thousand words.” It is the power of the image that allows multiple meanings, depending on how it is read by viewers. This research article is the outcome of an extensive study of the phenomenon of 'selfie culture' and of the pressing need for a self-constructed virtual identity among youth. In recent times, there has been a revolutionary change in the concept of photography, in terms of both techniques and applications. The popularity of 'self-portraits' depends mainly on the temporal space and time created on social networking sites like Facebook and Instagram. With reference to Stuart Hall's encoding and decoding process, the article studies the behavior of users who post photographs online. The photographic messages (Roland Barthes) are interpreted differently by different viewers. The notions of 'self' and 'self-love', the practices of looking (Marita Sturken), and ways of seeing (John Berger) have acquired new definitions and dimensions together. After Oscars night, show host Ellen DeGeneres's selfie created the greatest buzz and hype in social media. The term was judged the word of 2013 and has earned its place in the dictionary: in November 2013, the word "selfie" was announced as the "word of the year" by the Oxford English Dictionary, which traced the term to an Australian origin. By the end of 2012, Time magazine had considered selfie one of the "top 10 buzzwords" of that year; although selfies had existed long before, it was in 2012 that the term really hit the big time. The present study was carried out to understand the concept of the 'selfie bug' and the phenomenon it has created among youth (especially students) at large in developing a pseudo-image of their own. The topic is relevant and provides a platform to discuss the cultural, psychological, and sociological implications of the selfie in the age of digital technology. At the first level, a content analysis of primary and secondary sources, including newspaper articles and online resources, was carried out, followed by a small online survey conducted with the help of a questionnaire to find out students' views on the selfie and its social and psychological effects. The newspaper reports and online resources confirmed that the selfie is a new trend in digital media and has redefined the notions of beauty and self-love. Facebook and Instagram are the major platforms used for self-expression and the creation of virtual identity. The findings clearly reflected the active participation of female students in comparison to male students. The study of the photographs of a few selected respondents revealed differences in attitude and image-building between male and female users. The study underlines some basic questions about the desire for reconstruction of identity among the young generation, such as: are they becoming culturally narcissistic; what factors are responsible for cultural, social, and moral changes in society; and what psychological and technological effects are caused by the smartphone; culminating in the larger question of whether the selfie is a social signifier of identity construction. Keywords: culture, narcissist, photographs, selfie
Procedia PDF Downloads 407
319 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines
Authors: Alexander Guzman Urbina, Atsushi Aoyama
Abstract:
The sustainability of the traditional technologies employed in energy and chemical infrastructure poses a major challenge for our society. When making decisions related to the safety of industrial infrastructure, the values of accidental risk become relevant points of discussion. However, the challenge is the reliability of the models employed to obtain the risk data; such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome these problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today's societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, it can be argued that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to the social consequences described above, and considering the industrial sector as critical infrastructure because of the large impact a failure would have on the economy, industrial safety has become a critical issue for today's society. With regard to this safety concern, pipeline operators and regulators have been performing risk assessments in an attempt to evaluate accurately the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning trained on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of using a near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and by imitating the way human experts score risks and set tolerance levels. In summary, the deep learning method for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters employing polynomial theory. Keywords: deep learning, risk assessment, neuro fuzzy, pipelines
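The GMDH element mentioned above can be illustrated with a single selection layer: a quadratic Ivakhnenko polynomial is fitted for every pair of inputs and ranked by external (validation) error. This is a minimal sketch, not the full neuro-fuzzy deep learning system of the paper; the input variables, data set, and risk scores are synthetic assumptions.

```python
import numpy as np
from itertools import combinations

def ivakhnenko_features(x1, x2):
    """Design matrix of the quadratic Ivakhnenko polynomial used in GMDH neurons."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

def fit_gmdh_layer(X_train, y_train, X_val, y_val):
    """Fit one GMDH layer: a quadratic model for every pair of inputs,
    ranked by external (validation) error, as GMDH's selection criterion prescribes."""
    results = []
    for i, j in combinations(range(X_train.shape[1]), 2):
        A = ivakhnenko_features(X_train[:, i], X_train[:, j])
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        pred_val = ivakhnenko_features(X_val[:, i], X_val[:, j]) @ coef
        rmse = np.sqrt(np.mean((pred_val - y_val) ** 2))
        results.append(((i, j), coef, rmse))
    return sorted(results, key=lambda r: r[-1])

# Hypothetical near-miss data set: columns could stand for corrosion rate, operating
# pressure, third-party activity, etc.; the target is an expert-assigned risk score.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 4))
y = 0.6 * X[:, 0] ** 2 + 0.3 * X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=200)

best = fit_gmdh_layer(X[:150], y[:150], X[150:], y[150:])[0]
print("best input pair:", best[0], "validation RMSE:", round(best[2], 4))
```

In a full GMDH network, the best surviving pairs would feed a further layer of the same polynomials, and fuzzy membership functions could be applied to the resulting risk score to set expert-style tolerance levels.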
Procedia PDF Downloads 292
318 Unveiling Drought Dynamics in the Cuneo District, Italy: A Machine Learning-Enhanced Hydrological Modelling Approach
Authors: Mohammadamin Hashemi, Mohammadreza Kashizadeh
Abstract:
Droughts pose a significant threat to sustainable water resource management, agriculture, and socioeconomic sectors, particularly in the field of climate change. This study investigates drought simulation using rainfall-runoff modelling in the Cuneo district, Italy, over the past 60-year period. The study leverages the TUW model, a lumped conceptual rainfall-runoff model with a semi-distributed operation capability. Similar in structure to the widely used Hydrologiska Byråns Vattenbalansavdelning (HBV) model, the TUW model operates on daily timesteps for input and output data specific to each catchment. It incorporates essential routines for snow accumulation and melting, soil moisture storage, and streamflow generation. Multiple catchments' discharge data within the Cuneo district form the basis for thorough model calibration employing the Kling-Gupta Efficiency (KGE) metric. A crucial metric for reliable drought analysis is one that can accurately represent low-flow events during drought periods. This ensures that the model provides a realistic picture of water availability during these critical times. Subsequent validation of monthly discharge simulations thoroughly evaluates overall model performance. Beyond model development, the investigation delves into drought analysis using the robust Standardized Runoff Index (SRI). This index allows for precise characterization of drought occurrences within the study area. A meticulous comparison of observed and simulated discharge data is conducted, with particular focus on low-flow events that characterize droughts. Additionally, the study explores the complex interplay between land characteristics (e.g., soil type, vegetation cover) and climate variables (e.g., precipitation, temperature) that influence the severity and duration of hydrological droughts. The study's findings demonstrate successful calibration of the TUW model across most catchments, achieving commendable model efficiency. Comparative analysis between simulated and observed discharge data reveals significant agreement, especially during critical low-flow periods. This agreement is further supported by the Pareto coefficient, a statistical measure of goodness-of-fit. The drought analysis provides critical insights into the duration, intensity, and severity of drought events within the Cuneo district. This newfound understanding of spatial and temporal drought dynamics offers valuable information for water resource management strategies and drought mitigation efforts. This research deepens our understanding of drought dynamics in the Cuneo region. Future research directions include refining hydrological modelling techniques and exploring future drought projections under various climate change scenarios.Keywords: hydrologic extremes, hydrological drought, hydrological modelling, machine learning, rainfall-runoff modelling
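As a small illustration of the calibration metric named above, the sketch below computes the Kling-Gupta Efficiency for a pair of discharge series. The synthetic observed and simulated series are placeholders, not the Cuneo data, and the TUW model itself is not reproduced here.

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """Kling-Gupta Efficiency: KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
    where r is the linear correlation, alpha the ratio of standard deviations,
    and beta the ratio of means between simulated and observed discharge."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Synthetic daily discharge series (m^3/s) standing in for one catchment
rng = np.random.default_rng(1)
obs = 5 + 3 * np.sin(np.linspace(0, 8 * np.pi, 730)) ** 2 + rng.normal(0, 0.4, 730)
sim = 0.95 * obs + rng.normal(0, 0.5, 730)          # an imperfect model run

print(f"KGE = {kling_gupta_efficiency(sim, obs):.3f}")
```

A KGE of 1 indicates a perfect match; calibrating against a low-flow-sensitive variant of such a metric is what allows the simulated series to be used for Standardized Runoff Index analysis during drought periods.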
Procedia PDF Downloads 41
317 A Lung Cancer Patient with Septic Shock: Nursing Experience
Authors: Syue-Wen Lin
Abstract:
Objective: This article explores the nursing experience of an 84-year-old male lung cancer patient who underwent a thoracoscopic right lower lobectomy and subsequent treatment. The patient had multiple comorbidities, including hypertension and diabetes. The nursing process involved cancer treatment, postoperative pain management, and wound care and healing. Methods: The nursing period was from February 10 to February 17, 2024. During this period, pain management strategies were implemented, including morphine and non-pharmacological methods; music therapy, essential oil massage, and extended reception time were used to help the patient feel physically and mentally comfortable, so as to reduce postoperative pain and encourage active participation in rehabilitation. Strict sterile wound dressing procedures and advanced wound care techniques were used to promote wound healing and prevent infection. Because of septic shock, dialysis was used to relieve worsening symptoms. Taking into account the patient's cancer status, the nursing team provided comprehensive cancer care based on his physical and psychological needs. Given the complexity of the patient's condition, including advanced cancer, palliative care was also incorporated throughout the care process to relieve discomfort and provide psychological support. Results: Through a comprehensive health assessment, the nursing team fully understood the patient's condition and developed a personalized care plan. The interprofessional critical care team provided respiratory therapy and lung expansion exercises to reduce muscle loss while addressing the patient's psychological status, pain management, and vital sign stabilization needs, resulting in a comprehensive approach to care. Lung expansion exercises and the use of a high-frequency chest wall oscillation vest successfully improved sputum drainage and facilitated weaning from mechanical ventilation. In addition, stabilizing the patient's vital signs and integrating cancer care, pain management, wound care, and palliative care ensured that the patient was fully supported throughout the recovery process, ultimately improving his quality of life. Conclusion: Lung cancer and septic shock present significant challenges to patients. The nursing team not only provided critical care but also addressed the patient's unique needs through comprehensive infection control, cancer care, pain management, wound care, and palliative care interventions. These measures effectively improve patients' quality of life, promote recovery, and provide compassionate palliative care for terminally ill patients. Nursing staff worked closely with family members to develop a comprehensive care plan to ensure that the patient received high-quality medical care as well as psychological support and a comfortable recovery environment. Keywords: septic shock, lung cancer, palliative care, nursing experience
Procedia PDF Downloads 22
316 Reading and Writing Memories in Artificial and Human Reasoning
Authors: Ian O'Loughlin
Abstract:
Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question and answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.Keywords: artificial reasoning, human memory, machine learning, neural networks
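The attractor-network alternative described above can be illustrated with a minimal Hopfield-style sketch, in which memories are stable fixed points of the dynamics rather than items stored at addressable locations. The patterns and network size are arbitrary toy choices, not the specific models discussed in the literature cited.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule: memories become stable fixed points (attractors)
    of the network dynamics rather than entries stored at addressable locations."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / patterns.shape[0]

def recall(W, probe, steps=10):
    """Iterate the update rule until the state settles into the nearest attractor."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Two bipolar 'memories' and a corrupted cue (first memory with one bit flipped)
memories = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = train_hopfield(memories)
cue = np.array([1, -1, 1, -1, 1, -1, -1, -1])
out = recall(W, cue)
print("recalled:", out, "matches memory 0:", np.array_equal(out, memories[0]))
```

The contrast with a memory-network array is visible here: nothing is written to or read from an indexed slot; the remembered pattern re-emerges as the equilibrium the dynamics settle into.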
Procedia PDF Downloads 271
315 Quality Characteristics of Road Runoff in Coastal Zones: A Case Study in A25 Highway, Portugal
Authors: Pedro B. Antunes, Paulo J. Ramísio
Abstract:
Road runoff is a linear source of diffuse pollution that can cause significant environmental impacts. During rainfall events, pollutants from both stationary and mobile sources that have accumulated on the road surface are transported by the surface runoff. Road runoff in coastal zones may present high levels of salinity and chlorides due to the proximity of the sea and to transported marine aerosols. Organic matter concentration, which appears to be correlated with this process, may also be significant. This study assesses this phenomenon with the purpose of identifying the relationships between monitored water quality parameters and intrinsic site variables. To achieve this objective, an extensive monitoring program was conducted on a Portuguese coastal highway. The study included thirty rainfall events, under different weather, traffic, and salt deposition conditions, over a three-year period. Various water quality parameters were evaluated in over 200 samples. In addition, meteorological, hydrological, and traffic parameters were continuously measured. The salt deposition rates (SDR) were determined by means of a wet candle device, which is an innovative feature of the monitoring program. The SDR, which varies throughout the year, shows a high correlation with wind speed and direction, but mostly with wave propagation, so that it is lower in the summer in spite of the favorable wind direction in the case study. The distance to the sea, topography, ground obstacles, and the platform altitude also seem to be relevant. The high salinity of the runoff was confirmed; it increases the concentrations of the water quality parameters analyzed, with significant seawater features present. In order to estimate the correlations and patterns of the different water quality parameters and of variables related to weather, road section, and salt deposition, the study included exploratory data analysis using different techniques (e.g., Pearson correlation coefficients, cluster analysis, and principal component analysis), confirming some specific features of the investigated road runoff. Significant correlations among pollutants were observed. Organic matter was highlighted as strongly dependent on salinity. Indeed, the data analysis showed that some important water quality parameters could be divided into two major clusters based on their correlations with salinity (including organic-matter-associated parameters) and with total suspended solids (including some heavy metals). Furthermore, the concentrations of the most relevant pollutants seemed to be strongly dependent on some meteorological variables, particularly the duration of the antecedent dry period prior to each rainfall event and the average wind speed. Based on the results of a monitoring case study in a coastal zone, it was shown that SDR, combined with the hydrological characteristics of road runoff, can contribute to a better knowledge of the runoff characteristics and help to estimate the specific nature of the runoff and the related water quality parameters. Keywords: coastal zones, monitoring, road runoff pollution, salt deposition
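The exploratory analysis mentioned above (Pearson correlations followed by principal component analysis) can be sketched as follows. The event-mean concentrations are simulated so that organic matter tracks salinity and metals track TSS, mirroring the reported clustering; they are assumptions, not the A25 monitoring data.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical event-mean concentrations for thirty rainfall events;
# names follow the parameters discussed, the values are invented.
rng = np.random.default_rng(7)
n = 30
salinity = rng.lognormal(1.0, 0.4, n)
df = pd.DataFrame({
    "salinity": salinity,
    "COD": 20 + 8 * salinity + rng.normal(0, 4, n),      # organic matter tracks salinity
    "TSS": rng.lognormal(3.5, 0.5, n),
})
df["Zn"] = 0.05 + 0.002 * df["TSS"] + rng.normal(0, 0.01, n)   # metals track TSS

print(df.corr(method="pearson").round(2))                # Pearson correlation matrix

pca = PCA(n_components=2).fit(StandardScaler().fit_transform(df))
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
print(pd.DataFrame(pca.components_, columns=df.columns, index=["PC1", "PC2"]).round(2))
```

With data structured this way, the loadings of the two leading components tend to separate a salinity/organic-matter group from a TSS/heavy-metal group, which is the kind of two-cluster pattern the abstract reports.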
Procedia PDF Downloads 239
314 Performance Evaluation of Various Displaced Left Turn Intersection Designs
Authors: Hatem Abou-Senna, Essam Radwan
Abstract:
With increasing traffic and limited resources, accommodating left-turning traffic has been a challenge for traffic engineers as they seek balance between intersection capacity and safety; these are two conflicting goals in the operation of a signalized intersection that are mitigated through signal phasing techniques. Hence, to increase the left-turn capacity and reduce the delay at the intersections, the Florida Department of Transportation (FDOT) moves forward with a vision of optimizing intersection control using innovative intersection designs through the Transportation Systems Management & Operations (TSM&O) program. These alternative designs successfully eliminate the left-turn phase, which otherwise reduces the conventional intersection’s (CI) efficiency considerably, and divide the intersection into smaller networks that would operate in a one-way fashion. This study focused on the Crossover Displaced Left-turn intersections (XDL), also known as Continuous Flow Intersections (CFI). The XDL concept is best suited for intersections with moderate to high overall traffic volumes, especially those with very high or unbalanced left turn volumes. There is little guidance on determining whether partial XDL intersections are adequate to mitigate the overall intersection condition or full XDL is always required. The primary objective of this paper was to evaluate the overall intersection performance in the case of different partial XDL designs compared to a full XDL. The XDL alternative was investigated for 4 different scenarios; partial XDL on the east-west approaches, partial XDL on the north-south approaches, partial XDL on the north and east approaches and full XDL on all 4 approaches. Also, the impact of increasing volume on the intersection performance was considered by modeling the unbalanced volumes with 10% increment resulting in 5 different traffic scenarios. The study intersection, located in Orlando Florida, is experiencing recurring congestion in the PM peak hour and is operating near capacity with volume to a capacity ratio closer to 1.00 due to the presence of two heavy conflicting movements; southbound and westbound. The results showed that a partial EN XDL alternative proved to be effective and compared favorably to a full XDL alternative followed by the partial EW XDL alternative. The analysis also showed that Full, EW and EN XDL alternatives outperformed the NS XDL and the CI alternatives with respect to the throughput, delay and queue lengths. Significant throughput improvements were remarkable at the higher volume level with percent increase in capacity of 25%. The percent reduction in delay for the critical movements in the XDL scenarios compared to the CI scenario ranged from 30-45%. Similarly, queue lengths showed percent reduction in the XDL scenarios ranging from 25-40%. The analysis revealed how partial XDL design can improve the overall intersection performance at various demands, reduce the costs associated with full XDL and proved to outperform the conventional intersection. However, partial XDL serving low volumes or only one of the critical movements while other critical movements are operating near or above capacity do not provide significant benefits when compared to the conventional intersection.Keywords: continuous flow intersections, crossover displaced left-turn, microscopic traffic simulation, transportation system management and operations, VISSIM simulation model
Procedia PDF Downloads 310
313 Monitoring and Improving Performance of Soil Aquifer Treatment System and Infiltration Basins of North Gaza Emergency Sewage Treatment Plant as Case Study
Authors: Sadi Ali, Yaser Kishawi
Abstract:
As part of Palestine, the Gaza Strip (365 km2, 1.8 million inhabitants) is considered a semi-arid zone that relies solely on the Coastal Aquifer. The coastal aquifer is the only source of water, and only 5-10% of it is suitable for human use. This barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority's strategy is to develop a non-conventional water resource from treated wastewater to irrigate 1500 hectares and serve over 100,000 inhabitants. A new WWTP project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (pressure line and 9 infiltration basins - IBs), phase B (a new WWTP), and phase C (Recovery and Reuse Scheme – RRS – to capture the spreading plume). Phase A has been functioning since April 2009. Since then, a monitoring plan has been conducted to monitor the infiltration rate (I.R.) of the 9 basins. Nearly 23 million m3 of partially treated wastewater had been infiltrated up to June 2014. It is important to maintain an acceptable rate to allow the basins to handle the incoming quantities (currently 10,000 m3 are pumped and infiltrated daily). The methodology applied was to review and analyze the collected data, including the I.R.s, the wastewater quality, and the drying-wetting schedule of the basins. One of the main findings is the relation between the Total Suspended Solids (TSS) at BLWWTP and the I.R. at the basins. Since April 2009, the basins scored an average I.R. of about 2.5 m/day. The records then showed a decreasing pattern of the average rate until it reached its lowest value of 0.42 m/day in June 2013. This was accompanied by an increase in the TSS concentration (mg/L) at the source, reaching above 200 mg/L. Reducing the TSS concentration (by cleaning the wastewater source ponds at the Biet Lahia WWTP site) directly improved the I.R. This was reflected in an improvement in the I.R. over the following 6 months, from 0.42 m/day to 0.66 m/day and then to nearly 1.0 m/day as the average of the last 3 months of 2013. The wetting-drying scheme of the basins was observed (3 days wetting and 7 days drying), along with the rainfall rates. Despite the difficulty of applying this scheme accurately, the flow to each basin was controlled to improve the I.R. The drying-wetting system affected the I.R. of individual basins and thus the overall system rate, which was recorded and assessed. Ploughing activities at the infiltration basins were also recommended at certain times to maintain a given infiltration level, since ploughing breaks up the clogging layer that prevents infiltration. It is recommended to maintain a proper quality of infiltrated wastewater to ensure an acceptable performance of the IBs. Continual maintenance of the settling ponds at BLWWTP, continual ploughing of the basins, and the application of soil treatment techniques at the IBs will improve the I.R.s. When the new WWTP is functioning, a high-standard effluent (TSS 20 mg/l, BOD 20 mg/l, and TN 15 mg/l) will be infiltrated, which will enhance the I.R.s of the IBs due to the lower organic load. Keywords: soil aquifer treatment, recovery and reuse scheme, infiltration basins, North Gaza
Procedia PDF Downloads 247
312 Design of Ultra-Light and Ultra-Stiff Lattice Structure for Performance Improvement of Robotic Knee Exoskeleton
Authors: Bing Chen, Xiang Ni, Eric Li
Abstract:
With population ageing, the number of patients suffering from chronic diseases is increasing, among which stroke has a high incidence in the elderly. In addition, there is a gradual increase in the number of patients with orthopedic or neurological conditions such as spinal cord injuries, nerve injuries, and other knee injuries. These diseases are chronic, with high recurrence and complications, and normal walking is difficult for such patients. Nowadays, robotic knee exoskeletons have been developed for individuals with knee impairments. However, the currently available robotic knee exoskeletons are generally heavy, which makes them uncomfortable to wear, causes wearing fatigue, shortens the wearing time, and reduces the efficiency of the exoskeletons. Some lightweight materials, such as carbon fiber and titanium alloy, have been used for the development of robotic knee exoskeletons; however, this increases their cost. This paper illustrates the design of a new ultra-light and ultra-stiff truss type of lattice structure. The lattice structures are arranged in a fan shape, which fits well with circular arc surfaces such as circular holes, and can be utilized in the design of rods, brackets, and other parts of a robotic knee exoskeleton to reduce the weight. The metamaterial is formed by the continuous arrangement and combination of small truss-structure unit cells, varying the diameter of the pillar section, the geometrical size, and the relative density of each unit cell. It can be made quickly through additive manufacturing techniques such as metal 3D printing. The unit cell of the truss structure is small, and the machined parts of the robotic knee exoskeleton, such as connectors, rods, and bearing brackets, can be filled and replaced using gradient arrangements and non-uniform distributions. While satisfying the mechanical requirements of the robotic knee exoskeleton, the weight of the exoskeleton is reduced; hence, the patient’s wearing fatigue is relieved and the wearing time of the exoskeleton is increased. Thus, the efficiency, wearing comfort, and safety of the exoskeleton can be improved. In this paper, a brief description of the hardware design of the prototype robotic knee exoskeleton is first presented. Next, the design of the ultra-light and ultra-stiff truss type of lattice structure is proposed, and the mechanical analysis of the single-cell unit is performed by establishing a theoretical model. Additionally, simulations are performed to evaluate the maximum stress-bearing capacity and compressive performance of uniform and gradient arrangements of the cells. Finally, static analyses are performed for the cell-filled rod and the unmodified rod, respectively, and the simulation results demonstrate the effectiveness and feasibility of the designed ultra-light and ultra-stiff truss type of lattice structure. In future studies, experiments will be conducted to further evaluate the performance of the designed lattice structures. Keywords: additive manufacturing, lattice structures, metamaterial, robotic knee exoskeleton
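The stiffness-to-weight argument rests on the relative density of the truss unit cell. The sketch below estimates relative density from assumed strut and cell dimensions and applies a generic Gibson-Ashby-type stretch-dominated scaling; it is not the authors' theoretical model, and all geometry and material constants are assumptions.

```python
# Hypothetical unit-cell sketch: relative density of a truss cell and a
# Gibson-Ashby-type stiffness estimate. Geometry and constants are assumed,
# not taken from the paper's lattice design.
import math

def relative_density(strut_d, cell_L, n_struts, strut_len_factor=math.sqrt(2) / 2):
    """Approximate relative density: total strut volume / cell volume (overlaps ignored)."""
    strut_len = strut_len_factor * cell_L
    strut_vol = n_struts * math.pi * (strut_d / 2) ** 2 * strut_len
    return strut_vol / cell_L ** 3

def stretch_dominated_modulus(E_solid, rho_rel, C=0.3):
    """Stretch-dominated scaling E ~ C * E_s * rho_rel (octet-type behaviour)."""
    return C * E_solid * rho_rel

E_al = 70e9  # Pa, solid aluminium alloy as an illustrative base material
rho_rel = relative_density(strut_d=0.6e-3, cell_L=5e-3, n_struts=24)
print(f"relative density ~ {rho_rel:.3f}")
print(f"effective modulus ~ {stretch_dominated_modulus(E_al, rho_rel) / 1e9:.2f} GPa")
```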
Procedia PDF Downloads 107
311 Role of HIV-Support Groups in Mitigating Adverse Sexual Health Outcomes among HIV Positive Adolescents in Uganda
Authors: Lilian Nantume Wampande
Abstract:
Group-based strategies in the delivery of HIV care have opened up new avenues not only for meaningful participation by HIV positive people but also platforms for the deconstruction and reconstruction of knowledge about living with the virus. Yet the contributions of such strategies among patients who live in high-risk areas remain unexplored. This case study research assessed the impact of HIV support networks on the sexual health outcomes of HIV positive out-of-school adolescents residing in the fishing islands of Kalangala in Uganda. The study population was out-of-school adolescents living with HIV and their sexual partners (n=269), members of their households (n=80), and their health service providers (n=15). Data were collected via structured interviews, observations, and focus group discussions between August 2016 and March 2017. Data were then analyzed inductively to extract key themes related to the approaches and outcomes of the groups’ activities. The study findings indicate that support groups unite HIV positive adolescents in a bid for social renegotiation to achieve change, but individual constraints surpass the groups’ intentions. Some adolescents, for example, reported increased fear, which led to failure to cope, sexual violence, self-harm, and denial of status as a result of the high expectations placed on them as members of the support groups. Further investigation of this phenomenon shows that HIV networks play a monotonous role as information sources for HIV positive out-of-school adolescents, which limits their creativity in seeking information elsewhere. Results also indicate that HIV adolescent groups recognize the complexity of long-term treatment and retention in care, leading to improved immunity for the majority; yet, there is still only scattered evidence about how effective they are among adolescents at different phases of the disease trajectory. Nevertheless, the primary focus on developing adolescent self-efficacy and coping skills significantly addresses a range of disclosure difficulties and supports autonomy. Moreover, the peer techniques utilized, in addition to the almost homogeneous group characteristics, accelerate positive confidence, hope, and belongingness. Adolescent HIV-support groups therefore have the capacity to both improve and worsen sexual health outcomes for young adolescents who are out of school. Communication interventions that seek to increase awareness about ‘self’ should therefore be emphasized more than just fostering collective action. Such interventions should be sensitive to context and gender. In addition, facilitative support supervision by close and trusted health care providers, most preferably Village Health Teams (who are often community-elected volunteers), would help to follow up, mentor, encourage, and advise these young adolescents in matters involving sexuality and health outcomes. HIV/AIDS prevention programs have extended their efforts beyond an individual focus to those that foster collective action, but programs should rekindle interpersonal-level strategies to address the complexity of individual behavior. Keywords: adolescent, HIV, support groups, Uganda
Procedia PDF Downloads 142
310 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays
Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín
Abstract:
Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are being used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except in the force reconstruction process, a stage in which they have been less applied. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. Based on the analysis of a software implementation of this model, the proposed implementation parallelizes tasks that facilitate the execution of matrix operations and a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate techniques for algorithm parallelization, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process, and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to Finite Element Modeling (FEM) simulations of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform that allows the reconstruction of force vectors following a scalable approach, from the information captured by tactile sensor arrays composed of up to 48 × 48 taxels using various transduction technologies. The proposed implementation demonstrates a reduction in estimation time to about 1/180 of that of software implementations. Despite the relatively high estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed, and these are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts. Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation
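The reconstruction step described above amounts to solving a linear stress-to-force model for a force vector per taxel. A minimal software sketch of that idea is shown below, using a random placeholder model matrix and an ordinary least-squares solve rather than the paper's two-dimensional optimization or the FPGA parallelization.

```python
# Hypothetical sketch: estimating a 3-component force vector per taxel from normal
# stress readings with a linear measurement model stress = A @ f. The matrix A would
# come from a contact-mechanics model; here it is a random placeholder.
import numpy as np

rng = np.random.default_rng(0)
n_taxels = 10 * 10
n_unknowns = 3 * n_taxels          # (fx, fy, fz) per taxel
n_readings = 4 * n_taxels          # assumed oversampled normal-stress samples

A = rng.normal(size=(n_readings, n_unknowns))              # assumed model matrix
f_true = rng.normal(size=n_unknowns)                       # synthetic "true" force field
stress = A @ f_true + 0.01 * rng.normal(size=n_readings)   # noisy stress data

# Least-squares estimate; the hardware design parallelises matrix steps like this.
f_hat, *_ = np.linalg.lstsq(A, stress, rcond=None)
forces = f_hat.reshape(n_taxels, 3)                        # per-taxel (fx, fy, fz)

err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
print(f"relative estimation error: {err:.2%}")
```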
Procedia PDF Downloads 195
309 Adolescents’ Reports of Dating Abuse: Mothers’ Responses
Authors: Beverly Black
Abstract:
Background: Adolescent dating abuse (ADA) is widespread throughout the world and negatively impacts many adolescents. ADA is associated with lower self-esteem, poorer school performance, lower employment opportunities, higher rates of depression, absenteeism from school, substance abuse, bullying, smoking, suicide, pregnancy, eating disorders, risky sexual behaviors, and experiencing domestic violence later in life. ADA prevention is sometimes addressed through school programming; yet, parental responses to ADA can also be an important vehicle for its prevention. In this exploratory study, the author examined how mothers, including abused mothers, responded to scenarios of ADA involving their children. Methods: Six focus groups were conducted between December 2013 and June 2014 with mothers (n=31) in the southern part of the United States. Three of the focus groups were comprised of mothers (n=17) who had been abused by their partners. Mothers were recruited from local community family agencies. Participants were provided a series of four scenarios about ADA and asked to explain how they would respond. Focus groups lasted approximately 45 minutes. All participants were given a gift card to a major retailer as a ‘thank you’. Using QSR-N10, two researchers analyzed the focus group data, first using open and axial coding techniques to find overarching themes. The researchers triangulated the coded data to ensure accurate interpretations of the participants’ messages and used the scenario questions to structure the coded results. Results: Almost 30% of the 699 comments coded as mothers’ recommendations for responding to ADA focused on the importance of providing advice to their children. Advice included breaking up, going to the police, ignoring or avoiding the abusive partner, and setting boundaries in relationships. About 22% of comments focused on the need to educate teens about healthy and unhealthy relationships and to seek additional information. About 13% of the comments reflected the view that parents should confront the abuser and/or the abuser’s parents, and less than 2% noted the need to take their child to counseling. Mothers who had been abused offered responses similar to those of parents who had not experienced abuse. However, their responses were more likely to focus on sharing their own experience and exercising caution, as they knew from their own experiences that authoritarian responses were ineffective. Over half of the comments indicated that parents would react more strongly, quickly, and angrily if a girl was being abused by a boy than vice versa; parents expressed greater fear for their daughters than for their sons involved in ADA. Conclusions: Results suggest that mothers have ideas about how to respond to ADA. Mothers who have been abused draw from their experiences and are aware that responding in an authoritarian manner may not be helpful. Because parental influence on teens is critical in their development, it is important for all parents to respond to ADA in a helpful manner to break the cycle of violence. Understanding responses to ADA can inform prevention programming to work with parents in responding to ADA. Keywords: abused mothers' responses to dating abuse, adolescent dating abuse, mothers' responses to dating abuse, teen dating violence
Procedia PDF Downloads 218
308 Characterization and Evaluation of the Dissolution Increase of Molecular Solid Dispersions of Efavirenz
Authors: Leslie Raphael de M. Ferraz, Salvana Priscylla M. Costa, Tarcyla de A. Gomes, Giovanna Christinne R. M. Schver, Cristóvão R. da Silva, Magaly Andreza M. de Lyra, Danilo Augusto F. Fontes, Larissa A. Rolim, Amanda Carla Q. M. Vieira, Miracy M. de Albuquerque, Pedro J. Rolim-Neto
Abstract:
Efavirenz (EFV) is a drug used as first-line treatment of AIDS. However, it has poor aqueous solubility and wettability, presenting problems in gastrointestinal tract absorption and bioavailability. One of the most promising strategies to improve solubility is the use of solid dispersions (SD). Therefore, this study aimed to characterize SD of EFV with the polymers PVP-K30, PVPVA 64, and Soluplus® in order to find an optimal formulation to compose a future pharmaceutical product for AIDS therapy. Initially, Physical Mixtures (PM) and SD with the polymers containing 10, 20, 50, and 80% of drug (w/w) were obtained by the solvent method. The best formulation among the SD was selected by the in vitro dissolution test. Finally, the chosen drug-carrier systems, in all ratios obtained, were analyzed by the following techniques: Differential Scanning Calorimetry (DSC), polarized light microscopy, Scanning Electron Microscopy (SEM), and infrared (IR) absorption spectrophotometry. From the dissolution profiles of EFV, PM, and SD, the values of the Area Under the Curve (AUC) were calculated. The data showed that the AUC of all PM is greater than that of isolated EFV; this result derives from the hydrophilic properties of the polymers, which favor a decrease in surface tension between the drug and the dissolution medium. In addition, this increases the wettability of the drug. In parallel, it was found that the SD with the higher AUC values were those with the greatest amount of polymer (with only 10% drug). As the amount of drug increases, these results either decrease or are statistically similar. The AUC values of the SD with the three different polymers followed this decreasing order: SD PVPVA 64-EFV 10% > SD PVP-K30-EFV 10% > SD Soluplus®-EFV 10%. The DSC curves of the SD did not show the endothermic event characteristic of the drug melting process, suggesting that EFV was converted to its amorphous state. Polarized light microscopy showed significant birefringence of the PM, but this was not observed in films of the SD, also suggesting the conversion of the drug from the crystalline to the amorphous state. In the electron micrographs of all PM, independently of the percentage of drug, the crystal structure of EFV was clearly detectable. Moreover, in the electron micrographs of the SD with the polymers in the different ratios investigated, particles with irregular size and morphology were observed, and an extensive change in the appearance of the polymer also occurred, making it impossible to differentiate the two components. The IR spectra of the PM correspond to the overlapping of the polymer and EFV bands, indicating that there is no interaction between them, unlike the spectra of all SD, which showed the complete disappearance of the band related to the axial deformation of the NH group of EFV. Therefore, this study obtained a suitable formulation to overcome the solubility limitations of EFV, since SD PVPVA 64-EFV 10% was chosen as the best system for delaying drug crystallization, reaching higher levels of supersaturation. Keywords: characterization, dissolution, Efavirenz, solid dispersions
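The AUC comparison described above can be reproduced with a simple trapezoidal integration of each dissolution profile. The sketch below assumes illustrative percent-dissolved values at arbitrary time points; it only shows the calculation, not the study's measured profiles.

```python
# Hypothetical dissolution-profile sketch: AUC of percent drug dissolved vs time
# by the trapezoidal rule. The profiles below are illustrative, not the study data.
import numpy as np

time_min = np.array([0, 5, 10, 15, 30, 45, 60, 90, 120])

profiles = {
    "EFV (pure drug)":     np.array([0, 3, 5, 7, 10, 12, 14, 16, 18]),
    "PM PVPVA 64-EFV 10%": np.array([0, 8, 14, 20, 28, 33, 37, 41, 44]),
    "SD PVPVA 64-EFV 10%": np.array([0, 35, 55, 68, 80, 85, 88, 90, 91]),
}

for name, dissolved in profiles.items():
    auc = np.trapz(dissolved, time_min)  # units: %·min
    print(f"{name}: AUC = {auc:.0f} %·min")
```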
Procedia PDF Downloads 631
307 Reduction of Specific Energy Consumption in Microfiltration of Bacillus velezensis Broth by Air Sparging and Turbulence Promoter
Authors: Jovana Grahovac, Ivana Pajcin, Natasa Lukic, Jelena Dodic, Aleksandar Jokic
Abstract:
To obtain purified biomass to be used in plant pathogen biocontrol or as a soil biofertilizer, it is necessary to eliminate residual broth components at the end of the fermentation process. The main drawback of membrane separation techniques is permeate flux decline due to membrane fouling. Fouling mitigation measures increase the pressure drop along the membrane channel due to the increased resistance to flow of the feed suspension, thus increasing the hydraulic power requirement. At the same time, these measures lead to an increase in the permeate flux due to the reduced resistance of the filtration cake on the membrane surface. Because of these opposing effects, the energy efficiency of fouling mitigation measures is limited, and the justification for their application is provided by a reduction in specific energy consumption compared to the case without any measures employed. In this study, the influence of a static mixer (Kenics) and air sparging (two-phase flow) on the reduction of specific energy consumption (ER) was investigated. Cultivation of Bacillus velezensis was carried out in a 3-L bioreactor (Biostat® Aplus) containing 2 L working volume with two parallel Rushton turbines and without internal baffles. Cultivation was carried out at 28 °C and 150 rpm with an aeration rate of 0.75 vvm for 96 h. The experiments were carried out in a conventional cross-flow microfiltration unit. During the experiments, permeate and retentate were recycled back to the broth vessel to simulate a continuous process. The single-channel ceramic membrane (TAMI Deutschland) used had a nominal pore size of 200 nm, a length of 250 mm, and an inner/external diameter of 6/10 mm. The useful membrane channel surface was 4.33×10⁻³ m². Air sparging was provided by pressurized air connected to the feed tube through a three-way valve and a simple T-connector without a diffuser. The different approaches to flux improvement are compared in terms of energy consumption. The reduction of specific energy consumption compared to microfiltration without fouling mitigation is around 49% and 63% for the two-phase flow and the static mixer, respectively. In the case of a combination of these two fouling mitigation methods, ER is 60%, i.e., slightly lower compared to the use of the turbulence promoter alone. The reason for this result is that the flux increase is more affected by the presence of the Kenics static mixer, while sparging increases the energy used during microfiltration. By comparing the combined method with the turbulence promoter flux enhancement method, ER is negative (-7%), which can be explained by the increased power consumption for air flow with only a moderate contribution to the flux increase. Another confirmation of this fact can be found by comparing the energy consumption of the combined method with that of the two-phase flow. In this instance, the energy reduction (ER) is 22%, which demonstrates that the turbulence promoter is more efficient than two-phase flow. The antimicrobial activity of the Bacillus velezensis biomass against phytopathogenic Xanthomonas campestris isolates was preserved under the different fouling reduction methods. Keywords: Bacillus velezensis, microfiltration, static mixer, two-phase flow
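The ER figures quoted above follow from comparing the specific energy consumption of each configuration with that of microfiltration without fouling mitigation. A minimal sketch of that bookkeeping is given below; the power and permeate-flow numbers are placeholders, and the simple E = power / permeate flow form is an assumption about how the specific energy is defined.

```python
# Hypothetical sketch: specific energy consumption E = power used / permeate flow,
# and ER = relative reduction vs. microfiltration without fouling mitigation.
# All numeric values are placeholders, not the measured data.

def specific_energy(hydraulic_power_W, air_power_W, permeate_flow_L_h):
    """kWh per m3 of permeate (1 m3 = 1000 L)."""
    total_kW = (hydraulic_power_W + air_power_W) / 1000.0
    permeate_m3_h = permeate_flow_L_h / 1000.0
    return total_kW / permeate_m3_h

cases = {
    #                    hydraulic W, air W, permeate L/h
    "no mitigation":     (55.0,  0.0,  4.0),
    "two-phase flow":    (40.0, 12.0,  7.5),
    "static mixer":      (75.0,  0.0, 14.0),
    "mixer + two-phase": (60.0, 12.0, 13.0),
}

E0 = specific_energy(*cases["no mitigation"])
for name, args in cases.items():
    E = specific_energy(*args)
    ER = 100.0 * (E0 - E) / E0
    print(f"{name}: E = {E:.2f} kWh/m3, ER = {ER:+.0f}%")
```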
Procedia PDF Downloads 118
306 Radiofrequency and Near-Infrared Responsive Core-Shell Multifunctional Nanostructures Using Lipid Templates for Cancer Theranostics
Authors: Animesh Pan, Geoffrey D. Bothun
Abstract:
With the development of nanotechnology, research on multifunctional delivery systems has gained new pace and dimension. An emerging challenge is to design an all-in-one delivery system that can be used for multiple purposes, including tumor-targeting therapy, radio-frequency (RF)-, near-infrared (NIR)-, light-, or pH-induced controlled release, photothermal therapy (PTT), photodynamic therapy (PDT), and medical diagnosis. In this regard, various inorganic nanoparticles (NPs) are known to show great potential as the 'functional components' because of their fascinating and tunable physicochemical properties and the possibility of multiple theranostic modalities from individual NPs. Magnetic, luminescent, and plasmonic properties are the three most extensively studied and, more importantly, biomedically exploitable properties of inorganic NPs. Although successful attempts at combining any two of the above-mentioned functionalities have been made, integrating them in one system has remained a challenge. With this in mind, the controlled design of complex colloidal nanoparticle systems is one of the most significant challenges in nanoscience and nanotechnology. Therefore, systematic and planned studies providing better insight are needed. We report a multifunctional liposome-based delivery platform loaded with drug and iron-oxide magnetic nanoparticles (MNPs) and bearing a gold shell on the liposome surface, synthesized using a lipid-with-polyelectrolyte (layersome) templating technique. MNPs and the anti-cancer drug doxorubicin (DOX) were co-encapsulated inside liposomes composed of zwitterionic phosphatidylcholine and anionic phosphatidylglycerol using the reverse-phase evaporation (REV) method. The liposomes were coated with a positively charged polyelectrolyte (poly-L-lysine) to enrich the interface with gold anions, exposed to a reducing agent to form a gold nanoshell, and then capped with thiol-terminated polyethylene glycol (SH-PEG2000). The core-shell nanostructures were characterized by different techniques, such as UV-Vis/NIR scanning spectrophotometry, dynamic light scattering (DLS), and transmission electron microscopy (TEM). This multifunctional system achieves a variety of functions, such as radiofrequency (RF)-triggered release, chemo-hyperthermia, and NIR laser-triggered photothermal therapy. Herein, we highlight some of the remaining major design challenges in combination with preliminary studies assessing therapeutic objectives. We demonstrate an efficient loading and delivery system with therapeutic capabilities that induces significant cell death in human cancer cells (A549). Coupling RF and NIR excitation to the doxorubicin-loaded core-shell nanostructure helped secure targeted and controlled drug release to the cancer cells. The present core-shell multifunctional system, with its multimodal imaging and therapeutic capabilities, would be an eminent candidate for cancer theranostics. Keywords: cancer theranostics, multifunctional nanostructure, photothermal therapy, radiofrequency targeting
Procedia PDF Downloads 128
305 Cognitive Radio in Aeronautic: Comparison of Some Spectrum Sensing Technics
Authors: Abdelkhalek Bouchikhi, Elyes Benmokhtar, Sebastien Saletzki
Abstract:
The aeronautical field is experiencing RF spectrum congestion due to the constant increase in the number of flights, aircraft, and on-board telecom systems. In addition, these systems are demanding in size, weight, and energy consumption. Cognitive radio helps solve the spectrum congestion issue in particular through its capacity to detect idle frequency channels, thereby allowing opportunistic exploitation of the RF spectrum. The present work aims to propose a new use case for aeronautical spectrum sharing and to study the performance of three different detection techniques within this use case: the energy detector, the matched filter, and the cyclostationary detector. The spectrum in the proposed cognitive radio is allocated dynamically, and each cognitive radio follows a cognitive cycle, in which spectrum sensing is a crucial step. The goal of sensing is to gather data about the surrounding environment. A cognitive radio can use different sensors: antennas, cameras, accelerometers, thermometers, etc. In the IEEE 802.22 standard, for example, a primary user (PU) always has priority to communicate. When a frequency channel used by the primary user is idle, the secondary user (SU) is allowed to transmit in this channel. The Distance Measuring Equipment (DME) is composed of a UHF transmitter/receiver (interrogator) in the aircraft and a UHF receiver/transmitter on the ground, while future cognitive radio will be used jointly to alleviate the spectrum congestion issue in the aeronautical field; LDACS, for example, is a good candidate, as it provides two isolated data-links: ground-to-air and air-to-ground. The first contribution of the present work is a strategy for sharing the L-band. The adopted spectrum sharing strategy is as follows: the DME plays the role of the PU, which is the licensed user, and the LDACS1 systems are the SUs. The SUs may use the L-band channels opportunistically as long as they do not cause harmful interference that affects the QoS of the DME system. Spectrum sensing is a key step: it helps detect spectrum holes by determining whether the primary signal is present in a given frequency channel. A missed detection of the primary user's presence creates interference between the PU and SU and seriously affects the QoS of the legacy radio. In this study, brief definitions, concepts, and the state of the art of cognitive radio are first presented. Then, a study of three communication channel detection algorithms in a cognitive radio context is carried out from the point of view of their functions, hardware requirements, and signal detection capability in the aeronautical field. Next, the detection problem is modeled using the three methods (energy detection, matched filtering, and cyclostationary detection), and an algorithmic description of these detectors is given. Finally, the performance of the algorithms is studied and compared. Simulations were carried out using MATLAB software, and the results were analyzed based on ROC curves for SNRs between -10 dB and 20 dB. The three detectors were tested with synthetic and real-world signals. Keywords: aeronautic, communication, navigation, surveillance systems, cognitive radio, spectrum sensing, software defined radio
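Of the three detectors compared, the energy detector is the simplest to sketch: it compares the average received power in a sensing window against a threshold set for a target false-alarm probability. The code below is an illustrative Monte Carlo estimate of one (Pfa, Pd) ROC point under an assumed AWGN model and a crude placeholder primary-user waveform, not the DME/LDACS signals of the study.

```python
# Hypothetical sketch: energy detection of a primary-user signal in AWGN and an
# empirical (Pd, Pfa) estimate at one SNR. Signal model, threshold rule, and
# parameters are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
N = 1024            # samples per sensing window
snr_db = -5.0
noise_var = 1.0
sig_power = noise_var * 10 ** (snr_db / 10)
trials = 2000

# Threshold set for a target false-alarm probability from the noise-only statistic.
target_pfa = 0.1
noise_stats = np.array([np.mean(np.abs(rng.normal(0, np.sqrt(noise_var), N)) ** 2)
                        for _ in range(trials)])
threshold = np.quantile(noise_stats, 1 - target_pfa)

detections = 0
for _ in range(trials):
    s = np.sqrt(sig_power) * np.sign(rng.standard_normal(N))  # crude BPSK-like PU signal
    x = s + rng.normal(0, np.sqrt(noise_var), N)
    if np.mean(np.abs(x) ** 2) > threshold:
        detections += 1

print(f"SNR = {snr_db} dB: Pfa ~ {target_pfa}, Pd ~ {detections / trials:.2f}")
```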
Procedia PDF Downloads 174
304 Child Sexual Abuse Prevention: Evaluation of the Program “Sharing Mouth to Mouth: My Body, Nobody Can Touch It”
Authors: Faride Peña, Teresita Castillo, Concepción Campo
Abstract:
Sexual violence, and particularly child sexual abuse, is a serious problem all over the world, México included. Given its importance, there are several preventive and care programs run by the government and civil society all over the country, but most of them are developed in urban areas even though these problems are especially serious in rural areas. Yucatán, a state in southern México, ranks among the highest in child sexual abuse. Considering the above, the University Unit of Clinical Research and Victimological Attention (UNIVICT) of the Autonomous University of Yucatan designed, implemented, and is currently evaluating the program named “Sharing Mouth to Mouth: My Body, Nobody Can Touch It”, a program to prevent child sexual abuse in rural communities of Yucatán, México. Its aim was to develop skills for the detection of risk situations, providing protection strategies and mechanisms for prevention through culturally relevant psycho-educative strategies to increase personal resources in children, in collaboration with parents, teachers, police, and municipal authorities. The diagnosis identified children between 4 and 10 years of age as a particularly vulnerable population. The program ran during 2015 in primary schools in the municipality, whose inhabitants are mostly Mayan. The aim of this paper is to present its evaluation in terms of effectiveness and efficiency. This evaluation included documentary analysis of the work done in the field, psycho-educational and recreational activities with children, evaluation of knowledge by participating children, and interviews with parents and teachers. The results show high effectiveness in fulfilling the tasks and achieving the primary objectives. The efficiency shows satisfactory results but also areas of opportunity that can be addressed with minor adjustments to the program. The results also show the importance of including culturally relevant strategies and activities; otherwise, possible achievements are minimized. Another highlight is the importance of participatory action research in preventive approaches to child sexual abuse, since people participate more actively when they become aware of the importance of the subject; it also helps to design culturally appropriate strategies and measures so that the proposal is not distant from the people. The discussion emphasizes the methodological implications of prevention programs (the convenience of using participatory action research (PAR), the importance of monitoring and mediation during implementation, developing detection-skills tools in creative ways using psycho-educational interactive techniques, and working with assessments issued by the participants themselves). It is also important to consider the holistic character this type of program should have, in terms of incorporating socially and culturally relevant characteristics according to the community's individuality and uniqueness, and considering the type of communication to be used and children's language skills, given that there should be variations strongly linked to the specific cultural context. Keywords: child sexual abuse, evaluation, PAR, prevention
Procedia PDF Downloads 295
303 Polymer Composites Containing Gold Nanoparticles for Biomedical Use
Authors: Bozena Tyliszczak, Anna Drabczyk, Sonia Kudlacik-Kramarczyk, Agnieszka Sobczak-Kupiec
Abstract:
Introduction: Nanomaterials have become some of the leading materials in the synthesis of various compounds. This is because nano-sized materials exhibit different properties compared to their macroscopic equivalents. Such a change in size is reflected in a change in optical, electric, or mechanical properties. Among nanomaterials, particular attention is currently directed toward gold nanoparticles. They find application in a wide range of areas, including cosmetology and pharmacy. Additionally, nanogold may be a component of modern wound dressings, whose antibacterial activity is beneficial from the viewpoint of the wound healing process. The specific properties of this type of nanomaterial also make it applicable in cancer treatment. Studies on the development of new drug delivery techniques are currently an important research subject for many scientists. This is because, along with the development of such fields of science as medicine and pharmacy, the need for better and more effective methods of administering drugs is constantly growing. The solution may be the use of drug carriers. These are materials that combine with the active substance and lead it directly to the desired place. The role of such a carrier may be played by gold nanoparticles, which are able to bond covalently with many organic substances. This allows the combination of nanoparticles with active substances. Therefore, gold nanoparticles are widely used in the preparation of nanocomposites that may be used for medical purposes, with special emphasis on drug delivery. Methodology: As part of the presented research, the synthesis of composites was carried out. The composites consisted of a polymer matrix and gold nanoparticles introduced into the polymer network. The synthesis was conducted with the use of a crosslinking agent and a photoinitiator, and the materials were obtained by means of the photopolymerization process. Next, incubation studies were conducted using selected liquids that simulate fluids occurring in the human body. This study allowed the biocompatibility of the tested composites to be determined in relation to the selected environments. Next, the chemical structure of the composites was characterized, as well as their sorption properties. Conclusions: The conducted research allowed for the preliminary characterization of the prepared polymer composites containing gold nanoparticles from the viewpoint of their application for biomedical use. The tested materials were characterized by biocompatibility in the tested environments. What is more, the synthesized composites exhibited a relatively high swelling capacity, which is essential from the viewpoint of their potential application as drug carriers. During such an application, the composite swells and at the same time releases the active substance introduced into its interior; therefore, it is important to check the swelling ability of such a material. Acknowledgements: The authors would like to thank The National Science Centre (Grant no: UMO - 2016/21/D/ST8/01697) for providing financial support to this project. This paper is based upon work from COST Action (CA18113), supported by COST (European Cooperation in Science and Technology). Keywords: nanocomposites, gold nanoparticles, drug carriers, swelling properties
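Swelling capacity of this kind is typically reported as the mass of absorbed fluid per unit dry mass. The sketch below shows that calculation with placeholder masses and incubation fluids; it is an assumed form of the measurement, not the study's data.

```python
# Hypothetical sketch: swelling ratio Q = (m_swollen - m_dry) / m_dry for composite
# samples incubated in simulated body fluids. All masses below are placeholders.

samples = {
    # fluid: (dry mass g, swollen mass g)
    "simulated body fluid": (0.50, 1.90),
    "Ringer solution":      (0.50, 2.10),
    "distilled water":      (0.50, 2.60),
}

for fluid, (m_dry, m_swollen) in samples.items():
    q = (m_swollen - m_dry) / m_dry
    print(f"{fluid}: swelling ratio = {q:.2f} g/g")
```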
Procedia PDF Downloads 116
302 The Efficacy of Government Strategies to Control COVID 19: Evidence from 22 High Covid Fatality Rated Countries
Authors: Imalka Wasana Rathnayaka, Rasheda Khanam, Mohammad Mafizur Rahman
Abstract:
The COVID-19 pandemic has created unprecedented challenges for both the health and economic conditions of countries around the world. This study aims to evaluate the effectiveness of governments' decisions to mitigate the risks of COVID-19 and to propose policy directions to reduce its magnitude. The study is motivated by the ongoing coronavirus outbreaks and the comprehensive policy responses taken by countries to mitigate the spread of COVID-19 and reduce death rates. This study contributes to filling the knowledge gap by exploring the long-term efficacy of governments' extensive plans. The study employs a panel autoregressive distributed lag (ARDL) framework. The panels incorporate both a significant number of variables and fortnightly observations from 22 countries. The dependent variables adopted in this study are the fortnightly death rates and the rates of the spread of COVID-19. Mortality rate and infection rate data were computed based on the number of deaths and the number of new cases per 10,000 people. The explanatory variables are fortnightly values of indexes used to investigate the efficacy of government interventions to control COVID-19: the Overall Government Response Index, the Stringency Index, the Containment and Health Index, and the Economic Support Index. The study relies on the Oxford COVID-19 Government Response Tracker (OxCGRT). Following the ARDL procedure, the study employs (i) unit root tests to check stationarity, (ii) panel cointegration, and (iii) PMG and ARDL estimation techniques. The study shows that the COVID-19 pandemic forced immediate responses from policymakers across the world to mitigate its risks. Of the four types of government policy interventions, (i) Stringency and (ii) Economic Support have been the most effective; facilitating stringency and financial measures has resulted in a reduction in infection and fatality rates, while (iii) overall Government Responses are positively associated with deaths but negatively associated with infected cases. Even though this positive relationship is unexpected to some extent in the long run, governments' social distancing norms have been broken by the public in some countries, and population age demographics are a possible reason for this result. (iv) Containment and healthcare improvements reduce death rates but increase infection rates, although the effect has been lower (in absolute value). The model implies that the implementation of containment health practices without tracing and individual-level quarantine does not work well. The policy implication is that containment health measures must be applied together with targeted, aggressive, and rapid containment to extensively reduce the number of people infected with COVID-19. Furthermore, the results demonstrate that economic support for income and debt relief has been the key to suppressing COVID-19 infection and fatality rates. Keywords: COVID-19, infection rate, deaths rate, government response, panel data
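The dependent variables are simple normalizations of the raw counts before the panel ARDL/PMG estimation. The sketch below shows that preprocessing step with made-up country panels; the column names and figures are assumptions for illustration only.

```python
# Hypothetical sketch: building the fortnightly dependent variables (new cases and
# deaths per 10,000 people) used before panel ARDL/PMG estimation.
# Counts and populations are illustrative placeholders.
import pandas as pd

raw = pd.DataFrame({
    "country":    ["A", "A", "B", "B"],
    "fortnight":  [1, 2, 1, 2],
    "new_cases":  [12000, 18500, 4300, 5200],
    "new_deaths": [300, 410, 90, 120],
    "population": [10_000_000, 10_000_000, 5_000_000, 5_000_000],
})

raw["infection_rate"] = raw["new_cases"] / raw["population"] * 10_000
raw["death_rate"] = raw["new_deaths"] / raw["population"] * 10_000
print(raw[["country", "fortnight", "infection_rate", "death_rate"]])
```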
Procedia PDF Downloads 76
301 Empirical Study on Causes of Project Delays
Authors: Khan Farhan Rafat, Riaz Ahmed
Abstract:
Renowned offshore organizations are drifting towards collaborative effort to win and implement international projects for business gains. However, even without financial constraints, with the availability of skilled professionals, and despite improved project management practices through state-of-the-art tools and techniques, project delays have become a norm these days. This situation calls for exploring the factor(s) affecting the link between project management performance and project success. In the context of the well-known 3M’s of project management (that is, manpower, machinery, and materials), machinery and materials are dependent upon manpower. Because the body of knowledge is well established on the influence of national culture on people, its impact on the link between project management performance and project success needs to be investigated in detail to arrive at the possible cause(s) of project delays. This research initiative was, therefore, undertaken to fill the research gap. The unit of analysis for the proposed research was the individuals who had worked on skyscraper construction projects. In relevant studies, project management is best described using construction examples. It is for this reason that the project-oriented city of Dubai was chosen to investigate the causes of project delays. A structured questionnaire survey was disseminated online, with the courtesy of the local Project Management Institute chapter, to carry out the cross-sectional study. The Construction Industry Institute, Austin, of the United States of America, along with 23 high-rise builders in Dubai, were also contacted by email requesting their contribution to the study and providing them with the online link to the survey questionnaire. The reliability of the instrument was warranted using a Cronbach’s alpha coefficient of 0.70. The appropriateness of sampling adequacy and homogeneity of variance was ensured by keeping the Kaiser–Meyer–Olkin (KMO) measure and Bartlett’s test of sphericity in the ranges ≥ 0.60 and < 0.05, respectively. Factor analysis was used to verify construct validity. During exploratory factor analysis, all items were loaded using a threshold of 0.4. Four hundred and seventeen respondents, including members of top management, project managers, and project staff, contributed to the study. The link between project management performance and project success was significant at the 0.01 level (2-tailed) and the 0.05 level (2-tailed) for Pearson’s correlation. Before initiating the moderator analysis, tests for linearity, multicollinearity, outliers, leverage points, influential cases, homoscedasticity, and normality were carried out, as these are prerequisites for conducting moderation analysis. The moderator analysis, using a macro named PROCESS, was performed to verify the hypothesis that national culture has an influence on the said link. The empirical findings, when compared with Hofstede's results, showed high power distance as the cause of construction project delays in Dubai. The research outcome calls for project sponsors and top management to reshape their project management strategy and allow for low power distance between management and project personnel for the timely completion of projects. Keywords: causes of construction project delays, construction industry, construction management, power distance
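Two of the quantitative checks mentioned above, scale reliability via Cronbach's alpha and moderation via an interaction term, can be sketched in a few lines. The example below uses synthetic data and ordinary OLS with a product term as a stand-in for the PROCESS macro; variable names are assumptions.

```python
# Hypothetical sketch: Cronbach's alpha for a multi-item scale and an OLS moderation
# model (performance x power distance -> success). Data are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 417  # matches the reported number of respondents

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

latent = rng.normal(size=n)
items = np.column_stack([latent + rng.normal(scale=0.8, size=n) for _ in range(5)])
print(f"Cronbach's alpha ~ {cronbach_alpha(items):.2f}")

# Moderation: project success ~ performance + power distance + interaction.
performance = rng.normal(size=n)
power_distance = rng.normal(size=n)
success = 0.6 * performance - 0.3 * performance * power_distance + rng.normal(size=n)
X = sm.add_constant(np.column_stack([performance, power_distance,
                                     performance * power_distance]))
print(sm.OLS(success, X).fit().summary(xname=["const", "perf", "pdist", "perf_x_pdist"]))
```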
Procedia PDF Downloads 213
300 Approaches to Inducing Obsessional Stress in Obsessive-Compulsive Disorder (OCD): An Empirical Study with Patients Undergoing Transcranial Magnetic Stimulation (TMS) Therapy
Authors: Lucia Liu, Matthew Koziol
Abstract:
Obsessive-compulsive disorder (OCD), a long-lasting anxiety disorder involving recurrent, intrusive thoughts, affects over 2 million adults in the United States. Transcranial magnetic stimulation (TMS) stands out as a noninvasive, cutting-edge therapy that has been shown to reduce symptoms in patients with treatment-resistant OCD. The Food and Drug Administration (FDA)-approved protocol pairs TMS sessions with individualized symptom provocation, aiming to improve the susceptibility of brain circuits to stimulation. However, limited standardization or guidance exists on how to conduct symptom provocation and which methods are most effective. This study aims to compare the effect of internal versus external techniques for inducing obsessional stress in a clinical setting during TMS therapy. Two symptom provocation methods, (i) asking patients thought-provoking questions about their obsessions (internal) and (ii) requesting patients to perform obsession-related tasks (external), were employed in a crossover design with repeated measurement. Thirty-six treatments of NeuroStar TMS were administered to each of two patients over 8 weeks in an outpatient clinic. Patient One received 18 sessions of internal provocation followed by 18 sessions of external provocation, while Patient Two received 18 sessions of external provocation followed by 18 sessions of internal provocation. The primary outcome was the level of self-reported obsessional stress on a visual analog scale from 1 to 10. The secondary outcome was self-reported OCD severity, collected biweekly on a four-level Likert scale (1 to 4) of bad, fair, good, and excellent. Outcomes were compared and tested between provocation arms through repeated measures ANOVA, accounting for intra-patient correlations. Ages were 42 for Patient One (male, White) and 57 for Patient Two (male, White). Both patients had similar moderate symptoms at baseline, as determined through the Yale-Brown Obsessive Compulsive Scale (YBOCS). When comparing the obsessional stress induced across the two arms, the mean (SD) was 6.03 (1.18) for internal and 4.01 (1.28) for external strategies (P=0.0019); ranges were 3 to 8 for internal and 2 to 8 for external strategies. Internal provocation yielded 5 (31.25%) bad, 6 (33.33%) fair, 3 (18.75%) good, and 2 (12.5%) excellent responses for OCD status, while external provocation yielded 5 (31.25%) bad, 9 (56.25%) fair, 1 (6.25%) good, and 1 (6.25%) excellent responses (P=0.58). Internal symptom provocation tactics had a significantly stronger impact on inducing obsessional stress and led to better OCD status, although the latter difference was not statistically significant. This could be attributed to the fact that answering questions may prompt patients to reflect more on their lived experiences and struggles with OCD. In the future, clinical trials with larger sample sizes are warranted to validate this finding. The results support the increased integration of internal methods into structured provocation protocols, potentially reducing the time required for provocation and achieving a greater treatment response to TMS. Keywords: obsessive-compulsive disorder, transcranial magnetic stimulation, mental health, symptom provocation
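The between-arm comparison is a repeated-measures analysis with provocation method as the within-subject factor. The sketch below runs such a test on synthetic visual-analog scores for two hypothetical patients; it mirrors the design (two patients, 18 sessions per arm) but not the reported data.

```python
# Hypothetical sketch: repeated-measures ANOVA of visual-analog stress scores by
# provocation method (internal vs external), averaged within patient. Scores are
# synthetic placeholders, not the reported patient data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(3)
rows = []
for patient in ["P1", "P2"]:
    for method, mu in [("internal", 6.0), ("external", 4.0)]:
        for session in range(18):
            rows.append({"patient": patient, "method": method,
                         "stress": np.clip(rng.normal(mu, 1.2), 1, 10)})
df = pd.DataFrame(rows)

# Average the 18 sessions per arm within each patient, then test the within-subject
# factor "method".
res = AnovaRM(df, depvar="stress", subject="patient",
              within=["method"], aggregate_func="mean").fit()
print(res)
```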
Procedia PDF Downloads 54