Search results for: past morphologies
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3059

89 Moderating and Mediating Effects of Business Model Innovation Barriers during Crises: A Structural Equation Model Tested on German Chemical Start-Ups

Authors: Sarah Mueller-Saegebrecht, André Brendler

Abstract:

Business model innovation (BMI), the intentional change of an existing business model (BM) or the design of a new BM, is essential to a firm's development in dynamic markets. The relevance of BMI is also evident in the ongoing COVID-19 pandemic, in which start-ups, in particular, are affected by limited access to resources. However, initial studies also show that they react faster to the pandemic than established firms. BMI represents a strategy for successfully handling such threatening dynamic changes. The entrepreneurship literature shows how and when firms should utilize BMI in times of crisis and which barriers can be expected during the BMI process. Nevertheless, research merging BMI barriers and crises is still underexplored. Specifically, further knowledge about antecedents and the effect of moderators on the BMI process is necessary for advancing BMI research. The research gap addressed by this study is twofold: First, foundations on how different crises impact BM change intention exist, yet their analysis lacks the inclusion of barriers. In particular, the entrepreneurship literature lacks knowledge about the individual perception of BMI barriers, which is essential to predict managerial reactions. Moreover, internal BMI barriers have been the focal point of current research, while external BMI barriers remain virtually understudied. Second, to date, BMI research has been based on qualitative methodologies; quantitative work that could specify and confirm these qualitative findings is lacking. By focusing on the crisis context, this study contributes to the BMI literature by offering a first quantitative attempt to embed BMI barriers into a structural equation model. It measures managers' perception of BMI development and implementation barriers in the BMI process, asking the following research question: How does a manager's perception of BMI barriers influence BMI development and implementation in times of crisis? Two distinct research streams in the economic literature explain how individuals react when perceiving a threat. "Prospect Theory" claims that managers demonstrate risk-seeking tendencies when facing a potential loss, whereas the opposing "Threat-Rigidity Theory" suggests that managers demonstrate risk-averse behavior when facing a potential loss. This study quantitatively tests which theory best predicts managers' BM reaction to a perceived crisis. From three in-depth interviews in the German chemical industry, 60 past BMIs were identified. The participating start-up managers gave insights into their start-ups' strategic and operational functioning. Afterwards, each interviewee described crises that had already affected their BM. The participants explained how they conducted BMI to overcome these crises, which development and implementation barriers they faced, and how severe they perceived these barriers to be, assessed on a 5-point Likert scale. In contrast to current research, the results reveal that a higher perceived threat level of a crisis harms BM experimentation. Managers seem to conduct less BMI in times of crisis, whereby BMI development barriers dampen this relation. The structural equation model unveils a mediating role of BMI implementation barriers on the link between the intention to change a BM and the concrete BMI implementation. In conclusion, this study confirms the threat-rigidity theory.
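
The mediation structure described above (the intention to change a BM leading to BMI implementation, with perceived implementation barriers as the mediator) can be illustrated with a regression-based sketch. The sketch below is not the authors' structural equation model; it is a minimal Baron-Kenny style check, and the DataFrame, column names, and file name are hypothetical.

```python
# Illustrative sketch only: a Baron-Kenny style mediation check for the relation
# described in the abstract (BM change intention -> BMI implementation, mediated
# by perceived implementation barriers). The CSV file and column names are
# hypothetical; the study itself estimated a full structural equation model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bmi_survey.csv")  # hypothetical 5-point Likert survey data

# Step 1: total effect of change intention on BMI implementation
total = smf.ols("implementation ~ intention + threat_level", data=df).fit()

# Step 2: effect of intention on the mediator (perceived implementation barriers)
mediator = smf.ols("impl_barriers ~ intention + threat_level", data=df).fit()

# Step 3: direct effect controlling for the mediator; a shrunken intention
# coefficient plus a significant barrier coefficient is consistent with mediation
direct = smf.ols("implementation ~ intention + impl_barriers + threat_level",
                 data=df).fit()

for name, model in [("total", total), ("mediator", mediator), ("direct", direct)]:
    print(name, model.params.round(3).to_dict())
```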

Keywords: barrier perception, business model innovation, business model innovation barriers, crises, prospect theory, start-ups, structural equation model, threat-rigidity theory

Procedia PDF Downloads 95
88 Catchment Nutrient Balancing Approach to Improve River Water Quality: A Case Study at the River Petteril, Cumbria, United Kingdom

Authors: Nalika S. Rajapaksha, James Airton, Amina Aboobakar, Nick Chappell, Andy Dyer

Abstract:

Nutrient pollution and its impact on water quality are a key concern in England. Many water quality issues originate from multiple sources of pollution spread across the catchment. River water quality in England has improved since the 1990s, and wastewater effluent discharges into rivers now contain less phosphorus than in the past. However, excess phosphorus is still recognised as the prevailing issue for rivers failing Water Framework Directive (WFD) good ecological status. To achieve WFD phosphorus objectives, Wastewater Treatment Works (WwTW) permit limits are becoming increasingly stringent. Nevertheless, in some rural catchments, the apportionment of phosphorus pollution can be greater from agricultural runoff and other sources such as septic tanks. Therefore, the challenge of meeting the requirements of watercourses to deliver WFD objectives often goes beyond water company activities, providing significant opportunities to co-deliver activities in the wider catchment to reduce nutrient load at source. The aim of this study was to apply United Utilities' Catchment Systems Thinking (CaST) strategy and pilot an innovative permitting approach - Catchment Nutrient Balancing (CNB) - in a rural catchment in Cumbria (the River Petteril) in collaboration with the regulator and others to achieve WFD objectives and multiple benefits. The study area is mainly agricultural land, predominantly livestock farms. The local ecology is impacted by significant nutrient inputs, which require intervention to meet WFD obligations. There is a range of phosphorus inputs into the river, including discharges from wastewater assets but also, significantly, agricultural contributions. Solely focusing on the WwTW discharges would not have resolved the problem; hence, in order to address this issue effectively, a CNB trial was initiated at a small WwTW, targeting the removal of a total of 150 kg of phosphorus load, of which 13 kg were to be reduced through the use of catchment interventions. Various catchment interventions were implemented across selected farms in the upstream part of the catchment, and an innovative Polonite reactive filter medium was implemented at the WwTW as an alternative to traditional phosphorus treatment methods. During the three years of this trial, the impact of the interventions in the catchment and at the treatment works was monitored. In 2020 and 2022, the trial achieved 69% and 63% reductions, respectively, in the phosphorus level in the catchment against the initial reduction target of 9%. Phosphorus treatment at the WwTW had a significant impact on overall load reduction. The wider catchment impact, however, was seven times greater than the initial target once wider catchment interventions were also established. While it is unlikely that all of the phosphorus load reduction was delivered exclusively by the interventions implemented through this project, this trial evidenced the enhanced benefits that can be achieved with an integrated approach that engages all sources of pollution within the catchment - rather than focusing on a one-size-fits-all solution. Primarily, the CNB approach and the act of collaboratively engaging others, particularly the agriculture sector, are likely to yield improved farm and land management performance and better compliance, which can lead to improved river quality as well as wider benefits.
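
The quoted figures can be related to one another with a short back-of-the-envelope calculation. The sketch below uses only the numbers stated in the abstract (a 150 kg total removal target, a 13 kg catchment-intervention share, and the 69%/63% observed reductions); it is illustrative arithmetic, not the project's load-apportionment methodology.

```python
# Back-of-the-envelope check of the figures quoted in the abstract: the ~9%
# catchment-intervention target and the "roughly seven times greater than
# target" wider-catchment impact.
target_total_kg = 150.0      # total phosphorus load targeted for removal
target_catchment_kg = 13.0   # share expected from catchment interventions

catchment_target_pct = 100 * target_catchment_kg / target_total_kg
print(f"Catchment intervention target: {catchment_target_pct:.0f}% of total load")

observed_reduction_pct = {2020: 69, 2022: 63}
for year, pct in observed_reduction_pct.items():
    ratio = pct / catchment_target_pct
    print(f"{year}: {pct}% observed reduction, about {ratio:.1f}x the ~9% target")
```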

Keywords: agriculture, catchment nutrient balancing, phosphorus pollution, water quality, wastewater

Procedia PDF Downloads 67
87 The Shared Breath Project: Inhabiting Each Other’s Words and Being

Authors: Beverly Redman

Abstract:

With the 2020-2021 theatre season cancelled due to COVID-19 at Purdue University Fort Wayne, IN, USA, faculty directors found themselves scrambling to create theatre production opportunities for their students in the Department of Theatre. Redman, Chair of the Department, found her community to be suffering from anxieties brought on by a confluence of issues: the global-scale COVID-19 pandemic, the Black Lives Matter protests erupting in cities all across the United States, and the coming presidential election, arguably the most important and most contentious in the country's history. Redman wanted to give her students the opportunity to speak not only on these issues but also to record who they were at this time in their personal lives, as well as in this broad socio-political context. She also wanted to invite them into an experience of feeling empathy, too, at a time when empathy in this world seems to be sorely lacking. Returning to a mode of devising theatre she had used with community groups in the past, in which storytelling and re-enactment of participants' life events combined with oral history documentation practices, Redman planned The Shared Breath Project. The process involved three months of workshops, in which participants alternated between theatre exercises and oral history collection and documentation activities as a way of generating original material for a theatre production. The goal of the first half of the project was for each participant to produce a solo piece in the form of a monologue, after many generations of potential material born out of games, improvisations, interviews and the like. Along the way, many film and audio clips recorded the process of each person's written documentation - documentation prepared by the subjects themselves but also by others in the group assigned to listen, watch and record. Then, in the second half of the project - and only once each participant had taken their own contributions from raw improvisatory self-presentations through the stages of composition and performative polish - participants exchanged their pieces. The second half of the project involved taking on each other's words, mannerisms, gestures, and melodic and rhythmic speech patterns and inhabiting them through the rehearsal process as their own; thus the title, The Shared Breath Project. Here, in stage two, the acting challenges evolved to be those of capturing the other and becoming the other through accurate mimicry that embraces Denis Diderot's concept of the Paradox of Acting, in that the actor is both seeming and being simultaneously. This paper shares the carefully documented process of making the live-streamed theatre production that resulted from these workshops, writing processes and rehearsals, forming The Shared Breath Project, which ultimately took the students' realist, life-based pieces and edited them into a single unified theatre production. The paper also utilizes research on the Paradox of Acting, putting a post-structuralist spin on Diderot's theory. Here, the paper suggests the limitations of inhabiting the other by allowing that the other is always already a thing impenetrable but nevertheless worthy of unceasing empathetic striving and delving, in an epoch in which slow, careful attention to our fellows is in short supply.

Keywords: otherness, paradox of acting, oral history theatre, devised theatre, political theatre, community-based theatre, peoples’ theatre

Procedia PDF Downloads 185
86 Deconstructing Reintegration Services for Survivors of Human Trafficking: A Feminist Analysis of Australian and Thai Government and Non-Government Responses

Authors: Jessica J. Gillies

Abstract:

Awareness of the tragedy that is human trafficking has increased exponentially over the past two decades. The four pillars widely recognised as global solutions to the problem are prevention, prosecution, protection, and partnership between government and non-government organisations. While 'sex-trafficking' initially received major attention, this focus has shifted to other industries that conceal broader experiences of exploitation. However, within the regions of focus for this study, namely Australia and Thailand, trafficking for the purpose of sexual exploitation remains the commonly uncovered narrative of criminal justice investigations. In these regions, anti-trafficking action is characterised by government-led prevention and prosecution efforts, whereas protection and reintegration practices have received criticism. Typically, non-government organisations straddle the critical chasm between policy and practice; therefore, they are perfectly positioned to contribute valuable experiential knowledge toward understanding how both sectors can support survivors in the post-trafficking experience. The aim of this research is to inform improved partnerships throughout government and non-government post-trafficking services by illuminating gaps in protection and reintegration initiatives. This research will explore government and non-government responses to human trafficking in Thailand and Australia in order to understand how meaning is constructed in this context and how the construction of meaning affects survivors in the post-trafficking experience. A qualitative, three-stage methodology was adopted for this study. The initial stage of enquiry consisted of a discursive analysis, in order to deconstruct the broader discourses surrounding human trafficking. The data included empirical papers, grey literature such as publicly available government and non-government reports, and anti-trafficking policy documents. The second and third stages of enquiry will attempt to further explore the findings of the discourse analysis and will focus more specifically on protection and reintegration in Australia and Thailand. Stages two and three will incorporate process observations in government and non-government survivor support services, and semi-structured interviews with employees and volunteers within these settings. Two key findings emerged from the discursive analysis. The first exposed conflicting feminist arguments embedded throughout anti-trafficking discourse. Informed by conflicting feminist discourses on sex work, a discursive relationship has been constructed between sex-industry policy and anti-trafficking policy. In response to this finding, data emerging from the process observations and semi-structured interviews will be interpreted using a feminist theoretical framework. The second finding builds on the construction identified in the first. The discursive construction of sex-trafficking appears to have influenced perceptions of the legitimacy of survivors, and therefore the support they receive in the post-trafficking experience. For example, women who willingly migrate for employment in the sex industry and on arrival are faced with exploitative conditions are not perceived to be as deserving of support as women who are not merely coerced but physically forced into such circumstances, yet both meet the criteria for victims of human trafficking. The forthcoming study is intended to contribute toward building knowledge and understanding around the implications of the construction of legitimacy, and to contextualise this in reference to government-led protection and reintegration support services for survivors in the post-trafficking experience.

Keywords: Australia, government, human trafficking, non-government, reintegration, Thailand

Procedia PDF Downloads 112
85 The Effectiveness of Intervention Methods for Repetitive Behaviors in Preschool Children with Autism Spectrum Disorder: A Systematic Review

Authors: Akane Uda, Ami Tabata, Mi An, Misa Komaki, Ryotaro Ito, Mayumi Inoue, Takehiro Sasai, Yusuke Kusano, Toshihiro Kato

Abstract:

Early intervention is recommended for children with autism spectrum disorder (ASD), and an increasing number of children have received support and intervention before school age in recent years. In this study, we systematically reviewed preschool interventions focused on repetitive behaviors in children with ASD, which are often observed at younger ages. Inclusion criteria were as follows: (1) a child of preschool status (age ≤ 7 years) with a diagnosis of ASD (including autism, Asperger's, and pervasive developmental disorder), or a parent (caregiver) of a preschool child with ASD; (2) a physician-confirmed diagnosis of ASD (autism, Asperger's, or pervasive developmental disorder); (3) interventional studies for repetitive behaviors; (4) original articles published within the past 10 years (2012 or later); (5) written in English or Japanese. Exclusion criteria were as follows: (1) systematic reviews or meta-analyses; (2) conference reports or books. We carefully scrutinized databases to remove duplicate references and used a two-step screening process to select papers. The primary screening included close scrutiny of titles and abstracts to exclude articles that did not meet the eligibility criteria. During the secondary screening, we carefully read the complete text to assess eligibility, which was double-checked by six members of the laboratory. Disagreements were resolved through consensus-based discussion. Our search yielded 304 papers, of which nine were included in the study. The level of evidence was as follows: three randomized controlled trials (level 2), four pre-post studies (level 4b), and two case reports (level 5). Seven of the articles selected for this study described the effectiveness of the interventions. Interventions for repetitive behaviors in preschool children with ASD were categorized as five interventions that directly involved the child and four educational programs for caregivers and parents. Studies that intervened directly with children used early intensive intervention based on applied behavior analysis (the Early Start Denver Model, Early Intensive Behavioral Intervention, and the Picture Exchange Communication System) and individualized education based on sensory integration. Educational interventions for caregivers included two methods: (a) education regarding combined methods and practices of applied behavior analysis, in addition to classification of and coping methods for repetitive behaviors; and (b) education regarding evaluation methods and practices based on children's developmental milestones in play. With regard to the neurophysiological basis of repetitive behaviors, environmental factors are implicated as possible contributors. We assume that applied behavior analysis was shown to be effective in reducing repetitive behaviors because the analysis focuses on the interaction between the individual and the environment. Additionally, with regard to educational interventions for caregivers, the intervention was shown to promote behavioral change in children based on the caregivers' understanding of the classification of repetitive behaviors and the children's developmental milestones in play, and adjustment of the person-environment context led to a reduction in repetitive behaviors.
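
The two-step screening workflow described above (duplicate removal, title/abstract screening, then full-text review) can be sketched as simple bookkeeping code. The records, DOIs, and keyword rule below are invented placeholders and do not reproduce the review's actual data or its dual-reviewer process.

```python
# Minimal sketch of systematic-review screening bookkeeping: deduplicate by DOI,
# then record a title/abstract decision for each unique record. A trivial keyword
# rule stands in for the human reviewers used in the actual review.
from dataclasses import dataclass

@dataclass
class Record:
    doi: str
    title: str
    include_title_abstract: bool = False

retrieved = [
    Record("10.1000/a", "Early intervention for repetitive behaviors in ASD"),
    Record("10.1000/a", "Early intervention for repetitive behaviors in ASD"),  # duplicate
    Record("10.1000/b", "Parent education program for preschool children with ASD"),
]

# Step 0: remove duplicate references by DOI
unique = list({r.doi: r for r in retrieved}.values())

# Step 1: title/abstract screening
for r in unique:
    r.include_title_abstract = "repetitive" in r.title.lower() or "ASD" in r.title

shortlist = [r for r in unique if r.include_title_abstract]
# Step 2 (full-text screening) would apply the eligibility criteria to `shortlist`
print(f"{len(retrieved)} retrieved, {len(unique)} after dedup, {len(shortlist)} to full text")
```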

Keywords: autism spectrum disorder, early intervention, repetitive behaviors, systematic review

Procedia PDF Downloads 141
84 International Broadcasting of Public Diplomacy in the Era of Social Media in Nigeria

Authors: Henry Okechukwu Onyeiwu

Abstract:

In today's digital age in Nigeria, the landscape of public diplomacy has been significantly altered by the rise of social media platforms such as YouTube, Facebook, Twitter, and Instagram. In recent years, social media platforms have emerged as powerful tools for public diplomacy, transforming how countries communicate with both domestic and global audiences. International broadcasting as a tool of public diplomacy has undergone a significant transformation. Traditional methods of state-run media and controlled broadcasting have evolved to incorporate the dynamic, interactive, and decentralized nature of digital platforms. This study examines how the Nigerian government engages in international broadcasting of public diplomacy and the influence of social media on such broadcasting, focusing on the advantages and disadvantages of controlling media outlets for diplomatic purposes, and it also covers the changing nature of global communication in this digital era. As countries navigate the complexities of international relations, the effectiveness of controlled media in shaping public perception and engagement raises significant questions worth exploring. The vast amount of content available can make it challenging to capture and retain audience attention. The ease of spreading false information on social media requires international broadcasters to maintain credibility and counteract misleading narratives. Addressing these challenges requires comprehensive research that integrates digital communication tools, cultural sensitivity, cybersecurity measures, and ongoing evaluation to enhance Nigeria's international broadcasting of public diplomacy. This study employed a mixed-methods approach, combining qualitative and quantitative research methods. A content analysis of Nigeria's international broadcasting content was conducted to assess its themes, narratives, and engagement strategies. Additionally, surveys and interviews with communications professionals, diplomats, and social media users were carried out to gather insights on the perceptions and effectiveness of public diplomacy initiatives. The study highlights some of the present trends in technology and the international environment in which public diplomacy must work, and shows how the past can illuminate the road for those navigating this new world. The rise of the social network creates more opportunities than it closes for public diplomacy. This evolution highlights the increasing importance of engagement, mutual understanding, and cooperation in international relations. By adopting a more inclusive and participatory approach, public diplomacy can more effectively address global challenges and build stronger, more resilient relationships between nations. As Nigeria navigates the complexities of its international relations, this study provides a vital examination of how it can better utilize the dual platforms of international broadcasting and social media in its public diplomacy efforts. The outcome will bear significance not only for Nigeria but also for other nations grappling with similar challenges in the digital age. As social media continues to play a crucial role in public diplomacy, understanding the dynamics of controlled media outlets becomes ever more critical. This study sheds light on the advantages and disadvantages of such control, ultimately contributing valuable insights for practitioners in the field of diplomacy as they adapt to the rapidly changing communication landscape.

Keywords: international broadcasting, public diplomacy, social media, international relations, politics

Procedia PDF Downloads 31
83 Amyloid Angiopathy and Golf: Two Opposite but Close Worlds

Authors: Andrea Bertocchi, Alessio Barnaba Di Fonzo, Davide Talarico, Simone Rivaroli, Jeff Konin

Abstract:

The patient is an 89-year-old male (180 cm/85 kg), a retired notary and former golfer with no past medical history. He describes a progressive ideomotor slowdown over 14 months. The disorder is characterized by short-term memory deficits and, for some months, also by unstable walking with a broad base, with skidding and risk of falling at directional changes, and urinary urgency. There were also episodes of aggression towards his wife and staff. At the time, the patient was taking no prescribed medications. He has difficulty eating and dressing, and some problems with personal hygiene. At the initial visit, the patient was alert, cooperative, and performed simple tasks; however, he has a hearing impairment, slowed spontaneous speech, and an amnestic deficit on short-story recall. Ideomotor apraxia is not present. He scored 20 points on the MMSE. In terms of motor function, he has deficits graded Medical Research Council (MRC) 3-/5 in the bilateral lower limbs and requires maximum assistance to move from sitting to standing, with premature fatigue. He has been unable to walk for about one month. Tremors and hypertonia are absent. The BERG scale could not be administered, and the BARTHEL score was 45/100. Amyloid angiopathy was suspected and then confirmed at the neurological examination. The rehabilitation objectives were the recovery of mobility and reinforcement of the upper and lower extremities, especially the legs, for recovery of standing and walking. The cognitive aspect was also an essential factor in the patient's recovery. The literature does not report any particular studies regarding motor and cognitive rehabilitation in this pathology. Because the patient failed to keep his attention on exercise, tended to be disinterested, and constantly fell asleep, we used golf-specific gestures to stimulate his mind to work and obtain results, because the patient has memory recall of golf-related movement. We worked for 4 months with a frequency of 3 sessions per week. Every session lasted 45 minutes. After 4 months of work, the patient walked independently with the use of a stick for about 120 meters without stopping, with MRC 4/5 in the lower limbs bilaterally and postural transitions performed independently with supervision. BERG 36/56. BARTHEL 65/100. The 6-Minute Walking Test (6MWT) was not measurable at the beginning; he now performs 151.5 m, with a Numeric Rating Scale of 4 at the beginning and 7 at the end. Cognitively, he no longer has episodes of aggression, although the short-term memory and concentration deficits remain. Amyloid angiopathy is a mix of motor and cognitive disorders. It is worth noting that cerebral amyloid angiopathy manifests with functional deficits due to strokes and bleedings and, as such, has an important rehabilitation indication, as classical stroke is not associated with amyloidosis. Exploring the motor patterns learned at a young age and retained in the implicit and explicit memory of the patient allowed us to set up effective work and to obtain significant results in the short-to-medium term. Surely many studies will still be done regarding this pathology and its rehabilitation, but the importance of the cognitive sphere applied to the motor sphere could represent an important starting point.

Keywords: amyloid angiopathy, cognitive rehabilitation, golf, motor disorder

Procedia PDF Downloads 139
82 Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enables individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expression in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains that are heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, suffer greatly from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck with long input sequences. In other words, these architectures are not equipped with the attention mechanisms necessary to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations of previous approaches, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with its attention mechanism. In a previous work on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R2 accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier does not depend on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
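
The general approach named above (fine-tuning a pre-trained bidirectional encoder for sequence classification) can be sketched with the Hugging Face transformers API. The checkpoint identifier, the k-mer tokenization helper, and the toy sequence below are assumptions for illustration; they are not the authors' DNABERT-derived model, data, or training setup.

```python
# Illustrative sketch: classifying a DNA fragment with a pre-trained bidirectional
# encoder via the Hugging Face transformers API. The checkpoint id is a placeholder,
# and real DNABERT-style models expect k-mer tokenized input.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "some-org/dna-bert-like-model"  # hypothetical checkpoint id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def to_kmers(seq: str, k: int = 6) -> str:
    """Split a DNA string into overlapping k-mers separated by spaces."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

sequence = "ATGCGTACGTTAGCATCGA"  # toy genetic-circuit fragment
inputs = tokenizer(to_kmers(sequence), return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # self-attention weighs distant tokens in context
print("predicted expression class:", logits.argmax(dim=-1).item())
```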

Keywords: transformers, generative AI, gene expression design, classification

Procedia PDF Downloads 60
81 Blended Learning Instructional Approach to Teach Pharmaceutical Calculations

Authors: Sini George

Abstract:

Active learning pedagogies are valued for their success in increasing 21st-century learners' engagement, developing transferable skills like critical thinking or quantitative reasoning, and creating deeper and more lasting educational gains. 'Blended learning' is an active learning pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively in the subject matter. This project aimed to develop a blended learning instructional approach to teaching concepts around pharmaceutical calculations to year 1 pharmacy students. The wrong dose, strength or frequency of a medication accounts for almost a third of medication errors in the NHS; therefore, progression to year 2 requires a 70% pass in this calculations test, in addition to the standard progression requirements. Many students struggled to achieve this requirement in the past. It was also challenging to teach these concepts to students in a large class (>130) with mixed mathematical abilities, especially within a traditional didactic lecture format. Therefore, short screencasts with a voice-over by the lecturer were provided in advance of a total of four teaching sessions (two hours per session), incorporating the core content of each session and talking through how the lecturer approached the calculations to model metacognition. Links to the screencasts were posted on the learning management system. Viewership counts were used to determine that the students were indeed accessing and watching the screencasts on schedule. In the classroom, students had to apply the knowledge learned beforehand to a series of increasingly difficult questions. Students were then asked to create a question in group settings (two students per group) and to discuss the questions created by their peers in their groups to promote deep conceptual learning. Students were also given time for a question-and-answer period to seek clarification on the concepts covered. Student responses to this instructional approach and their test grades were collected. After collecting and organizing the data, statistical analysis was carried out to calculate binomial statistics for the two data sets - the test grades of students who received blended learning instruction and the test grades of students who received instruction in a standard lecture format in class - to compare the effectiveness of each type of instruction. Student responses and their performance data on the assessment indicate that learning content through the blended learning instructional approach led to higher levels of student engagement, satisfaction, and more substantial learning gains. The blended learning approach enabled each student to learn how to do calculations at their own pace, freeing class time for interactive application of this knowledge. Although time-consuming for an instructor to implement, the findings of this research demonstrate that the blended learning instructional approach improves student academic outcomes and represents a valuable method to incorporate active learning methodologies while still maintaining broad content coverage. Satisfaction with this approach was high, and we are currently developing more pharmacy content for delivery in this format.
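
A comparison of the kind described (binomial statistics for the blended-learning cohort versus a standard-lecture cohort) can be sketched as a two-proportion test. The pass counts and cohort sizes below are invented for illustration and are not the study's data; the 70% threshold comes from the progression requirement mentioned above.

```python
# Hedged sketch of a binomial comparison of pass rates (score >= 70% on the
# calculations test) between a blended-learning cohort and a lecture cohort.
# All counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

passes = [118, 96]    # hypothetical number of students passing (blended, lecture)
totals = [132, 130]   # hypothetical cohort sizes

z_stat, p_value = proportions_ztest(count=passes, nobs=totals)
print(f"blended pass rate: {passes[0] / totals[0]:.1%}")
print(f"lecture pass rate: {passes[1] / totals[1]:.1%}")
print(f"two-proportion z = {z_stat:.2f}, p = {p_value:.4f}")
```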

Keywords: active learning, blended learning, deep conceptual learning, instructional approach, metacognition, pharmaceutical calculations

Procedia PDF Downloads 172
80 Optical Imaging Based Detection of Solder Paste in Printed Circuit Board Jet-Printing Inspection

Authors: D. Heinemann, S. Schramm, S. Knabner, D. Baumgarten

Abstract:

Purpose: Applying solder paste to printed circuit boards (PCB) with stencils has been the method of choice over the past years. A newer method uses a jet printer to deposit tiny droplets of solder paste through an ejector mechanism onto the board. This allows for more flexible PCB layouts with smaller components. Due to the viscosity of the solder paste, air blisters can be trapped in the cartridge. This can lead to missing solder joints or deviations in the applied solder volume. Therefore, a built-in, real-time inspection of the printing process is needed to minimize uncertainties and increase the efficiency of the process by immediate correction. The objective of the current study is the design of an optimal imaging system and the development of an automatic algorithm for the detection of applied solder joints from the captured optical images. Methods: In a first approach, a camera module connected to a microcomputer and LED strips are employed to capture images of the printed circuit board under four different illuminations (white, red, green and blue). Subsequently, an improved system including a ring light, an objective lens, and a monochromatic camera was set up to acquire higher quality images. The obtained images can be divided into three main components: the PCB itself (i.e., the background), the reflections induced by unsoldered positions or screw holes, and the solder joints. Non-uniform illumination is corrected by estimating the background using a morphological opening and subtracting it from the input image. Image sharpening is applied in order to prevent error pixels in the subsequent segmentation. The intensity thresholds which divide the main components are obtained from the multimodal histogram using three probability density functions. Determining the intersections delivers proper thresholds for the segmentation. Remaining edge gradients produce small error areas, which are removed by another morphological opening. For quantitative analysis of the segmentation results, the Dice coefficient is used. Results: The obtained PCB images show a significant gradient in all RGB channels, resulting from ambient light. Using different lightings and color channels, 12 images of a single PCB are available. A visual inspection and the investigation of 27 specific points show the best differentiation between those points using red lighting and the green color channel. Estimating two thresholds from analyzing the multimodal histogram of the corrected images and using them for segmentation precisely extracts the solder joints. The comparison of the results to manually segmented images yields high sensitivity and specificity values. Analyzing the overall result delivers a Dice coefficient of 0.89, which varies for single-object segmentations between 0.96 for well-segmented solder joints and 0.25 for single negative outliers. Conclusion: Our results demonstrate that the presented optical imaging system and the developed algorithm can robustly detect solder joints on printed circuit boards. Future work will comprise a modified lighting system which allows for more precise segmentation results using structure analysis.
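
The processing chain described in the Methods (background estimation by morphological opening, background subtraction, histogram-based thresholding, a second opening to remove small error areas, and Dice-coefficient evaluation) can be sketched with OpenCV and NumPy. The file names, kernel sizes, and the fixed global threshold below are illustrative assumptions; the paper derives its thresholds from a multimodal histogram fitted with three probability density functions.

```python
# Illustrative OpenCV/NumPy sketch of the described pipeline: estimate the slowly
# varying background, subtract it, threshold, clean up small regions, and score
# the result against a manual mask with the Dice coefficient.
import cv2
import numpy as np

img = cv2.imread("pcb_green_channel.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
truth = cv2.imread("pcb_manual_mask.png", cv2.IMREAD_GRAYSCALE)   # hypothetical ground truth

# Background estimation by a large morphological opening, then subtraction
kernel_bg = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (51, 51))
background = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel_bg)
corrected = cv2.subtract(img, background)

# A fixed global threshold stands in for the histogram-derived thresholds
_, mask = cv2.threshold(corrected, 40, 255, cv2.THRESH_BINARY)

# Remove small error areas left by residual edge gradients
kernel_clean = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel_clean)

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    pred, ref = pred > 0, ref > 0
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

print("Dice coefficient:", round(dice(mask, truth), 3))
```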

Keywords: printed circuit board jet-printing, inspection, segmentation, solder paste detection

Procedia PDF Downloads 336
79 Mineralized Nanoparticles as a Contrast Agent for Ultrasound and Magnetic Resonance Imaging

Authors: Jae Won Lee, Kyung Hyun Min, Hong Jae Lee, Sang Cheon Lee

Abstract:

To date, imaging techniques have attracted much attention in medicine because the detection of diseases at an early stage provides greater opportunities for successful treatment. Consequently, over the past few decades, diverse imaging modalities including magnetic resonance (MR), positron emission tomography, computed tomography, and ultrasound (US) have been developed and applied widely in the field of clinical diagnosis. However, each of the above-mentioned imaging modalities possesses unique strengths and intrinsic weaknesses, which limit their abilities to provide accurate information. Therefore, multimodal imaging systems may be a solution that can provide improved diagnostic performance. Among the current medical imaging modalities, US is a widely available real-time imaging modality. It has many advantages, including safety, low cost and easy access for patients. However, its low spatial resolution precludes accurate discrimination of diseased regions such as cancer sites. In contrast, MR has no tissue-penetration limit and can provide images possessing exquisite soft tissue contrast and high spatial resolution. However, it cannot offer real-time images and needs a comparatively long imaging time. The characteristics of these imaging modalities may be considered complementary, and the modalities have been frequently combined for the clinical diagnostic process. Biominerals such as calcium carbonate (CaCO3) and calcium phosphate (CaP) exhibit pH-dependent dissolution behavior. They demonstrate pH-controlled drug release due to the dissolution of minerals in acidic pH conditions. In particular, the application of this mineralization technique to a US contrast agent has been reported recently. The CaCO3 mineral reacts with acids and decomposes to generate carbon dioxide (CO2) gas in an acidic environment. These gas-generating mineralized nanoparticles generated CO2 bubbles in the acidic environment of the tumor, thereby allowing for strong echogenic US imaging of tumor tissues. On the basis of this previous work, it was hypothesized that the loading of MR contrast agents into the CaCO3 mineralized nanoparticles may be a novel strategy for designing a contrast agent for dual imaging. Herein, CaCO3 mineralized nanoparticles that are capable of generating CO2 bubbles to trigger the release of entrapped MR contrast agents in response to tumoral acidic pH were developed for the purposes of US and MR dual-modality imaging of tumors. Gd2O3 nanoparticles were selected as the MR contrast agent. A key strategy employed in this study was to prepare Gd2O3 nanoparticle-loaded mineralized nanoparticles (Gd2O3-MNPs) using block copolymer-templated CaCO3 mineralization in the presence of calcium cations (Ca2+), carbonate anions (CO32-) and positively charged Gd2O3 nanoparticles. The CaCO3 core was considered suitable because it may effectively shield the Gd2O3 nanoparticles from water molecules in the blood (pH 7.4) before decomposing to generate CO2 gas, triggering the release of the Gd2O3 nanoparticles in tumor tissues (pH 6.4~7.4). The kinetics of CaCO3 dissolution and CO2 generation from the Gd2O3-MNPs were examined as a function of pH, along with pH-dependent in vitro magnetic relaxation; additionally, the echogenic properties were estimated to demonstrate the potential of the particles for tumor-specific US and MR imaging.

Keywords: calcium carbonate, mineralization, ultrasound imaging, magnetic resonance imaging

Procedia PDF Downloads 238
78 An Exploratory Case Study of Pre-Service Teachers' Learning to Teach Mathematics to Culturally Diverse Students through a Community-Based After-School Field Experience

Authors: Eugenia Vomvoridi-Ivanovic

Abstract:

It is broadly assumed that participation in field experiences will help pre-service teachers (PSTs) bridge theory to practice. However, this is often not the case, since PSTs who are placed in classrooms with large numbers of students from diverse linguistic, cultural, racial, and ethnic backgrounds (culturally diverse students (CDS)) usually observe ineffective mathematics teaching practices that are in contrast to those discussed in their teacher preparation program. Over the past decades, the educational research community has paid increasing attention to investigating out-of-school learning contexts and how participation in such contexts can contribute to the achievement of underrepresented groups in Science, Technology, Engineering, and Mathematics (STEM) education and their expanded participation in STEM fields. In addition, several research studies have shown that students display different kinds of mathematical behaviors and discourse practices in out-of-school contexts than they do in the typical mathematics classroom, since they draw from a variety of linguistic and cultural resources to negotiate meanings and participate in joint problem solving. However, almost no attention has been given to exploring these contexts as field experiences for pre-service mathematics teachers. The purpose of this study was to explore how participation in a community-based after-school field experience promotes understanding of the content pedagogy concepts introduced in elementary mathematics methods courses, particularly as they apply to teaching mathematics to CDS. This study draws upon a situated, socio-cultural theory of teacher learning that centers on the concept of learning as situated social practice, which includes discourse, social interaction, and participation structures. Consistent with exploratory case study methodology, qualitative methods were employed to investigate how a cohort of twelve participating pre-service teachers' approaches to pedagogy and their conversations around teaching and learning mathematics to CDS evolved through their participation in the after-school field experience, and how they connected the content discussed in their mathematics methods course with their interactions with the CDS in the after-school program. Data were collected over a period of one academic year from the following sources: (a) audio recordings of the PSTs' interactions with the students during the after-school sessions, (b) PSTs' after-school field notes, (c) audio recordings of weekly methods course meetings, and (d) other document data (e.g., PST- and student-generated artifacts, PSTs' written course assignments). The findings of this study reveal that the PSTs benefitted greatly from their participation in the after-school field experience. Specifically, after-school participation promoted a deeper understanding of the content pedagogy concepts introduced in the mathematics methods course and helped the PSTs gain a greater appreciation for how students learn mathematics with understanding. Further, even though many of the PSTs' assumptions about the mathematical abilities of CDS were challenged and PSTs began to view CDSs' cultural and linguistic backgrounds as resources (rather than obstacles) for learning, some PSTs still held negative stereotypes about CDS and about teaching and learning mathematics to CDS in particular. Insights gained through this study contribute to a better understanding of how informal mathematics learning contexts may provide a valuable setting for pre-service teachers' learning to teach mathematics to CDS.

Keywords: after-school mathematics program, pre-service mathematical education of teachers, qualitative methods, situated socio-cultural theory, teaching culturally diverse students

Procedia PDF Downloads 131
77 Reviving the Past, Enhancing the Future: Preservation of Urban Heritage Connectivity as a Tool for Developing Liveability in Historical Cities in Jordan, Using Salt City as a Case Study

Authors: Sahar Yousef, Chantelle Niblock, Gul Kacmaz

Abstract:

Salt City, in the context of Jordan's heritage landscape, is a significant case for exploring the interaction between the tangible and intangible qualities of liveable cities. Most Jordanian city centers, including Jerash, Salt, Irbid, and Amman, are historical locations, and six of the country's extraordinary sites have been designated UNESCO World Heritage Sites. Jordan is widely acknowledged as a developing country characterized by swift urbanization and unrestrained expansion, which exacerbate the challenges associated with the preservation of historic urban areas. The aim of this study is to examine and analyse the existing condition of heritage connectivity within heritage city centers. This includes outdoor staircases, pedestrian pathways, footpaths, and other public spaces. A case study-style analysis of the urban core of As-Salt is the focus of this investigation. Salt City is widely acknowledged for its substantial tangible and intangible cultural heritage and has been designated 'The Place of Tolerance and Urban Hospitality' by UNESCO since 2021. Liveability in urban heritage, particularly in historic city centers, incorporates several factors that affect our well-being, and its enhancement is a critical issue in contemporary society. The dynamic interaction between humans and historical materials, which serves as a vehicle for the expression of their identity and historical narrative, constitutes preservation that transcends simple conservation. This form of engagement enables people to appreciate the diversity of their heritage, recognising their past and their planned futures. Heritage preservation is inextricably linked to a larger physical and emotional context; therefore, it is difficult to examine it in isolation. Urban environments, including roads, structures, and other infrastructure, are undergoing unprecedented physical design and construction demands. Concurrently, heritage reinforces a sense of affiliation with a particular location or space and unifies individuals with their ancestry, thereby defining their identity. However, a considerable body of research has focused on the conservation of heritage buildings in a fragmented manner, without considering their integration within a holistic urban context. Insufficient attention is given to the significance of the physical and social roles played by the heritage staircases and paths that serve as connectors between these valued historical buildings. In doing so, the research uses a consensus-based methodology, given that liveability is considered a complex matter with several dimensions. The discussion starts by making initial observations on the physical context and societal norms inside the urban center, while simultaneously establishing definitions of liveability and connectivity and examining the key criteria associated with these concepts. It then identifies the key elements that contribute to liveable connectivity within the framework of urban heritage in Jordanian city centers. Some of the outcomes that will be discussed in the presentation are: (1) There is not enough connectivity between heritage buildings, as can be seen, for example, between buildings in Jada and Qala'. (2) Most of the outdoor spaces suffer from physical issues that hinder their use by the public, as in Salalem. (3) Existing activities in the city center are not well attended because of a lack of communication between the organisers and the citizens.

Keywords: connectivity, Jordan, liveability, salt city, tangible and intangible heritage, urban heritage

Procedia PDF Downloads 72
76 A Comprehensive Planning Model for Amalgamation of Intensification and Green Infrastructure

Authors: Sara Saboonian, Pierre Filion

Abstract:

The dispersed-suburban model has been the dominant one across North America for the past seventy years, characterized by automobile reliance, low density, and land-use specialization. Two planning models have emerged as possible alternatives to address the ills inflicted by this development pattern. First, there is intensification, which promotes efficient infrastructure by connecting high-density, multi-functional, and walkable nodes with public transit services within the suburban landscape. Second is green infrastructure, which provides environmental health and human well-being by preserving and restoring ecosystem services. This research studies incompatibilities between, and the possibility of amalgamating, the two alternatives in an attempt to develop a comprehensive alternative to the suburban model that advocates density, multi-functionality and transit- and pedestrian-conduciveness, with measures capable of mitigating the adverse environmental impacts of compactness. The research investigates three Canadian urban growth centers, where intensification is the current planning practice and awareness of green infrastructure benefits is on the rise. However, these three centers are contrasted by their development stage, the presence or absence of protected natural land, their environmental approach, and their adverse environmental consequences according to the planning canons of different periods. The methods include reviewing the literature on green infrastructure planning, critiquing the Ontario provincial plans for intensification, surveying residents' preferences for alternative models, and interviewing officials who deal with the local planning for the centers. Moreover, the research draws on debates between New Urbanism and Landscape/Ecological Urbanism. The case studies expose the difficulties in creating urban growth centers that accommodate green infrastructure while adhering to intensification principles. First, the dominant status of intensification and the obstacles confronting intensification have monopolized planners' concerns. Second, the tension between green infrastructure and intensification explains the absence of green infrastructure typologies that correspond to intensification-compatible forms and dynamics. Finally, the lack of highlighted social-economic benefits of green infrastructure reduces residents' participation. Moreover, the results from the research provide insight into the predominant urbanization theories, New Urbanism and Landscape/Ecological Urbanism. In order to understand the political, planning, and ecological dynamics of such blending, dexterous context-specific planning is required. Findings suggest the influence of the following factors on amalgamating intensification and green infrastructure. Initially, producing ecosystem services-based justifications for green infrastructure development in the intensification context provides an expert-driven backbone for the implementation programs. This knowledge base should be translated effectively to inform different urban stakeholders. Moreover, due to the limited greenfields in intensified areas, spatial distribution and development of multi-level corridors such as pedestrian-hospitable settings and transportation networks along green infrastructure measures are required. Finally, to ensure the long-term integrity of implemented green infrastructure measures, significant investment in public engagement and education, as well as clarification of management responsibilities, is essential.

Keywords: ecosystem services, green infrastructure, intensification, planning

Procedia PDF Downloads 356
75 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets

Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe

Abstract:

Data are the primary asset of biomedical researchers and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools to analyze such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for the research subjects that reside in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research that leverages existing high-performance computing resources and analysis techniques currently available or being developed. It builds these into The Ark, an open-source web-based system designed to manage medical data. SPARK provides a next-generation biomedical data management solution that is based upon a novel microservice architecture and Big Data technologies. The system serves to demonstrate the applicability of microservice architectures for the development of high-performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as insert (i.e., importing a GWAS dataset) and for the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating non-relational NoSQL databases, whose development has been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets and enabling cutting-edge analysis approaches that have previously been out of reach for many medical researchers.
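
The argument above (document-oriented NoSQL storage avoiding per-SNP relational rows and slow bulk inserts) can be sketched with a small MongoDB example. The database, collection, and field names below are invented for illustration and do not reflect SPARK's or The Ark's actual schema or technology choices.

```python
# Hedged sketch of a document-store bulk load for genotype data: one document per
# subject holds the whole genotype vector, instead of one normalized row per SNP.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local MongoDB instance
collection = client["genomics_demo"]["genotypes"]

subjects = [
    {
        "subject_id": f"S{i:05d}",
        "chip": "demo-snp-chip",
        "genotypes": {"rs123": "AA", "rs456": "AG", "rs789": "GG"},  # toy genotype vector
    }
    for i in range(1, 1001)
]

result = collection.insert_many(subjects)  # single bulk insert for the whole cohort
print("inserted", len(result.inserted_ids), "subject documents")

# Typical genomics-style query: count subjects carrying a G allele at one SNP
carriers = collection.count_documents({"genotypes.rs456": {"$in": ["AG", "GG"]}})
print("rs456 G-allele carriers:", carriers)
```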

Keywords: biomedical research, genomics, information systems, software

Procedia PDF Downloads 270
74 Remote Sensing of Urban Land Cover Change: Trends, Driving Forces, and Indicators

Authors: Wei Ji

Abstract:

This study was conducted in the Kansas City metropolitan area of the United States, which has experienced significant urban sprawl in recent decades. The remote sensing of land cover changes in this area spanned four decades, from 1972 through 2010. The project was implemented in two stages: the first stage focused on the detection of long-term trends of urban land cover change, while the second examined how to detect the coupled effects of human impact and climate change on urban landscapes. For the first-stage study, six Landsat images were used with a time interval of about five years for the period from 1972 through 2001. Four major land cover types - built-up land, forestland, non-forest vegetation land, and surface water - were mapped using supervised image classification techniques. The study found that over the three decades the built-up lands in the study area more than doubled, mainly at the expense of non-forest vegetation lands. Surprisingly and interestingly, the area also saw a significant gain in surface water coverage. This observation raised questions: How have human activities and precipitation variation jointly impacted surface water cover during recent decades? How can we detect such coupled impacts through remote sensing analysis? These questions led to the second stage of the study, in which we designed and developed approaches to detecting fine-scale surface waters and analyzing the coupled effects of human impact and precipitation variation on the waters. To effectively detect urban landscape changes that might be jointly shaped by precipitation variation, our study proposed "urban wetscapes" (loosely defined urban wetlands) as a new indicator for remote sensing detection. The study examined whether urban wetscape dynamics were a sensitive indicator of the coupled effects of the two driving forces. To better detect this indicator, a rule-based classification algorithm was developed to identify fine-scale, hidden wetlands that could not be appropriately detected based on their spectral differentiability by a traditional image classification. Three SPOT images, for the years 1992, 2008, and 2010, were classified with this technique to generate the four types of land cover described above. The spatial analyses of remotely sensed wetscape changes were implemented at the metropolitan, watershed, and sub-watershed scales, as well as based on the size of surface water bodies, in order to accurately reveal urban wetscape change trends in relation to the driving forces. The study identified that urban wetscape dynamics varied in trend and magnitude from the metropolitan scale through watersheds to sub-watersheds in response to human impacts at different scales. The study also found that increased precipitation in the region in the past decades swelled larger wetlands in particular, while smaller wetlands generally decreased, mainly due to human development activities. These results confirm that wetscape dynamics can effectively reveal the coupled effects of human impact and climate change on urban landscapes. As such, remote sensing of this indicator provides new insights into the relationships between urban land cover changes and driving forces.
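
A rule-based classification of the kind mentioned above can be illustrated with a simple band-ratio rule for flagging surface water. The array shape, band order, and threshold in the sketch below are assumptions for illustration; they do not reproduce the study's actual rule set for detecting fine-scale, hidden wetlands in SPOT imagery.

```python
# Illustrative NumPy sketch of a rule-based water flag using an NDWI-style
# normalized difference of green and near-infrared reflectance.
import numpy as np

# Hypothetical multispectral array: shape (bands, rows, cols); band order assumed
scene = np.random.randint(1, 255, size=(4, 100, 100)).astype(float)
green, nir = scene[0], scene[2]

ndwi = (green - nir) / (green + nir + 1e-6)  # higher values suggest open water
water_mask = ndwi > 0.2                       # illustrative threshold

# Further rules (e.g., keeping only connected patches above a minimum size, or
# combining with land-use layers) would refine this into a wetscape map.
print(f"water/wetscape pixels: {int(water_mask.sum())} ({water_mask.mean():.1%} of scene)")
```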

Keywords: urban land cover, human impact, climate change, rule-based classification, across-scale analysis

Procedia PDF Downloads 309
73 Effect of Climate Change on Rainfall Induced Failures for Embankment Slopes in Timor-Leste

Authors: Kuo Chieh Chao, Thishani Amarathunga, Sangam Shrestha

Abstract:

Rainfall-induced slope failures are one of the most damaging and disastrous natural hazards, occurring frequently around the world. This type of sliding mainly occurs in the zone above the groundwater level in silty/sandy soils. When rainwater begins to infiltrate into the vadose zone of the soil, the negative pore-water pressure tends to decrease, reducing the shear strength of the soil material. Climate change has resulted in excessive and unpredictable rainfall all around the world, resulting in landslides with dire consequences for human lives and infrastructure. Such problems could be overcome by examining in detail the causes of such slope failures and recommending effective repair plans for vulnerable locations while considering future climate change. The selected area for this study is located in the road rehabilitation section of the Maubara to Mota Ain road in Timor-Leste. Slope failures and cracks occurred in 2013 and, after repairs, recurred in 2017 following heavy rains. Analyses of both observed and projected climate data were conducted to understand severe precipitation conditions in the past and future. Observed climate data were collected from the NOAA global climate data portal. The CORDEX data portal was used to collect Regional Climate Model (RCM) projections. Both observed and RCM data were extracted as location-based data using ArcGIS software. The linear scaling method was used for bias correction of the future data, and the bias-corrected climate data were assigned to the GeoStudio software. Wet-season (December to March) precipitation in 2007-2013 was higher than in the 2001-2006 period, nearly 40% above the usual monthly average precipitation of 160 mm. The results of the seepage analyses, carried out using the SEEP/W model with observed climate data, clearly demonstrated that the pore water pressure within the fill slope increased significantly due to increased infiltration during the wet season of 2013. One main Regional Climate Model (RCM) was analyzed in order to predict future climate variation under two Representative Concentration Pathways (RCPs). The projection for the 76-year period from 2014 shows that precipitation becomes considerably higher in the future under both the RCP 4.5 and RCP 8.5 emission scenarios. Critical pore water pressure conditions during 2014-2090 were used in order to recommend appropriate remediation methods. Results of slope stability analyses indicated that the factor of safety of the fill slopes was reduced from 1.226 in the dry season to 0.793 in the wet season of 2013. Results of future slope stability, obtained using the SLOPE/W model for the RCP emission scenarios, indicate that the use of tieback anchors and geogrids in slope protection could be effective in increasing the stability of slopes to an acceptable level during the wet seasons. Moreover, measures such as monitoring slopes showing signs of, or susceptibility to, movement and installing surface protection could be used to further increase slope stability.
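
For reference, linear scaling corrects each month of the RCM series by the ratio of observed to simulated monthly mean precipitation over a common baseline period. The sketch below assumes simple monthly pandas series with synthetic values; it is not tied to the CORDEX files or station records used in this study.

```python
# Minimal sketch of linear-scaling bias correction for precipitation:
# each future RCM value is multiplied by the ratio of observed to simulated
# monthly-mean precipitation over a common baseline. Synthetic data and
# variable names are illustrative assumptions.
import numpy as np
import pandas as pd

def linear_scaling(obs: pd.Series, rcm_hist: pd.Series, rcm_future: pd.Series) -> pd.Series:
    obs_mean = obs.groupby(obs.index.month).mean()
    rcm_mean = rcm_hist.groupby(rcm_hist.index.month).mean()
    factors = obs_mean / rcm_mean                       # one factor per calendar month
    month_factors = pd.Series(rcm_future.index.month, index=rcm_future.index).map(factors)
    return rcm_future * month_factors

# Synthetic monthly totals just to exercise the function.
idx_hist = pd.date_range("2001-01-01", "2013-12-01", freq="MS")
idx_fut  = pd.date_range("2014-01-01", "2090-12-01", freq="MS")
rng = np.random.default_rng(0)
obs      = pd.Series(rng.gamma(2.0, 80.0, len(idx_hist)), index=idx_hist)
rcm_hist = pd.Series(rng.gamma(2.0, 60.0, len(idx_hist)), index=idx_hist)
rcm_fut  = pd.Series(rng.gamma(2.0, 65.0, len(idx_fut)),  index=idx_fut)

corrected = linear_scaling(obs, rcm_hist, rcm_fut)
print(corrected.head(3).round(1))
```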

Keywords: climate change, precipitation, SEEP/W, SLOPE/W, unsaturated soil

Procedia PDF Downloads 136
72 Trends in Blood Pressure Control and Associated Risk Factors Among US Adults with Hypertension from 2013 to 2020: Insights from NHANES Data

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

Controlling blood pressure is critical to reducing the risk of cardiovascular disease. However, BP control rates (systolic BP < 140 mm Hg and diastolic BP < 90 mm Hg) have declined since 2013, warranting further analysis to identify contributing factors and potential interventions. This study investigates the factors associated with the decline in blood pressure (BP) control among U.S. adults with hypertension over the past decade. Data from the U.S. National Health and Nutrition Examination Survey (NHANES) were used to assess BP control trends between 2013 and 2020. The analysis included 18,927 U.S. adults with hypertension aged 18 years and older who completed study interviews and examinations. The dataset, obtained from the cardioStatsUSA and RNHANES R packages, was merged based on survey IDs. Key variables analyzed included demographic factors, lifestyle behaviors, hypertension status, BMI, comorbidities, antihypertensive medication use, and cardiovascular disease history. The prevalence of BP control declined from 78.0% in 2013-2014 to 71.6% in 2017-2020. Non-Hispanic Whites had the highest BP control prevalence (33.6% in 2013-2014), but this declined to 26.5% by 2017-2020. In contrast, BP control among Non-Hispanic Blacks increased slightly. Younger adults (aged 18-44) exhibited better BP control, but control rates declined over time. Obesity prevalence increased, contributing to poorer BP control. Antihypertensive medication use rose from 26.1% to 29.2% across the study period. Lifestyle behaviors, such as smoking and diet, also affected BP control, with nonsmokers and those with better diets showing higher control rates. Key findings indicate significant disparities in blood pressure control across racial/ethnic groups. Non-Hispanic Black participants had consistently higher odds (OR ranging from 1.84 to 2.33) of poor blood pressure control compared to Non-Hispanic Whites, while odds among Non-Hispanic Asians varied by cycle. Younger age groups (18-44 and 45-64) showed significantly lower odds of poor blood pressure control compared to those aged 75+, highlighting better control in younger populations. Men had consistently higher odds of poor control compared to women, though this disparity slightly decreased in 2017-2020. Medical comorbidities such as diabetes and chronic kidney disease were associated with significantly higher odds of poor blood pressure control across all cycles. Participants with chronic kidney disease had particularly elevated odds (OR=5.54 in 2015-2016), underscoring the challenge of managing hypertension in these populations. Antihypertensive medication use was also linked with higher odds of poor control, suggesting potential difficulties in achieving target blood pressure despite treatment. Lifestyle factors such as alcohol consumption and physical activity showed no consistent association with blood pressure control. However, dietary quality appeared protective, with those reporting an excellent diet showing lower odds (OR=0.64) of poor control in the overall sample. Increased BMI was associated with higher odds of poor blood pressure control, particularly in the 30-35 and 35+ BMI categories during 2015-2016. The study highlights a significant decline in BP control among U.S. adults with hypertension, particularly among certain demographic groups and those with increasing obesity rates. Lifestyle behaviors, antihypertensive medication use, and socioeconomic factors all played a role in these trends.
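
The cycle-specific odds ratios reported above are the kind of quantity obtained from a logistic regression of poor BP control on demographic and clinical covariates. The minimal sketch below uses synthetic data, simplified column names, and an unweighted fit (the NHANES survey weights and complex design are ignored), so it illustrates the calculation only.

```python
# Minimal sketch: logistic regression of poor BP control on covariates, with
# odds ratios and confidence intervals from the fitted coefficients.
# Synthetic data, column names and the unweighted fit are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "poor_control": rng.integers(0, 2, n),
    "age_group": rng.choice(["18-44", "45-64", "65-74", "75+"], n),
    "race": rng.choice(["NH White", "NH Black", "NH Asian", "Hispanic"], n),
    "ckd": rng.integers(0, 2, n),
    "bmi": rng.normal(29, 6, n),
})

# Reference levels chosen to mirror "compared to 75+" and "compared to NH Whites".
model = smf.logit("poor_control ~ C(age_group, Treatment('75+')) + "
                  "C(race, Treatment('NH White')) + ckd + bmi", data=df).fit(disp=0)

odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "2.5%": np.exp(model.conf_int()[0]),
    "97.5%": np.exp(model.conf_int()[1]),
})
print(odds_ratios.round(2))
```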

Keywords: diabetes, blood pressure, obesity, logistic regression, odds ratio

Procedia PDF Downloads 16
71 The Return of the Rejected Kings: A Comparative Study of Governance and Procedures of Standards Development Organizations under the Theory of Private Ordering

Authors: Olia Kanevskaia

Abstract:

Standardization has been in the limelight of numerous academic studies. Typically described as ‘any set of technical specifications that either provides or is intended to provide a common design for a product or process’, standards not only set quality benchmarks for products and services but also spur competition and innovation, resulting in advantages for manufacturers and consumers. Their contribution to globalization and technology advancement is especially crucial in the Information and Communication Technology (ICT) and telecommunications sector, which is also characterized by weaker state regulation and expert-based rule-making. Most of the standards developed in that area are interoperability standards, which allow technological devices to establish ‘invisible communications’ and to ensure their compatibility and proper functioning. This type of standard supports a large share of our daily activities, ranging from traffic coordination by traffic lights to the connection to Wi-Fi networks, transmission of data via Bluetooth or USB and building the network architecture for the Internet of Things (IoT). A large share of ICT standards is developed in specialized voluntary platforms, commonly referred to as Standards Development Organizations (SDOs), which gather experts from various industry sectors, private enterprises, governmental agencies and academia. The institutional architecture of these bodies can vary from semi-public bodies, such as the European Telecommunications Standards Institute (ETSI), to industry-driven consortia, such as the Internet Engineering Task Force (IETF). The past decades witnessed a significant shift of standard setting to those institutions: while operating independently of state regulation, they offer a rather informal setting, which enables fast-paced standardization and places technical supremacy and flexibility of standards above other considerations. Although technical norms and specifications developed by such nongovernmental platforms are not binding, they appear to create significant regulatory impact. In the United States (US), private voluntary standards can be used by regulators to achieve their policy objectives; in the European Union (EU), compliance with harmonized standards developed by voluntary European Standards Organizations (ESOs) can grant a product a free-movement pass. Moreover, standards can de facto manage the functioning of the market when other regulatory alternatives are not available. Hence, by establishing (potentially) mandatory norms, SDOs assume regulatory functions commonly exercised by States and shape their own legal order. The purpose of this paper is threefold: First, it attempts to shed some light on SDOs’ institutional architecture, focusing on private, industry-driven platforms and comparing their regulatory frameworks with those of formal organizations. Drawing upon the relevant scholarship, the paper then discusses the extent to which the formulation of technological standards within SDOs constitutes a private legal order, operating in the shadow of governmental regulation. Ultimately, this contribution seeks to advise whether state intervention in industry-driven standard setting is desirable, and whether the increasing regulatory importance of SDOs should be addressed in legislation on standardization.

Keywords: private order, standardization, standard-setting organizations, transnational law

Procedia PDF Downloads 164
70 On the Utility of Bidirectional Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason behind using transformers is their innate ability (the attention mechanism) to take account of the semantic context present in long DNA chains, which is heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts, as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck with long input sequences. In other words, these architectures are not equipped with the necessary attention mechanisms to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with an attention mechanism. In previous works on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R² accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier is not dependent on the presence of large numbers of training examples, which may be difficult to compile in many real-world gene circuit designs.
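
A sequence-level classifier built on a bidirectional encoder can be sketched with the Hugging Face transformers library: DNA is split into overlapping k-mers, mapped to token ids, and fed to a BERT encoder with a classification head. The tiny randomly initialized configuration, the 3-mer vocabulary, and the two-class label below are illustrative assumptions; this is not the authors' modified DNABERT model or its pre-trained weights.

```python
# Minimal sketch of a bidirectional-encoder classifier over k-mer tokens.
# The tiny config, 3-mer vocabulary and random labels are illustrative
# assumptions; this is not the authors' modified DNABERT model.
import torch
from itertools import product
from transformers import BertConfig, BertForSequenceClassification

KMER = 3
vocab = {"[PAD]": 0, "[CLS]": 1, "[SEP]": 2}
vocab.update({"".join(p): i + 3 for i, p in enumerate(product("ACGT", repeat=KMER))})

def encode(seq: str, max_len: int = 32) -> torch.Tensor:
    """Split a DNA string into overlapping k-mers and map them to token ids."""
    kmers = [seq[i:i + KMER] for i in range(len(seq) - KMER + 1)]
    ids = [vocab["[CLS]"]] + [vocab[k] for k in kmers] + [vocab["[SEP]"]]
    ids += [vocab["[PAD]"]] * (max_len - len(ids))
    return torch.tensor([ids])

config = BertConfig(vocab_size=len(vocab), hidden_size=64, num_hidden_layers=2,
                    num_attention_heads=4, intermediate_size=128, num_labels=2)
model = BertForSequenceClassification(config)   # self-attention over the whole sequence

x = encode("ACGTACGGTACCAGT")
out = model(input_ids=x, labels=torch.tensor([1]))
print(out.loss.item(), out.logits.softmax(dim=-1))
```

In a real setting the randomly initialized encoder would be replaced by pre-trained weights and fine-tuned on labeled circuit sequences.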

Keywords: machine learning, classification and regression, gene circuit design, bidirectional transformers

Procedia PDF Downloads 63
69 Regenerating Habitats. A Housing Based on Modular Wooden Systems

Authors: Rui Pedro de Sousa Guimarães Ferreira, Carlos Alberto Maia Domínguez

Abstract:

Despite the ambitions to achieve climate neutrality by 2050, to fulfill the Paris Agreement's goals, the building and construction sector remains one of the most resource-intensive and greenhouse gas-emitting industries in the world, accounting for 40% of worldwide CO₂ emissions. Over the past few decades, globalization and population growth have led to an exponential rise in demand in the housing market and, by extension, in the building industry. Considering this housing crisis, it is obvious that we will not stop building in the near future. However, the transition, which has already started, is challenging and complex because it calls for the worldwide participation of numerous organizations in altering how building systems, which have been a part of our everyday existence for over a century, are used. Wood is one of the alternatives most frequently used nowadays (under responsible forestry conditions) because of its physical qualities and, most importantly, because it produces fewer carbon emissions during manufacturing than steel or concrete. Furthermore, as wood retains its capacity to store CO₂ after application and throughout the life of the building, acting as a natural carbon sink, it helps to reduce greenhouse gas emissions. After a century-long focus on other materials, in the last few decades, technological advancements have made it possible to innovate systems centered around the use of wood. However, there are still some questions that require further exploration. It is necessary to standardize production and manufacturing processes based on prefabrication and modularization principles to achieve greater precision and optimization of the solutions, decreasing building time, prices, and waste from raw materials. In addition, this approach will make it possible to develop new architectural solutions to solve the rigidity and irreversibility of buildings, two of the most important issues facing housing today. Most current models are still created as inflexible, fixed, monofunctional structures that discourage any kind of regeneration, based on matrices that sustain the traditional model of the conventional family and are founded on rigid, impenetrable compartmentalization. Adaptability and flexibility in housing are, and always have been, necessities and key components of architecture. People today need to constantly adapt to their surroundings and themselves because of the fast-paced, disposable, and quickly obsolescent nature of modern items. Migrations on a global scale, different kinds of co-housing, or even personal changes are some of the new questions that buildings have to answer. Designing with the reversibility of construction systems and materials in mind not only allows for the concept of "looping" in construction, with environmental advantages that enable the development of a circular economy in the sector, but also unleashes multiple social benefits. In this sense, it is imperative to develop prefabricated and modular construction systems able to address the formalization of a reversible proposition that adjusts to the scale of time and its multiple reformulations, many of which are unpredictable. We must allow buildings to change, grow, or shrink over their lifetime, respecting their nature and, finally, the nature of the people living in them. The ability to anticipate the unexpected, adapt to social factors, and take account of demographic shifts in society to stabilize communities is the foundation of truly innovative sustainability.

Keywords: modular, timber, flexibility, housing

Procedia PDF Downloads 80
68 Exploring Closed-Loop Business Systems Which Eliminate Solid Waste in the Textile and Fashion Industry: A Systematic Literature Review Covering the Developments of the Last Decade

Authors: Bukra Kalayci, Geraldine Brennan

Abstract:

Introduction: Over the last decade, a proliferation of literature related to the textile and fashion business in the context of sustainable production and consumption has emerged. However, the economic and environmental benefits of solid waste recovery have not been comprehensively researched, so end-of-life and end-of-use textile waste management remains a gap. Solid textile waste reuse and recycling principles of the circular economy need to be developed to close the disposal stage of the textile supply chain. The environmental problems associated with the over-production and over-consumption of textile products continue to grow. Together with a growing population and fast fashion culture, the share of solid textile waste in municipal waste is increasing. Focusing on post-consumer textile waste literature, this research explores the opportunities, obstacles and enablers or success factors associated with closed-loop textile business systems. Methodology: A systematic literature review was conducted in order to identify best practices and gaps in the existing body of knowledge related to closed-loop post-consumer textile waste initiatives over the last decade. Selected keywords, namely ‘cradle-to-cradle’, ‘circular* economy*’, ‘closed-loop*’, ‘end-of-life*’, ‘reverse* logistic*’, ‘take-back*’, ‘remanufacture*’ and ‘upcycle*’, in combination with (and) ‘fashion*’, ‘garment*’, ‘textile*’, ‘apparel*’ and ‘clothing*’, were used, and the time frame of the review was set from 2005 to 2017. In order to obtain broad coverage, the Web of Knowledge and Science Direct databases were used, and peer-reviewed journal articles were chosen. The keyword search identified 299 papers, which were further refined to 54 relevant papers that form the basis of the in-depth thematic analysis. Preliminary findings: A key finding was that the existing literature is predominantly conceptual rather than applied or empirical work. Moreover, the enablers or success factors, obstacles and opportunities to implement closed-loop systems in the textile industry were not clearly articulated, and the following considerations were also largely overlooked in the literature. While the circular economy suggests multiple cycles of discarded products, components or materials, most research to date has tended to focus on a single cycle. Thus the calculations of environmental and economic benefits of closed-loop systems are limited to one cycle, which does not adequately explore the feasibility or potential benefits of multiple cycles. Additionally, the time period textile products spend between point of sale and end-of-use/end-of-life return is a crucial factor. Despite past efforts to study closed-loop textile systems, a clear gap in the literature is the lack of a clear evaluation framework which enables manufacturers to clarify the reusability potential of textile products through consideration of indicators related to: quality, design, lifetime, length of time between manufacture and product return, volume of collected disposed products, material properties, and brand segment considerations (e.g. fast fashion versus luxury brands).

Keywords: circular fashion, closed loop business, product service systems, solid textile waste elimination

Procedia PDF Downloads 204
67 Top-Down, Middle-Out, Bottom-Up: A Design Approach to Transforming Prison

Authors: Roland F. Karthaus, Rachel S. O'Brien

Abstract:

Over the past decade, the authors have undertaken applied research aimed at enabling transformation within the prison service to improve conditions and outcomes for those living, working and visiting in prisons in the UK and the communities they serve. The research has taken place against a context of reducing resources and public discontent at increasing levels of violence, deteriorating conditions and persistently high levels of re-offending. Top-down governmental policies have mainly been ineffectual and in some cases counter-productive. The prison service is characterised by hierarchical organisation, and the research has applied design thinking at multiple levels to challenge and precipitate change: top-down, middle-out and bottom-up. The research employs three distinct but related approaches. System design (top-down): working at the national policy level to analyse the changing policy context, identifying opportunities and challenges, and engaging with the Ministry of Justice commissioners and sector organisations to facilitate debate, introducing new evidence and provoking creative thinking. Place-based design (middle-out): working with individual prison establishments as pilots to illustrate and test the potential for local empowerment, creative change, and improved architecture within place-specific contexts and organisational hierarchies. Everyday design (bottom-up): working with individuals in the system to explore the potential for localised, significant, demonstrator changes, including collaborative design, capacity building and empowerment in skills, employment, communication, training, and other activities. The research spans a series of projects, through which the methodological approach has developed responsively. The projects include a place-based model for the re-purposing of Ministry of Justice land assets for the purposes of rehabilitation; an evidence-based guide to improve prison design for health and well-being; and a capacity-based employment, skills and self-build project as a template for future open prisons. The overarching research has enabled knowledge to be developed and disseminated through policy and academic networks. Whilst the research remains live and continuing, key findings are emerging as a basis for a new methodological approach to effecting change in the UK prison service. An interdisciplinary approach is necessary to overcome the barriers between distinct areas of the prison service. Sometimes referred to as total environments, prisons encompass entire social and physical environments which are themselves orchestrated by institutional arms of government, resulting in complex systems that cannot be meaningfully engaged through narrow disciplinary lenses. A scalar approach is necessary to connect strategic policies with individual experiences and potential, through the medium of individual prison establishments, operating as discrete entities within the system. A reflexive process is necessary to connect research with action in a responsive mode, learning to adapt as the system itself is changing. The role of individuals in the system, their latent knowledge and experience and their ability to engage and become agents of change are essential. Whilst the specific characteristics of the UK prison system are unique, the approach is internationally applicable.

Keywords: architecture, design, policy, prison, system, transformation

Procedia PDF Downloads 136
66 Overview of Research Contexts about XR Technologies in Architectural Practice

Authors: Adeline Stals

Abstract:

The transformation of architectural design practices has been underway for almost forty years due to the development and democratization of computer technology. New and more efficient tools are constantly being proposed to architects, amplifying a technological wave that sometimes stimulates them, sometimes overwhelms them, depending essentially on their digital culture and the context (socio-economic, structural, organizational) in which they work on a daily basis. Our focus is on VR, AR, and MR technologies dedicated to architecture. The commercialization of affordable headsets such as the Oculus Rift and HTC Vive, or more low-tech options such as the Google Cardboard, has made these technologies more accessible. In that regard, researchers report growing interest in these tools among architects, given the new perspectives they open up in terms of workflow, representation, collaboration, and client involvement. However, studies rarely discuss how the sample studied affects the results. Our research provides an overview of VR, AR, and MR research across a corpus of papers selected from conferences and journals. A closer look at the sample of these research projects highlights the need to take the context of studies into consideration in order to develop tools truly dedicated to the real practices of specific architect profiles. This literature review formalizes milestones for future challenges to address. The methodology applied is based on a systematic review of two sources of publications. The first one is the Cumincad database, which gathers publications from conferences dedicated exclusively to digital technology in architecture. Additionally, the second part of the corpus is based on journal publications. Journals were selected based on their Scimago ranking. Among the journals in the predefined category ‘architecture’ and in Quartile 1 for 2018 (last update when consulted), we have retained the ones related to the architectural design process: Design Studies, CoDesign, Architectural Science Review, Frontiers of Architectural Research and Archnet-IJAR. Besides those journals, IJAC, although not classified in the ‘architecture’ category, was selected by the author for its relevance to architecture and computing. For all requests, the search terms were ‘virtual reality’, ‘augmented reality’, and ‘mixed reality’ in title and/or keywords for papers published between 2015 and 2019 (inclusive). This time frame was chosen considering the fast evolution of these technologies in the past few years. Accordingly, the systematic review covers 202 publications. The literature review of studies on XR technologies establishes the current state of the art. It highlights that studies are mostly based on experimental contexts with controlled conditions (e.g., pedagogical settings) or on practices established in large architectural offices of international renown. However, few studies focus on the strategies and practices developed by offices of smaller size, which represent the largest part of the market. Indeed, a European survey studying the architectural profession in Europe in 2018 reveals that 99% of offices are composed of fewer than ten people, and 71% of only one person. The study also showed that the number of medium-sized offices is continuously decreasing in favour of smaller structures. As a result, a frontier seems to remain between the worlds of research and practice, especially for the majority of small architectural practices that make modest use of technology.
This paper constitutes a reference for the next step of the research and for further worldwide studies by facilitating their contextualization.

Keywords: architectural design, literature review, SME, XR technologies

Procedia PDF Downloads 111
65 Pivoting to Fortify our Digital Self: Revealing the Need for Personal Cyber Insurance

Authors: Richard McGregor, Carmen Reaiche, Stephen Boyle

Abstract:

Cyber threats are a relatively recent phenomenon and offer cyber insurers a dynamic and intelligent peril. As individuals en masse become increasingly digitally dependent, Personal Cyber Insurance (PCI) offers an attractive option to mitigate cyber risk at a personal level. This abstract proposes a literature review that conceptualises a framework for siting Personal Cyber Insurance (PCI) within the context of cyberspace. The lack of empirical research within this domain demonstrates an immediate need to define the scope of PCI to allow cyber insurers to understand personal cyber risk threats and vectors, customer awareness, capabilities, and their associated needs. Additionally, this will allow cyber insurers to conceptualise appropriate frameworks allowing effective management and distribution of PCI products and services within a landscape often incongruent with risk attributes commonly associated with traditional personal line insurance products. Cyberspace has provided significant improvement to the quality of social connectivity and productivity during past decades and allowed enormous capability uplift of information sharing and communication between people and communities. Conversely, personal digital dependency furnishes ample opportunities for adverse cyber events such as data breaches and cyber-attacks, thus introducing a continuous and insidious threat of omnipresent cyber risk, particularly since the advent of the COVID-19 pandemic and the widespread adoption of ‘work-from-home’ practices. Recognition of escalating inter-dependencies, vulnerabilities and inadequate personal cyber behaviours has prompted efforts by businesses and individuals alike to investigate strategies and tactics to mitigate cyber risk, of which cyber insurance is a viable, cost-effective option. It is argued that, ceteris paribus, the nature of cyberspace intrinsically provides characteristic peculiarities that pose significant and bespoke challenges to cyber insurers, often incongruent with risk attributes commonly associated with traditional personal line insurance products. These challenges include (inter alia) a paucity of historical claim/loss data for underwriting and pricing purposes, interdependencies of cyber architecture promoting high correlation of cyber risk, difficulties in evaluating cyber risk, intangibility of risk assets (such as data and reputation), lack of standardisation across the industry, high and undetermined tail risks, and moral hazard. This study proposes a thematic overview of the literature deemed necessary to conceptualise the challenges to issuing personal cyber coverage. There is an evident absence of empirical research appertaining to PCI and the design of operational business models for this business domain, especially qualitative initiatives that (1) attempt to define the scope of the peril, (2) secure an understanding of the needs of both cyber insurer and customer, and (3) identify elements pivotal to effective management and profitable distribution of PCI - leading to an argument proposed by the author that postulates that the traditional general insurance customer journey and business model are ill-suited to the lineaments of cyberspace. The findings of the review confirm significant gaps in contemporary research within the domain of personal cyber insurance.

Keywords: cyberspace, personal cyber risk, personal cyber insurance, customer journey, business model

Procedia PDF Downloads 104
64 Memories of Lost Fathers: The Unfinished Transmission of Generational Values in Hungarian Cinema

Authors: Peter Falanga

Abstract:

During the process of de-Stalinization that began in 1956 with the Twentieth Congress of the Soviet Communist Party, many filmmakers in Hungary chose to explore their country’s political discomforts by using Socialist Realism as a negative model against which they could react to the dominating ideology. A renewed national film industry and a more permissive political regime would allow filmmakers to confront the plight of the preceding generation who had experienced the fatal political turmoil of both World Wars and the purges of Stalin. What follows is no longer the multigenerational unity found in Socialist Realism wherein both the old and the young embrace Stalin’s revolutionary optimism; instead, the protagonists are parentless, and thus their connection to the previous generation is partially severed. In these films, violent historical forces leave one generation to search both for a connection with their family’s past and for moral guidance to direct their future. István Szabó’s Father (1966), Márta Mészáros’ Diary for My Children (1984), and Pál Gábor’s Angi Vera (1978) each consider the fraught relationship between successive generations through the lens of postwar youth. A characteristic all of their protagonists share is that they are missing one or both parents and cope with familial loss either by recalling memories of their parents in dream-like sequences or, in the case of Angi Vera, by embracing the surrogate paternalism that the Communist Party promises to provide. This paper considers the argument these films present about the progress of Hungarian history, and how this topic is explored in more recent films that similarly focus on the transmission of generational values. Scholars such as László Strausz and John Cunningham have written on the continuous concern with the transmission of generational values in more recent films such as István Szabó’s Sunshine (1999), Béla Tarr’s Werckmeister Harmonies (2000), György Pálfi’s Taxidermia (2006), Ágnes Kocsis’ Pál Adrienn (2010), and Kornél Mundruczó’s Evolution (2021). These films, they argue, make intimate portrayals of the various sweeping political changes in Hungary’s history and question how these epochs or events have impacted Hungarian identities. If these films attempt to personalize historical shifts of Hungary, then what is the significance of featuring characters who have lost one or both parents? An attempt to understand this coherent trend in Hungarian cinema will profit from examining the earlier, celebrated films of Szabó, Mészáros, and Gábor, who inaugurated this preoccupation with generational values. The pervasive interplay of dreams and memory in their films invites an additional element to their argument concerning historical progression. This paper incorporates Richard Teniman’s notion of the “dialectics of memory”, in which memory is in a constant process of negation and reinvention, to explain why these directors prefer to explore Hungarian identity through the disarranged form of psychological realism over the linear causality structure of historical realism.

Keywords: film theory, Eastern European studies, film history, Eastern European history

Procedia PDF Downloads 123
63 Structural Monitoring of Externally Confined RC Columns with Inadequate Lap-Splices, Using Fibre-Bragg-Grating Sensors

Authors: Petros M. Chronopoulos, Evangelos Z. Astreinidis

Abstract:

A major issue in the structural assessment and rehabilitation of existing RC structures is the inadequate lap-splicing of the longitudinal reinforcement. Although prohibited by modern Design Codes, the practice of arranging lap-splices inside the critical regions of RC elements was commonly applied in the past. Today this practice is still the rule, at least for conventional new buildings. Therefore, a lot of relevant research is ongoing in many earthquake-prone countries. The rehabilitation of deficient lap-splices of RC elements by means of external confinement is widely accepted as the most efficient technique. If correctly applied, this versatile technique offers a limited increase of flexural capacity and a considerable increase of local ductility and of axial and shear capacities. Moreover, this intervention does not affect the stiffness of the elements and does not affect the dynamic characteristics of the structure. This technique has been extensively discussed and researched, contributing to a vast accumulation of technical and scientific knowledge that has been reported in relevant books, reports and papers, and included in recent Design Codes and Guides. These references mostly deal with modeling and redesign, covering both the enhanced (axial and) shear capacity (due to the additional external closed hoops or jackets) and the increased ductility (due to the confining action, preventing the unzipping of lap-splices and the buckling of continuous reinforcement). An analytical and experimental program devoted to RC members with lap-splices has been completed in the Laboratory of RC, NTU of Athens, Greece. This program aims at the proposal of a rational and safe theoretical model and the calibration of the relevant Design Codes’ provisions. Tests on forty-two (42) full-scale specimens, covering mostly beams and columns (not walls), strengthened or not, with adequate or inadequate lap-splices, have already been performed and evaluated. In this paper, the results of twelve (12) specimens under fully reversed cyclic actions are presented and discussed. In eight (8) specimens the lap-splices were inadequate (splicing length of 20 or 30 bar diameters), and they were retrofitted before testing by means of additional external confinement. The two (2) most commonly applied confining materials were used in this study, namely steel and FRPs. More specifically, jackets made of CFRP wraps or light cages made of mild steel were applied. The main parameters of these tests were (i) the degree of confinement (internal and external), and (ii) the length of lap-splices, equal to 20, 30 or 45 bar diameters. These tests were thoroughly instrumented and monitored by means of conventional (LVDTs, strain gages, etc.) and innovative (optic fibre-Bragg-grating) sensors. This allowed for a thorough investigation of the most influential design parameter, namely the hoop stress developed in the confining material. Based on these test results and on comparisons with the provisions of modern Design Codes, it could be argued that shorter (than the normative) lap-splices, commonly found in old structures, could still be effective and safe (at least for lengths above an absolute minimum), depending on the required ductility, if a properly arranged and adequately detailed external confinement is applied.
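
For orientation, a fibre-Bragg-grating sensor reports strain through the relative shift of its reflected Bragg wavelength, Δλ/λ ≈ (1 - p_e)·ε, so the hoop strain and an elastic estimate of the hoop stress in the jacket can be recovered as in the sketch below. The gauge factor of about 0.78, the CFRP modulus, the example readings, and the neglect of temperature compensation are illustrative assumptions, not values from these tests.

```python
# Minimal sketch: converting an FBG wavelength shift into hoop strain and an
# elastic estimate of hoop stress in the confining jacket. The gauge factor
# (1 - p_e ~ 0.78 for silica fibre), the CFRP modulus and the example readings
# are illustrative assumptions; temperature compensation is ignored.
def hoop_strain(wavelength_nm: float, base_wavelength_nm: float,
                gauge_factor: float = 0.78) -> float:
    """Strain from the relative Bragg wavelength shift: dL/L = k * eps."""
    return (wavelength_nm - base_wavelength_nm) / (base_wavelength_nm * gauge_factor)

def hoop_stress_mpa(strain: float, e_modulus_gpa: float = 230.0) -> float:
    """Elastic hoop stress in the jacket, sigma = E * eps, in MPa."""
    return strain * e_modulus_gpa * 1000.0

base = 1550.000          # nm, unloaded Bragg wavelength
reading = 1551.200       # nm, reading at peak cyclic load
eps = hoop_strain(reading, base)
print(f"hoop strain {eps * 1e6:.0f} microstrain, "
      f"hoop stress {hoop_stress_mpa(eps):.0f} MPa")
```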

Keywords: concrete, fibre-Bragg-grating sensors, lap-splices, retrofitting / rehabilitation

Procedia PDF Downloads 250
62 Obstructive Bronchitis and Pneumonia by a Mixed Infection of HPIV-3, S. pneumoniae in an Immunocompromised 10M Infant: Case Report

Authors: Olga Smilevska Spasova, Katerina Boshkovska, Gorica Popova, Mirjana Popovska

Abstract:

Introduction: Pneumonia is an infection of the pulmonary parenchyma. HPIV-3 is one of the four human parainfluenza viruses, members of the Paramyxoviridae family designated types 1-4, which have a nonsegmented, single-stranded RNA genome with a lipid-containing envelope. They are spread from the respiratory tract by aerosolized secretions or by direct contact with secretions. Type 3 is endemic and can cause serious illness in immunocompromised patients. Illness caused by parainfluenza occurs shortly after inoculation with the virus. The level of immunoglobulin A antibody in serum is the best predictor of susceptibility to infection. Streptococcus pneumoniae, or pneumococcus, is a Gram-positive, spherical bacterium, usually found in pairs, and is a member of the genus Streptococcus. Streptococcus pneumoniae resides asymptomatically in healthy carriers, typically colonizing the respiratory tract, sinuses, and nasal cavity. In individuals with weaker immune systems, like young infants, the pneumococcal bacterium is the most common cause of community-acquired pneumonia in the world. Case Report: The aim is to present a case of lower respiratory tract infection in an infant, caused by parainfluenza virus 3, S. pneumoniae and undifferentiated Gram-negative bacteria, that was successfully treated. The infant had a history of recurrent episodes of wheezing in the past 3 months. The 10-month-old infant presented 2 weeks before admission with high fever, runny nose, and cough. The primary pediatrician prescribed oral cefpodoxime for 10 days and inhaled salbutamol. Two days before hospital admission, the infant presented with high fever, cough, and difficulty breathing. On admission, the infant was pale and anxious, with rapid respirations, cough, wheezing and tachycardia. On auscultation: vesicular breath sounds with high-pitched wheezing and, on the right, coarse crackles. Investigations: Blood analysis: RBC 4.7 x 10¹²/L, WBC 8.3 x 10⁹/L (Neut 42.73%, Lym 41.57%), Hgb 9.38 g/dl, MCV 62.7 fl, MCH 20.0 pg, MCHC 31.8 g/dl, RDW 18.7%, Plt 307.9 x 10⁹/L, CRP 2.5 mg/l, serum iron 7.92 umol/l, O2 saturation 97% on blood gas analysis, pulse 125/min. Chest X-ray showed hyperinflation and right pericardial consolidation. Microbiological analysis of a sputum sample was positive for undifferentiated Gram-negative bacteria (colonizer), resistant to cefotaxime, ampicillin, cefoxitin and sulfamethoxazole/trimethoprim, and sensitive to amikacin, gentamicin, and ciprofloxacin. Molecular multiplex RT-PCR for 19 viruses and multiplex PCR for 7 bacteria (respiratory pathogen panel) was positive for parainfluenza virus 3 (Ct=22.73) and Streptococcus pneumoniae (Ct=26.75). Immunoglobulins: IgG 9.31 g/l, IgA 0.351 g/l, IgM 0.86 g/l. Therapy: Treatment was started with inhaled salbutamol, the intravenous antibiotic cefotaxime, and systemic corticosteroids. On day 7, because of slow clinical resolution of the chest auscultation findings and the etiologic clue of a sputum sample positive for resistant undifferentiated Gram-negative bacteria, a second intravenous antibiotic, amikacin, was administered. The infant was discharged on day 14 with resolution of clinical findings. Conclusion: Mixed co-infections with respiratory viruses and bacteria in immunocompromised infants are likely to lead to a more severe form of community-acquired pneumonia requiring hospitalization.

Keywords: HPIV-3, infant, pneumonia, S. pneumoniae, chest x-ray

Procedia PDF Downloads 76
61 The Science of Health Care Delivery: Improving Patient-Centered Care through an Innovative Education Model

Authors: Alison C. Essary, Victor Trastek

Abstract:

Introduction: The current state of the health care system in the U.S. is characterized by an unprecedented number of people living with multiple chronic conditions, an unsustainable rise in health care costs, inadequate access to care, and wide variation in health outcomes throughout the country. An estimated two-thirds of Americans are living with two or more chronic conditions, contributing to 75% of all health care spending. In 2013, the School for the Science of Health Care Delivery (SHCD) was charged with redesigning the health care system through education and research. Faculty in business, law, and public policy, and thought leaders in health care delivery, administration, public health and health IT created undergraduate, graduate, and executive academic programs to address this pressing need. Faculty and students work across disciplines, and with community partners and employers, to improve care delivery and increase value for patients. Methods: Curricula apply content in health care administration and operations within the clinical context. Graduate modules are team-taught by faculty across academic units to model team-based practice. Seminars, team-based assignments, faculty mentoring, and applied projects are integral to student success. Cohort-driven models enhance networking and collaboration. This observational study evaluated two years of admissions data and one year of graduate data to assess program outcomes and inform the current graduate-level curricula. Descriptive statistics include means and percentages. Results: In fall 2013, the program received 51 applications. The mean GPA of the entering class of 37 students was 3.38. Ninety-seven percent of the fall 2013 cohort successfully completed the program (n=35). Sixty-six percent are currently employed in the health care industry (n=23). Of the remaining 12 graduates, two successfully matriculated to medical school, one works in the original field of study, four await results on the MCAT or DAT, and five were lost to follow-up. Attrition of one student was attributed to non-academic reasons. In fall 2014, the program expanded to include both on-ground and online cohorts. Applications were evenly distributed between on-ground (n=70) and online (n=68). Thirty-eight students enrolled in the on-ground program. The mean GPA was 3.95. Ninety-five percent of students successfully completed the program (n=36). Thirty-six students enrolled in the online program. The mean GPA was 3.85. Graduate outcomes are pending. Discussion: Challenges include demographic variability between online and on-ground students; yet both profiles are similar in that students intend to become change agents in the health care system. In the past two years, on-ground applications increased by 31%, persistence to graduation is > 95%, the mean GPA is 3.67, graduates report admission to six U.S. medical schools, the Mayo Medical School integrates SHCD content within its curricula, and there is national interest in collaborating on industry and academic partnerships. This places SHCD at the forefront of developing innovative curricula in order to improve high-value, patient-centered care.

Keywords: delivery science, education, health care delivery, high-value care, innovation in education, patient-centered

Procedia PDF Downloads 284
60 Widely Diversified Macroeconomies in the Super-Long Run Casts a Doubt on Path-Independent Equilibrium Growth Model

Authors: Ichiro Takahashi

Abstract:

One of the major assumptions of mainstream macroeconomics is the path independence of the capital stock. This paper challenges this assumption by employing an agent-based approach. The simulation results showed the existence of multiple "quasi-steady state" equilibria of the capital stock, which may cast serious doubt on the validity of the assumption. The finding would give a better understanding of many phenomena that involve hysteresis, including the causes of poverty. The "market-clearing view" has been widely shared among major schools of macroeconomics. They understand that the capital stock, the labor force, and technology determine the "full-employment" equilibrium growth path, and that demand/supply shocks can move the economy away from the path only temporarily: the dichotomy between short-run business cycles and the long-run equilibrium path. The view then implicitly assumes the long-run capital stock to be independent of how the economy has evolved. In contrast, "Old Keynesians" have recognized fluctuations in output as arising largely from fluctuations in real aggregate demand. It will then be an interesting question to ask if an agent-based macroeconomic model, which is known to have path dependence, can generate multiple full-employment equilibrium trajectories of the capital stock in the super-long run. If the answer is yes, the equilibrium level of capital stock, an important supply-side factor, would no longer be independent of the business cycle phenomenon. This paper attempts to answer the above question by using the agent-based macroeconomic model developed by Takahashi and Okada (2010). The model serves this purpose well because it has neither population growth nor technological progress. The objective of the paper is twofold: (1) to explore the causes of the long-term business cycle, and (2) to examine the super-long-run behavior of the capital stock of full-employment economies. (1) The simulated behaviors of the key macroeconomic variables, such as output, employment, and real wages, showed widely diversified macroeconomies. They were often remarkably stable but exhibited both short-term and long-term fluctuations. The long-term fluctuations occur through the following two adjustments: the quantity and relative cost adjustments of the capital stock. The first one is obvious and assumed by many business cycle theorists. The reduced aggregate demand lowers prices, which raises real wages, thereby decreasing the relative cost of capital stock with respect to labor. (2) The long-term business cycles/fluctuations were synthesized with the hysteresis of real wages, interest rates, and investments. In particular, a sequence of simulation runs with a super-long simulation period generated a wide range of perfectly stable paths, many of which achieved full employment: all the macroeconomic trajectories, including capital stock, output, and employment, were perfectly horizontal over 100,000 periods. Moreover, the full-employment level of capital stock was influenced by the history of unemployment, which was itself path-dependent. Thus, an experience of severe unemployment in the past kept the real wage low, which discouraged relatively costly investment in capital stock. Meanwhile, a history of good performance sometimes brought about a low capital stock due to a high interest rate that was consistent with strong investment.
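
The hysteresis mechanism described above, in which a spell of unemployment depresses real wages and cheap labour then discourages investment in relatively costly capital, can be illustrated with a deliberately crude toy loop. This is not the Takahashi and Okada (2010) agent-based model; every rule and parameter below is an arbitrary assumption chosen only to show that two runs with identical fundamentals but different shock histories settle on different long-run capital stocks.

```python
# Deliberately crude toy loop illustrating path dependence of the capital
# stock: a temporary demand slump lowers employment and real wages, idle
# capacity is scrapped, and the economy then stays on a lower capital path.
# This is NOT the Takahashi-Okada model; all rules and parameters are arbitrary.

def simulate(shock: range, periods: int = 1500) -> float:
    capital, wage = 100.0, 1.0
    for t in range(periods):
        demand = 0.7 if t in shock else 1.0              # temporary demand slump
        employment = min(1.0, demand * capital / 100.0)
        # Wages ratchet: they fall while jobs are scarce and recover only slowly.
        wage += 0.05 * (employment - 0.95) if employment < 0.95 else 0.002
        wage = min(max(wage, 0.5), 1.5)
        # Idle capacity is scrapped during the slump.
        capital *= 1.0 - 0.02 * (1.0 - demand)
        # Net investment is positive only while labour is expensive relative
        # to capital; otherwise investment just covers depreciation.
        capital *= 1.01 if wage > 0.9 else 1.00
        capital = min(capital, 120.0)                    # capacity ceiling
    return capital

print("no shock:   ", round(simulate(range(0)), 1))      # settles near the ceiling
print("early slump:", round(simulate(range(50, 150)), 1))  # settles on a lower path
```

The two runs share every parameter and differ only in the shock history, yet they report different long-run capital stocks, which is the qualitative point about path dependence made above.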

Keywords: agent-based macroeconomic model, business cycle, hysteresis, stability

Procedia PDF Downloads 211