Search results for: random fields
84 An Empirical Study of Determinants Influencing Telemedicine Services Acceptance by Healthcare Professionals: Case of Selected Hospitals in Ghana
Authors: Jonathan Kissi, Baozhen Dai, Wisdom W. K. Pomegbe, Abdul-Basit Kassim
Abstract:
Protecting patients' digital information is a growing concern for healthcare institutions as people nowadays increasingly rely on telemedicine services. These telemedicine services have been confronted with several determinants that hinder their successful implementation, especially in developing countries. Identifying such determinants that influence the acceptance of telemedicine services is also a problem for healthcare professionals. Despite the tremendous increase in telemedicine services, their adoption and use have been quite slow in some healthcare settings. Generally, it is accepted in today's globalizing world that the success of telemedicine services relies on users' satisfaction. Satisfying health professionals and patients is one of the crucial objectives of telemedicine success. This study seeks to investigate the determinants that influence health professionals' intention to utilize telemedicine services in clinical activities in a sub-Saharan African country in West Africa (Ghana). A hybridized model comprising health adoption models, including technology acceptance theory, diffusion of innovation theory, and protection motivation theory, was used to investigate these quandaries. The study was carried out in four government health institutions that apply and regulate telemedicine services in their clinical activities. A structured questionnaire was developed and used for data collection. Purposive and convenience sampling methods were used in the selection of healthcare professionals from different medical fields for the study. The collected data were analyzed using a structural equation modeling (SEM) approach. All selected constructs showed a significant relationship with health professionals' behavioral intention in the direction expected from prior literature, including perceived usefulness, perceived ease of use, management strategies, financial sustainability, communication channels, patients' security threat, patients' privacy risk, self-efficacy, actual service use, user satisfaction, and telemedicine service systems' security threat. Surprisingly, user characteristics and response efficacy of health professionals were not significant in the hybridized model. The findings and insights from this research show that health professionals are pragmatic when making choices for technology applications and also in their willingness to use telemedicine services. They are, however, anxious about its threats and coping appraisals. The identified significant constructs in the study may help to increase efficiency, quality of services, quality patient care delivery, and user satisfaction among healthcare professionals. The implementation and effective utilization of telemedicine services in the selected hospitals will serve as a strategy to ease hardships in healthcare service delivery. The service will help attain universal health coverage for the whole populace. This study contributes to empirical knowledge by identifying the vital factors influencing health professionals' behavioral intentions to adopt telemedicine services. The study will also help stakeholders of healthcare to formulate better policies towards telemedicine service usage.
Keywords: telemedicine service, perceived usefulness, perceived ease of use, management strategies, security threats
83 (Anti)Depressant Effects of Non-Steroidal Anti-Inflammatory Drugs in Mice
Authors: Horia Păunescu
Abstract:
Purpose: The study aimed to assess the depressant or antidepressant effects of several Nonsteroidal Anti-Inflammatory Drugs (NSAIDs) in mice: the selective cyclooxygenase-2 (COX-2) inhibitor meloxicam, and the non-selective COX-1 and COX-2 inhibitors lornoxicam, sodium metamizole, and ketorolac. The current literature data regarding such effects of these agents are scarce. Materials and methods: The study was carried out on NMRI mice weighing 20-35 g, kept in a standard laboratory environment. The study was approved by the Ethics Committee of the University of Medicine and Pharmacy "Carol Davila", Bucharest. The study agents were injected intraperitoneally at 10 mL/kg body weight (bw), 1 hour before the assessment of the locomotor activity by cage testing (n=10 mice/group) and 2 hours before the forced swimming tests (n=15). The study agents were dissolved in normal saline (meloxicam, sodium metamizole), ethanol 11.8% v/v in normal saline (ketorolac), or water (lornoxicam), respectively. Negative and positive control agents were also given (amitriptyline in the forced swimming test). The cage floor used in the locomotor activity assessment was divided into 20 equal 10 cm squares. The forced swimming test involved partial immersion of the mice in cylinders (15/9 cm height/diameter) filled with water (10 cm depth at 28 °C), where they were left for 6 minutes. The cage endpoint used in the locomotor activity assessment was the number of treaded squares. Four endpoints were used in the forced swimming test (immobility latency for the entire 6 minutes, and immobility, swimming, and climbing scores for the final 4 minutes of the swimming session), recorded by an observer who was "blinded" to the experimental design. The statistical analysis used the Levene test for variance homogeneity, ANOVA and post-hoc analysis as appropriate, Tukey or Tamhane tests. Results: No statistically significant increase or decrease in the number of treaded squares was seen in the locomotor activity assessment of any mice group. In the forced swimming test, amitriptyline showed an antidepressant effect in each experiment, at the 10 mg/kg bw dosage. Sodium metamizole was depressant at 100 mg/kg bw (increased the immobility score, p=0.049, Tamhane test), but not at lower dosages (25 and 50 mg/kg bw). Ketorolac showed an antidepressant effect at the intermediate dosage of 5 mg/kg bw, but not at the dosages of 2.5 and 10 mg/kg bw (increased the swimming score, p=0.012, Tamhane test). Meloxicam and lornoxicam did not alter the forced swimming endpoints at any dosage level. Discussion: 1) Certain NSAIDs caused changes in the forced swimming patterns without interfering with locomotion. 2) Sodium metamizole showed a depressant effect, whereas ketorolac proved to be antidepressant. Conclusion: NSAID-induced mood changes are not class effects of these agents and apparently are independent of the type of inhibited cyclooxygenase (COX-1 or COX-2). Disclosure: This paper was co-financed from the European Social Fund, through the Sectorial Operational Programme Human Resources Development 2007-2013, project number POSDRU /159 /1.5 /S /138907 "Excellence in scientific interdisciplinary research, doctoral and postdoctoral, in the economic, social and medical fields -EXCELIS", coordinator The Bucharest University of Economic Studies.
Keywords: antidepressant, depressant, forced swim, NSAIDs
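The statistical pipeline described above (Levene's test for variance homogeneity, one-way ANOVA, then Tukey or Tamhane post-hoc comparisons) can be sketched as follows. This is an illustrative Python example and not the authors' code: the group names, doses, and immobility scores are hypothetical placeholders, and Tamhane's T2 is mentioned only in a comment since it is not provided by the libraries used here.

```python
# Minimal sketch of the Levene / ANOVA / post-hoc workflow (illustrative data only).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {
    "saline":           rng.normal(120, 15, 15),   # hypothetical immobility scores
    "metamizole_100":   rng.normal(140, 15, 15),
    "ketorolac_5":      rng.normal(100, 15, 15),
    "amitriptyline_10": rng.normal(90, 15, 15),
}
samples = list(groups.values())

# 1) Homogeneity of variances decides the post-hoc test (Tukey vs. Tamhane).
lev_stat, lev_p = stats.levene(*samples)
# 2) One-way ANOVA across treatment groups.
f_stat, anova_p = stats.f_oneway(*samples)
print(f"Levene p={lev_p:.3f}, ANOVA p={anova_p:.3f}")

# 3) Post-hoc pairwise comparisons (Tukey HSD shown; Tamhane's T2 would be used
#    when Levene's test rejects equal variances).
scores = np.concatenate(samples)
labels = np.repeat(list(groups.keys()), [len(s) for s in samples])
print(pairwise_tukeyhsd(scores, labels, alpha=0.05))
```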
82 Stuck Spaces as Moments of Learning: Uncovering Threshold Concepts in Teacher Candidate Experiences of Teaching in Inclusive Classrooms
Authors: Joy Chadwick
Abstract:
There is no doubt that classrooms of today are more complex and diverse than ever before. Preparing teacher candidates to meet these challenges is essential to ensure the retention of teachers within the profession and to ensure that graduates begin their teaching careers with the knowledge and understanding of how to effectively meet the diversity of students they will encounter. Creating inclusive classrooms requires teachers to have a repertoire of effective instructional skills and strategies. Teachers must also have the mindset to embrace diversity and value the uniqueness of individual students in their care. This qualitative study analyzed teacher candidates' experiences as they completed a fourteen-week teaching practicum while simultaneously completing a university course focused on inclusive pedagogy. The research investigated the challenges and successes teacher candidates had in navigating the translation of theory related to inclusive pedagogy into their teaching practice. Applying threshold concept theory as a framework, the research explored the troublesome concepts, liminal spaces, and transformative experiences as connected to inclusive practices. Threshold concept theory suggests that within all disciplinary fields, there exist particular threshold concepts that serve as gateways or portals into previously inaccessible ways of thinking and practicing. It is in these liminal spaces that conceptual shifts in thinking and understanding and deep learning can occur. The threshold concept framework provided a lens to examine teacher candidate struggles and successes with the inclusive education course content and the application of this content to their practicum experiences. A qualitative research approach was used, which included analyzing twenty-nine course reflective journals and six follow-up one-to-one semi-structured interviews. The journals and interview transcripts were coded and themed using NVivo software. Threshold concept theory was then applied to the data to uncover the liminal or stuck spaces of learning and the ways in which the teacher candidates navigated those challenging places of teaching. The research also sought to uncover potential transformative shifts in teacher candidate understanding as connected to teaching in an inclusive classroom. The findings suggested that teacher candidates experienced difficulties when they did not feel they had the knowledge, skill, or time to meet the needs of the students in the way they envisioned they should. To navigate the frustration of this thwarted vision, they relied on present and previous course content and experiences, collaborative work with other teacher candidates and their mentor teachers, and a proactive approach to planning for students. Transformational shifts were most evident in their ability to reframe their perceptions of children from a deficit or disability lens to a strength-based belief in the potential of students. It was evident that through their course work and practicum experiences, their beliefs regarding struggling students shifted as they saw the value of embracing neurodiversity, the importance of relationships, and planning for and teaching through a strength-based approach. Research findings have implications for teacher education programs and for understanding threshold concept theory as connected to practice-based learning experiences.
Keywords: inclusion, inclusive education, liminal space, teacher education, threshold concepts, troublesome knowledge
81 Chatbots vs. Websites: A Comparative Analysis Measuring User Experience and Emotions in Mobile Commerce
Authors: Stephan Boehm, Julia Engel, Judith Eisser
Abstract:
During the last decade, communication on the Internet transformed from a broadcast to a conversational model by supporting more interactive features, enabling user-generated content and introducing social media networks. Another important trend with a significant impact on electronic commerce is a massive usage shift from desktop to mobile devices. However, a presentation of product- or service-related information accumulated on websites, micro pages or portals often remains the pivot and focal point of a customer journey. A more recent change in user behavior, especially in younger user groups and in Asia, goes along with the increasing adoption of messaging applications supporting almost real-time but asynchronous communication on mobile devices. Mobile apps of this type can not only provide an alternative for traditional one-to-one communication on mobile devices like voice calls or short messaging service. Moreover, they can be used in mobile commerce as a new marketing and sales channel, e.g., for product promotions and direct marketing activities. This requires a new way of customer interaction compared to traditional mobile commerce activities and functionalities provided based on mobile websites. One option better aligned with the customer interaction in messaging apps is so-called chatbots. Chatbots are conversational programs or dialog systems simulating a text or voice based human interaction. They can be introduced in mobile messaging and social media apps by using rule- or artificial intelligence-based implementations. In this context, a comparative analysis is conducted to examine the impact of using traditional websites or chatbots for promoting a product in an impulse purchase situation. The aim of this study is to measure the impact on the customers' user experience and emotions. The study is based on a random sample of about 60 smartphone users in the group of 20- to 30-year-olds. Participants are randomly assigned into two groups and participate in a traditional website or innovative chatbot-based mobile commerce scenario. The chatbot-based scenario is implemented by using a Wizard-of-Oz experimental approach for reasons of simplicity and to allow for more flexibility when simulating simple rule-based and more advanced artificial intelligence-based chatbot setups. A specific set of metrics is defined to measure and compare the user experience in both scenarios. It can be assumed that users get more emotionally involved when interacting with a system simulating human communication behavior instead of browsing a mobile commerce website. For this reason, innovative face-tracking and analysis technology is used to derive feedback on the emotional status of the study participants while interacting with the website or the chatbot. This study is a work in progress. The results will provide first insights on the effects of chatbot usage on user experiences and emotions in mobile commerce environments. Based on the study findings, basic requirements for a user-centered design and implementation of chatbot solutions for mobile commerce can be derived. Moreover, first indications on situations where chatbots might be favorable in comparison to the usage of traditional website-based mobile commerce can be identified.
Keywords: chatbots, emotions, mobile commerce, user experience, Wizard-of-Oz prototyping
80 Fabrication of Antimicrobial Dental Model Using Digital Light Processing (DLP) Integrated with 3D-Bioprinting Technology
Authors: Rana Mohamed, Ahmed E. Gomaa, Gehan Safwat, Ayman Diab
Abstract:
Background: Bio-fabrication is a multidisciplinary research field that combines several principles, fabrication techniques, and protocols from different fields. The open-source-software movement supports the use of open-source licenses for some or all software as part of the broader notion of open collaboration. Additive manufacturing (AM) is the concept behind 3D printing: a manufacturing method that builds objects layer by layer from computer-aided designs (CAD). Several types of AM systems are used, and they can be categorized by the type of process employed. One of these AM technologies is digital light processing (DLP), a 3D printing technology used to rapidly cure a photopolymer resin to create hard scaffolds. DLP uses a projected light source to cure (harden or crosslink) the entire layer at once. Current applications of DLP are focused on dental and medical applications. Other developments have been made in this field, leading to the revolutionary field of 3D bioprinting. The open-source movement was started to spread the concept of open-source software and to provide software or hardware that is cheaper, more reliable, and of better quality. Objective: Modification of a desktop 3D printer into a 3D bioprinter and the integration of DLP technology and bio-fabrication to produce an antibacterial dental model. Method: Modification of a desktop 3D printer into a 3D bioprinter. Gelatin hydrogel and sodium alginate hydrogel were prepared in different concentrations. Rhizomes of Zingiber officinale, flower buds of Syzygium aromaticum, and bulbs of Allium sativum were extracted, and extracts at different levels (powder, aqueous extracts, total oils, and essential oils) were prepared for antibacterial bioactivity testing. The agar well diffusion method, along with E. coli, was used to perform the sensitivity test for the antibacterial activity of the extracts obtained from Zingiber officinale, Syzygium aromaticum, and Allium sativum. Lastly, DLP printing was performed to produce several dental models with the natural extracts combined with hydrogel to represent and simulate the hard and soft tissues. Result: The desktop 3D printer was modified into a 3D bioprinter using the open-source software Marlin and modified custom-made 3D-printed parts. Sodium alginate hydrogel and gelatin hydrogel were prepared at 5% (w/v), 10% (w/v), and 15% (w/v). Resin integration with the natural extracts of rhizomes of Zingiber officinale, flower buds of Syzygium aromaticum, and bulbs of Allium sativum was done at 1-3% for each extract. Finally, the antimicrobial dental model, exhibiting antimicrobial activity, was printed and then merged with sodium alginate hydrogel. Conclusion: The open-source approach was successful in modifying and producing a low-cost desktop 3D bioprinter, showing the potential for further enhancement in this scope. Additionally, integrating DLP technology with bioprinting is a promising step toward exploiting the antimicrobial activity of natural products.
Keywords: 3D printing, 3D bio-printing, DLP, hydrogel, antibacterial activity, zingiber officinale, syzygium aromaticum, allium sativum, panax ginseng, dental applications
79 Ethical Decision-Making in AI and Robotics Research: A Proposed Model
Authors: Sylvie Michel, Emmanuelle Gagnou, Joanne Hamet
Abstract:
Researchers in the fields of AI and Robotics frequently encounter ethical dilemmas throughout their research endeavors. Various ethical challenges have been pinpointed in the existing literature, including biases and discriminatory outcomes, diffusion of responsibility, and a deficit in transparency within AI operations. This research aims to pinpoint these ethical quandaries faced by researchers and shed light on the mechanisms behind ethical decision-making in the research process. By synthesizing insights from existing literature and acknowledging prevalent shortcomings, such as overlooking the heterogeneous nature of decision-making, non-accumulative results, and a lack of consensus on numerous factors due to limited empirical research, the objective is to conceptualize and validate a model. This model will incorporate influences from individual perspectives and situational contexts, considering potential moderating factors in the ethical decision-making process. Qualitative analyses were conducted based on direct observation of an AI/Robotics research team focusing on collaborative robotics for several months. Subsequently, semi-structured interviews with 16 team members were conducted. The entire process took place during the first semester of 2023. Observations were analyzed using an analysis grid, and the interviews underwent thematic analysis using NVivo software. An initial finding involves identifying the ethical challenges that AI/robotics researchers confront, underlining a disparity between practical applications and theoretical considerations regarding ethical dilemmas in the realm of AI. Notably, researchers in AI prioritize the publication and recognition of their work, sparking the genesis of these ethical inquiries. Furthermore, this article illustrates that researchers tend to embrace a consequentialist ethical framework concerning safety (for humans engaging with robots/AI), worker autonomy in relation to robots, and the societal implications of labor (can robots displace jobs?). A second significant contribution entails proposing a model for ethical decision-making within the AI/Robotics research sphere. The model proposed adopts a process-oriented approach, delineating various research stages (topic proposal, hypothesis formulation, experimentation, conclusion, and valorization). Across these stages and the ethical queries they entail, a comprehensive four-point understanding of ethical decision-making is presented: recognition of the moral quandary; moral judgment, signifying the decision-maker's aptitude to discern the morally righteous course of action; moral intention, reflecting the ability to prioritize moral values above others; and moral behavior, denoting the application of moral intention to the situation. Variables such as political inclinations ((anti)-capitalism, environmentalism, veganism) seem to wield significant influence. Moreover, age emerges as a noteworthy moderating factor. AI and robotics researchers are continually confronted with ethical dilemmas during their research endeavors, necessitating thoughtful decision-making. The contribution involves introducing a contextually tailored model, derived from meticulous observations and insightful interviews, enabling the identification of factors that shape ethical decision-making at different stages of the research process.
Keywords: ethical decision making, artificial intelligence, robotics, research
78 Navigating the Future: Evaluating the Market Potential and Drivers for High-Definition Mapping in the Autonomous Vehicle Era
Authors: Loha Hashimy, Isabella Castillo
Abstract:
In today's rapidly evolving technological landscape, the importance of precise navigation and mapping systems cannot be overstated. As various sectors undergo transformative changes, the market potential for Advanced Mapping and Management Systems (AMMS) emerges as a critical focus area. The Galileo/GNSS-Based Autonomous Mobile Mapping System (GAMMS) project, specifically targeted toward high-definition mapping (HDM), endeavours to provide insights into this market within the broader context of the geomatics and navigation fields. With the growing integration of Autonomous Vehicles (AVs) into our transportation systems, the relevance and demand for sophisticated mapping solutions like HDM have become increasingly pertinent. The research employed a meticulous, lean, stepwise, and interconnected methodology to ensure a comprehensive assessment. Beginning with the identification of pivotal project results, the study progressed into a systematic market screening. This was complemented by an exhaustive desk research phase that delved into existing literature, data, and trends. To ensure the holistic validity of the findings, extensive consultations were conducted. Academia and industry experts provided invaluable insights through interviews, questionnaires, and surveys. This multi-faceted approach facilitated a layered analysis, juxtaposing secondary data with primary inputs, ensuring that the conclusions were both accurate and actionable. Our investigation unearthed a plethora of drivers steering the HD maps landscape. These ranged from technological leaps, nuanced market demands, and influential economic factors to overarching socio-political shifts. The meteoric rise of AVs and the shift towards app-based transportation solutions, such as Uber, stood out as significant market pull factors. A nuanced PESTEL analysis further enriched our understanding, shedding light on political, economic, social, technological, environmental, and legal facets influencing the HD maps market trajectory. Simultaneously, potential roadblocks were identified. Notable among these were barriers related to high initial costs, concerns around data quality, and the challenges posed by a fragmented and evolving regulatory landscape. The GAMMS project serves as a beacon, illuminating the vast opportunities that lie ahead for the HD mapping sector. It underscores the indispensable role of HDM in enhancing navigation, ensuring safety, and providing pinpoint, accurate location services. As our world becomes more interconnected and reliant on technology, HD maps emerge as a linchpin, bridging gaps and enabling seamless experiences. The research findings accentuate the imperative for stakeholders across industries to recognize and harness the potential of HD mapping, especially as we stand on the cusp of a transportation revolution heralded by Autonomous Vehicles and advanced geomatic solutions.
Keywords: high-definition mapping (HDM), autonomous vehicles, PESTEL analysis, market drivers
77 Describing Cognitive Decline in Alzheimer's Disease via a Picture Description Writing Task
Authors: Marielle Leijten, Catherine Meulemans, Sven De Maeyer, Luuk Van Waes
Abstract:
For the diagnosis of Alzheimer's disease (AD), a large variety of neuropsychological tests are available. In some of these tests, linguistic processing - both oral and written - is an important factor. Language disturbances might serve as a strong indicator for an underlying neurodegenerative disorder like AD. However, the current diagnostic instruments for language assessment mainly focus on product measures, such as text length or number of errors, ignoring the importance of the process that leads to written or spoken language production. In this study, it is our aim to describe and test differences between cognitively healthy and cognitively impaired elderly on the basis of a selection of writing process variables (inter- and intrapersonal characteristics). These process variables are mainly related to pause times, because the number, length, and location of pauses have proven to be an important indicator of the cognitive complexity of a process. Method: Participants enrolled in our research were chosen on the basis of a number of basic criteria necessary to collect reliable writing process data. Furthermore, we opted to match the thirteen cognitively impaired patients (8 MCI and 5 AD) with thirteen cognitively healthy elderly. At the start of the experiment, participants were each given a number of tests, such as the Mini-Mental State Examination (MMSE), the Geriatric Depression Scale (GDS), the forward and backward digit span and the Edinburgh Handedness Inventory (EHI). Also, a questionnaire was used to collect socio-demographic information (age, gender, education) of the subjects as well as more details on their level of computer literacy. The tests and questionnaire were followed by two typing tasks and two picture description tasks. For the typing tasks, participants had to copy (type) characters, words and sentences from a screen, whereas the picture description tasks each consisted of an image they had to describe in a few sentences. Both the typing and the picture description tasks were logged with Inputlog, a keystroke logging tool that allows us to log and time-stamp keystroke activity to reconstruct and describe text production processes. The main rationale behind keystroke logging is that writing fluency and flow reveal traces of the underlying cognitive processes. This explains the analytical focus on pause (length, number, distribution, location, etc.) and revision (number, type, operation, embeddedness, location, etc.) characteristics. As in speech, pause times are seen as indexical of cognitive effort. Results: Preliminary analysis already showed some promising results concerning pause times before, within and after words. For all variables, mixed effects models were used that included participants as a random effect and MMSE scores, GDS scores and word categories (such as determiners and nouns) as fixed effects. For pause times before and after words, cognitively impaired patients paused longer than healthy elderly. These variables did not show an interaction effect between the group that participants belonged to (cognitively impaired or healthy elderly) and word categories. However, pause times within words did show an interaction effect, which indicates that pause times within certain word categories differ significantly between patients and healthy elderly.
Keywords: Alzheimer's disease, keystroke logging, matching, writing process
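For readers unfamiliar with the mixed-effects setup described above (participants as a random effect; group, word category, MMSE and GDS scores as fixed effects), a minimal Python sketch using statsmodels is given below. The data and column names are synthetic placeholders; this is not the study's actual analysis script.

```python
# Minimal sketch of a mixed-effects model of pause times (illustrative data only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600  # hypothetical pause events
df = pd.DataFrame({
    "participant": rng.integers(1, 27, n).astype(str),
    "group": rng.choice(["impaired", "healthy"], n),
    "word_category": rng.choice(["determiner", "noun", "verb"], n),
    "mmse": rng.integers(20, 30, n),
    "gds": rng.integers(0, 10, n),
})
df["pause_ms"] = 400 + 80 * (df["group"] == "impaired") + rng.normal(0, 60, n)

# Random intercept per participant; group x word-category interaction as fixed effects.
model = smf.mixedlm("pause_ms ~ group * word_category + mmse + gds",
                    data=df, groups=df["participant"])
result = model.fit()
print(result.summary())  # the interaction term tests group differences per word category
```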
76 Inverse Problem Method for Microwave Intrabody Medical Imaging
Authors: J. Chamorro-Servent, S. Tassani, M. A. Gonzalez-Ballester, L. J. Roca, J. Romeu, O. Camara
Abstract:
Electromagnetic and microwave imaging (MWI) have been used in medical imaging in recent years, with the most common applications being breast cancer and stroke detection or monitoring. In those applications, the subject or zone to observe is surrounded by a number of antennas, and the Nyquist criterion can be satisfied. Additionally, the space between the antennas (transmitting and receiving the electromagnetic fields) and the zone to study can be prepared in a homogeneous scenario. However, this may differ in other cases as could be intracardiac catheters, stomach monitoring devices, pelvic organ systems, liver ablation monitoring devices, or uterine fibroids' ablation systems. In this work, we analyzed different MWI algorithms to find the most suitable method for dealing with an intrabody scenario. Due to the space limitations usually confronted in those applications, the device would have a cylindrical configuration of a maximum of eight transmitter and eight receiver antennas. This, together with the positioning of the supposed device inside a body tract, imposes additional constraints on the choice of a reconstruction method; for instance, it inhibits the use of well-known algorithms such as filtered backpropagation for diffraction tomography (due to the unusual configuration with probes enclosed by the imaging region). Finally, the difficulty of simulating a realistic non-homogeneous background inside the body (due to the incomplete knowledge of the dielectric properties of other tissues between the antennas' position and the zone to observe) also prevents the use of Born and Rytov algorithms due to their limitations with a heterogeneous background. Instead, we decided to use a time-reversed algorithm (mostly used in geophysics) due to its characteristics of ignoring heterogeneities in the background medium, and of focusing its generated field onto the scatterers. Therefore, a 2D time-reversed finite-difference time-domain algorithm was developed based on the time-reversed approach for microwave breast cancer detection. Simultaneously, an in-silico testbed was also developed to compare ground-truth dielectric properties with the corresponding microwave imaging reconstruction. Forward and inverse problems were computed varying: the frequency used related to a small zone to observe (7, 7.5 and 8 GHz); a small polyp diameter (5, 7 and 10 mm); two polyp positions with respect to the closest antenna (aligned or misaligned); and the (transmitters-to-receivers) antenna combination used for the reconstruction (1-1, 8-1, 8-8 or 8-3). Results indicate that when using the existing time-reversed method for breast cancer detection here for the different combinations of transmitters and receivers, we found false positives due to the high degrees of freedom and unusual configuration (and the possible violation of the Nyquist criterion). Those false positives, found in the 8-1 and 8-8 combinations, were highly reduced with the 1-1 and 8-3 combinations, the 8-3 configuration being the most suitable (three neighboring receivers at each time). The 8-3 configuration creates a region-of-interest reduced problem, decreasing the ill-posedness of the inverse problem. To conclude, the proposed algorithm solves the main limitations of the described intrabody application, successfully detecting the angular position of targets inside the body tract.
Keywords: FDTD, time-reversed, medical imaging, microwave imaging
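To illustrate the time-reversal idea described above, the following is a highly simplified, conceptual 2D FDTD sketch in Python (normalized units, no absorbing boundaries, an assumed circular eight-receiver layout, and a hypothetical dielectric "polyp"). It is not the algorithm implemented in the study; it only shows how scattered fields recorded at the receivers can be re-injected in reverse time so that the back-propagated energy tends to concentrate near the scatterer.

```python
# Conceptual 2D TM-z FDTD time-reversal sketch (illustrative assumptions throughout).
import numpy as np

nx = ny = 120
steps = 500
dt = 0.5                                   # Courant-stable for dx = dy = c = 1
rx = [(int(60 + 40 * np.cos(a)), int(60 + 40 * np.sin(a)))
      for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)]   # assumed receiver ring
src = rx[0]                                # one antenna also acts as transmitter

eps_bg = np.ones((nx, ny))                 # assumed homogeneous background
eps_obj = eps_bg.copy()
eps_obj[70:76, 70:76] = 4.0                # hypothetical polyp (dielectric scatterer)

def run(eps, source=None, injected=None):
    Ez = np.zeros((nx, ny)); Hx = np.zeros((nx, ny - 1)); Hy = np.zeros((nx - 1, ny))
    rec = np.zeros((len(rx), steps)); energy = np.zeros((nx, ny))
    for t in range(steps):
        Hx -= dt * (Ez[:, 1:] - Ez[:, :-1])                      # dHx/dt = -dEz/dy
        Hy += dt * (Ez[1:, :] - Ez[:-1, :])                      # dHy/dt = +dEz/dx
        curl_h = (Hy[1:, 1:-1] - Hy[:-1, 1:-1]) - (Hx[1:-1, 1:] - Hx[1:-1, :-1])
        Ez[1:-1, 1:-1] += dt / eps[1:-1, 1:-1] * curl_h          # dEz/dt = curl(H)/eps
        if source is not None:
            Ez[src] += source[t]                                  # forward excitation
        if injected is not None:
            for k, (i, j) in enumerate(rx):
                Ez[i, j] += injected[k, t]                        # time-reversed re-injection
        for k, (i, j) in enumerate(rx):
            rec[k, t] = Ez[i, j]
        energy += Ez ** 2                                         # accumulated field energy
    return rec, energy

pulse = np.exp(-((np.arange(steps) - 60) / 15.0) ** 2)            # Gaussian excitation
rec_total, _ = run(eps_obj, source=pulse)                          # forward, with scatterer
rec_incident, _ = run(eps_bg, source=pulse)                        # forward, background only
scattered = rec_total - rec_incident                               # isolate the scattered field
_, focus = run(eps_bg, injected=scattered[:, ::-1])                # back-propagate it reversed

img = focus.copy()
for (i, j) in rx:                                                  # mask the injection points
    img[max(i - 2, 0):i + 3, max(j - 2, 0):j + 3] = 0.0
print("energy peak (expected near the scatterer):", np.unravel_index(img.argmax(), img.shape))
```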
75 Company-Independent Standardization of Timber Construction to Promote Urban Redensification of Housing Stock
Authors: Andreas Schweiger, Matthias Gnigler, Elisabeth Wieder, Michael Grobbauer
Abstract:
Especially in the alpine region, available areas for new residential development are limited. One possible solution is to exploit the potential of existing settlements. Urban redensification, especially the addition of floors to existing buildings, requires efficient, lightweight constructions with short construction times. This topic is being addressed in the five-year Alpine Building Centre. The focus of this cooperation between Salzburg University of Applied Sciences and RSA GH Studio iSPACE is on transdisciplinary research in the fields of building and energy technology, building envelopes and geoinformation, as well as the transfer of research results to industry. One development objective is a wood panel construction system with a high degree of prefabrication to optimize the construction quality, the construction time and the applicability for small and medium-sized enterprises. The system serves as a reliable working basis for mastering the complex building task of redensification. The technical solution is the development of an open system in timber frame and solid wood construction, which is suitable for a maximum two-storey addition to residential buildings. The applicability of the system is mainly influenced by the existing building stock. Therefore, timber frame and solid timber construction are combined where necessary to bridge large spans of the existing structure while keeping the dead weight as low as possible. Escape routes are usually constructed in reinforced concrete and are located outside the system boundary. Thus, within the framework of the legal and normative requirements of timber construction, a hybrid construction method for redensification was created. Component structure, load-bearing structure and detail constructions are developed in accordance with the relevant requirements. The results are directly applicable in individual cases, with the exception of the required verifications. In order to verify the practical suitability of the developed system, stakeholder workshops are held on the one hand, and the system is applied in the planning of a two-storey extension on the other hand. A company-independent construction standard offers the possibility of cooperation and bundling of capacities in order to be able to handle larger construction volumes in collaboration with several companies. Numerous further developments can take place on the basis of the system, which is under an open license. The construction system will support planners and contractors from design to execution. In this context, open means publicly published and freely usable and modifiable for one's own use as long as the authorship and deviations are mentioned. The companies are provided with a system manual, which contains the system description and an application manual. This manual will facilitate the selection of the correct component cross-sections for specific construction projects by means of all component and detail specifications. This presentation highlights the initial situation, the motivation and the approach, but especially the technical solution, as well as the possibilities for application. After an explanation of the objectives and working methods, the component and detail specifications are presented as work results, together with their application.
Keywords: redensification, SME, urban development, wood building system
74 Raman Spectral Fingerprints of Healthy and Cancerous Human Colorectal Tissues
Authors: Maria Karnachoriti, Ellas Spyratou, Dimitrios Lykidis, Maria Lambropoulou, Yiannis S. Raptis, Ioannis Seimenis, Efstathios P. Efstathopoulos, Athanassios G. Kontos
Abstract:
Colorectal cancer is the third most common cancer diagnosed in Europe, according to the latest incidence data provided by the World Health Organization (WHO), and early diagnosis has proved to be the key in reducing cancer-related mortality. In cases where surgical interventions are required for cancer treatment, the accurate discrimination between healthy and cancerous tissues is critical for the postoperative care of the patient. The current study focuses on the ex vivo handling of surgically excised colorectal specimens and the acquisition of their spectral fingerprints using Raman spectroscopy. Acquired data were analyzed in an effort to discriminate, on a microscopic scale, between healthy and malignant margins. Raman spectroscopy is a spectroscopic technique with high detection sensitivity and a spatial resolution of a few micrometers. The spectral fingerprint which is produced during laser-tissue interaction is unique and characterizes the biostructure and its inflammatory or cancer state. Numerous published studies have demonstrated the potential of the technique as a tool for the discrimination between healthy and malignant tissues/cells either ex vivo or in vivo. However, the handling of the excised human specimens and the Raman measurement conditions remain challenging, unavoidably affecting measurement reliability and repeatability, as well as the technique's overall accuracy and sensitivity. Therefore, tissue handling has to be optimized and standardized to ensure preservation of cell integrity and hydration level. Various strategies have been implemented in the past, including the use of balanced salt solutions, small humidifiers or pump-reservoir-pipette systems. In the current study, human colorectal specimens of 10 x 5 mm have so far been collected from 5 patients who underwent open surgery for colorectal cancer. A novel, non-toxic zinc-based fixative (Z7) was used for tissue preservation. Z7 demonstrates excellent protein preservation and protection against tissue autolysis. Micro-Raman spectra were recorded with a Renishaw Invia spectrometer from successive random 2-micrometer spots upon excitation at 785 nm to decrease the fluorescence background and avoid tissue photodegradation. A temperature-controlled approach was adopted to stabilize the tissue at 2 °C, thus minimizing dehydration effects and consequent focus drift during measurement. A broad spectral range, 500-3200 cm⁻¹, was covered with five consecutive full scans that lasted for 20 minutes in total. The average spectra were used for least-squares fitting analysis of the Raman modes. Subtle Raman differences were observed between normal and cancerous colorectal tissues, mainly in the intensities of the 1556 cm⁻¹ and 1628 cm⁻¹ Raman modes, which correspond to ν(C=C) vibrations in porphyrins, as well as in the range of 2800-3000 cm⁻¹ due to CH₂ stretching of lipids and CH₃ stretching of proteins. Raman spectra evaluation was supported by histological findings from twin specimens. This study demonstrates that Raman spectroscopy may constitute a promising tool for real-time verification of clear margins in colorectal cancer open surgery.
Keywords: colorectal cancer, Raman spectroscopy, malignant margins, spectral fingerprints
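As a concrete illustration of the least-squares fitting of Raman modes mentioned above, the following Python sketch fits two Lorentzian bands near 1556 cm⁻¹ and 1628 cm⁻¹ on a linear baseline. The spectrum here is synthetic and the peak parameters are placeholders; it is not the study's processing code.

```python
# Lorentzian peak fitting of two Raman bands on a synthetic spectrum (illustrative).
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, cen, wid):
    return amp * wid**2 / ((x - cen)**2 + wid**2)

def model(x, a1, c1, w1, a2, c2, w2, b0, b1):
    # Two Lorentzian peaks plus a linear baseline.
    return lorentzian(x, a1, c1, w1) + lorentzian(x, a2, c2, w2) + b0 + b1 * x

shift = np.linspace(1500, 1700, 400)                     # Raman shift axis, cm^-1
rng = np.random.default_rng(1)
spectrum = model(shift, 90, 1556, 8, 60, 1628, 10, 5, 0.01) + rng.normal(0, 2, shift.size)

p0 = [80, 1556, 10, 50, 1628, 10, 0, 0]                  # initial guesses from known band positions
popt, pcov = curve_fit(model, shift, spectrum, p0=p0)
for name, val in zip(["amp1", "cen1", "wid1", "amp2", "cen2", "wid2"], popt[:6]):
    print(f"{name} = {val:.2f}")
# The fitted peak intensities (amp1, amp2) are the kind of quantity compared between
# healthy and cancerous tissue spectra in the study.
```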
73 Understanding New Zealand's 19th Century Timber Churches: Techniques in Extracting and Applying Underlying Procedural Rules
Authors: Samuel McLennan, Tane Moleta, Andre Brown, Marc Aurel Schnabel
Abstract:
The development of Ecclesiastical buildings within New Zealand has produced some unique design characteristics that take influence from both international styles and local building methods. What this research looks at is how procedural modelling can be used to define such common characteristics and understand how they are shared and developed within different examples of a similar architectural style. This will be achieved through the creation of procedural digital reconstructions of the various timber Gothic churches built during the 19th century in the city of Wellington, New Zealand. 'Procedural modelling' is a digital modelling technique that has been growing in popularity, particularly within the game and film industry, as well as other fields such as industrial design and architecture. Such a design method entails the creation of a parametric 'ruleset' that can be easily adjusted to produce many variations of geometry, rather than a single geometry as is typically found in traditional CAD software. Key precedents within this area of digital heritage include work by Haegler, Müller, and Gool, Nicholas Webb and Andre Brown, and most notably Mark Burry. What these precedents all share is how the forms of the reconstructed architecture have been generated using computational rules and an understanding of the architects' geometric reasoning. This is also true within this research, as Gothic architecture makes use of only a select range of forms (such as the pointed arch) that can be accurately replicated using the same standard geometric techniques originally used by the architect. The methodology of this research involves firstly establishing a sample group of similar buildings, documenting the existing samples, researching any lost samples to find evidence such as architectural plans, photos, and written descriptions, and then consolidating all the findings into a single 3D procedural asset within the software 'Houdini'. The end result will be an adjustable digital model that contains all the architectural components of the sample group, such as the various naves, buttresses, and windows. These components can then be selected and arranged to create visualisations of the sample group. Because timber Gothic churches in New Zealand share many details between designs, the created collection of architectural components can also be used to approximate similar designs not included in the sample group, such as designs found beyond the Wellington Region. This creates an initial library of architectural components that can be further expanded on to encapsulate as wide a sample size as desired. Such a methodology greatly improves upon the efficiency and adjustability of digital modelling compared to current practices found in digital heritage reconstruction. It also gives greater accuracy to speculative design, as lost structures with little surviving evidence can be approximated using components from still-existing or better-documented examples. This research will also bring attention to the cultural significance these types of buildings have within the local area, addressing the public's general unawareness of architectural history that is identified in the Wellington-based research 'Moving Images in Digital Heritage' by Serdar Aydin et al.
Keywords: digital forensics, digital heritage, gothic architecture, Houdini, procedural modelling
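As a small illustration of the kind of geometric rule such a procedural ruleset encodes, the Python sketch below generates the profile of a two-centred pointed (Gothic) arch from a span and a radius parameter. It is a hypothetical stand-alone example, not part of the project's Houdini asset.

```python
# Parametric generation of a two-centred pointed arch profile (illustrative rule).
import numpy as np

def pointed_arch(span: float, radius: float, n: int = 24) -> np.ndarray:
    """Polyline of a pointed arch with springing points at (0, 0) and (span, 0).

    radius must be >= span / 2; radius == span gives the classic equilateral Gothic
    arch, while smaller values flatten the arch towards a semicircle.
    """
    assert radius >= span / 2
    height = np.sqrt(radius**2 - (radius - span / 2) ** 2)      # apex height
    apex_angle = np.arctan2(height, span / 2 - radius)          # apex seen from centre (radius, 0)
    # Left arc: centred at (radius, 0), swept from the left springing point to the apex.
    t = np.linspace(np.pi, apex_angle, n)
    left = np.column_stack([radius + radius * np.cos(t), radius * np.sin(t)])
    # Right arc: mirror image of the left arc about the vertical line x = span / 2.
    right = left[::-1].copy()
    right[:, 0] = span - right[:, 0]
    return np.vstack([left, right[1:]])                          # drop the duplicated apex

# Varying one rule parameter produces a family of arch profiles, in the spirit of a
# procedural ruleset (a Houdini asset would expose 'radius' as an adjustable parameter).
for factor in (0.5, 0.75, 1.0):
    pts = pointed_arch(span=2.0, radius=factor * 2.0)
    print(f"radius = {factor} x span -> apex height {pts[:, 1].max():.3f}")
```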
72 Common Space Production as a Solution to the Affordable Housing Problem: Its Relationship with the Squatting Process in Turkey
Authors: Gözde Arzu Sarıcan
Abstract:
Contemporary urbanization processes and spatial transformations are intensely debated across various fields of social sciences. One prominent concept in these discussions is "common spaces." Common spaces offer a critical theoretical framework, particularly for addressing the social and economic inequalities brought about by urbanization. This study examines the processes of commoning and their impacts through the lens of squatter neighborhoods in Turkey, emphasizing the importance of affordable housing. It focuses on the role and significance of these neighborhoods in the formation of common spaces, analyzing the collective actions and resistance strategies of residents. This process, which began with the construction of shelters to meet the shelter needs of low-income households migrating from rural to urban areas, has turned into low-quality squatter settlements over time. For low-income households lacking the economic power to rent or buy homes in the city, these areas provided an affordable housing solution. Squatter neighborhoods reflect the efforts of local communities to protect and develop their communal living spaces through collective actions and resistance strategies. This collective creation process involves the appropriation of occupied land as a common resource through the rules established by the commons. Organized occupations subdivide these lands, shaped through collective creation processes. For the squatter communities striving for economic and social adaptation, these areas serve as buffer zones for urban integration. In squatter neighborhoods, bonds of friendship, kinship, and compatriotism are strong, playing a significant role in the creation and dissemination of collective knowledge. Squatter areas can be described as common spaces that emerge out of necessity for low-income and marginalized groups. The design and construction of housing in squatter neighborhoods are shaped by the collective participation and skills of the residents. Streets are formed through collective decision-making and labor. Over time, the demands for housing are communicated to local authorities, enhancing the potential for commoning. Common spaces are shaped by collective needs and demands, appropriated, and transformed into potential new spaces. Common spaces are continually redefined and recreated. In this context, affordable housing becomes an essential aspect of these common spaces, providing a foundation for social and economic stability. This study evaluates the processes of commoning and their effects through the lens of squatter neighborhoods in Turkey. Communities living in squatter neighborhoods have managed to create and protect communal living spaces, especially in situations where official authorities have been inadequate. Common spaces are built on values such as solidarity, cooperation, and collective resistance. In urban planning and policy development processes, it is crucial to consider the concept of common spaces. Policies that support the collective efforts and resistance strategies of communities can contribute to more just and sustainable living conditions in urban areas. In this context, the concept of common spaces is considered an important tool in the fight against urban inequalities and in the expression and defense mechanisms of communities. 
By emphasizing the importance of affordable housing within these spaces, this study highlights the critical role of common spaces in addressing urban social and economic challenges.
Keywords: affordable housing, common space, squatting process, Turkey
71 An Infrared Inorganic Scintillating Detector Applied in Radiation Therapy
Authors: Sree Bash Chandra Debnath, Didier Tonneau, Carole Fauquet, Agnes Tallet, Julien Darreon
Abstract:
Purpose: Inorganic scintillating dosimetry is the most recent promising technique to solve several dosimetric issues and provide quality assurance in radiation therapy. Despite several advantages, the major issue of using scintillating detectors is the Cerenkov effect, typically induced in the visible emission range. In this context, the purpose of this research work is to evaluate the performance of a novel infrared inorganic scintillator detector (IR-ISD) in radiation therapy treatment to ensure a Cerenkov-free signal and the best match between the delivered and prescribed doses during treatment. Methods: A simple and small-scale infrared inorganic scintillating detector of 100 µm diameter with a sensitive scintillating volume of 2 x 10⁻⁶ mm³ was developed. A prototype of the dose verification system has been introduced based on the PTIR1470/F material (provided by Phosphor Technology®) used in the proposed novel IR-ISD. The detector was tested on an Elekta LINAC system tuned at 6 MV/15 MV and a brachytherapy source (Ir-192) used in the patient treatment protocol. The associated dose rate was measured as a count rate (photons/s) using a highly sensitive photon counter (sensitivity ~20 ph/s). Overall measurements were performed in IBA water tank phantoms by following international Technical Reports Series recommendations (TRS 381) for radiotherapy and TG43U1 recommendations for brachytherapy. The performance of the detector was tested through several dosimetric parameters such as PDD, beam profiling, Cerenkov measurement, dose linearity, dose rate linearity, repeatability, and scintillator stability. Finally, a comparative study is also shown using a reference microdiamond dosimeter, Monte-Carlo (MC) simulation, and data from recent literature. Results: This study highlights the complete removal of the Cerenkov effect, especially for small-field radiation beam characterization. The detector provides an entirely linear response with the dose in the 4 cGy to 800 cGy range, independently of the field size selected, from 5 x 5 cm² down to 0.5 x 0.5 cm². A perfect repeatability (0.2% variation from average) with day-to-day reproducibility (0.3% variation) was observed. Measurements demonstrated that the ISD has a superlinear behavior with dose rate (R² = 1) varying from 50 cGy/s to 1000 cGy/s. PDD profiles obtained in water present identical behavior with a build-up maximum depth dose at 15 mm for different small-field irradiations. Low-dimension 0.5 x 0.5 cm² field profiles have been characterized, and the field cross profile presents a Gaussian-like shape. The standard deviation (1σ) of the scintillating signal remains within 0.02% while having a very low convolution effect, thanks to the small sensitive volume. Finally, during brachytherapy, a comparison with MC simulations shows that, considering energy dependency, measurements agree within 0.8% down to a 0.2 cm source-to-detector distance. Conclusion: The proposed scintillating detector in this study shows no Cerenkov radiation and efficient performance for several radiation therapy measurement parameters. Therefore, it is anticipated that the IR-ISD system can be promoted to validation in direct clinical investigations, such as appropriate dose verification and quality control in the Treatment Planning System (TPS).
Keywords: IR-Scintillating detector, dose measurement, micro-scintillators, Cerenkov effect
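The dose linearity and repeatability figures quoted above reduce to a simple regression of count rate against delivered dose plus a spread statistic over repeated deliveries. The Python sketch below shows that check with made-up numbers; it does not reproduce the reported measurements.

```python
# Dose linearity and repeatability check for a counting detector (illustrative values).
import numpy as np
from scipy import stats

dose_cGy = np.array([4, 10, 50, 100, 200, 400, 800], dtype=float)          # hypothetical points
counts = 2.0e3 * dose_cGy * (1 + np.random.default_rng(2).normal(0, 0.002, dose_cGy.size))

fit = stats.linregress(dose_cGy, counts)                                    # linearity of response
print(f"slope = {fit.slope:.1f} counts/cGy, R^2 = {fit.rvalue**2:.5f}")

repeats = 2.0e5 * (1 + np.random.default_rng(3).normal(0, 0.002, 10))       # 10 repeated deliveries
print(f"repeatability = {100 * repeats.std() / repeats.mean():.2f} % (1 sigma)")
```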
70 Wideband Performance Analysis of C-FDTD Based Algorithms in the Discretization Impoverishment of a Curved Surface
Authors: Lucas L. L. Fortes, Sandro T. M. Gonçalves
Abstract:
In this work, the wideband performance under mesh discretization impoverishment of the Conformal Finite Difference Time-Domain (C-FDTD) approaches developed by Raj Mittra, Supriyo Dey and Wenhua Yu for the Finite Difference Time-Domain (FDTD) method is analyzed. These approaches are a simple and efficient way to optimize the scattering simulation of curved surfaces for dielectric and Perfect Electric Conducting (PEC) structures in the FDTD method, since curved surfaces require dense meshes to reduce the error introduced due to surface staircasing. Defined in this work as D-FDTD-Diel and D-FDTD-PEC, these approaches are well-known in the literature, but the improvement upon their application is not quantified broadly regarding wide frequency bands and poorly discretized meshes. Both approaches improve the accuracy of the simulation without requiring dense meshes, also making it possible to explore poorly discretized meshes, which bring a reduction in simulation time and computational expense while retaining a desired accuracy. However, their applications present limitations regarding the mesh impoverishment and the frequency range desired. Therefore, the goal of this work is to explore the approaches regarding both the wideband and mesh impoverishment performance to bring a wider insight over these aspects in FDTD applications. The D-FDTD-Diel approach consists in modifying the electric field update in the cells intersected by the dielectric surface, taking into account the amount of dielectric material within the mesh cell edges. By taking into account the intersections, the D-FDTD-Diel provides accuracy improvement at the cost of computational preprocessing, which is a fair trade-off, since the update modification is quite simple. Likewise, the D-FDTD-PEC approach consists in modifying the magnetic field update, taking into account the PEC curved surface intersections within the mesh cells and, considering a PEC structure in vacuum, the air portion that fills the intersected cells when updating the magnetic field values. Also, like D-FDTD-Diel, the D-FDTD-PEC provides better accuracy at the cost of computational preprocessing, although with the drawback of having to meet stability criterion requirements. The algorithms are formulated and applied to a PEC and a dielectric spherical scattering surface with meshes presenting different levels of discretization, with Polytetrafluoroethylene (PTFE) as the dielectric, a very common material in coaxial cables and connectors for radiofrequency (RF) and wideband applications. The accuracy of the algorithms is quantified, showing the approaches' wideband performance drop along with the mesh impoverishment. The benefits in computational efficiency, simulation time and accuracy are also shown and discussed, according to the frequency range desired, showing that poorly discretized mesh FDTD simulations can be exploited more efficiently, retaining the desired accuracy. The results obtained provide a broader insight into the limitations in the application of the C-FDTD approaches in poorly discretized and wide frequency band simulations for dielectric and PEC curved surfaces, which are not clearly defined or detailed in the literature and are, therefore, a novelty. These approaches are also expected to be applied in the modeling of curved RF components for wideband and high-speed communication devices in future works.
Keywords: accuracy, computational efficiency, finite difference time-domain, mesh impoverishment
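A conceptual sketch of the D-FDTD-Diel idea described above is given below in Python: the field update for an edge cut by the dielectric surface uses an effective permittivity weighted by the fraction of the edge inside the dielectric, rather than a staircased all-or-nothing value. The length-weighted average shown is one common realization and may differ in detail from the original Mittra/Dey/Yu formulation; the fill fractions and PTFE permittivity are illustrative.

```python
# Conformal (effective-permittivity) E-field update idea, sketched in 2D TM-z form.
import numpy as np

def effective_eps(fill_fraction: np.ndarray, eps_diel: float, eps_bg: float = 1.0) -> np.ndarray:
    """Length-weighted effective permittivity for edges partially inside the dielectric."""
    return fill_fraction * eps_diel + (1.0 - fill_fraction) * eps_bg

def update_ez(Ez, Hx, Hy, eps_eff, dt, dx):
    """One Ez update step (normalized units, interior cells only) using eps_eff per cell."""
    curl_h = (Hy[1:, 1:-1] - Hy[:-1, 1:-1]) - (Hx[1:-1, 1:] - Hx[1:-1, :-1])
    Ez[1:-1, 1:-1] += dt / (eps_eff[1:-1, 1:-1] * dx) * curl_h
    return Ez

# Example: edges fully outside (0.0), partially cut (0.25-0.75) and fully inside (1.0)
# a PTFE-like dielectric (eps_r ~ 2.1) receive graded permittivities instead of a staircase.
fill = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(effective_eps(fill, eps_diel=2.1))
```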
69 Absenteeism in Polytechnical University Studies: Quantification and Identification of the Causes at Universitat Politècnica de Catalunya
Authors: E. Mas de les Valls, M. Castells-Sanabra, R. Capdevila, N. Pla, Rosa M. Fernandez-Canti, V. de Medina, A. Mujal, C. Barahona, E. Velo, M. Vigo, M. A. Santos, T. Soto
Abstract:
Absenteeism in universities, including polytechnical universities, is influenced by a variety of factors. Some factors overlap with those causing absenteeism in schools, while others are specific to the university and work-related environments. Indeed, these factors may stem from various sources, including students, educators, the institution itself, or even the alignment of degree curricula with professional requirements. In Spain, there has been an increase in absenteeism in polytechnical university studies, especially after the Covid crisis, posing a significant challenge for institutions to address. This study focuses on Universitat Politècnica de Catalunya · BarcelonaTech (UPC) and aims to quantify the current level of absenteeism and identify its main causes. The study is part of the teaching innovation project ASAP-UPC, which aims to minimize absenteeism through the redesign of teaching methodologies. By understanding the factors contributing to absenteeism, the study seeks to inform the subsequent phases of the ASAP-UPC project, which involve implementing methodologies to minimize absenteeism and evaluating their effectiveness. The study utilizes surveys conducted among students and polytechnical companies. Students' perspectives are gathered through both online surveys and in-person interviews. The surveys inquire about students' interest in attending classes, skill development throughout their UPC experience, and their perception of the skills required for a career in a polytechnical field. Additionally, polytechnical companies are surveyed regarding the skills they seek in prospective employees. The collected data is then analyzed to identify patterns and trends. This analysis involves organizing and categorizing the data, identifying common themes, and drawing conclusions based on the findings. This mixed-method approach has revealed that higher levels of absenteeism are observed in large student groups at both the Bachelor's and Master's degree levels. However, the main causes of absenteeism differ between these two levels. At the Bachelor's level, many students express dissatisfaction with in-person classes, perceiving them as overly theoretical and lacking a balance between theory, experimental practice, and problem-solving components. They also find a lack of relevance to professional needs. Consequently, they resort to using materials available online that were developed during the Covid crisis and attending private academies for exam preparation instead. On the other hand, at the Master's level, absenteeism primarily arises from schedule incompatibility between university and professional work. There is a discrepancy between the skills highly valued by companies and the skills emphasized during the studies, aligning partially with students' perceptions. These findings are of theoretical importance as they shed light on areas that can be improved to offer a more beneficial educational experience to students at UPC. The study also has potential applicability to other polytechnic universities, allowing them to adapt the surveys and apply the findings to their specific contexts. By addressing the identified causes of absenteeism, universities can enhance the educational experience and better prepare students for successful careers in polytechnical fields.
Keywords: absenteeism, polytechnical studies, professional skills, university challenges
Procedia PDF Downloads 69
68 An Exploratory Case Study of Pre-Service Teachers' Learning to Teach Mathematics to Culturally Diverse Students through a Community-Based After-School Field Experience
Authors: Eugenia Vomvoridi-Ivanovic
Abstract:
It is broadly assumed that participation in field experiences will help pre-service teachers (PSTs) bridge theory and practice. However, this is often not the case, since PSTs who are placed in classrooms with large numbers of students from diverse linguistic, cultural, racial, and ethnic backgrounds (culturally diverse students (CDS)) usually observe ineffective mathematics teaching practices that are in contrast to those discussed in their teacher preparation program. Over the past decades, the educational research community has paid increasing attention to investigating out-of-school learning contexts and how participation in such contexts can contribute to the achievement of underrepresented groups in Science, Technology, Engineering, and Mathematics (STEM) education and their expanded participation in STEM fields. In addition, several research studies have shown that students display different kinds of mathematical behaviors and discourse practices in out-of-school contexts than they do in the typical mathematics classroom, since they draw from a variety of linguistic and cultural resources to negotiate meanings and participate in joint problem solving. However, almost no attention has been given to exploring these contexts as field experiences for pre-service mathematics teachers. The purpose of this study was to explore how participation in a community-based after-school field experience promotes understanding of the content pedagogy concepts introduced in elementary mathematics methods courses, particularly as they apply to teaching mathematics to CDS. This study draws upon a situated, socio-cultural theory of teacher learning that centers on the concept of learning as situated social practice, which includes discourse, social interaction, and participation structures. Consistent with exploratory case study methodology, qualitative methods were employed to investigate how a cohort of twelve participating pre-service teachers' approaches to pedagogy and their conversations around teaching and learning mathematics to CDS evolved through their participation in the after-school field experience, and how they connected the content discussed in their mathematics methods course with their interactions with the CDS in the after-school program. Data were collected over a period of one academic year from the following sources: (a) audio recordings of the PSTs' interactions with the students during the after-school sessions, (b) PSTs' after-school field-notes, (c) audio-recordings of weekly methods course meetings, and (d) other document data (e.g., PST and student generated artifacts, PSTs' written course assignments). The findings of this study reveal that the PSTs benefitted greatly from their participation in the after-school field experience. Specifically, after-school participation promoted a deeper understanding of the content pedagogy concepts introduced in the mathematics methods course and helped the PSTs gain a greater appreciation of how students learn mathematics with understanding. Further, even though many of the PSTs' assumptions about the mathematical abilities of CDS were challenged and PSTs began to view CDSs' cultural and linguistic backgrounds as resources (rather than obstacles) for learning, some PSTs still held negative stereotypes about CDS and teaching and learning mathematics to CDS in particular.
Insights gained through this study contribute to a better understanding of how informal mathematics learning contexts may provide a valuable setting for pre-service teachers' learning to teach mathematics to CDS.Keywords: after-school mathematics program, pre-service mathematical education of teachers, qualitative methods, situated socio-cultural theory, teaching culturally diverse students
Procedia PDF Downloads 131
67 Multi-Dimensional Experience of Processing Textual and Visual Information: Case Study of Allocations to Places in the Mind’s Eye Based on Individual’s Semantic Knowledge Base
Authors: Joanna Wielochowska, Aneta Wielochowska
Abstract:
Whilst the relationship between scientific areas such as cognitive psychology, neurobiology and philosophy of mind has been emphasized in recent decades of scientific research, concepts and discoveries made in these fields overlap and complement each other in their quest for answers to similar questions. The objective of the following case study is to describe, analyze and illustrate the nature and characteristics of a certain cognitive experience which appears to display features of synaesthesia, or rather high-level synaesthesia (ideasthesia). The research was conducted with the two authors, monozygotic twins (both polysynaesthetes) who experience involuntary associations of identical nature, as its subjects. The authors attempted to identify which cognitive and conceptual dependencies may guide this experience. Operating on self-introduced nomenclature, the described phenomenon, multi-dimensional processing of textual and visual information, denotes a relationship that involuntarily and immediately couples the content introduced by means of text or image with a sensation of appearing in a certain place in the mind’s eye. More precisely: (I) defining a concept introduced by means of textual content during the activity of reading or writing, or (II) defining a concept introduced by means of visual content during the activity of looking at image(s), with a simultaneous sensation of being allocated to a given place in the mind’s eye. A place can then be defined as a cognitive representation of a certain concept. During the activity of processing information, a person has an immediate and involuntary feeling of appearing in a certain place themselves, just like a character in a story, ‘observing’ a venue or scenery from one or more perspectives and angles. That forms a unique and unified experience, constituting a background mental landscape of the text or image being looked at. We came to the conclusion that semantic allocations to a given place can be divided and classified into categories and subcategories and are naturally linked with an individual’s semantic knowledge base. A place can be defined as a representation of one’s unique idea of a given concept that has been established in their semantic knowledge base. A multi-level structure of selectivity of places in the mind’s eye, as a reaction to given information (one stimulus), draws comparisons to structures and patterns found in botany. Double-flowered varieties of flowers and the whorl system (arrangement) that is characteristic of components of some flower species were given as illustrative examples. A composition of petals that fan out from one single point and wrap around a stem inspired the idea that, just as in nature, there are patterns in the philosophy of mind driven by the logic specific to a given phenomenon. The study intertwines terms perceived through the philosophical lens, such as definition of meaning, subjectivity of meaning, mental atmosphere of places, and others. Analysis of this rare experience aims to contribute to the constantly developing theoretical framework of the philosophy of mind and to influence the way the human semantic knowledge base, and the processing of given content in terms of distinguishing between information and meaning, are researched.Keywords: information and meaning, information processing, mental atmosphere of places, patterns in nature, philosophy of mind, selectivity, semantic knowledge base, senses, synaesthesia
Procedia PDF Downloads 126
66 Construction Engineering and Cocoa Agriculture: A Synergistic Approach for Improved Livelihoods of Farmers
Authors: Felix Darko-Amoah, Daniel Acquah
Abstract:
In contemporary ecosystems for developing countries like Ghana, the need to explore innovative solutions for sustainable livelihoods of farmers is more important than ever. With Ghana’s population growing steadily and the demand for food, fiber and shelter increasing, it is imperative that the construction industry and agriculture come together to address the challenges faced by farmers in the country. In order to enhance the livelihoods of cocoa farmers in Ghana, this paper provides an innovative strategy that aims to integrate the areas of civil engineering and cash crop agriculture. This study focuses on cocoa cultivation in poorer nations, where farmers confront a variety of difficulties, including restricted access to financing, subpar infrastructure, and insufficient support services. We seek to improve farmers' access to financing, improve infrastructure, and provide support services that are essential to their success by combining the fields of building engineering and cocoa production. The findings of the study are beneficial to cocoa producers, community extension agents, and construction engineers. In order to accomplish our objectives, we conducted field investigations involving 307 participants in selected cocoa-growing communities in the Western Region of Ghana. Several studies have shown that there is a lack of adequate infrastructure and financing, leading to low yields, subpar beans, and low farmer profitability in developing nations like Ghana. Our goal is to give farmers access to better infrastructure, better financing, and support services that are crucial to their success through the fusion of construction engineering and cocoa production. Based on data gathered from the field investigations, the results show that the employment of appropriate technology and methods for developing structures, roads, and other infrastructure in rural regions is one of the essential components of this strategy. For instance, we find that using affordable, environmentally friendly materials like bamboo, rammed earth, and mud bricks can help to cut expenditures while also protecting the environment. When simple relational techniques are applied to the data gathered, the results also show that construction engineers are crucial in planning and building infrastructure that is appropriate for the local environment and circumstances and resilient to natural disasters like floods. Thus, the convergence of construction engineering and cash crop cultivation is another crucial component of the agriculture-construction interplay. For instance, farmers can receive financial assistance to buy essential inputs, such as seeds, fertilizer, and tools, as well as training in proper farming methods. Moreover, extension services can be offered to assist farmers in marketing their crops and enhancing their livelihoods and revenue. In conclusion, our analysis of responses from the 307 participants shows that the combination of construction engineering and cash crop agriculture offers an innovative approach to improving farmers' livelihoods in cocoa farming communities in Ghana. By incorporating the findings of this study into core decision-making, policymakers can help farmers build sustainable and profitable livelihoods by addressing challenges such as limited access to financing, poor infrastructure, and inadequate support services.Keywords: cocoa agriculture, construction engineering, farm buildings and equipment, improved livelihoods of farmers
Procedia PDF Downloads 91
65 A Bioinspired Anti-Fouling Coating for Implantable Medical Devices
Authors: Natalie Riley, Anita Quigley, Robert M. I. Kapsa, George W. Greene
Abstract:
As the fields of medicine and bionics grow rapidly in technological advancement, the future and success of it depends on the ability to effectively interface between the artificial and the biological worlds. The biggest obstacle when it comes to implantable, electronic medical devices, is maintaining a ‘clean’, low noise electrical connection that allows for efficient sharing of electrical information between the artificial and biological systems. Implant fouling occurs with the adhesion and accumulation of proteins and various cell types as a result of the immune response to protect itself from the foreign object, essentially forming an electrical insulation barrier that often leads to implant failure over time. Lubricin (LUB) functions as a major boundary lubricant in articular joints, a unique glycoprotein with impressive anti-adhesive properties that self-assembles to virtually any substrate to form a highly ordered, ‘telechelic’ polymer brush. LUB does not passivate electroactive surfaces which makes it ideal, along with its innate biocompatibility, as a coating for implantable bionic electrodes. It is the aim of the study to investigate LUB’s anti-fouling properties and its potential as a safe, bioinspired material for coating applications to enhance the performance and longevity of implantable medical devices as well as reducing the frequency of implant replacement surgeries. Native, bovine-derived LUB (N-LUB) and recombinant LUB (R-LUB) were applied to gold-coated mylar surfaces. Fibroblast, chondrocyte and neural cell types were cultured and grown on the coatings under both passive and electrically stimulated conditions to test the stability and anti-adhesive property of the LUB coating in the presence of an electric field. Lactate dehydrogenase (LDH) assays were conducted as a directly proportional cell population count on each surface along with immunofluorescent microscopy to visualize cells. One-way analysis of variance (ANOVA) with post-hoc Tukey’s test was used to test for statistical significance. Under both passive and electrically stimulated conditions, LUB significantly reduced cell attachment compared to bare gold. Comparing the two coating types, R-LUB reduced cell attachment significantly compared to its native counterpart. Immunofluorescent micrographs visually confirmed LUB’s antiadhesive property, R-LUB consistently demonstrating significantly less attached cells for both fibroblasts and chondrocytes. Preliminary results investigating neural cells have so far demonstrated that R-LUB has little effect on reducing neural cell attachment; the study is ongoing. Recombinant LUB coatings demonstrated impressive anti-adhesive properties, reducing cell attachment in fibroblasts and chondrocytes. These findings and the availability of recombinant LUB brings into question the results of previous experiments conducted using native-derived LUB, its potential not adequately represented nor realized due to unknown factors and impurities that warrant further study. R-LUB is stable and maintains its anti-fouling property under electrical stimulation, making it suitable for electroactive surfaces.Keywords: anti-fouling, bioinspired, cell attachment, lubricin
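The group comparison described in this abstract (one-way ANOVA with a post-hoc Tukey test on LDH-derived cell counts per surface) can be illustrated with a minimal sketch; the group labels and numbers below are placeholders assumed for illustration, not data from the study.

```python
# Minimal sketch of the statistical comparison described above:
# one-way ANOVA across surface types followed by Tukey's HSD.
# Group names and values are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# LDH-derived relative cell counts per surface (hypothetical example values)
bare_gold = np.array([1.00, 0.95, 1.05, 0.98])
n_lub     = np.array([0.60, 0.55, 0.65, 0.58])
r_lub     = np.array([0.30, 0.28, 0.35, 0.27])

# One-way ANOVA: is there any difference between surfaces?
f_stat, p_value = stats.f_oneway(bare_gold, n_lub, r_lub)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc Tukey HSD: which pairs of surfaces differ?
values = np.concatenate([bare_gold, n_lub, r_lub])
groups = ["bare_gold"] * 4 + ["N-LUB"] * 4 + ["R-LUB"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```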
Procedia PDF Downloads 124
64 Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Annulus Pulley
Authors: Bijit Kalita, K. V. N. Surendra
Abstract:
A pulley works under both compressive loading, due to the contacting belt in tension, and a central torque that causes rotation. In a power transmission system, the belt-pulley assembly presents a contact problem in the form of two mating cylindrical parts. In this work, we modeled a pulley as a heavy two-dimensional circular disk. Stress analysis due to contact loading in the pulley mechanism is performed. Finite element analysis (FEA) is conducted for a pulley to investigate the stresses experienced on its inner and outer periphery. In heavy-duty applications such as automotive engines and industrial machines, the belt drive is one of the most frequently used mechanisms to transmit power. Usually, very heavy circular disks are used as pulleys. A pulley may also be termed a drum and may have a groove between two flanges around its circumference. A rope, belt, cable or chain can be the driving element of a pulley system that runs over the pulley inside the groove. A pulley experiences normal and shear tractions on its contact region in the process of motion transmission. This region may be the belt-pulley contact surface or the pulley-shaft contact surface. In 1882, Hertz solved the elastic contact problem for point contact and line contact of ideally smooth bodies. Since then, this theory has generally been utilized for computing the actual contact zone. Detailed stress analysis in the contact regions of such pulleys is necessary to prevent early failure. In this paper, the results of the finite element analyses carried out on the compressed disk of a belt pulley arrangement using fracture mechanics concepts are shown. Based on the literature on contact stress problems arising in a wide field of applications, the stress distribution generated on the shaft-pulley and belt-pulley interfaces due to the application of high tension and torque was evaluated in this study using FEA concepts. Finally, the results obtained from ANSYS (APDL) were compared with the Hertzian contact theory. The study is mainly focused on the fatigue life estimation of a rotating part as a component of an engine assembly using the well-known Paris equation. Digital Image Correlation (DIC) analyses have been performed using open-source software. From the displacements computed using the images acquired at minimum and maximum force, the displacement field amplitude is computed. From these fields, the crack path is defined, and stress intensity factors and the crack tip position are extracted. A non-linear least-squares projection is used to estimate fatigue crack growth. Further study will extend to various applications of rotating machinery, such as rotating flywheel disks, jet engines, compressor disks, roller disk cutters, etc., where Stress Intensity Factor (SIF) calculation plays a significant role in the accuracy and reliability of a safe design. Additionally, this study will progress to predicting crack propagation in the pulley using the maximum tangential stress (MTS) criterion for mixed mode fracture.Keywords: crack-tip deformations, contact stress, stress concentration, stress intensity factor
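The fatigue-life estimation mentioned above relies on the Paris equation, da/dN = C (ΔK)^m. A minimal sketch of how such an estimate is typically integrated is given below; the material constants, stress range, geometry factor and crack lengths are assumed placeholder values, not results from the pulley analysis.

```python
# Minimal sketch of a fatigue-life estimate with the Paris equation,
# da/dN = C * (dK)^m, with dK = Y * dSigma * sqrt(pi * a).
# All material constants, the stress range and the geometry factor are
# assumed placeholders, not values from the pulley study.
import numpy as np

C, m = 3.0e-12, 3.0          # Paris constants (assumed; units consistent with MPa*sqrt(m))
Y = 1.12                     # geometry factor (assumed edge-crack value)
delta_sigma = 120.0          # stress range in MPa (assumed)
a0, a_crit = 0.5e-3, 10e-3   # initial and critical crack lengths in m (assumed)

def delta_K(a):
    """Stress intensity factor range for crack length a."""
    return Y * delta_sigma * np.sqrt(np.pi * a)

# Integrate dN = da / (C * dK^m) from a0 to a_crit (trapezoidal rule)
a = np.linspace(a0, a_crit, 20000)
dN_da = 1.0 / (C * delta_K(a) ** m)
cycles = np.sum((dN_da[:-1] + dN_da[1:]) / 2 * np.diff(a))
print(f"Estimated fatigue life: {cycles:.3e} cycles")
```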
Procedia PDF Downloads 125
63 Holistic Urban Development: Incorporating Both Global and Local Optimization
Authors: Christoph Opperer
Abstract:
The rapid urbanization of modern societies and the need for sustainable urban development demand innovative solutions that meet both individual and collective needs while addressing environmental concerns. To address these challenges, this paper presents a study that explores the potential of spatial and energetic/ecological optimization to enhance the performance of urban settlements, focusing on both architectural and urban scales. The study focuses on the application of biological principles and self-organization processes in urban planning and design, aiming to achieve a balance between ecological performance, architectural quality, and individual living conditions. The research adopts a case study approach, focusing on a 10-hectare brownfield site in the south of Vienna. The site is surrounded by a small-scale built environment as an appropriate starting point for the research and design process. However, the selected urban form is not a prerequisite for the proposed design methodology, as the findings can be applied to various urban forms and densities. The methodology used in this research involves dividing the overall building mass and program into individual small housing units. A computational model has been developed to optimize the distribution of these units, considering factors such as solar exposure/radiation, views, privacy, proximity to sources of disturbance (such as noise), and minimal internal circulation areas. The model also ensures that existing vegetation and buildings on the site are preserved and incorporated into the optimization and design process. The model allows for simultaneous optimization at two scales, architectural and urban design, which have traditionally been addressed sequentially. This holistic design approach leads to individual and collective benefits, resulting in urban environments that foster a balance between ecology and architectural quality. The results of the optimization process demonstrate a seemingly random distribution of housing units that, in fact, is a densified hybrid between traditional garden settlements and allotment settlements. This urban typology is selected due to its compatibility with the surrounding urban context, although the presented methodology can be extended to other forms of urban development and density levels. The benefits of this approach are threefold. First, it allows for the determination of ideal housing distribution that optimizes solar radiation for each building density level, essentially extending the concept of sustainable building to the urban scale. Second, the method enhances living quality by considering the orientation and positioning of individual functions within each housing unit, achieving optimal views and privacy. Third, the algorithm's flexibility and robustness facilitate the efficient implementation of urban development with various stakeholders, architects, and construction companies without compromising its performance. The core of the research is the application of global and local optimization strategies to create efficient design solutions. By considering both, the performance of individual units and the collective performance of the urban aggregation, we ensure an optimal balance between private and communal benefits. 
By promoting a holistic understanding of urban ecology and integrating advanced optimization strategies, our methodology offers a sustainable and efficient solution to the challenges of modern urbanization.Keywords: sustainable development, self-organization, ecological performance, solar radiation and exposure, daylight, visibility, accessibility, spatial distribution, local and global optimization
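As a rough illustration of the kind of computational placement model described above, the following toy sketch scores candidate layouts of housing units by a weighted sum of solar-exposure, privacy and compactness proxies and keeps the best of many random trials; the weights, grid and scoring rules are assumptions for illustration and not the project's actual algorithm.

```python
# Toy sketch of a unit-placement optimization: layouts are scored by a
# weighted sum of solar exposure, privacy (spacing between units) and
# circulation compactness, and a simple random search keeps the best
# layout. Weights, grid size and scoring rules are illustrative
# assumptions, not the ASAP/Vienna project's actual model.
import numpy as np

rng = np.random.default_rng(0)
N_UNITS, GRID = 12, 40                           # number of units, site grid size (assumed)
W_SOLAR, W_PRIVACY, W_COMPACT = 1.0, 1.0, 0.5    # objective weights (assumed)

def score(layout):
    # Solar proxy: prefer units placed toward the sun-facing edge of the grid.
    solar = layout[:, 1].mean() / GRID
    # Privacy proxy: reward larger minimum spacing between units.
    d = np.linalg.norm(layout[:, None, :] - layout[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    privacy = d.min() / GRID
    # Compactness proxy: penalize spread around the layout centroid.
    compact = -np.linalg.norm(layout - layout.mean(axis=0), axis=1).mean() / GRID
    return W_SOLAR * solar + W_PRIVACY * privacy + W_COMPACT * compact

best_layout, best_score = None, -np.inf
for _ in range(5000):
    layout = rng.uniform(0, GRID, size=(N_UNITS, 2))
    s = score(layout)
    if s > best_score:
        best_layout, best_score = layout, s

print(f"Best layout score: {best_score:.3f}")
```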
Procedia PDF Downloads 67
62 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks
Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi
Abstract:
Brain-computer interfaces are a growing research field producing many implementations that find use in different fields and are used for research and practical purposes. Despite the popularity of the implementations using non-invasive neuroimaging methods, radical improvement of the state channel bandwidth and, thus, decoding accuracy is only possible by using invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, effective analysis of which requires the use of machine learning methods that are able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that allow learning representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recording of kinematic movement characteristics and corresponding electrical brain activity, a series of experiments were carried out, during which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from the electrode strips that were implanted over the contralateral sensorimotor cortex. Then, multichannel ECoG signals were used to track finger movement trajectory characterized by accelerometer signal. This process was carried out both causally and non-causally, using different position of the ECoG data segment with respect to the accelerometer data stream. The recorded data was split into training and testing sets, containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained, using 1-second segments of ECoG data from the training dataset as input. To assess the decoding accuracy, correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of hyperparameters and training, the deep learning model allowed reasonably accurate causal decoding of finger movement with correlation coefficient r = 0.8. In contrast, the classical Wiener-filter like approach was able to achieve only 0.56 in the causal decoding mode. In the noncausal case, the traditional approach reached the accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study have demonstrated that a combination of a minimally invasive neuroimaging technique such as ECoG and advanced machine learning approaches allows decoding motion with high accuracy. Such setup provides means for control of devices with a large number of degrees of freedom as well as exploratory studies of the complex neural processes underlying movement execution.Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex
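The decoding setup described above (1-second multichannel ECoG segments fed to a convolutional network, scored by the correlation coefficient r against the accelerometer trace) can be sketched as follows; the channel count, sampling rate and network architecture are assumptions for illustration and not the study's actual model.

```python
# Minimal sketch of the decoding setup described above: a small
# convolutional network maps 1-second multichannel ECoG segments to the
# accelerometer trace, and accuracy is the correlation coefficient r.
# Channel count, sampling rate and architecture are illustrative
# assumptions, not the study's actual network.
import torch
import torch.nn as nn

N_CHANNELS, FS = 32, 1000          # ECoG channels and sampling rate (assumed)
SEG_LEN = FS                       # 1-second input segments

class ECoGDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=15, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 1),      # predicted acceleration for the segment
        )

    def forward(self, x):          # x: (batch, channels, time)
        return self.net(x).squeeze(-1)

model = ECoGDecoder()
ecog = torch.randn(8, N_CHANNELS, SEG_LEN)   # placeholder batch of ECoG segments
accel = torch.randn(8)                       # placeholder accelerometer targets
pred = model(ecog)

# Pearson correlation r between prediction and accelerometer reading
r = torch.corrcoef(torch.stack([pred, accel]))[0, 1]
loss = nn.functional.mse_loss(pred, accel)   # training criterion
print(f"r = {r.item():.3f}, mse = {loss.item():.3f}")
```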
Procedia PDF Downloads 178
61 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning
Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher
Abstract:
Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has been recently proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result from Process II highly depends on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.) Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4mm³) yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, Cerebral Spinal Fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from a Probability Distribution Function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volume, tissue properties and noise realizations. Collectively, this constitutes a training-set that is similar to in vivo data, but larger than datasets available from clinical measurements. This 3D convolutional U-Net neural network architecture was used to train data-driven Deep Learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training as well as real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping
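A much-reduced sketch of the single-step mapping described above is given below: a small 3D convolutional network regresses a voxel-wise iron-concentration map from an MRI-derived input volume; the study itself uses a 3D U-Net trained on synthetic head models, and the patch size, channel counts and depth here are assumptions for illustration only.

```python
# Much-reduced sketch of the single-step mapping described above: a 3D
# convolutional network takes an MRI-derived volume and predicts a
# voxel-wise iron-concentration map. The full study uses a 3D U-Net
# trained on synthetic heads; patch size, channel counts and layer depth
# here are illustrative assumptions only.
import torch
import torch.nn as nn

class TinyIronMapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),   # per-voxel iron estimate
        )

    def forward(self, x):      # x: (batch, 1, D, H, W)
        return self.net(x)

model = TinyIronMapper()
patch = torch.randn(2, 1, 64, 64, 64)    # synthetic training patches (placeholder)
target = torch.randn(2, 1, 64, 64, 64)   # matching iron-concentration maps (placeholder)
loss = nn.functional.mse_loss(model(patch), target)
loss.backward()
print(f"training loss: {loss.item():.3f}")
```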
Procedia PDF Downloads 138
60 Introducing, Testing, and Evaluating a Unified JavaScript Framework for Professional Online Studies
Authors: Caspar Goeke, Holger Finger, Dorena Diekamp, Peter König
Abstract:
Online-based research has recently gained increasing attention from various fields of research in the cognitive sciences. Technological advances in the form of online crowdsourcing (Amazon Mechanical Turk), open data repositories (Open Science Framework), and online analysis (IPython notebook) offer rich possibilities to improve, validate, and speed up research. However, to date there has been no cross-platform integration of these subsystems. Furthermore, online studies still suffer from complex implementation requirements (server infrastructure, database programming, security considerations, etc.). Here we propose and test a new JavaScript framework that enables researchers to conduct any kind of behavioral research in the browser without the need to program a single line of code. In particular, our framework offers the possibility to manipulate and combine experimental stimuli via a graphical editor, directly in the browser. Moreover, we included an action-event system that can be used to handle user interactions, interactively change stimuli properties, or store participants’ responses. Besides traditional recordings such as reaction time and mouse and keyboard presses, the tool offers webcam-based eye and face tracking. On top of these features, our framework also takes care of participant recruitment via crowdsourcing platforms such as Amazon Mechanical Turk. Furthermore, the built-in functionality of Google Translate ensures automatic text translation of the experimental content. Thereby, thousands of participants from different cultures and nationalities can be recruited literally within hours. Finally, the recorded data can be visualized and cleaned online, and then exported into the desired formats (csv, xls, sav, mat) for statistical analysis. Alternatively, the data can also be analyzed online within our framework using the integrated IPython notebook. The framework was designed such that studies can be exchanged and reused between researchers. This not only supports the idea of open data repositories but also makes it possible to share and reuse experimental designs and analyses such that the validity of the paradigms will be improved. In particular, sharing and integrating experimental designs and analyses will lead to increased consistency of experimental paradigms. To demonstrate the functionality of the framework, we present the results of a pilot study in the field of spatial navigation that was conducted using the framework. Specifically, we recruited over 2000 subjects with various cultural backgrounds and analyzed performance differences as a function of culture, gender and age. Overall, our results demonstrate a strong influence of cultural factors on spatial cognition. Such an influence has not been reported before and could not have been shown without the massive amount of data collected via our framework. In fact, these findings shed new light on cultural differences in spatial navigation. As a consequence, we conclude that our new framework offers a wide range of advantages for online research and constitutes a methodological innovation by which new insights can be revealed on the basis of massive data collection.Keywords: cultural differences, crowdsourcing, JavaScript framework, methodological innovation, online data collection, online study, spatial cognition
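The kind of downstream analysis the framework enables once results are exported can be sketched as below, grouping spatial-navigation performance by culture, gender and age band; the column names and the tiny inline dataset are assumptions for illustration and do not reflect the framework's actual export schema.

```python
# Sketch of the downstream analysis enabled once results are exported:
# group spatial-navigation performance by culture, gender and age band.
# The column names and the tiny inline dataset are placeholders for
# illustration; they are not the framework's actual export schema.
import pandas as pd

# Placeholder rows standing in for an exported results file (e.g. csv)
df = pd.DataFrame({
    "culture": ["A", "A", "B", "B", "B", "A"],
    "gender":  ["f", "m", "f", "m", "f", "m"],
    "age":     [22, 34, 29, 51, 63, 45],
    "navigation_error": [1.2, 0.9, 1.5, 1.1, 1.8, 1.0],
})
df["age_band"] = pd.cut(df["age"], bins=[0, 25, 40, 60, 120],
                        labels=["<=25", "26-40", "41-60", "60+"])

summary = (df.groupby(["culture", "gender", "age_band"], observed=True)
             ["navigation_error"].agg(["mean", "std", "count"]))
print(summary)
```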
Procedia PDF Downloads 258
59 OpenFOAM Based Simulation of High Reynolds Number Separated Flows Using Bridging Method of Turbulence
Authors: Sagar Saroha, Sawan S. Sinha, Sunil Lakshmipathy
Abstract:
Reynolds averaged Navier-Stokes (RANS) model is the popular computational tool for prediction of turbulent flows. Being computationally less expensive as compared to direct numerical simulation (DNS), RANS has received wide acceptance in industry and research community as well. However, for high Reynolds number flows, the traditional RANS approach based on the Boussinesq hypothesis is incapacitated to capture all the essential flow characteristics, and thus, its performance is restricted in high Reynolds number flows of practical interest. RANS performance turns out to be inadequate in regimes like flow over curved surfaces, flows with rapid changes in the mean strain rate, duct flows involving secondary streamlines and three-dimensional separated flows. In the recent decade, partially averaged Navier-Stokes (PANS) methodology has gained acceptability among seamless bridging methods of turbulence- placed between DNS and RANS. PANS methodology, being a scale resolving bridging method, is inherently more suitable than RANS for simulating turbulent flows. The superior ability of PANS method has been demonstrated for some cases like swirling flows, high-speed mixing environment, and high Reynolds number turbulent flows. In our work, we intend to evaluate PANS in case of separated turbulent flows past bluff bodies -which is of broad aerodynamic research and industrial application. PANS equations, being derived from base RANS, continue to inherit the inadequacies from the parent RANS model based on linear eddy-viscosity model (LEVM) closure. To enhance PANS’ capabilities for simulating separated flows, the shortcomings of the LEVM closure need to be addressed. Inabilities of the LEVMs have inspired the development of non-linear eddy viscosity models (NLEVM). To explore the potential improvement in PANS performance, in our study we evaluate the PANS behavior in conjugation with NLEVM. Our work can be categorized into three significant steps: (i) Extraction of PANS version of NLEVM from RANS model, (ii) testing the model in the homogeneous turbulence environment and (iii) application and evaluation of the model in the canonical case of separated non-homogeneous flow field (flow past prismatic bodies and bodies of revolution at high Reynolds number). PANS version of NLEVM shall be derived and implemented in OpenFOAM -an open source solver. Homogeneous flows evaluation will comprise the study of the influence of the PANS’ filter-width control parameter on the turbulent stresses; the homogeneous analysis performed over typical velocity fields and asymptotic analysis of Reynolds stress tensor. Non-homogeneous flow case will include the study of mean integrated quantities and various instantaneous flow field features including wake structures. Performance of PANS + NLEVM shall be compared against the LEVM based PANS and LEVM based RANS. This assessment will contribute to significant improvement of the predictive ability of the computational fluid dynamics (CFD) tools in massively separated turbulent flows past bluff bodies.Keywords: bridging methods of turbulence, high Re-CFD, non-linear PANS, separated turbulent flows
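For context, the commonly cited PANS (k-ε) relations are reproduced below to make the role of the filter-width control parameters explicit; this is the standard formulation found in the PANS literature, and the NLEVM-based PANS closure derived in the present work may differ in detail.

```latex
% Commonly cited PANS (k-epsilon) relations, given here only to make the
% role of the filter-width control parameters f_k and f_eps explicit; the
% NLEVM-based PANS closure derived in the study itself may differ in detail.
\begin{align}
  f_k &= \frac{k_u}{k}, \qquad
  f_\varepsilon = \frac{\varepsilon_u}{\varepsilon}, \\
  \frac{D k_u}{D t} &= P_u - \varepsilon_u
    + \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_u}{\sigma_{ku}}\right)
      \frac{\partial k_u}{\partial x_j}\right], \\
  \frac{D \varepsilon_u}{D t} &= C_{\varepsilon 1}\,\frac{P_u \varepsilon_u}{k_u}
    - C^{*}_{\varepsilon 2}\,\frac{\varepsilon_u^{2}}{k_u}
    + \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_u}{\sigma_{\varepsilon u}}\right)
      \frac{\partial \varepsilon_u}{\partial x_j}\right], \\
  C^{*}_{\varepsilon 2} &= C_{\varepsilon 1} + \frac{f_k}{f_\varepsilon}
      \left(C_{\varepsilon 2} - C_{\varepsilon 1}\right),
  \qquad \nu_u = C_\mu \frac{k_u^{2}}{\varepsilon_u}.
\end{align}
```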
Procedia PDF Downloads 145
58 Design and Fabrication of AI-Driven Kinetic Facades with Soft Robotics for Optimized Building Energy Performance
Authors: Mohammadreza Kashizadeh, Mohammadamin Hashemi
Abstract:
This paper explores a kinetic building facade designed for optimal energy capture and architectural expression. The system integrates photovoltaic panels with soft robotic actuators for precise solar tracking, resulting in enhanced electricity generation compared to static facades. The growing interest in dynamic building envelopes necessitates the exploration of such facade systems. Increased energy generation and regulation of energy flow within buildings are potential benefits offered by integrating photovoltaic (PV) panels as kinetic elements. However, incorporating these technologies into mainstream architecture presents challenges due to the complexity of coordinating multiple systems. To address this, the design leverages soft robotic actuators, known for their compliance, resilience, and ease of integration. Additionally, the project investigates the potential for employing Large Language Models (LLMs) to streamline the design process. The research methodology involved design development, material selection, component fabrication, and system assembly. Grasshopper (GH) was employed within the digital design environment for parametric modeling and scripting logic, and an LLM was experimented with to generate Python code for the creation of a random surface with user-defined parameters. Various techniques, including casting, three-dimensional (3D) printing, and laser cutting, were utilized to fabricate physical components. A modular assembly approach was adopted to facilitate installation and maintenance. A case study focusing on the application of this facade system to an existing library building at the Polytechnic University of Milan is presented. The system is divided into sub-frames to optimize solar exposure while maintaining a visually appealing aesthetic. Preliminary structural analyses were conducted using Karamba3D to assess deflection behavior and axial loads within the cable net structure. Additionally, Finite Element (FE) simulations were performed in Abaqus to evaluate the mechanical response of the soft robotic actuators under pneumatic pressure. To validate the design, a physical prototype was created using a mold adapted to the 3D printer's limitations. Casting Silicone Rubber Sil 15 was used for its flexibility and durability. The 3D-printed mold components were assembled, filled with the silicone mixture, and cured. After demolding, nodes and cables were 3D-printed and connected to form the structure, demonstrating the feasibility of the design. This work demonstrates the potential of soft robotics and Artificial Intelligence (AI) for advancements in sustainable building design and construction. The project successfully integrates these technologies to create a dynamic facade system that optimizes energy generation and architectural expression. While limitations exist, this approach paves the way for future advancements in energy-efficient facade design. Continued research efforts will focus on cost reduction, improved system performance, and broader applicability.Keywords: artificial intelligence, energy efficiency, kinetic photovoltaics, pneumatic control, soft robotics, sustainable building
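In the spirit of the LLM-generated Python script for a random surface mentioned above, a minimal stand-alone sketch is given below; the parameter names and the sinusoidal construction are assumptions for illustration, not the project's actual Grasshopper code.

```python
# Sketch in the spirit of the LLM-generated script mentioned above: build
# a random, smoothly varying surface over a grid from user-defined
# parameters. Parameter names and the sinusoidal construction are
# assumptions for illustration, not the project's actual Grasshopper code.
import numpy as np

def random_surface(nx=50, ny=50, amplitude=1.0, n_waves=4, seed=None):
    """Return grid coordinates and heights of a random wavy surface."""
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
    z = np.zeros_like(x)
    for _ in range(n_waves):
        kx, ky = rng.uniform(1, 6, size=2)        # random spatial frequencies
        phase = rng.uniform(0, 2 * np.pi)
        z += np.sin(2 * np.pi * (kx * x + ky * y) + phase)
    z *= amplitude / n_waves
    return x, y, z

x, y, z = random_surface(nx=80, ny=80, amplitude=0.5, n_waves=5, seed=42)
print(f"surface height range: {z.min():.2f} to {z.max():.2f}")
```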
Procedia PDF Downloads 35
57 Coil-Over Shock Absorbers Compared to Inherent Material Damping
Authors: Carina Emminger, Umut D. Cakmak, Evrim Burkut, Rene Preuer, Ingrid Graz, Zoltan Major
Abstract:
Damping accompanies us in everyday life and is used to protect (e.g., in shoes), to make our lives more comfortable (damping of unwanted motion), and to keep them calm (noise reduction). In general, damping is the absorption of energy, which is either stored in the material (vibration isolation systems) or converted into heat (vibration absorbers). In the case of the latter, the damping mechanism can be classified as active, passive, or semi-active (a combination of active and passive). Active damping is required to enable almost perfect damping over the whole application range and is used, for instance, in sports cars. In contrast, passive damping is a response of the material to external loading. Consequently, the material composition has a huge influence on the damping behavior. For elastomers, the material behavior is inherently viscoelastic and both temperature- and frequency-dependent. However, passive damping is not adjustable during application. Therefore, it is important to understand the fundamental viscoelastic behavior and the dissipation capability due to external loading. The objective of this work is to assess the limitations and applicability of viscoelastic material damping for applications in which coil-over shock absorbers are currently utilized. Coil-over shock absorbers are usually made of various mechanical parts and incorporate fluids within the damper. These shock absorbers are well known and studied in the industry, and when needed, they can be easily adjusted during their product lifetime. In contrast, dampers made of, ideally, a single material are more resource-efficient, easier to service, and easier to manufacture. However, they lack adaptability and adjustability in service. Therefore, a case study with a remote-controlled sports car was conducted. The original shock absorbers were redesigned, and the spring-dashpot system was replaced by an elastomer and a thermoplastic elastomer, respectively. Here, five different formulations of elastomers were used, including a pure and an iron-particle-filled thermoplastic poly(urethane) (TPU) and blends of two different poly(dimethyl siloxane)s (PDMS). In addition, the TPUs were investigated as solid and hollow dampers to examine the difference between solid and structured material. To obtain comparative results, each material formulation was comprehensively characterized by monotonic uniaxial compression tests, dynamic thermomechanical analysis (DTMA), and rebound resilience tests. Moreover, the new material-based shock absorbers were compared with spring-dashpot shock absorbers. The shock absorbers were analyzed under monotonic and cyclic loading. In addition, an impact loading was applied to the remote-controlled car to measure the damping properties in operation. A servo-hydraulic high-speed linear actuator was utilized to apply the loads. The acceleration of the car and the displacement of specific measurement points were recorded during testing by a sensor and a high-speed camera, respectively. The results show that elastomers are suitable for damping applications, but they are temperature- and frequency-dependent. This limits the applicability of viscous material dampers. Feasible fields of application may be in micromobility, e.g., bicycles, e-scooters, and e-skateboards. Furthermore, viscous material damping could be used to increase the inherent damping of a whole structure, e.g., in bicycle frames.Keywords: damper structures, material damping, PDMS, TPU
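The contrast drawn above between a spring-dashpot absorber and inherent, frequency-dependent material damping can be sketched numerically: for an ideal Kelvin-Voigt element the loss factor grows linearly with frequency (η = cω/k), whereas an elastomer's loss factor tan δ = E''/E' follows the DTMA data; all numbers below are assumed example values, not measurements from the study.

```python
# Small sketch contrasting the two damping descriptions discussed above:
# an ideal spring-dashpot (Kelvin-Voigt) element has a loss factor that
# grows linearly with frequency, eta = c*omega/k, while an elastomer's
# loss factor tan(delta) = E''/E' comes from DTMA data and varies with
# frequency (and temperature). All numbers are illustrative assumptions.
import numpy as np

freq = np.array([0.1, 1.0, 10.0, 100.0])        # excitation frequencies in Hz

# Spring-dashpot: eta = c * omega / k (assumed stiffness k and damping c)
k, c = 2.0e4, 150.0                              # N/m and N*s/m (assumed)
eta_dashpot = c * (2 * np.pi * freq) / k

# Elastomer: storage and loss moduli from a (hypothetical) DTMA sweep
E_storage = np.array([2.0, 2.4, 3.1, 4.0])       # MPa (assumed)
E_loss    = np.array([0.20, 0.30, 0.50, 0.90])   # MPa (assumed)
tan_delta = E_loss / E_storage

for f, e1, e2 in zip(freq, eta_dashpot, tan_delta):
    print(f"{f:7.1f} Hz  dashpot eta = {e1:6.3f}   elastomer tan(delta) = {e2:5.2f}")
```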
Procedia PDF Downloads 115
56 Heritage, Cultural Events and Promises for Better Future: Media Strategies for Attracting Tourism during the Arab Spring Uprisings
Authors: Eli Avraham
Abstract:
The Arab Spring was widely covered in the global media, and the number of Western tourists traveling to the area began to fall. The goal of this study was to analyze which media strategies marketers in Middle Eastern countries chose to employ in their attempts to repair the negative image of the area in the wake of the Arab Spring. Several studies were published concerning image-restoration strategies of destinations during crises around the globe; however, these strategies were not part of an overarching theory, conceptual framework or model from the fields of crisis communication and image repair. The conceptual framework used in the current study was the ‘multi-step model for altering place image’, which offers three types of strategies: source, message and audience. Three research questions were used: 1. What public relations crisis techniques and advertising campaign components were used? 2. What media policies and relationships with the international media were adopted by Arab officials? 3. Which marketing initiatives (such as cultural and sports events) were promoted? This study is based on qualitative content analysis of four types of data: (1) advertising components (slogans, visuals and text); (2) press interviews with Middle Eastern officials and marketers; (3) official media policy adopted by government decision-makers (e.g., boycotting or arresting newspeople); and (4) marketing initiatives (e.g., organizing heritage festivals and cultural events). The data were located in three channels from December 2010, when the events started, to September 30, 2013: (1) Internet and video-sharing websites: YouTube and Middle Eastern countries' national tourism board websites; (2) News reports from two international media outlets, The New York Times and Ha’aretz; these are considered quality newspapers that focus on foreign news and tend to criticize institutions; (3) Global tourism news websites: eTurbo news and ‘Cities and countries branding’. Using the ‘multi-step model for altering place image’, the analysis reveals that Middle Eastern marketers and officials used three kinds of strategies to repair their countries' negative image: 1. Source (cooperation and media relations; complying, threatening and blocking the media; and finding alternatives to the traditional media); 2. Message (ignoring, limiting, narrowing or reducing the scale of the crisis; acknowledging the negative effect of an event’s coverage and assuring a better future; promotion of multiple facets, exhibitions and softening the ‘hard’ image; hosting spotlight sporting and cultural events; spinning liabilities into assets; geographic dissociation from the Middle East region; ridiculing the existing stereotype); and 3. Audience (changing the target audience by addressing others; emphasizing similarities and relevance to specific target audiences). It appears that dealing with their image problems will continue to be a challenge for officials and marketers of Middle Eastern countries until the region stabilizes and its regional conflicts are resolved.Keywords: Arab spring, cultural events, image repair, Middle East, tourism marketing
Procedia PDF Downloads 286
55 Fuzzy Data, Random Drift, and a Theoretical Model for the Sequential Emergence of Religious Capacity in Genus Homo
Authors: Margaret Boone Rappaport, Christopher J. Corbally
Abstract:
The ancient ape ancestral population from which living great ape and human species evolved had demographic features affecting their evolution. The population was large, had great genetic variability, and natural selection was effective at honing adaptations. The emerging populations of chimpanzees and humans were affected more by founder effects and genetic drift because they were smaller. Natural selection did not disappear, but it was not as strong. Consequences of the 'population crash' and the human effective population size are introduced briefly. The history of the ancient apes is written in the genomes of living humans and great apes. The expansion of the brain began before the human line emerged. Coalescence times for some genes are very old – up to several million years, long before Homo sapiens. The mismatch between gene trees and species trees highlights the anthropoid speciation processes, and gives the human genome history a fuzzy, probabilistic quality. However, it suggests traits that might form a foundation for capacities emerging later. A theoretical model is presented in which the genomes of early ape populations provide the substructure for the emergence of religious capacity later on the human line. The model does not search for religion, but its foundations. It suggests a course by which an evolutionary line that began with prosimians eventually produced a human species with biologically based religious capacity. The model of the sequential emergence of religious capacity relies on cognitive science, neuroscience, paleoneurology, primate field studies, cognitive archaeology, genomics, and population genetics. And, it emphasizes five trait types: (1) Documented, positive selection of sensory capabilities on the human line may have favored survival, but also eventually enriched human religious experience. (2) The bonobo model suggests a possible down-regulation of aggression and increase in tolerance while feeding, as well as paedomorphism – but, in a human species that remains cognitively sharp (unlike the bonobo). The two species emerged from the same ancient ape population, so it is logical to search for shared traits. (3) An up-regulation of emotional sensitivity and compassion seems to have occurred on the human line. This finds support in modern genetic studies. (4) The authors’ published model of morality's emergence in Homo erectus encompasses a cognitively based, decision-making capacity that was hypothetically overtaken, in part, by religious capacity. Together, they produced a strong, variable, biocultural capability to support human sociability. (5) The full flowering of human religious capacity came with the parietal expansion and smaller face (klinorhynchy) found only in Homo sapiens. Details from paleoneurology suggest the stage was set for human theologies. Larger parietal lobes allowed humans to imagine inner spaces, processes, and beings, and, with the frontal lobe, led to the first theologies composed of structured and integrated theories of the relationships between humans and the supernatural. The model leads to the evolution of a small population of African hominins that was ready to emerge with religious capacity when the species Homo sapiens evolved two hundred thousand years ago. By 50-60,000 years ago, when human ancestors left Africa, they were fully enabled.Keywords: genetic drift, genomics, parietal expansion, religious capacity
Procedia PDF Downloads 343