Search results for: engineering applications
2150 Video Games Technologies Approach for Their Use in the Classroom
Authors: Daniel Vargas-Herrera, Ivette Caldelas, Fernando Brambila-Paz, Rodrigo Montufar-Chaveznava
Abstract:
In this paper, we present the advances corresponding to the implementation of a set of educational materials based on video games technologies. Essentially, these materials correspond to projects developed, or under development, as bachelor theses of Computer Engineering students of the Engineering School. All materials are based on the Unity SDK, integrating devices such as the Kinect, Leap Motion, Oculus Rift, data gloves, and Google Cardboard. In detail, we present a virtual reality application for neuroscience students (suitable for neural rehabilitation) and virtual scenes for the Google Cardboard, which will be used by psychology students for phobia treatment. The objective is to host these materials on a server so that they are available to all students, in the classroom or in the cloud, considering that smartphone use has become widespread among students.
Keywords: virtual reality, interactive technologies, video games, educational materials
Procedia PDF Downloads 657
2149 Angiopermissive Foamed and Fibrillar Scaffolds for Vascular Graft Applications
Authors: Deon Bezuidenhout
Abstract:
Pre-seeding with autologous endothelial cells improves the long-term patency of synthetic vascular grafts to levels obtained with autografts, but is limited to single centres due to resource, time and other constraints. Spontaneous in vivo endothelialization would obviate the need for pre-seeding, but has been shown to be absent in man due to limited transanastomotic and fallout healing, and the lack of transmural ingrowth due to insufficient porosity. Two types of graft scaffolds with increased interconnected porosity for improved tissue ingrowth and healing are thus proposed and described. Foam-type polyurethane (PU) scaffolds with small, medium and large, interconnected pores were made by phase inversion and spherical porogen extraction, with and without additional surface modification with covalently attached heparin and subsequent loading with and delivery of growth factors. Fibrillar scaffolds were made either by standard electrospinning using degradable PU (Degrapol®), or by dual electrospinning using non-degradable PU. The latter process involves sacrificial fibres that are co-spun with structural fibres and subsequently removed to increase porosity and pore size. Degrapol samples were subjected to in vitro degradation, and all scaffold types were evaluated in vivo for tissue ingrowth and vascularization using a rat subcutaneous model. The foam scaffolds were additionally evaluated in a circulatory (rat infrarenal aortic interposition) model that allows the grafts to be anastomotically and/or ablumenally isolated to discern and determine the endothelialization mode. Foam-type grafts with large (150 µm) pores showed improved subcutaneous healing in terms of vascularization and inflammatory response over smaller pore sizes (60 and 90 µm), and vascularization of the large-porosity scaffolds was significantly increased, by more than 70% with heparin modification alone, and by 150% to 400% when combined with growth factors. In the circulatory model, extensive transmural endothelialization (95±10% at 12 weeks) was achieved. Fallout healing was shown to be sporadic and limited in groups that were ablumenally isolated to prevent transmural ingrowth (16±30% wrapped vs. 80±20% control; p<0.002). Heparinization and GF delivery improved both mural vascularization and lumenal endothelialization. Degrapol electrospun scaffolds showed a decrease in molecular mass and corresponding tensile strength over the first 2 weeks, but very little decrease in mass over the 4-week test period. Studies on the effect of tissue ingrowth, with and without concomitant degradation of the scaffolds, are being used to develop material models for finite element modelling. In the case of the dual-spun scaffolds, the PU fibre fraction could be controlled and was shown to vary linearly with porosity (P = −0.18·FF + 93.5, r² = 0.91), which in turn showed an inverse linear correlation with tensile strength and elastic modulus (r² > 0.96). Calculated compliance and burst pressures of the scaffolds increased with fibre fraction, and compliances matching the human popliteal artery (5-10 %/100 mmHg) and high burst pressures (> 2000 mmHg) could be achieved. Increasing porosity (76 to 82 and 90%) resulted in increased tissue ingrowth from 33±7 to 77±20 and 98±1% after 28 days.
Transmural endothelialization of highly porous foamed grafts is achievable in a circulatory model, and the enhancement of porosity and tissue ingrowth may hold the key to the development of spontaneously endothelializing electrospun grafts.
Keywords: electrospinning, endothelialization, porosity, scaffold, vascular graft
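As a quick numerical illustration of the fibre-fraction/porosity fit quoted above, the sketch below evaluates the reported linear relation and inverts it to pick a fibre fraction for a target porosity. The helper names are ours; only the two fitted coefficients come from the abstract.

```python
# Minimal sketch of the reported porosity/fibre-fraction relation:
# P = -0.18*FF + 93.5 (r^2 = 0.91). The inverse mapping is an
# illustrative convenience, not the authors' code.

def porosity_from_fibre_fraction(ff_percent: float) -> float:
    """Scaffold porosity (%) predicted from PU fibre fraction (%)."""
    return -0.18 * ff_percent + 93.5

def fibre_fraction_for_porosity(p_percent: float) -> float:
    """Invert the linear fit to choose a fibre fraction for a target porosity."""
    return (93.5 - p_percent) / 0.18

for p in (76.0, 82.0, 90.0):  # porosities evaluated in vivo in the abstract
    print(f"porosity {p:.0f}% -> fibre fraction ~{fibre_fraction_for_porosity(p):.0f}%")
```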
Procedia PDF Downloads 296
2148 The Dynamic Nexus of Public Health and Journalism in Informed Societies
Authors: Ali Raza
Abstract:
The dynamic landscape of communication has brought about significant advancements that intersect with the realms of public health and journalism. This abstract explores the evolving synergy between these fields, highlighting how their intersection has contributed to informed societies and improved public health outcomes. In the digital age, communication plays a pivotal role in shaping public perception, policy formulation, and collective action. Public health, concerned with safeguarding and improving community well-being, relies on effective communication to disseminate information, encourage healthy behaviors, and mitigate health risks. Simultaneously, journalism, with its commitment to accurate and timely reporting, serves as the conduit through which health information reaches the masses. Advancements in communication technologies have revolutionized the ways in which public health information is both generated and shared. The advent of social media platforms, mobile applications, and online forums has democratized the dissemination of health-related news and insights. This democratization, however, brings challenges, such as the rapid spread of misinformation and the need for nuanced strategies to engage diverse audiences. Effective collaboration between public health professionals and journalists is pivotal in countering these challenges, ensuring that accurate information prevails. The synergy between public health and journalism is most evident during public health crises. The COVID-19 pandemic underscored the pivotal role of journalism in providing accurate and up-to-date information to the public. However, it also highlighted the importance of responsible reporting, as sensationalism and misinformation could exacerbate the crisis. Collaborative efforts between public health experts and journalists led to the amplification of preventive measures, the debunking of myths, and the promotion of evidence-based interventions. Moreover, the accessibility of information in the digital era necessitates a strategic approach to health communication. Behavioral economics and data analytics offer insights into human decision-making and allow tailored health messages to resonate more effectively with specific audiences. This approach, when integrated into journalism, enables the crafting of narratives that not only inform but also influence positive health behaviors. Ethical considerations emerge prominently in this alliance. The responsibility to balance the public's right to know with the potential consequences of sensational reporting underscores the significance of ethical journalism. Health journalists must meticulously source information from reputable experts and institutions to maintain credibility, thus fortifying the bridge between public health and the public. As both public health and journalism undergo transformative shifts, fostering collaboration between these domains becomes essential. Training programs that familiarize journalists with public health concepts and practices can enhance their capacity to report accurately and comprehensively on health issues. Likewise, public health professionals can gain insights into effective communication strategies from seasoned journalists, ensuring that health information reaches a wider audience. In conclusion, the convergence of public health and journalism, facilitated by communication advancements, is a cornerstone of informed societies. 
Effective communication strategies, driven by collaboration, ensure the accurate dissemination of health information and foster positive behavior change. As the world navigates complex health challenges, the continued evolution of this synergy holds the promise of healthier communities and a more engaged and educated public.
Keywords: public awareness, journalism ethics, health promotion, media influence, health literacy
Procedia PDF Downloads 70
2147 Empowering Women Entrepreneurs in Rural India through Developing Online Communities of Purpose Using Social Technologies
Authors: Jayanta Basak, Somprakash Bandyopadhyay, Parama Bhaumik, Siuli Roy
Abstract:
To solve the life- and livelihood-related problems of socially and economically backward rural women in India, several Women Self-Help Groups (WSHG) have been formed in Indian villages. WSHGs are micro-communities (with 10 to 15 members) within a village community. WSHGs have been conceived not just to promote savings and provide credit, but also to act as a vehicle of change through the creation of women micro-entrepreneurs at the village level. However, in spite of the huge investment and the number of people involved in the whole process, success is still limited. Most of these entrepreneurial activities happen in small household workspaces where sales are limited to inconsistent and unpredictable local markets. As a result, these entrepreneurs are perennially trapped in the vicious cycle of low risk-taking ability, low investment capacity, low productivity, weak market linkages and low revenue. Market separation, including customer-producer separation, is one of the key problems in this domain. Researchers suggest that there are four types of market separation: (i) spatial, (ii) financial, (iii) temporal, and (iv) informational, which in turn impact the nature of markets and marketing. In this context, a large group of intermediaries (the 'middlemen') plays an important role in reducing the factors that separate markets by utilizing the resources of rural entrepreneurs and their products, and thus accelerates market development. The rural entrepreneurs are heavily dependent on these middlemen for marketing their products, and the middlemen exploit rural entrepreneurs by creating a huge informational separation between the rural producers and end-consumers in the market, thus hiding the profit margins. The objective of this study is to develop transparent online communities of purpose among rural and urban entrepreneurs using internet and Web 2.0 technologies in order to decrease market separation and improve mutual awareness of available and potential products and market demands. Communities of purpose are groups of people who have an ability to influence, can share knowledge and learn from others, and are committed to achieving a common purpose. In this study, a cluster of SHG women located in the village of Kandi in West Bengal, India, has been studied closely for six months. These women are primarily engaged in producing garments, soft toys, fabric painting on clothes, etc. They were equipped with internet-enabled smartphones on which they can use chat applications in the local language and common social networking websites like Facebook, Instagram, etc. A few handicraft experts and micro-entrepreneurs from the city (the 'seed') were included in their mobile messaging app group, enabling the creation of a 'community of purpose' in order to share thoughts and ideas on product designs, market trends, and practices, and thus decrease the rural-urban market separation. After six months of regular group interaction in the mobile messaging app among these rural-urban community members, it is observed that the SHG women are now empowered to share their product images and design ideas, and to showcase and promote their products in the global marketplace using common social networking websites, through which they can also enhance and augment their community of purpose.
Keywords: communities of purpose, market separation, self-help group, social technologies
Procedia PDF Downloads 255
2146 Advancement in Adhesion and Osteogenesis of Stem Cells with Histatin Coated 3D-Printed Bio-Ceramics
Authors: Haiyan Wang, Dongyun Wang, Yongyong Yan, Richard T. Jaspers, Gang Wu
Abstract:
Mesenchymal stem cell and 3D printing-based bone tissue engineering presents a promising technique to repair large-volume bone defects. Its success is highly dependent on cell attachment, spreading, osteogenic differentiation, and in vivo survival of stem cells on 3D-printed scaffolds. In this study, human salivary histatin-1 (Hst1) was utilized to enhance the interactions between human adipose-derived stem cells (hASCs) and 3D-printed β-tricalcium phosphate (β-TCP) bioceramic scaffolds. Fluorescent images showed that Hst1 significantly enhanced the adhesion of hASCs to both bioinert glass and the 3D-printed β-TCP scaffold. In addition, Hst1 was associated with significantly higher proliferation and osteogenic differentiation of hASCs on 3D-printed β-TCP scaffolds. Moreover, coating 3D-printed β-TCP scaffolds with Hst1 significantly promotes the survival of hASCs in vivo. ERK and p38, but not JNK, signaling was found to be involved in the superior adhesion of hASCs to β-TCP scaffolds with the aid of Hst1. In conclusion, Hst1 could significantly promote the adhesion, spreading, osteogenic differentiation, and in vivo survival of hASCs on 3D-printed β-TCP scaffolds, showing promise for application in stem cell/3D printing-based constructs for bone tissue engineering.
Keywords: 3D printing, adipose-derived stem cells, bone tissue engineering, histatin-1, osteogenesis
Procedia PDF Downloads 63
2145 The Optimization of Topical Antineoplastic Therapy Using Controlled Release Systems Based on Amino-functionalized Mesoporous Silica
Authors: Lacramioara Ochiuz, Aurelia Vasile, Iulian Stoleriu, Cristina Ghiciuc, Maria Ignat
Abstract:
Topical administration of chemotherapeutic agents (e.g., carmustine, bexarotene, mechlorethamine, etc.) in the local treatment of cutaneous T-cell lymphoma (CTCL) is accompanied by multiple side effects, such as contact hypersensitivity, pruritus, skin atrophy or even secondary malignancies. A known method of reducing the side effects of anticancer agents is the development of modified drug release systems based on drug encapsulation in biocompatible nanoporous inorganic matrices, such as mesoporous MCM-41 silica. Mesoporous MCM-41 silica is characterized by a large specific surface area, high pore volume, uniform porosity, stable dispersion in aqueous medium, excellent biocompatibility, in vivo biodegradability and the capacity to be functionalized with different organic groups. Therefore, MCM-41 is an attractive candidate for a wide range of biomedical applications, such as controlled drug release, bone regeneration, and the immobilization of proteins, enzymes, etc. The main advantage of this material lies in its ability to host a large amount of the active substance in a uniform pore system with adjustable size in the mesoscopic range. Silanol groups allow controlled surface functionalization, leading to control of drug loading and release. This study shows (i) the amino-grafting optimization of the mesoporous MCM-41 silica matrix by means of co-condensation during synthesis and by post-synthesis grafting using APTES (3-aminopropyltriethoxysilane); (ii) the loading of the therapeutic agent (carmustine) to obtain modified drug release systems; (iii) the determination of the profile of in vitro carmustine release from these systems; (iv) the assessment of carmustine release kinetics by fitting to four mathematical models. The obtained powders were characterized in terms of structure, texture, morphology and thermogravimetric analysis. The concentration of the therapeutic agent in the dissolution medium was determined by an HPLC method. In vitro dissolution tests were performed using an Enhancer cell over a 12-hour interval. The analysis of carmustine release kinetics from the mesoporous systems was made by fitting to the zero-order model, the first-order model, the Higuchi model and the Korsmeyer-Peppas model, respectively. Results showed that both types of highly ordered mesoporous silica (amino-grafted by the co-condensation process or post-synthesis) are thermally stable in aqueous medium. Regarding the degree and efficiency of loading with the therapeutic agent, an increase of around 10% was noticed when the co-condensation method was applied. This result shows that direct co-condensation leads to an even distribution of amino groups on the pore walls, while in the case of post-synthesis grafting many amino groups are concentrated near the pore openings and/or on the external surface. In vitro dissolution tests showed an extended carmustine release (more than 86% m/m) both from systems based on silica functionalized directly by co-condensation and post-synthesis. Assessment of carmustine release kinetics revealed diffusion-controlled release from all studied systems, as indicated by the fit to the Higuchi model. The results of this study proved that amino-functionalized mesoporous silica may be used as a matrix for optimizing topical anti-cancer therapy by loading carmustine and developing prolonged-release systems.
Keywords: carmustine, silica, controlled release
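To make the fitting step concrete, the sketch below fits a cumulative-release curve to the Higuchi model Q(t) = k_H·√t with SciPy. The time points and release values are placeholders, not the study's data; only the model form comes from the abstract.

```python
# Hedged sketch of the release-kinetics fitting step: hypothetical
# cumulative-release data fitted to the Higuchi model Q(t) = k_H * sqrt(t).
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1, 2, 4, 6, 8, 10, 12])       # hours (hypothetical)
q = np.array([20, 30, 42, 58, 70, 78, 83, 87])   # % carmustine released (hypothetical)

def higuchi(t, k_h):
    return k_h * np.sqrt(t)

(k_h,), _ = curve_fit(higuchi, t, q)             # least-squares fit of k_H
ss_res = np.sum((q - higuchi(t, k_h)) ** 2)
ss_tot = np.sum((q - q.mean()) ** 2)
print(f"k_H = {k_h:.2f} %/h^0.5, R^2 = {1 - ss_res / ss_tot:.3f}")
```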
Procedia PDF Downloads 264
2144 Theoretical Analysis of Performance Parameters of a Microchannel Heat Exchanger
Authors: Shreyas Kotian, Nishant Jain, Nachiket Methekar
Abstract:
The increase in energy demand in various industrial sectors has called for devices small in size with high heat transfer rates. Microchannel heat exchangers (MCHX) have thus been studied and applied in various fields such as thermal engineering, aerospace engineering and nanoscale heat transfer. They have been a subject of investigation due to their augmented thermal characteristics and low pressure drop. The goal of the current investigation is to analyze the thermohydraulic performance of the heat exchanger analytically. Studies were done for various inlet and flow conditions. At a hot-fluid inlet temperature (Thi) of 90°C, the effectiveness increased by about 22% for an increase in Re of the cold fluid from 1000 to 5000. It was also observed that at Re = 5000 for the hot fluid, the heat recovered from the hot fluid increases by about 69% for an increase in its inlet temperature from 50°C to 70°C.
Keywords: theoretical analysis, performance parameters, microchannel heat exchanger, Reynolds number
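For reference, the sketch below evaluates the standard effectiveness definition such an analysis rests on, ε = q_actual/q_max. All temperatures and capacity rates are illustrative assumptions, not the paper's operating points.

```python
# Minimal sketch of heat-exchanger effectiveness, assuming the usual
# definition eps = q_actual / q_max; values below are illustrative only.

def effectiveness(t_hi, t_ho, t_ci, c_hot, c_cold):
    """Effectiveness from inlet/outlet temperatures (deg C) and
    capacity rates C = m_dot * c_p (W/K)."""
    q_actual = c_hot * (t_hi - t_ho)              # heat given up by hot stream
    q_max = min(c_hot, c_cold) * (t_hi - t_ci)    # thermodynamic maximum
    return q_actual / q_max

print(effectiveness(t_hi=90.0, t_ho=55.0, t_ci=25.0, c_hot=10.0, c_cold=12.0))
```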
Procedia PDF Downloads 152
2143 Evaluation of Human Amnion Hemocompatibility as a Substitute for Vessels
Authors: Ghasem Yazdanpanah, Mona Kakavand, Hassan Niknejad
Abstract:
Objectives: An important issue in tissue engineering (TE) is hemocompatibility. Currently engineered vessels are seriously at risk of thrombus formation and stenosis. Amnion (AM) is the innermost layer of the fetal membranes and consists of epithelial and mesenchymal sides. It has the advantages of low immunogenicity, anti-inflammatory and anti-bacterial properties, as well as good mechanical properties. We recently introduced the amnion as a natural biomaterial for tissue engineering. In this study, we have evaluated the hemocompatibility of amnion as a potential biomaterial for tissue engineering. Materials and Methods: Amnions were derived from placentas of elective caesarean deliveries at gestational ages of 36 to 38 weeks. Extracted amnions were washed with cold PBS to remove blood remnants. Blood samples were obtained from healthy adult volunteers who had not previously taken anticoagulants. The blood samples were maintained in sterile tubes containing sodium citrate. Plasma or platelet-rich plasma (PRP) was collected by centrifuging the blood samples at 600 g for 10 min. Hemocompatibility of the AM samples (n=7) was evaluated by measuring activated partial thromboplastin time (aPTT), prothrombin time (PT), hemolysis, and platelet aggregation. P-selectin was also assessed by ELISA. Both the epithelial and mesenchymal sides of the amnion were evaluated. Glass slides and expanded polytetrafluoroethylene (ePTFE) samples were defined as controls. Results: In comparison with glass as control (13.3 ± 0.7 s), prothrombin time was increased significantly when each side of the amnion was in contact with plasma (p<0.05). There was no significant difference in PT between the epithelial and mesenchymal surfaces (17.4 ± 0.7 s vs. 15.8 ± 0.7 s, respectively). However, aPTT was not significantly changed after incubation of plasma with the amnion epithelial surface, the mesenchymal surface, or glass (28.61 ± 1.39 s, 31.4 ± 2.66 s, and 30.76 ± 2.53 s, respectively, p>0.05). Amnion surfaces, ePTFE and glass samples induced considerably less hemolysis than water (p<0.001), with no differences detected among them. Platelet aggregation measurements showed that platelets were less stimulated by the amnion epithelial and mesenchymal sides in comparison with ePTFE and glass. In addition, the reduction in the amount of P-selectin, a platelet activation factor, after incubation of samples with PRP indicated that amnion has a smaller stimulatory effect on platelets than ePTFE and glass. Conclusion: Amnion as a natural biomaterial has the potential to be used in tissue engineering. Our results suggest that amnion has appropriate hemocompatibility to be employed as a vascular substitute.
Keywords: amnion, hemocompatibility, tissue engineering, biomaterial
Procedia PDF Downloads 395
2142 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects
Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm
Abstract:
Soft robots have drawn great interest recently due to the rich range of possible shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to enable the integration of a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model, based on Euler-Bernoulli beam theory, for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot. Nominally (unpowered), the robot lies flat on the ground; when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, which is necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing its analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to exactly determine which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations. (ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot. From this, the spatial distribution of friction forces between the ground and the robot can be determined. (iii) By combining this spatial friction distribution with the shape of the soft robot, as a function of time as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm-thick steel foil. Piezoelectric properties of commercially available thin-film piezoelectric actuators were assumed. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modelled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each of which consisted of two rigid arms with appropriate mass connected by a 'motor' whose torque was set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown both for the full deformation shape and for the motion of the robot.
Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology
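A toy sketch of the shape-construction idea follows: each segment's curvature is taken as proportional to its actuator voltage, slope and position are integrated segment by segment, and segments that would dip below the ground are clamped flat. The gain, segment length, and voltages are invented, and the clamping is a crude stand-in for the paper's self-consistent lift-off determination.

```python
# Simplified shape integration for a segmented piezo-actuated sheet.
# Curvature-per-volt gain and voltages are illustrative assumptions.
import numpy as np

seg_len = 0.01                      # m, segment length (illustrative)
kappa_per_volt = 0.5                # 1/m per volt (illustrative gain)
voltages = np.array([0, 30, 30, -30, -60, -30, 0, 0], dtype=float)

theta, x, z = 0.0, 0.0, 0.0
xs, zs = [x], [z]
for v in voltages:
    theta += kappa_per_volt * v * seg_len   # curvature integrates into slope
    x += seg_len * np.cos(theta)
    z += seg_len * np.sin(theta)
    if z < 0.0:                             # ground contact: flatten segment
        z, theta = 0.0, 0.0
    xs.append(x)
    zs.append(z)

print([(round(a, 4), round(b, 5)) for a, b in zip(xs, zs)])
```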
Procedia PDF Downloads 180
2141 Improvements and Implementation Solutions to Reduce the Computational Load for Traffic Situational Awareness with Alerts (TSAA)
Authors: Salvatore Luongo, Carlo Luongo
Abstract:
This paper discusses implementation solutions to reduce the computational load for the Traffic Situational Awareness with Alerts (TSAA) application, based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology. In 2008, there were 23 total mid-air collisions involving general aviation fixed-wing aircraft, 6 of which were fatal, leading to 21 fatalities. These collisions occurred during visual meteorological conditions, indicating the limitations of the see-and-avoid concept for mid-air collision avoidance as defined by the Federal Aviation Administration (FAA). Commercial aviation aircraft are already equipped with a collision avoidance system called TCAS, which is based on classic transponder technology. This system dramatically reduced the number of mid-air collisions involving air transport aircraft. In general aviation, the same reduction in mid-air collisions has not occurred, so this reduction is the main objective of the TSAA application. The major difference between the original conflict detection application and the TSAA application is that conflict detection is focused on preventing loss of separation in en-route environments, whereas TSAA is devoted to reducing the probability of mid-air collision in all phases of flight. The TSAA application increases the flight crew's traffic situational awareness by providing alerts for traffic detected in conflict with ownship, in support of the see-and-avoid responsibility. Considerable effort has been spent on the design process and the code generation in order to maximize efficiency and performance in terms of computational load and memory consumption. The TSAA architecture is divided into two high-level systems: the “Threats database” and the “Conflict detector”. The first receives traffic data from the ADS-B device and stores each target's data history. The conflict detector module estimates ownship and target trajectories in order to detect possible future loss of separation between ownship and each target. Finally, the alerts are verified by additional conflict verification logic, in order to prevent possible undesirable behaviors of the alert flag. To reduce the computational load, a pre-check evaluation module is used. This pre-check is only a computational optimization, so the performance of the conflict detector system is not modified in terms of the number of alerts detected. The pre-check module uses analytical trajectory propagation for both the target and ownship. This allows greater accuracy and avoids step-by-step propagation, which requires a higher computational load. Furthermore, the pre-check permits the exclusion of targets that are certainly not threats, using an analytical and efficient geometrical approach, in order to decrease the computational load for the following modules. This software improvement is not suggested by FAA documents, and so it is the main innovation of this work. The efficiency and efficacy of this enhancement are verified using fast-time and real-time simulations and by execution on a real device in several FAA scenarios. The final implementation also permits FAA software certification in compliance with the DO-178B standard.
The computational load reduction also allows the installation of the TSAA application on devices hosting multiple applications and/or having low capacity in terms of available memory and computational capability.
Keywords: traffic situation awareness, general aviation, aircraft conflict detection, computational load reduction, implementation solutions, software certification
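As an illustration of the kind of analytical geometric pre-check described above, the sketch below implements a straight-line closest-point-of-approach test that discards targets which cannot enter a protection radius within the look-ahead window. The thresholds and the filter itself are our assumptions for illustration, not the certified TSAA logic.

```python
# Hedged sketch of a geometric pre-filter: a closest-point-of-approach (CPA)
# test over a look-ahead window. Thresholds are illustrative, not TSAA's.
import numpy as np

def is_possible_threat(p_rel, v_rel, horiz_limit=926.0, lookahead=35.0):
    """p_rel, v_rel: 2D relative position (m) and velocity (m/s) of a target.
    Returns False only when the target certainly stays outside the limit."""
    v2 = np.dot(v_rel, v_rel)
    t_cpa = 0.0 if v2 == 0.0 else -np.dot(p_rel, v_rel) / v2
    t_cpa = min(max(t_cpa, 0.0), lookahead)      # restrict to look-ahead window
    d_cpa = np.linalg.norm(p_rel + v_rel * t_cpa)
    return d_cpa < horiz_limit

# Target 5 km away, converging but passing 1.5 km abeam: excluded as a threat.
print(is_possible_threat(np.array([3000.0, 4000.0]), np.array([-60.0, -80.0])))
```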
Procedia PDF Downloads 285
2140 ChatGPT 4.0 Demonstrates Strong Performance in Standardised Medical Licensing Examinations: Insights and Implications for Medical Educators
Authors: K. O'Malley
Abstract:
Background: The emergence and rapid evolution of large language models (LLMs), i.e., models of generative artificial intelligence (AI), have been unprecedented. ChatGPT is one of the most widely used LLM platforms. Using natural language processing technology, it generates customized responses to user prompts, enabling it to mimic human conversation. Responses are generated using predictive modeling of vast swathes of internet text and data and are further refined and reinforced through user feedback. The popularity of LLMs is increasing, with a growing number of students utilizing these platforms for study and revision purposes. Notwithstanding its many novel applications, LLM technology is inherently susceptible to bias and error. This poses a significant challenge in the educational setting, where academic integrity may be undermined. This study aims to evaluate the performance of the latest iteration of ChatGPT (ChatGPT 4.0) in standardized state medical licensing examinations. Methods: A considered search strategy was used to interrogate the PubMed electronic database. The keywords ‘ChatGPT’ AND ‘medical education’ OR ‘medical school’ OR ‘medical licensing exam’ were used to identify relevant literature. The search included all peer-reviewed literature published in the past five years and was limited to publications in the English language. Eligibility was ascertained based on the study title and abstract and confirmed by consulting the full-text document. Data were extracted into a Microsoft Excel document for analysis. Results: The search yielded 345 publications that were screened. 225 original articles were identified, of which 11 met the pre-determined criteria for inclusion in a narrative synthesis. These studies included performance assessments in national medical licensing examinations from the United States, United Kingdom, Saudi Arabia, Poland, Taiwan, Japan and Germany. ChatGPT 4.0 achieved scores ranging from 67.1 to 88.6 percent. The mean score across all studies was 82.49 percent (SD = 5.95). In all studies, ChatGPT exceeded the threshold for a passing grade in the corresponding exam. Conclusion: The capabilities of ChatGPT in standardized academic assessment in medicine are robust. While this technology can potentially revolutionize higher education, it also presents several challenges with which educators have not had to contend before. The overall strong performance of ChatGPT, as outlined above, may lend itself to unfair use (such as the plagiarism of deliverable coursework) and pose unforeseen ethical challenges (arising from algorithmic bias). Conversely, it highlights potential pitfalls if users assume LLM-generated content to be entirely accurate. In the aforementioned studies, ChatGPT exhibited a margin of error between 11.4 and 32.9 percent, which resonates strongly with concerns regarding the quality and veracity of LLM-generated content. It is imperative to highlight these limitations, particularly to students in the early stages of their education, who are less likely to possess the requisite insight or knowledge to recognize errors, inaccuracies or false information. Educators must inform themselves of these emerging challenges to effectively address them and mitigate potential disruption in academic fora.
Keywords: artificial intelligence, ChatGPT, generative AI, large language models, licensing exam, medical education, medicine, university
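For readers wishing to reproduce a search of this shape, a hedged sketch using Biopython's Entrez wrapper is given below. The e-mail address, date window, and retmax value are assumptions; only the Boolean keyword term mirrors the abstract.

```python
# Sketch of the PubMed query described above via Biopython's E-utilities
# wrapper. Email and date range are hypothetical placeholders.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address

term = ('"ChatGPT" AND ("medical education" OR "medical school" '
        'OR "medical licensing exam")')
handle = Entrez.esearch(db="pubmed", term=term, datetype="pdat",
                        mindate="2019", maxdate="2024", retmax=500)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "records;", len(record["IdList"]), "PMIDs retrieved")
```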
Procedia PDF Downloads 32
2139 Restructuring of Embedded System Design Course: Making It Industry Compliant
Authors: Geetishree Mishra, S. Akhila
Abstract:
Embedded System Design, the most challenging course in electronics engineering, has always been appreciated and well acclaimed by students of electronics and its related branches of engineering. An embedded system, being a product of multiple application domains, necessitates skilled manpower for it to be well designed and tested in every important aspect of both hardware and software. In the current industrial scenario, the requirements are even more rigorous and highly demanding, and need to be on par with advanced technologies. Fresh engineers are expected to be thoroughly groomed by the academic system and the teaching community. Graduates with the ability to understand complex technological processes and with strong technical skills are increasingly sought after in today's embedded industry. So, the need of the day is to restructure the undergraduate course (both theory and lab practice) along with the teaching methodologies to meet industrial requirements. This paper focuses on the importance of such a need in the present education system.
Keywords: embedded system design, industry requirement, syllabus restructuring, project-based learning, teaching methodology
Procedia PDF Downloads 662
2138 Voices of Dissent: Case Study of a Digital Archive of Testimonies of Political Oppression
Authors: Andrea Scapolo, Zaya Rustamova, Arturo Matute Castro
Abstract:
The “Voices in Dissent” initiative aims to collect and make available in a digital format testimonies, letters, and other narratives produced by victims of political oppression from different geographical spaces across the Atlantic. By recovering silenced voices behind the official narratives, this open-access online database will provide indispensable tools for rewriting the history of authoritarian regimes from the margins as memory debates continue to provoke controversy among academic and popular transnational circles. In providing an extensive database of non-hegemonic discourses in a variety of political and social contexts, the project will complement existing European and Latin-American studies and invite further interdisciplinary and transnational research. This digital resource will be available to academic communities and the general audience and will be organized geographically and chronologically. “Voices in Dissent” will offer a first comprehensive study of these personal accounts of persecution and repression against their specific historical backgrounds, and of their impact on collective memory formation in contemporary societies. The digitization of these texts will allow researchers to run metadata analyses and adopt comparative approaches for a broad range of research endeavors. Most of the testimonies included in our archive are testimonies of trauma: the trauma of exile, imprisonment, torture, humiliation, censorship. Research on trauma has now reached critical mass and offers a broad spectrum of critical perspectives. By bringing together testimonies from different geographical and historical contexts, our project will provide readers and scholars with an extraordinary opportunity to investigate how culture shapes individual and collective memories and provides or denies resources to make sense of and cope with trauma. For scholars dealing with the epistemological and rhetorical analysis of testimonies, an online open-access archive will prove particularly beneficial for testing theories on truth status and the formation of belief, as well as for studying the articulation of discourse. An important aspect of this project is also its pedagogical application, since it will contribute to the creation of Open Educational Resources (OER) to support students and educators worldwide. Through collaborations with our library system, the archive will form part of the Digital Commons database. The texts collected in this online archive will be made available in the original languages as well as in English translation. They will be accompanied by a critical apparatus that will contextualize them historically by providing relevant background information and bibliographical references. All these materials can serve as a springboard for a broad variety of educational projects and classroom activities. They can also be used to design specific content courses or modules. In conclusion, the desirable outcomes of the “Voices in Dissent” project are: 1. the collection and digitization of political dissent testimonies; 2. the building of a network of scholars, educators, and learners involved in the design, development, and sustainability of the digital archive; 3. the integration of the content of the archive in both research and teaching endeavors, such as the publication of scholarly articles, the design of new upper-level courses, and the integration of the materials into existing courses.
Keywords: digital archive, dissent, open educational resources, testimonies, transatlantic studies
Procedia PDF Downloads 106
2137 Colloid-Based Biodetection at Aqueous Electrical Interfaces Using Fluidic Dielectrophoresis
Authors: Francesca Crivellari, Nicholas Mavrogiannis, Zachary Gagnon
Abstract:
Portable diagnostic methods have become increasingly important for a number of different purposes: point-of-care screening in developing nations, environmental contamination studies, bio/chemical warfare agent detection, and end-user use for commercial health monitoring. The cheapest and most portable methods currently available are paper-based: lateral flow and dipstick methods are widely available in drug stores for use in pregnancy detection and blood glucose monitoring. These tests are successful because they are cheap to produce, easy to use, and require minimally invasive sampling. While adequate for their intended uses, in the realm of blood-borne pathogens and numerous cancers, these paper-based methods become unreliable, as they lack the nM/pM sensitivity currently achieved by clinical diagnostic methods. Clinical diagnostics, however, utilize techniques involving surface plasmon resonance (SPR) and enzyme-linked immunosorbent assays (ELISAs), which are expensive and unfeasible in terms of portability. To develop a better, competitive biosensor, we must reduce the cost of one, or increase the sensitivity of the other. Electric fields are commonly utilized in microfluidic devices to manipulate particles, biomolecules, and cells. Applications in this area, however, are primarily limited to interfaces formed between immiscible fluids. Miscible, liquid-liquid interfaces are common in microfluidic devices and are easily reproduced with simple geometries. Here, we demonstrate the use of electric fields at liquid-liquid electrical interfaces, known as fluidic dielectrophoresis (fDEP), for biodetection in a microfluidic device. In this work, we apply an AC electric field across concurrent laminar streams with differing conductivities and permittivities to polarize the interface and induce a discernible, near-immediate, frequency-dependent interfacial tilt. We design this aqueous electrical interface, which becomes the biosensing “substrate,” to be intelligent: it “moves” only when a target of interest is present. This motion requires neither labels nor expensive electrical equipment, so the biosensor is inexpensive and portable, yet still capable of sensitive detection. Nanoparticles, due to their high surface-area-to-volume ratio, are often incorporated to enhance the detection capabilities of schemes like SPR and fluorimetric assays. Most studies currently investigate binding at an immobilized solid-liquid or solid-gas interface, where particles are adsorbed onto a planar surface, functionalized with a receptor to create a reactive substrate, and subsequently flushed with a fluid or gas containing the relevant analyte. These approaches typically involve many preparation and rinsing steps and are susceptible to surface fouling. Our microfluidic device continuously flows and renews the “substrate,” and is thus not subject to fouling. In this work, we demonstrate the ability to electrokinetically detect biomolecules binding to functionalized nanoparticles at liquid-liquid interfaces using fDEP. In biotin-streptavidin experiments, we report binding detection limits on the order of 1-10 pM, without amplifying signals or concentrating samples. We also demonstrate the ability to detect this interfacial motion, and thus the presence of binding, using impedance spectroscopy, allowing this scheme to become non-optical, in addition to being label-free.
Keywords: biodetection, dielectrophoresis, microfluidics, nanoparticles
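For orientation on where such a frequency-dependent interfacial response might sit in frequency space, the sketch below evaluates the textbook Maxwell-Wagner relaxation frequency for two adjacent liquid layers of equal thickness. This is a generic estimate under assumed stream properties, not the authors' model of the fDEP tilt.

```python
# Back-of-the-envelope Maxwell-Wagner relaxation frequency for interfacial
# polarization between two co-flowing streams; stream properties are assumed.
import math

EPS0 = 8.854e-12                      # vacuum permittivity, F/m

def mw_relaxation_freq(sigma1, sigma2, eps_r1, eps_r2):
    """Relaxation frequency (Hz) for two adjacent liquid layers of equal
    thickness: f = (sigma1 + sigma2) / (2*pi*(eps1 + eps2))."""
    return (sigma1 + sigma2) / (2.0 * math.pi * EPS0 * (eps_r1 + eps_r2))

# illustrative conductivities (S/m) and relative permittivities of the streams
print(f"{mw_relaxation_freq(0.01, 0.10, 78.0, 78.0):.3e} Hz")
```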
Procedia PDF Downloads 388
2136 Sensorless Machine Parameter-Free Control of Doubly Fed Reluctance Wind Turbine Generator
Authors: Mohammad R. Aghakashkooli, Milutin G. Jovanovic
Abstract:
The brushless doubly-fed reluctance generator (BDFRG) is an emerging, medium-speed alternative to the conventional wound-rotor slip-ring doubly-fed induction generator (DFIG) in wind energy conversion systems (WECS). It can provide competitive overall performance and similarly low failure rates with a typically 30%-rated back-to-back power electronics converter over 2:1 speed ranges, but with the following important reliability and cost advantages over the DFIG: the maintenance-free operation afforded by its brushless structure; 50% synchronous speed with the same number of rotor poles (allowing the use of a more compact and more efficient two-stage gearbox instead of a vulnerable three-stage one); and superior grid integration properties, including simpler protection for low-voltage ride-through compliance of the fractional converter due to the comparatively higher leakage inductances and lower fault currents. Vector-controlled pulse-width-modulated converters generally feature a much lower total harmonic distortion than hysteresis counterparts with variable switching rates, and as such have been the predominant choice for BDFRG (and DFIG) wind turbines. Eliminating the shaft position sensor, which is often required for control implementation in this case, would be desirable to address the associated reliability issues. This fact has largely motivated the recent growing research on sensorless methods and the development of various rotor position and/or speed estimation techniques for this purpose. The main limitation of all the observer-based control approaches for grid-connected wind power applications of the BDFRG reported in the open literature is the requirement for pre-commissioning procedures and prior knowledge of the machine inductances, which are usually difficult to identify accurately by off-line testing. The model reference adaptive system (MRAS) based sensorless vector control scheme to be presented will overcome this shortcoming. The true machine-parameter independence of the proposed field-oriented algorithm, offering robust, inherently decoupled real and reactive power control of the grid-connected winding, is achieved by on-line estimation of the inductance ratio, upon which the underlying rotor angular velocity and position MRAS observer relies. Such an observer configuration will be more practical to implement and clearly preferable to the existing machine-parameter-dependent solutions, especially bearing in mind that with very few modifications it can be adapted for commercial DFIGs, with immediately obvious further industrial benefits and prospects for this work. The excellent encoderless controller performance with maximum power point tracking in the base speed region will be demonstrated by realistic simulation studies using large-scale BDFRG design data and verified by experimental results on a small laboratory prototype of the WECS emulation facility.
Keywords: brushless doubly fed reluctance generator, model reference adaptive system, sensorless vector control, wind energy conversion
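To make the MRAS idea concrete, the sketch below shows the generic adaptation loop such observers use: an error between a reference model (speed-independent) and an adaptive model (speed-dependent) is driven to zero by a PI law whose output is the speed estimate. The gains and the error sequence are invented for illustration; the paper's BDFRG-specific flux/current formulation is not reproduced here.

```python
# Generic MRAS speed-adaptation step: PI law acting on the cross-product
# error between reference and adaptive model outputs. Gains are illustrative.

def mras_speed_step(eps, state, kp=50.0, ki=2000.0, dt=1e-4):
    """One adaptation step. eps: error between the two model outputs;
    state: (integral_term, speed_estimate)."""
    integral, _ = state
    integral += ki * eps * dt
    speed_est = kp * eps + integral        # PI adaptation law
    return integral, speed_est

state = (0.0, 0.0)
for eps in (0.10, 0.06, 0.03, 0.01, 0.0):  # error decaying as models align
    state = mras_speed_step(eps, state)
print(f"estimated speed: {state[1]:.3f} rad/s (illustrative units)")
```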
Procedia PDF Downloads 62
2135 La0.80Ag0.15MnO3 Magnetic Nanoparticles for Self-Controlled Magnetic Fluid Hyperthermia
Authors: Marian Mihalik, Kornel Csach, Martin Kovalik, Matúš Mihalik, Martina Kubovčíková, Maria Zentková, Martin Vavra, Vladimír Girman, Jaroslav Briančin, Marija Perovic, Marija Boškovic, Magdalena Fitta, Robert Pelka
Abstract:
Current nanomaterials for use in biomedicine are based mainly on iron oxides and on present knowledge of magnetic nanostructures. Manganites represent another class of materials that can be used as an alternative. Manganites and their unique electronic properties have been extensively studied in the last decades, not only due to fundamental interest but also due to possible applications of colossal magnetoresistance, the magnetocaloric effect, and ferroelectric properties. It was found that the oxygen-reduction reaction on perovskite oxides is intimately connected with the metal-ion e_g orbital occupation. The effect of oxygen deviation from the stoichiometric composition on the crystal structure was studied very carefully by many authors on LaMnO₃. Depending on oxygen content, the crystal structure changes from orthorhombic to rhombohedral for an oxygen content of 3.1. In the case of hole-doped manganites, the change from the orthorhombic crystal structure, which is typical for La1-xCaxMnO₃-based manganites, to the rhombohedral crystal structure (La1-xMxMnO₃-based materials, where M = K, Ag, and Sr) results in an enormous increase of the Curie temperature. In our paper, we study the effect of oxygen content on the crystal structure, thermal, and magnetic properties (including the magnetocaloric effect) of the La1-xAgxMnO₃ nanoparticle system. The oxygen content of the samples was tuned by heat treatment in different thermal regimes and in various environments (air, oxygen, argon). Water nanosuspensions based on La0.80Ag0.15MnO₃ magnetic particles with a Curie temperature of about 43 °C were prepared by two different approaches: first, by using a laboratory circulation mill for milling the powder in the presence of sodium dodecyl sulphate (SDS) and subsequent centrifugation; second, by using an agate bowl, etching in citric acid and HNO₃, an ultrasound homogeniser, centrifugation, and dextran 40 kDa or 15 kDa as surfactant. The electrostatic stabilisation obtained by the first approach did not offer long-term kinetic and aggregation colloidal stability and was unable to compensate for attractive forces between particles under a magnetic field. By the second approach, we prepared a suspension oversaturated with dextran 40 kDa for steric stabilisation, with evidence of superparamagnetic behaviour. The low concentration of nanoparticles and the non-ideal coverage of the nanoparticles, which impact ferrofluid stability, were the disadvantages of this approach. Strong steric stabilisation was observable under alkaline conditions at pH ≈ 10. Application of dextran 15 kDa led to a relatively stable ferrofluid with pH around physiological conditions, but the disaggregation of the powder by HNO₃ was not effective enough, the average fragment size being too large at about 150 nm, and we did not see any signature of superparamagnetic behaviour. The prepared ferrofluids were characterised by scanning and transmission electron microscopy, thermogravimetry, magnetization, and AC susceptibility measurements. Specific Absorption Rate (SAR) measurements were undertaken on powder as well as on ferrofluids in order to estimate the potential of La₀.₈₀Ag₀.₁₅MnO₃ magnetic-particle-based ferrofluids for hyperthermia. Our comprehensive study also contains an investigation of the biocompatibility and potential biohazard of this material.
Keywords: manganites, magnetic nanoparticles, oxygen content, magnetic phase transition, magnetocaloric effect, ferrofluid, hyperthermia
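A minimal sketch of how SAR is commonly extracted from a calorimetric heating curve is given below, using the initial-slope formula SAR = (c_p·m_sample/m_magnetic)·(dT/dt) at t→0. All numbers are hypothetical stand-ins, not this study's measurements.

```python
# Initial-slope SAR estimate from a (hypothetical) heating curve of a
# ferrofluid sample under an AC field; values are illustrative only.
import numpy as np

t = np.array([0, 10, 20, 30, 40, 50])                 # s (hypothetical)
T = np.array([25.0, 25.2, 25.4, 25.6, 25.75, 25.9])   # deg C (hypothetical)

dTdt = np.polyfit(t[:3], T[:3], 1)[0]   # initial slope, K/s
c_p = 4185.0                            # J/(kg K), water-dominated suspension
m_sample = 1.0e-3                       # kg of ferrofluid
m_magnetic = 2.0e-4                     # kg of magnetic particles in the sample

sar = c_p * m_sample / m_magnetic * dTdt
print(f"SAR ~ {sar:.0f} W/kg of magnetic material")
```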
Procedia PDF Downloads 90
2134 A Model-Driven Approach of User Interface for MVP Rich Internet Application
Authors: Sarra Roubi, Mohammed Erramdani, Samir Mbarki
Abstract:
This paper presents an approach for the model-driven generation of Rich Internet Applications (RIAs), focusing on the graphical aspect. We used well-known Model-Driven Engineering (MDE) frameworks and technologies, such as the Eclipse Modeling Framework (EMF), the Graphical Modeling Framework (GMF), Query/View/Transformation (QVTo) and Acceleo, to enable the design and automatic code generation of the RIA. During the development of the approach, we focused on the graphical aspect of the application in terms of interfaces, while opting for the Model-View-Presenter (MVP) pattern, which is designed for graphical interfaces. The paper describes the process followed to define the approach and the supporting tool, and presents the results from a case study.
Keywords: metamodel, model-driven engineering, MVP, rich internet application, transformation, user interface
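For readers unfamiliar with the pattern, a minimal sketch of the Model-View-Presenter wiring is shown below (in Python for brevity): the presenter mediates between a passive view and the model, so the view holds no application logic. The class and method names are illustrative, not those emitted by the authors' Acceleo templates.

```python
# Minimal MVP wiring: the view only renders; the presenter reacts to view
# events, updates the model, and pushes state back to the view.

class Model:
    def __init__(self):
        self.items = []

    def add(self, item: str):
        self.items.append(item)

class View:
    """Passive view: renders only what the presenter pushes to it."""
    def render(self, items):
        print("items:", ", ".join(items))

class Presenter:
    def __init__(self, model: Model, view: View):
        self.model, self.view = model, view

    def on_item_entered(self, text: str):   # called by the view's event hook
        self.model.add(text)
        self.view.render(self.model.items)

presenter = Presenter(Model(), View())
presenter.on_item_entered("first entry")
```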
Procedia PDF Downloads 353
2133 Digital Adoption of Sales Support Tools for Farmers: A Technology Organization Environment Framework Analysis
Authors: Sylvie Michel, François Cocula
Abstract:
Digital agriculture is an approach that exploits information and communication technologies. These encompass data acquisition tools like mobile applications, satellites, sensors, connected devices, and smartphones. Additionally, it involves transfer and storage technologies such as 3G/4G coverage, low-bandwidth terrestrial or satellite networks, and cloud-based systems. Furthermore, embedded or remote processing technologies, including drones and robots for process automation, along with high-speed communication networks accessible through supercomputers, are integral components of this approach. While farm-level adoption studies regarding digital agricultural technologies have emerged in recent years, they remain relatively limited in comparison to other agricultural practices. To bridge this gap, this study delves into understanding farmers' intention to adopt digital tools, employing the technology-organization-environment (TOE) framework. A qualitative research design encompassed fifteen semi-structured interviews conducted with key stakeholders both prior to and following the 2020-2021 COVID-19 lockdowns in France. Subsequently, the interview transcripts underwent thorough thematic content analysis, and the data and verbatim quotes were triangulated for validation. A coding process aimed to systematically organize the data, ensuring an orderly and structured classification. Our research extends its contribution by delineating sub-dimensions within each primary dimension. A total of nine sub-dimensions were identified, categorized as follows: perceived usefulness for communication, perceived usefulness for productivity, and perceived ease of use constitute the first dimension; technological resources, financial resources, and human capabilities constitute the second dimension; and market pressure, institutional pressure, and the COVID-19 situation constitute the third dimension. Furthermore, this analysis enriches the TOE framework by incorporating entrepreneurial orientation as a moderating variable. Managerial orientation emerges as a pivotal factor influencing adoption intention, with producers acknowledging the significance of utilizing digital sales support tools to combat 'greenwashing' and elevate their overall brand image. Specifically, the analysis illustrates that producers recognize the potential of digital tools for saving time and streamlining sales processes, leading to heightened productivity. Moreover, it highlights that the intent to adopt digital sales support tools is influenced by a market mimicry effect. Additionally, it demonstrates a negative association between the intent to adopt these tools and the pressure exerted by institutional partners. Finally, this research establishes a positive link between the intent to adopt digital sales support tools and economic fluctuations, notably during the COVID-19 pandemic. The adoption of sales support tools in agriculture is a multifaceted challenge encompassing three dimensions and nine sub-dimensions. The research delves into the adoption of digital farming technologies at the farm level through the TOE framework. This analysis provides significant insights beneficial for policymakers, stakeholders, and farmers. These insights are instrumental in making informed decisions to facilitate a successful digital transition in agriculture, effectively addressing sector-specific challenges.
Keywords: adoption, digital agriculture, e-commerce, TOE framework
Procedia PDF Downloads 60
2132 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method
Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek
Abstract:
Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., the Kolmogorov scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases with on the order of billions of solution points. Running big simulations requires a considerable amount of RAM; therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially with a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in a way that each MPI rank acquires its information from the file in parallel. In the case of GPFS, on each computational node, a single MPI rank reads data from the file, which is specifically generated for that computational node, and sends the data to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node, and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory's Mira (GPFS), the National Center for Supercomputing Applications' Blue Waters (Lustre), the San Diego Supercomputer Center's Comet (Lustre), and UIC's Extreme (Lustre). The tests showed that one file per node is suited to GPFS and that parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for the calculation of the solution in every time step. For this, the code can make use of either its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed scalable and efficient performance of the code in parallel computing.
Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow
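A condensed sketch of the GPFS start-up strategy described above follows, using mpi4py: one rank per node reads that node's pre-file and forwards each node-local rank's slice with non-blocking point-to-point messages, keeping traffic inside the node. The file naming and the slicing scheme are our assumptions; the actual DSEM pre-file layout is not reproduced.

```python
# Node-local scatter of startup data: one reader per node, non-blocking
# sends to the other ranks on the same node (no inter-node traffic).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
node_comm = comm.Split_type(MPI.COMM_TYPE_SHARED)   # ranks sharing a node
node_rank, node_size = node_comm.Get_rank(), node_comm.Get_size()

if node_rank == 0:
    # hypothetical per-node pre-file of int32 startup data, named by the
    # node leader's global rank (an assumed convention)
    data = np.fromfile(f"prefile_node{comm.Get_rank()}.bin", dtype=np.int32)
    chunks = np.array_split(data, node_size)
    reqs = [node_comm.Isend(chunks[r], dest=r, tag=0)
            for r in range(1, node_size)]
    local = chunks[0]
    MPI.Request.Waitall(reqs)
else:
    status = MPI.Status()
    node_comm.Probe(source=0, tag=0, status=status)
    local = np.empty(status.Get_count(MPI.INT32_T), dtype=np.int32)
    node_comm.Recv(local, source=0, tag=0)
```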
Procedia PDF Downloads 133
2131 Pitfalls and Drawbacks in Visual Modelling of Learning Knowledge by Students
Authors: Tatyana Gavrilova, Vadim Onufriev
Abstract:
The design of knowledge-based systems requires developers to possess advanced analytical skills. Developing those skills efficiently within university courses requires a deep understanding of the main pitfalls and drawbacks that students typically exhibit in their analytical work in the form of visual modeling. It was therefore necessary to analyze the learning exercises of fifth-year students in the courses 'Intelligent Systems' and 'Knowledge Engineering' at Saint-Petersburg Polytechnic University. The analysis shows that both a lack of systems-thinking skills and methodological mistakes in course design cause the errors discussed in the paper. The conclusion explores the issues and topics necessary and sufficient for implementing improved practices in educational design for future curricula and teaching programs.
Keywords: knowledge based systems, knowledge engineering, students' errors, visual modeling
Procedia PDF Downloads 311
2130 Regulatory and Economic Challenges of AI Integration in Cyber Insurance
Authors: Shreyas Kumar, Mili Shangari
Abstract:
Integrating artificial intelligence (AI) in the cyber insurance sector represents a significant advancement, offering the potential to revolutionize risk assessment, fraud detection, and claims processing. However, this integration introduces a range of regulatory and economic challenges that must be addressed to ensure responsible and effective deployment of AI technologies. This paper examines the multifaceted regulatory landscape governing AI in cyber insurance and explores the economic implications of compliance, innovation, and market dynamics. AI's capabilities in processing vast amounts of data and identifying patterns make it an invaluable tool for insurers in managing cyber risks. Yet, the application of AI in this domain is subject to stringent regulatory scrutiny aimed at safeguarding data privacy, ensuring algorithmic transparency, and preventing biases. Regulatory bodies, such as the European Union with its General Data Protection Regulation (GDPR), mandate strict compliance requirements that can significantly impact the deployment of AI systems. These regulations necessitate robust data protection measures, ethical AI practices, and clear accountability frameworks, all of which entail substantial compliance costs for insurers. The economic implications of these regulatory requirements are profound. Insurers must invest heavily in upgrading their IT infrastructure, implementing robust data governance frameworks, and training personnel to handle AI systems ethically and effectively. These investments, while essential for regulatory compliance, can strain financial resources, particularly for smaller insurers, potentially leading to market consolidation. Furthermore, the cost of regulatory compliance can translate into higher premiums for policyholders, affecting the overall affordability and accessibility of cyber insurance. Despite these challenges, the potential economic benefits of AI integration in cyber insurance are significant. AI-enhanced risk assessment models can provide more accurate pricing, reduce the incidence of fraudulent claims, and expedite claims processing, leading to overall cost savings and increased efficiency. These efficiencies can improve the competitiveness of insurers and drive innovation in product offerings. However, balancing these benefits with regulatory compliance is crucial to avoid legal penalties and reputational damage. The paper also explores the potential risks associated with AI integration, such as algorithmic biases that could lead to unfair discrimination in policy underwriting and claims adjudication. Regulatory frameworks need to evolve to address these issues, promoting fairness and transparency in AI applications. Policymakers play a critical role in creating a balanced regulatory environment that fosters innovation while protecting consumer rights and ensuring market stability. In conclusion, the integration of AI in cyber insurance presents both regulatory and economic challenges that require a coordinated approach involving regulators, insurers, and other stakeholders. By navigating these challenges effectively, the industry can harness the transformative potential of AI, driving advancements in risk management and enhancing the resilience of the cyber insurance market. 
This paper provides insights and recommendations for policymakers and industry leaders to achieve a balanced and sustainable integration of AI technologies in cyber insurance.
Keywords: artificial intelligence (AI), cyber insurance, regulatory compliance, economic impact, risk assessment, fraud detection, cyber liability insurance, risk management, ransomware
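To make the claim about AI-enhanced pricing concrete, a toy sketch follows. It is not the paper's model: the features, labels, policy limit, and loading factor are all invented for illustration, and a real underwriting model would require far richer data and governance controls of the kind the abstract discusses.

```python
# Illustrative toy sketch of AI risk-assessment pricing: estimate breach
# probability with a classifier, then price as expected loss times a loading.
# All features, coefficients, and constants here are fabricated assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical insured-firm features: [patch cadence, firm size (log), past incidents]
X = rng.normal(size=(500, 3))
# Synthetic breach labels from an assumed linear risk relationship plus noise.
y = (X @ np.array([-1.2, 0.8, 1.5]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)


def premium(features, limit=1_000_000, loading=1.25):
    """Price = breach probability x policy limit x expense/profit loading."""
    p_breach = model.predict_proba(np.atleast_2d(features))[0, 1]
    return p_breach * limit * loading


print(f"Annual premium: ${premium([0.5, 1.0, -0.3]):,.0f}")
```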
Procedia PDF Downloads 33
2129 Implementing Quality Function Deployment Tool for a Customer Driven New Product Development in a Kuwait SME
Authors: Asma AlQahtani, Jumana AlHadad, Maryam AlQallaf, Shoug AlHasan
Abstract:
New product development (NPD) is the complete process of bringing a new product to the customer by integrating two broad divisions: one involving idea generation, product design, and detail engineering; the other involving market research and marketing analysis. It is common practice for companies to undertake some of these tasks simultaneously (concurrent engineering) and also to treat them as an ongoing process (continuous development). The current study explores a framework and methodology for a new product development process that uses the Quality Function Deployment (QFD) tool to bring customer opinion into the product development process. An elaborate customer survey with focus groups in the region was carried out to ensure that customer requirements are integrated into new products as early as the design stage, beginning with identifying the need for the new product. A QFD Matrix (House of Quality) was prepared that links customer requirements to product engineering requirements, and a feasibility study and risk assessment exercise were carried out for a small and medium enterprise (SME) in Kuwait developing the new product. SMEs in Kuwait, particularly in the manufacturing sector, are mainly focused on serving local demand, and a lack of product quality often adversely affects their ability to compete on a regional or global basis. Further, a lack of focus on identifying customer requirements often deters SMEs from envisaging new product development at all. The current study therefore focuses on utilizing the QFD Matrix from conceptual design through detail design and, to some extent, on extending this link to the design of the manufacturing system. The project resulted in the development of a prototype for a new molded product that ensures consistency between the customer's requirements and the measurable characteristics of the product. Engineering economics and cost studies were also undertaken to analyse the viability of the new product, and their results were likewise linked to the successful implementation of the initial QFD Matrix.
Keywords: Quality Function Deployment, QFD Matrix, new product development, NPD, Kuwait SMEs, prototype development
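The core House of Quality calculation, linking weighted customer requirements to engineering characteristics, can be sketched in a few lines. The requirements, weights, and relationship scores below are illustrative assumptions, not values from the study.

```python
# Minimal House of Quality (QFD Matrix) calculation: technical importance of each
# engineering characteristic = customer weights x relationship matrix. All values
# are invented for illustration.
import numpy as np

customer_reqs = ["easy to clean", "durable", "low price"]
cust_weights = np.array([5, 4, 3])  # customer importance on a 1-5 scale

eng_chars = ["surface roughness", "wall thickness", "cycle time"]
# Relationship matrix: rows = customer requirements, columns = engineering
# characteristics; 9 = strong, 3 = moderate, 1 = weak, 0 = none.
R = np.array([
    [9, 0, 1],
    [3, 9, 0],
    [0, 3, 9],
])

tech_importance = cust_weights @ R  # weighted column totals
for name, score in zip(eng_chars, tech_importance):
    print(f"{name:18s} {score}")
# The highest-scoring characteristics are prioritized in detail design.
```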
Procedia PDF Downloads 414
2128 Learning Outcomes Alignment across Engineering Core Courses
Authors: A. Bouabid, B. Bielenberg, S. Ainane, N. Pasha
Abstract:
In this paper, a team of faculty members of the Petroleum Institute in Abu Dhabi, UAE, representing six different courses across General Engineering (ENGR), Communication (COMM), and Design (STPS), worked together to establish a clear developmental progression of learning outcomes and performance indicators for targeted knowledge, areas of competency, and skills across the first three semesters of the Bachelor of Science in Engineering curriculum. The course sequences studied in this project were ENGR/COMM, COMM/STPS, and ENGR/STPS. For each course's nine areas of knowledge, competency, and skills, the research team reviewed the existing learning outcomes and related performance indicators, focusing on identifying linkages across disciplines as well as within the courses of a discipline. The team reviewed existing performance indicators for developmental progression from semester to semester in related courses of the same discipline (vertical alignment) and across courses of different disciplines within the same semester (horizontal alignment). This work has led to recommendations for modifying the initial indicators where incoherence was identified and for new indicators, based on best practices identified through literature searches, where gaps were identified. It also led to recommendations for modifying the level of emphasis within each course to ensure developmental progression. The exercise produced a revised sequence performance indicator mapping for the knowledge, skills, and competencies across the six core courses.
Keywords: curriculum alignment, horizontal and vertical progression, performance indicators, skill level
Procedia PDF Downloads 222
2127 Knowledge Management Barriers: A Statistical Study of Hardware Development Engineering Teams within Restricted Environments
Authors: Nicholas S. Norbert Jr., John E. Bischoff, Christopher J. Willy
Abstract:
Knowledge Management (KM) is globally recognized as a crucial element in securing competitive advantage through building and maintaining organizational memory, codifying and protecting intellectual capital and business intelligence, and providing mechanisms for collaboration and innovation. KM frameworks and approaches have been developed and defined, identifying critical success factors for conducting KM in numerous industries ranging from science to business and across organization scales from small groups to large enterprises. However, engineering and technical teams operating within restricted environments are subject to unique barriers and KM challenges that cannot be treated directly with the approaches and tools prescribed for other industries. This research identifies barriers to conducting KM within Hardware Development Engineering (HDE) teams and statistically compares their significance to barriers affecting the four KM pillars of organization, technology, leadership, and learning for HDE teams. HDE teams suffer from restrictions on knowledge sharing (KS) due to classification of information (national security risks), customer proprietary restrictions (non-disclosure agreements covering designs), the types of knowledge involved, the complexity of the knowledge to be shared, and the expertise of the knowledge seeker. As KM has evolved, leveraging information technology (IT) and web-based tools and approaches from Web 1.0 to Enterprise 2.0, it may also leverage emergent tools and analytics, including expert locators and hybrid recommender systems, to enable KS across the barriers facing technical teams. The research statistically tests the hypothesis that KM barriers for HDE teams affect the general set of expected benefits of a KM system identified in previous research. If correlations are identified, then generalized success factors and approaches can also be derived for HDE teams. Expert elicitation will be conducted using an internet-hosted questionnaire delivered to a panel of experts including engineering managers, principal and lead engineers, senior systems engineers, and knowledge management experts. The questionnaire feedback will be processed using analysis of variance (ANOVA) to identify and rank statistically significant barriers for HDE teams within the four KM pillars. Subsequently, KM approaches will be recommended for upholding the KM pillars within the restricted environments of HDE teams.
Keywords: engineering management, knowledge barriers, knowledge management, knowledge sharing
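A minimal sketch of the planned ANOVA ranking step is shown below. The Likert-scale responses are fabricated stand-ins for the questionnaire data, and the four barrier names are drawn from the restrictions listed in the abstract.

```python
# Sketch of the ANOVA step: test whether mean barrier-severity ratings differ,
# then rank barriers by mean. The response data here are fabricated assumptions.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Hypothetical 1-7 ratings of how strongly each barrier impedes knowledge sharing.
barriers = {
    "classification of information": rng.integers(4, 8, size=30),
    "customer proprietary restrictions": rng.integers(3, 7, size=30),
    "knowledge complexity": rng.integers(2, 6, size=30),
    "seeker expertise gap": rng.integers(1, 5, size=30),
}

stat, p = f_oneway(*barriers.values())
print(f"F = {stat:.2f}, p = {p:.4f}")  # small p => at least one barrier differs

# Rank barriers by mean severity as a simple follow-up to a significant F-test.
for name, scores in sorted(barriers.items(), key=lambda kv: -kv[1].mean()):
    print(f"{name:35s} mean = {scores.mean():.2f}")
```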
Procedia PDF Downloads 279
2126 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs
Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu
Abstract:
This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive-feature-based speech recognition domain. The study enhances the legacy tool 'xkl' by integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs) and by incorporating reassigned spectrogram methods, enabling detailed acoustic analysis and, in particular, improved precision in vowel formant estimation. The proposed model, which combines CNNs and RNNs with specialized temporal embeddings, self-attention mechanisms, and positional embeddings, excels at capturing the intricate dependencies within Italian speech vowels, making it highly adaptable and sophisticated in the distinctive-feature domain. The temporal modeling approach additionally employs Bayesian temporal encoding to refine the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology. In rigorous tests on the LaMIT database, consisting of speech recorded in a silent room by four native Italian speakers, the landmark detector achieved a 95% true detection rate and a 10% false detection rate; the majority of missed landmarks occurred in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform and establish the feasibility of employing a landmark detector as the front end of a speech recognition system. The synergistic integration of reassigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding marks a significant advance in Italian speech vowel landmark detection, offering accuracy, adaptability, and sophistication at the intersection of deep learning and distinctive-feature-based speech recognition. This work contributes to the broader scientific community a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels, laying a foundation for future advances in speech signal processing and for practical applications across domains requiring robust speech recognition systems.
Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network
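The general shape of a combined CNN + RNN landmark detector can be sketched as follows. This is not the authors' architecture: the layer sizes, the choice of a GRU, and the per-frame classification head are assumptions, and the self-attention, positional-embedding, and Bayesian temporal encoding components are omitted for brevity.

```python
# Minimal PyTorch sketch of a combined CNN + RNN per-frame landmark detector.
# All layer sizes are illustrative assumptions, not the published design.
import torch
import torch.nn as nn

class CNNRNNLandmarkDetector(nn.Module):
    def __init__(self, n_mels=80, hidden=128, n_landmarks=6):
        super().__init__()
        # CNN front end: local spectro-temporal patterns from the spectrogram.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),   # pool frequency, preserve time resolution
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        # RNN back end: temporal dependencies across frames.
        self.rnn = nn.GRU(64 * (n_mels // 4), hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_landmarks)  # per-frame landmark scores

    def forward(self, spec):                  # spec: (batch, 1, n_mels, frames)
        z = self.cnn(spec)                    # (batch, 64, n_mels//4, frames)
        z = z.flatten(1, 2).transpose(1, 2)   # (batch, frames, features)
        out, _ = self.rnn(z)
        return self.head(out)                 # (batch, frames, n_landmarks)

x = torch.randn(2, 1, 80, 200)                # two utterances, 200 frames each
print(CNNRNNLandmarkDetector()(x).shape)      # torch.Size([2, 200, 6])
```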
Procedia PDF Downloads 63
2125 Sustainability Impact Assessment of Construction Ecology to Engineering Systems and Climate Change
Authors: Moustafa Osman Mohammed
Abstract:
The construction industry, as one of the main contributors to the depletion of natural resources, influences climate change. This paper discusses the incremental and evolutionary development of proposed models for optimizing life-cycle analysis into an explicit strategy for evaluation systems. The main categories inevitably introduce uncertainties; the approach adopts a composite structure model (CSM) as an environmental management system (EMS) for the practical evaluation of small and medium-sized enterprises (SMEs). The model simplifies complex systems to reflect how natural systems' inputs, outputs, and outcomes influence "framework measures" and gives a maximum likelihood estimate of how elements are simulated over the composite structure. Traditional modeling knowledge is based on physical dynamic and static patterns of the parameters that influence the environment. The unified method demonstrates how construction systems ecology is interrelated from a management perspective, reflecting the effects of engineering systems on ecology as ultimately unified technologies whose range extends well beyond construction impacts, such as energy systems. Sustainability broadens socioeconomic parameters into a practical science that meets recovery performance, while engineering reflects the generic control of protective systems. When the environmental model is employed properly, the management decision process in governments or corporations can address policy for accomplishing strategic plans precisely. The management and engineering perspective focuses on autocatalytic control as a closed cellular system that naturally balances anthropogenic insertions or aggregated structural systems toward equilibrium as a steady, stable condition. Thereby, construction systems ecology incorporates engineering and management schemes as a midpoint between biotic and abiotic components to predict construction impacts. The resulting theory of environmental obligation suggests a procedure or technique for achieving the sustainability impact of construction system ecology (SICSE) as a relative mitigation measure of deviation control.
Keywords: sustainability, environmental impact assessment, environmental management, construction ecology
Procedia PDF Downloads 393
2124 Active Learning Techniques in Engineering Education
Authors: H. M. Anitha, Anusha N. Rao
Abstract:
Current developments in technology and ideas have given entirely new dimensions to the fields of research and education. New delivery methods have been proposed as an added feature of engineering education. In particular, greater importance is given to new teaching practices based on Information and Communication Technologies (ICT). It is vital to adopt these new ICT methods, which lead to the emergence of novel structures and modes of education. The flipped classroom, think-pair-share, and peer instruction are recent pedagogical methods that offer students new ways to learn a course. In the flipped classroom, students watch video lectures outside the classroom and solve problems at home, then engage in group discussions in the classroom. These are active learning methods in which students are involved in diverse ways in learning the course. This paper gives a comprehensive study of past and present research on the flipped classroom, the think-pair-share activity, and peer instruction.
Keywords: flipped classroom, think pair share, peer instruction, active learning
Procedia PDF Downloads 386
2123 Implementation of Autologous Adipose Graft from the Abdomen for Complete Fat Pad Loss of the Heel Following a Traumatic Open Fracture Secondary to a Motor Vehicle Accident: A Case Study
Authors: Ahmad Saad, Shuja Abbas, Breanna Marine
Abstract:
Introduction: This study explores the potential of autologous pedal fat pad grafting as a minimally invasive therapeutic strategy for addressing pedal fat pad loss. Without adequate shock-absorbing tissue, a patient can experience functional deficits, ulcerations, loss of quality of life, and significant limitations in ambulation. The study details a novel technique involving autologous adipose grafting from the abdomen to restore plantar fat pad thickness in a patient whose severe motor vehicle accident resulted in total fat pad loss of the heel. Autologous adipose grafting (AAG) was used following adipose allografting in an effort to recreate a normal shock-absorbing surface and allow a return to activities of daily living and painless ambulation. Methods: A 46-year-old male sustained multiple open pedal fractures and necrosis of the heel fat pad after a motorcycle accident, resulting in complete loss of the calcaneal fat pad. The patient underwent serial debridements, wound vac therapy, and split-thickness skin grafting to achieve complete closure, despite complete loss of adipose tissue in the area. The patient presented with pain on ambulation, inability to bear weight on the heel, and recurrent ulcerations, and admitted he had not been ambulating for two years. Clinical examination demonstrated complete loss of the plantar fat pad, with a thin layer of epithelial tissue overlying the calcaneal bone that left the osseous contour of the calcaneus visible. Scar tissue had formed in place of the fat pad, with thickened epithelial tissue extending from the midfoot to the calcaneus. After conservative measures were exhausted, the patient opted for initial management with adipose allograft matrix (AAM) injections. Postoperative X-ray imaging revealed noticeable improvement in calcaneal fat pad thickness. At one-year follow-up, the patient was able to ambulate without assistive devices. The fat pad at this point was significantly thicker than preoperatively, but its thickness had not returned to the pre-accident level. To compare the take of allografted versus autografted adipose tissue, adipose autografting via abdominal liposuction harvesting was deemed suitable. A general surgeon harvested adipose cells from the patient's abdomen via liposuction, and a podiatric surgeon performed the AAG injection into the heel, delivering a total of 15 cc of autologous adipose tissue to the calcaneus. Results: There was a visible increase in calcaneal fat pad thickness, both clinically and radiographically. At the six-week follow-up, imaging revealed retention of the calcaneal fat pad thickness. Three months postoperatively, the patient had returned to activities of daily living, with increased quality of life due to improved ability to ambulate. Discussion: AAG is a novel treatment for pedal fat pad loss. These treatments may be viable and reproducible therapeutic choices for patients suffering from fat pad atrophy, fat pad loss, and/or plantar ulcerations. Both AAM and AAG exhibited similar therapeutic results, providing pain relief during ambulation and allowing patients to return to their quality of life.
Keywords: podiatry, wound, adipose, allograft, autograft, wound care, limb reconstruction, injection, limb salvage
Procedia PDF Downloads 82
2122 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques
Authors: Stefan K. Behfar
Abstract:
The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. 
Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing
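The graph representation and probabilistic sampling steps described above can be sketched as follows. The transaction records and the 1% sampling rate are illustrative assumptions, and chaining consecutive sampled transactions in block order is one plausible reading of "edges signify temporal relationships", not necessarily the authors' exact construction.

```python
# Sketch of the pipeline's first two stages: probabilistically sample transactions,
# then build a graph with transactions as nodes and temporal-order edges.
# All records and the sampling rate are illustrative assumptions.
import random
import networkx as nx

random.seed(42)

# Hypothetical (block, tx_hash, sender, receiver, value) records.
txs = [(n, f"0x{n:05x}", f"A{n % 7}", f"A{(n + 3) % 7}", n * 0.01)
       for n in range(100_000)]

# Probabilistic sampling: keep a representative subset before building the graph.
SAMPLE_RATE = 0.01
sample = [t for t in txs if random.random() < SAMPLE_RATE]

G = nx.DiGraph()
prev = None
for block, tx_hash, sender, receiver, value in sorted(sample):
    # Each transaction is a node; attributes carry the payload for later GCN features.
    G.add_node(tx_hash, block=block, sender=sender, receiver=receiver, value=value)
    if prev is not None:
        G.add_edge(prev, tx_hash)  # edge encodes temporal ordering (assumed form)
    prev = tx_hash

print(G.number_of_nodes(), "sampled transactions,", G.number_of_edges(), "temporal edges")
```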
Procedia PDF Downloads 76
2121 Poly(L-Lactic Acid) Scaffolds for Bone Tissue Engineering
Authors: Aleksandra Bužarovska, Gordana Bogoeva Gaceva
Abstract:
Biodegradable polymers have received significant scientific attention for tissue engineering (TE) applications, in particular their composites containing inorganic nanoparticles. For the last 15 years, they have been the subject of intensive research by many groups aiming to develop polymer scaffolds with defined biodegradability, porosity, and adequate mechanical stability. The most important characteristic making these materials attractive for TE is their biodegradability, a process that can be time-controlled and long enough to enable generation of new tissue as a replacement for the degraded polymer scaffold. In this work, poly(L-lactic acid) scaffolds filled with TiO2 nanoparticles functionalized with oleic acid were prepared by the thermally induced phase separation (TIPS) method. The TiO2 nanoparticles were functionalized with oleic acid to improve their dispersibility within the polymer matrix and, at the same time, to inhibit the cytotoxicity of the nanofiller. Oleic acid was chosen as an amphiphilic molecule of the fatty acid family because of its non-toxicity and its ability to mediate between the hydrophilic TiO2 nanoparticles and the hydrophobic PLA matrix. The produced scaffolds were characterized by thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), scanning electron microscopy (SEM), and mechanical compression measurements. Their bioactivity for bone tissue engineering applications was tested in supersaturated simulated body fluid, and the degradation process was followed by Fourier transform infrared spectroscopy (FTIR). The results showed anisotropic morphology with elongated open pores (100 µm), high porosity (around 92%), and a perfectly dispersed nanofiller. Compression moduli up to 10 MPa were identified, independent of the nanofiller content. Functionalized TiO2 nanoparticles induced the formation of hydroxyapatite clusters as effectively as unfunctionalized TiO2. The prepared scaffolds showed properties ideal for scaffold vascularization, cell attachment, growth, and proliferation.
Keywords: biodegradation, bone tissue engineering, mineralization, PLA scaffolds
Procedia PDF Downloads 269