Search results for: digital inclusion
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4149

219 Knowledge Transfer through Entrepreneurship: From Research at the University to the Consolidation of a Spin-off Company

Authors: Milica Lilic, Marina Rosales Martínez

Abstract:

Academic research cannot be oblivious to social problems and needs, so projects with the capacity for transformation and impact should have the opportunity to go beyond University circles and bring benefit to society. Apart from patents and R&D research contracts, this opportunity can be achieved through entrepreneurship, one of the most direct tools for turning knowledge into a tangible product. Thus, as an example of good practice, this paper analyzes the case of an institutional entrepreneurship program carried out at the University of Seville, aimed at researchers interested in assessing the business opportunity of their research and expanding their knowledge of procedures for commercializing the technologies used in academic projects. The program is based on three pillars: training, teamwork sessions and networking. The training covers aspects such as product-client fit, the technical-scientific and economic-financial feasibility of a spin-off, institutional organization and decision making, public and private fundraising, and making the spin-off visible in the business world (social networks, key contacts, corporate image and ethical principles). The teamwork sessions, guided by a mentor, are aimed at identifying research results with potential and clarifying financial needs and the procedures for obtaining the resources necessary to consolidate the spin-off. This part of the program is considered crucial for participants to convert their academic findings into a business model. Finally, the networking part is oriented to workshops on the digital transformation of a project, the accurate communication of the product or service a spin-off offers to society, and the development of the transferable skills necessary for managing a business. 
This blended program culminates in a final stage where each team, in an elevator-pitch format, presents their research turned into a business model to an experienced jury. The awarded teams receive starting capital for their enterprise and the opportunity to formally consolidate their spin-off company at the University. A study of the program's results shows that many researchers have little or no knowledge of entrepreneurship skills or of the different ways to turn their research results into a business model with a direct impact on society. Therefore, the described program has been used as an example to highlight the importance of knowledge transfer at the University and the role this institution should have in providing the tools to promote entrepreneurship within it. Keeping in mind that the University is defined by three main activities (teaching, research and knowledge transfer), it is safe to conclude that the latter, with entrepreneurship as an expression of it, is crucial for the other two to fulfill their purpose.

Keywords: good practice, knowledge transfer, spin-off company, university

Procedia PDF Downloads 133
218 Study of Drape and Seam Strength of Fabric and Garment in Relation to Weave Design and Comparison of 2D and 3D Drape Properties

Authors: Shagufta Riaz, Ayesha Younus, Munir Ashraf, Tanveer Hussain

Abstract:

Aesthetics and performance are the two most important considerations, along with quality, durability, comfort and cost, that affect garment credibility. Fabric drape is perhaps the most important clothing characteristic distinguishing fabric from sheet, paper, steel or other film materials. It enables the fabric to mold itself under its own weight into the desired shape when only part of it is directly supported. Drapeability allows the fabric to fall gracefully into bent folds of single or double curvature, producing a smooth flow, i.e. 'the sinusoidal-type folds of a curtain or skirt'. Drape and seam strength are two parameters considered for the aesthetics and performance of fabric in both apparel and home textiles. Until recently, no study had been conducted examining the effect of weave design on the drape and seam strength of fabric and garment. Therefore, the aim of this study was to measure the seam strength and drape of fabric and garment objectively while varying the weave design and quality of the fabric. The 2-D and 3-D drape were also compared to determine whether a fabric behaves the same way or differently when sewn and worn on the body. Four different cotton weave designs were developed and pre-treatment was carried out. The 2-D drape of the fabric was measured by a drapemeter fitted with a digital camera and a supporting disc on which the specimen hangs. The drape coefficient (DC %) is negatively related to drape: it is the ratio of the draped sample's projected shadow area to the area of the undraped (flat) sample, expressed as a percentage. Similarly, the 3-D drape was measured by hanging A-line skirts made from the developed weave designs. The BS 3356 standard test method was followed for the bending length examination, which relates to the angle the fabric makes with its horizontal axis. Seam strength was determined following the relevant ASTM standard. 
For the sewn fabric, the stitch density of the seam was determined with a magnifying glass according to the standard ASTM test method. The experiments showed that drape and seam strength were significantly affected by changes in weave design and fabric quality (PPI and yarn count). Drapeability increased as the number of interlacements, or contact points, between warp and weft yarns decreased, while fabric weight, bending length and fabric density were inversely related to drapeability. We concluded that the 2-D drape was higher than the 3-D drape even though the garment was made of the same fabric construction. Seam breakage strength decreased with decreasing pick density and yarn count.
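The drape coefficient quoted in the abstract lends itself to a one-line computation. A minimal Python sketch (the function name is an assumption; note that some drapemeter conventions also subtract the supporting-disc area, which the abstract's definition does not):

```python
def drape_coefficient(draped_shadow_area: float, flat_area: float) -> float:
    """Drape coefficient DC%, as defined in the abstract: the ratio of the
    draped sample's projected shadow area to the undraped (flat) sample
    area, expressed as a percentage. Lower DC% indicates higher
    drapeability."""
    if flat_area <= 0:
        raise ValueError("flat sample area must be positive")
    return 100.0 * draped_shadow_area / flat_area
```

For example, a specimen whose draped shadow covers half its flat area has DC% = 50, i.e. a highly drapeable fabric.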

Keywords: drape coefficient, fabric, seam strength, weave

Procedia PDF Downloads 250
217 Application of the Material Point Method as a New Fast Simulation Technique for Textile Composites Forming and Material Handling

Authors: Amir Nazemi, Milad Ramezankhani, Marian Kӧrber, Abbas S. Milani

Abstract:

The excellent strength-to-weight ratio of woven fabric composites, along with their high formability, is one of the primary design parameters driving their increased use in modern manufacturing processes, including those in aerospace and automotive. However, for emerging automated preform processes under the smart manufacturing paradigm, the complex geometries of finished components continue to pose several challenges for designers coping with manufacturing defects on site. Wrinkling, e.g., is a common defect occurring during the forming process and handling of semi-finished textile composites. One of the main reasons for this defect is the weak bending stiffness of fibers in the unconsolidated state, causing excessive relative motion between them. Further challenges are represented by the automated handling of large-area fiber blanks with specialized gripper systems. For fabric composites forming simulations, the finite element (FE) method is a longstanding tool used for the prediction and mitigation of manufacturing defects. Such simulations are predominantly meant not only to predict the onset, growth, and shape of wrinkles, but also to determine the best processing condition that can yield optimized positioning of the fibers upon forming (or robot handling, in the automated processes case). However, the need for small time steps in explicit FE codes, numerical instabilities, and long computational times are among the notable drawbacks of current FE tools, hindering their extensive use as fast yet efficient digital twins in industry. This paper presents a novel woven fabric simulation technique based on the material point method (MPM), which enables the use of much larger time steps with fewer numerical instabilities, and hence significantly faster and more efficient simulations for fabric material handling and forming processes. 
Therefore, this method can enhance the development of automated fiber handling and preform processes by calculating the physical interactions between the MPM fiber models and rigid tool components. This enables designers to virtually develop, test, and optimize their processes based on either algorithmic or machine learning applications. As a preliminary case study, the forming of a hemispherical plain weave is shown, and the results are compared to FE simulations as well as experiments.
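For readers unfamiliar with MPM, its core step is scattering particle mass and momentum onto a background grid each time step. The following is a generic 1D illustration of that particle-to-grid transfer with linear shape functions; it is not the authors' fabric model, and all names are illustrative:

```python
def particles_to_grid(xp, mp, vp, x0, dx, n_nodes):
    """Scatter particle mass and momentum onto a uniform 1D grid using
    linear (hat) shape functions -- the first stage of an MPM time step.
    xp, mp, vp: particle positions, masses, velocities.
    x0, dx, n_nodes: grid origin, spacing, node count."""
    mass = [0.0] * n_nodes
    momentum = [0.0] * n_nodes
    for x, m, v in zip(xp, mp, vp):
        i = int((x - x0) / dx)              # node immediately left of the particle
        w_right = (x - (x0 + i * dx)) / dx  # linear weight toward right node
        w_left = 1.0 - w_right
        mass[i] += w_left * m
        momentum[i] += w_left * m * v
        if i + 1 < n_nodes:
            mass[i + 1] += w_right * m
            momentum[i + 1] += w_right * m * v
    # nodal velocities, guarding against empty nodes
    vel = [p / mq if mq > 0.0 else 0.0 for p, mq in zip(momentum, mass)]
    return mass, momentum, vel
```

Because the weights form a partition of unity, total mass and momentum are conserved exactly by the transfer.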

Keywords: material point method, woven fabric composites, forming, material handling

Procedia PDF Downloads 173
216 Finite Element Analysis of Glass Facades Supported by Pre-Tensioned Cable Trusses

Authors: Khair Al-Deen Bsisu, Osama Mahmoud Abuzeid

Abstract:

Significant technological advances have been achieved in the design and construction of steel and glass buildings in the last two decades. The metal frame supporting the glass has been replaced by more sophisticated technological solutions, for example, point-fixed glazing systems. The minimization of visual mass has reached extensive possibilities through the evolution of technology in glass production and a better understanding of the structural potential of glass itself, the technological development of bolted fixings, the introduction of glazing support attachments for glass suspension systems, and the use of cables for structural stabilization, which reduce the amount of metal used to a minimum. The variability of solutions for tension structures, allied to the difficulties related to geometric and material non-linear behavior, usually overrules the use of analytical solutions, leaving numerical analysis as the only general approach to the design and analysis of tension structures. With their low stiffness, light weight, and small damping, tension structures are obviously geometrically nonlinear. In fact, the analysis of a cable truss is not only one of the most difficult nonlinear analyses, because the analysis path may have rigid-body modes, but also a time-consuming procedure. Non-linear theory allowing for large deflections is used. The flexibility of supporting members was observed to influence the stresses in the pane considerably in some cases. No other class of architectural structural systems is as dependent upon the use of digital computers as tensile structures. Besides complexity, the design and analysis of tension structures presents a series of specificities, which usually lead to the use of special-purpose programs instead of general-purpose programs (GPPs), such as ANSYS. In a special-purpose program, part of the design know-how is embedded in program routines. 
It is very probable that this type of program will be the option of the final user in design offices. GPPs, however, offer a range of analysis types and modeling options. Besides, traditional GPPs are constantly being tested by a large number of users and are updated according to their actual demands. This work discusses the use of ANSYS for the analysis and design of tension structures, such as cable truss structures under wind and gravity loadings. A model describing the glass panels working in coordination with the cable truss was proposed, and on that basis a FEM model was established.

Keywords: glass construction material, facades, finite element, pre-tensioned cable truss

Procedia PDF Downloads 266
215 Railway Ballast Volumes Automated Estimation Based on LiDAR Data

Authors: Bahar Salavati Vie Le Sage, Ismaïl Ben Hariz, Flavien Viguier, Sirine Noura Kahil, Audrey Jacquin, Maxime Convert

Abstract:

The ballast layer plays a key role in railroad maintenance and the geometry of the track structure. Ballast holds the track in place as trains roll over it; it is packed between the sleepers and along the sides of railway tracks. An imbalance in ballast volume on the tracks can lead to safety issues as well as rapid degradation of the overall quality of the railway segment. If there is a lack of ballast in the track bed during the summer, there is a risk that the rails will expand and buckle slightly due to high temperatures. Furthermore, knowledge of the ballast quantities that will be excavated during renewal works is important for efficient ballast management. The volume of excavated ballast per meter of track can be calculated from the excavation depth, excavation width, volume of the track skeleton (sleepers and rails) and sleeper spacing. Since 2012, SNCF has been collecting 3D point cloud data covering its entire railway network using 3D laser scanning technology (LiDAR). This vast amount of data represents a model of the entire railway infrastructure, allowing various simulations to be conducted for maintenance purposes. This paper presents an automated method for ballast volume estimation based on the processing of LiDAR data. Abnormal ballast volumes on the tracks are estimated by analyzing the cross-section of the track. Since the amount of ballast required varies depending on the track configuration, knowledge of the ballast profile is needed. Prior to track rehabilitation, excess ballast is often present in the ballast shoulders. Based on the 3D laser scans, a Digital Terrain Model (DTM) was generated, and the ballast profiles are automatically extracted from this data. 
The ballast surplus is then estimated by comparing the empirically obtained ballast profile with a geometric model of the theoretical ballast profile thresholds dictated by maintenance standards. Ideally, this excess should be removed prior to renewal works and recycled to optimize the output of the ballast renewal machine. Based on these parameters, an application has been developed to allow the automatic measurement of ballast profiles. We evaluated the method on a 108-kilometer segment of railroad LiDAR scans, and the results show that the proposed algorithm detects a ballast surplus close to the total quantity of spoil ballast excavated.
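The surplus-estimation step described above (measured cross-section versus theoretical maintenance profile) amounts to integrating the positive difference between the two profiles. A Python sketch of that idea, as an illustrative reconstruction rather than SNCF's actual implementation (names and units are assumptions):

```python
def ballast_surplus_area(offsets_m, measured_m, theoretical_m):
    """Cross-sectional surplus area (m^2): trapezoidal integration of the
    positive difference between the measured ballast profile and the
    theoretical profile from maintenance standards, sampled at the same
    lateral offsets."""
    excess = [max(m - t, 0.0) for m, t in zip(measured_m, theoretical_m)]
    area = 0.0
    for i in range(len(offsets_m) - 1):
        area += 0.5 * (excess[i] + excess[i + 1]) * (offsets_m[i + 1] - offsets_m[i])
    return area

def surplus_volume(area_m2, segment_length_m):
    """Surplus volume along a track segment, assuming a constant cross-section."""
    return area_m2 * segment_length_m
```

In practice the cross-section would be re-extracted at regular intervals along the track and the per-section areas integrated along the line.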

Keywords: ballast, railroad, LiDAR, point cloud, track ballast, 3D point

Procedia PDF Downloads 94
214 Family Photos as Catalysts for Writing: A Pedagogical Exercise in Visual Analysis with MA Students

Authors: Susana Barreto

Abstract:

This paper explores a pedagogical exercise that employs family photos as catalysts for teaching visual analysis and inspiring academic writing among MA students. The study aimed at two primary objectives: to impart to students the skills of analyzing images or artifacts and to ignite their writing for research purposes. Conducted at the Viana Polytechnic in Portugal, the exercise involved two classes of the Arts Management and Art Education Master's course, comprising approximately twenty students from diverse academic backgrounds, including Economics, Design, Fine Arts, and Sociology, among others. The exploratory exercise involved selecting an old family photo, analyzing its content and context, and deconstructing the chosen images in an intuitive and systematic manner. Students were encouraged to engage in photo elicitation, seeking insights from family and friends to gain multigenerational perspectives on the images. The feedback received from this exercise was consistently positive, largely due to the personal connection students felt with the objects of analysis. Family photos, with their emotional significance, fostered deeper engagement and motivation in the learning process. Furthermore, visually analysing family photos stimulated critical thinking as students interpreted the composition, subject matter, and potential meanings embedded in the images. This practice enhanced their ability to comprehend complex visual representations and construct compelling visual narratives, thereby facilitating the writing process. The exercise also facilitated the identification of patterns, similarities, and differences by comparing different family photos, leading to a more comprehensive analysis of visual elements and themes. Throughout the exercise, students found analyzing their own photographs both enjoyable and insightful. They progressed through preliminary analysis, explored content and context, and artfully interwove these components. 
Additionally, students experimented with various techniques, such as converting photos to black and white, altering framing angles, and adjusting sizes, to unveil hidden meanings. The methodology employed included observation, documental analysis of written reports, and student interviews. By including students from diverse academic backgrounds, the study enhanced its external validity, enabling a broader range of perspectives and insights during the exercise. Furthermore, encouraging students to seek multigenerational perspectives from family and friends added depth to the analysis, enriching the learning experience and broadening the understanding of the cultural and historical context associated with the family photos. Highlighting the emotional significance of these photos and the personal connection students felt with the objects of analysis fosters a deeper connection to the subject matter. Moreover, the emphasis on stimulating critical thinking through the analysis of composition, subject matter, and potential meanings suggests a targeted approach to developing analytical skills, enhancing the overall quality of the exercise. Additionally, having students compare different family photos to identify patterns, similarities, and differences adds a layer of complexity, ultimately leading to a more comprehensive understanding of visual elements and themes. The expected results of this study will culminate in a set of practical recommendations for implementing this exercise in academic settings.

Keywords: visual analysis, academic writing, pedagogical exercise, family photos

Procedia PDF Downloads 47
213 Kanga Traditional Costume as a Tool for Community Empowerment in Tanzania from an Ubuntu Perspective: A Literature Review

Authors: Meinrad Haule Lembuka

Abstract:

Introduction: Ubuntu culture represents African humanism, with a collective and positive feeling of people living together in interdependence, equality and peace. Over time, Ubuntu culture developed a variety of communicative strategies to express experiences, feelings and knowledge. Khanga or kanga (a garment) is among the Ubuntu cultural practices of the Bantu-speaking people along the East African coast, whose interaction with Arabs formed Swahili culture. Kanga is a Swahili word denoting a traditional soft cotton cloth in a variety of colours, patterns, and styles, which has a deep cultural, historical, and social significance not only in Tanzania but along the rest of the East African coast. Swahili culture is a subculture of Ubuntu African culture, rich in customs and rituals that serve to preserve goodness and life; Tanzania, like the rest of the East African societies along the Indian Ocean coast, engaged in the kanga dressing custom under Swahili culture to express feelings and share knowledge. After the independence of Tanzania (formerly Tanganyika) from British colonial rule, kanga traditional dressing gained momentum in Swahili culture and spread to the rest of East Africa and beyond. To date, kanga dressing holds a good position as a formal and informal tool for advocating for marginalised groups, counselling, psychosocial therapy, liberation, compassion, love, justice, campaigns, and celebration. Methodology: A literature review method, guided by Ubuntu theory, was used to assess the implications of kanga traditional dressing in empowering the Tanzanian community. Findings: During slavery, slaves wore Kaniki, and people despised Kaniki dressing due to its association with slavery. Ex-slave women seeking to become part of Swahili society began to decorate their Kaniki clothes. After slavery was abolished in 1897, kangas began to be used for self-empowerment and to indicate that the wearer had personal wealth. 
During the colonial era, Africans' freedom of expression was restricted by the colonial masters, so Tanzanians used kanga to express the evils of colonialism and other social problems. Under the Ubuntu values of unity and solidarity, liberation and independence fighters crafted mottos and liberation messages that were shared and spread rapidly in the community. Political parties like TANU used kanga to spread nationalism and the Ujamaa policy. Kanga is more than a piece of fabric: it is a space for women to voice unspeakable communication and a women-centred repository for indigenous knowledge and feminisms, addressing social ills, happiness, campaigns, memories and reconciliation. Kanga provides an indirect voice and support for vulnerable and marginalised populations, and it has proved to be a peaceful platform for capturing the attention of government and society. Kanga textiles gained increased international fame when an Obama kanga design was produced upon the president's election in 2008 and his visit to Tanzania in 2013. Conclusion: Kanga preserves and symbolises Swahili culture and contributes to the realization of social justice, inclusion, national identity and unity. As an inclusive cultural tool, kanga has spread across Africa to the international community, and the practice has moved from being a women-dominated dress code to being worn more widely.

Keywords: African culture, kanga, khanga, Swahili culture, Ubuntu

Procedia PDF Downloads 54
212 Quantification of Lawsone and Adulterants in Commercial Henna Products

Authors: Ruchi B. Semwal, Deepak K. Semwal, Thobile A. N. Nkosi, Alvaro M. Viljoen

Abstract:

The use of Lawsonia inermis L. (Lythraceae), commonly known as henna, has many medicinal benefits; it is used in folk medicine as a remedy for diarrhoea, cancer, inflammation, headache, jaundice and skin diseases. Although long used for hair dyeing and temporary tattooing, henna body art has grown in popularity over the last 15 years, changing from a traditional bridal and festival adornment to an exotic fashion accessory. The naphthoquinone lawsone is one of the main constituents of the plant and responsible for its dyeing property. Henna leaves typically contain 1.8–1.9% lawsone, which is used as a marker compound for the quality control of henna products. Adulteration of henna with various toxic chemicals such as p-phenylenediamine, p-methylaminophenol, p-aminobenzene and p-toluenediamine to produce a variety of colours is very common and has resulted in serious health problems, including allergic reactions. This study aims to assess the quality of henna products collected from different parts of the world by determining the lawsone content as well as the concentrations of any adulterants present. Ultra-high performance liquid chromatography-mass spectrometry (UPLC-MS) was used to determine the lawsone concentrations in 172 henna products. Separation of the chemical constituents was achieved on an Acquity UPLC BEH C18 column using gradient elution (0.1% formic acid and acetonitrile). The UPLC-MS results revealed that of the 172 henna products, 11 contained 1.0-1.8% lawsone and 110 contained 0.1-0.9% lawsone, whereas 51 samples did not contain detectable levels of lawsone. High performance thin layer chromatography was investigated as a cheaper, more rapid technique for the quality control of henna with respect to lawsone content. The samples were applied using an automatic TLC Sampler 4 (CAMAG) to pre-coated silica plates, which were subsequently developed with acetic acid, acetone and toluene (0.5:1.0:8.5 v/v). 
A Reprostar 3 digital system allowed the images to be captured. The results obtained corresponded to those from the UPLC-MS analysis. Vibrational spectroscopy (MIR or NIR) of the powdered henna, followed by chemometric modelling of the data, indicates that this technique shows promise as an alternative quality control method. Principal component analysis (PCA) was used to explore the data by observing clustering and identifying outliers. Partial least squares (PLS) multivariate calibration models were constructed for the quantification of lawsone. In conclusion, only a few of the samples analysed contained lawsone in high concentrations, indicating that most are of poor quality. The presence of adulterants that may have been added to enhance the dyeing properties of the products is currently being investigated.
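As an aside, chromatographic quantification of this kind typically rests on a linear calibration of peak area against standard concentrations; the spectroscopic route described in the abstract replaces this with multivariate PLS models. A minimal univariate sketch of the calibration idea (illustrative, pure least squares; not the study's actual calibration procedure):

```python
def fit_calibration(concentrations, peak_areas):
    """Ordinary least-squares line, area = slope * concentration + intercept,
    fitted to calibration standards."""
    n = len(concentrations)
    mx = sum(concentrations) / n
    my = sum(peak_areas) / n
    sxx = sum((x - mx) ** 2 for x in concentrations)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concentrations, peak_areas))
    slope = sxy / sxx
    return slope, my - slope * mx

def quantify(peak_area, slope, intercept):
    """Invert the calibration line to estimate an unknown's concentration."""
    return (peak_area - intercept) / slope
```

With standards at known lawsone concentrations, an unknown sample's peak area is mapped back to a concentration through the fitted line.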

Keywords: Lawsonia inermis, paraphenylenediamine, temporary tattooing, lawsone

Procedia PDF Downloads 448
211 Mental Health Monitoring System as an Effort for Prevention and Handling of Psychological Problems in Students

Authors: Arif Tri Setyanto, Aditya Nanda Priyatama, Nugraha Arif Karyanta, Fadjri Kirana A., Afia Fitriani, Rini Setyowati, Moh.Abdul Hakim

Abstract:

The Basic Health Research Report by the Ministry of Health (2018) shows an increase in the prevalence of mental health disorders in the adolescent and early adult age ranges. Supporting this finding, data from the psychological examinations at the student health service unit of one state university recorded 115 cases of moderate and severe mental health problems in the period 2016-2019. More specifically, the highest number of cases was experienced by clients in the age range of 21-23 years, i.e., students around the middle to the end of their studies. Anxiety was the most common psychological problem: 29%, or 33 students, experienced anxiety disorders, and 25%, or 29 students, experienced problems ranging from mild to severe, alongside other classifications of disorders, including adjustment disorders, family and academic problems, mood disorders, self-concept disorders, personality disorders, cognitive disorders, and others such as trauma and sexual disorders. Such mental health disorders have a significant impact on students' academic lives: low GPA, exceeding the time limit for their studies, dropping out, disrupted social life on campus, and even suicide. Based on literature reviews and best practices from universities in various countries, one of the effective ways to prevent and treat student mental health disorders is to implement a mental health monitoring system in universities. This study uses a participatory action research approach, with a sample of 423 from a total population of 32,112 students. The scales used in this study are the Beck Depression Inventory (BDI) to measure depression and the Taylor Manifest Anxiety Scale (TMAS) to measure anxiety levels. This study aims to (1) develop a digital monitoring system that classifies students' mental health situations as healthy, at risk, or having a mental disorder, especially indications of symptoms of depression and anxiety disorders, and (2) implement the mental health monitoring system in universities at the beginning and end of each semester. The results of the analysis show that, of the 423 respondents, the main problems faced were related to coursework, such as theses and academic assignments. Based on the scoring and categorization of the Beck Depression Inventory (BDI), 191 students experienced symptoms of depression: 24.35% (103 students) experienced mild depression, 14.42% (61 students) moderate depression, and 6.38% (27 students) severe or extreme depression. Furthermore, 80.38% (340 students) experienced anxiety in the high category. This article reviews this monitoring effort and the student mental health service system on campus.
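The BDI categorization behind the mild/moderate/severe counts above can be sketched as follows. The cut-offs used here are the commonly cited ones for the original BDI and are an assumption; the abstract does not state which thresholds the authors applied:

```python
def categorize_bdi(score: int) -> str:
    """Map a Beck Depression Inventory total score (0-63) to a severity
    category, using commonly cited cut-offs for the original BDI."""
    if not 0 <= score <= 63:
        raise ValueError("BDI total scores range from 0 to 63")
    if score <= 9:
        return "minimal"
    if score <= 18:
        return "mild"
    if score <= 29:
        return "moderate"
    return "severe"

def prevalence(scores):
    """Share of respondents in each category, as percentages."""
    counts = {}
    for s in scores:
        c = categorize_bdi(s)
        counts[c] = counts.get(c, 0) + 1
    return {c: 100.0 * n / len(scores) for c, n in counts.items()}
```

Percentages such as the 24.35% mild figure reported above would fall out of the `prevalence` step applied to the full sample.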

Keywords: monitoring system, mental health, psychological problems, students

Procedia PDF Downloads 104
210 Advancing Trustworthy Human-robot Collaboration: Challenges and Opportunities in Diverse European Industrial Settings

Authors: Margarida Porfírio Tomás, Paula Pereira, José Manuel Palma Oliveira

Abstract:

The decline in employment rates across sectors like industry and construction is exacerbated by an aging workforce. This has far-reaching implications for the economy, including skills gaps, labour shortages, productivity challenges due to physical limitations, and workplace safety concerns. To sustain the workforce and pension systems, technology plays a pivotal role. Robots provide valuable support to human workers, and effective human-robot interaction is essential. FORTIS, a Horizon project, aims to address these challenges by creating a comprehensive Human-Robot Interaction (HRI) solution. This solution focuses on multi-modal communication and multi-aspect interaction, with a primary goal of maintaining a human-centric approach. By meeting the needs of both human workers and robots, FORTIS aims to facilitate efficient and safe collaboration. The project encompasses three key activities: 1) A Human-Centric Approach involving data collection, annotation, understanding human behavioural cognition, and contextual human-robot information exchange. 2) A Robotic-Centric Focus addressing the unique requirements of robots during the perception and evaluation of human behaviour. 3) Ensuring Human-Robot Trustworthiness through measures such as human-robot digital twins, safety protocols, and resource allocation. Factor Social, a project partner, will analyse psycho-physiological signals that influence human factors, particularly in hazardous working conditions. The analysis will be conducted using a combination of case studies, structured interviews, questionnaires, and a comprehensive literature review. However, the adoption of novel technologies, particularly those involving human-robot interaction, often faces hurdles related to acceptance. To address this challenge, FORTIS will draw upon insights from Social Sciences and Humanities (SSH), including risk perception and technology acceptance models. 
Throughout its lifecycle, FORTIS will uphold a human-centric approach, leveraging SSH methodologies to inform the design and development of solutions. This project received funding from European Union’s Horizon 2020/Horizon Europe research and innovation program under grant agreement No 101135707 (FORTIS).

Keywords: skills gaps, productivity challenges, workplace safety, human-robot interaction, human-centric approach, social sciences and humanities, risk perception

Procedia PDF Downloads 35
209 Explore and Reduce the Performance Gap between Building Modelling Simulations and the Real World: Case Study

Authors: B. Salehi, D. Andrews, I. Chaer, A. Gillich, A. Chalk, D. Bush

Abstract:

With the rapid increase of energy consumption in buildings in recent years, especially with the rise in population and growing economies, the importance of energy savings in buildings becomes more critical. One of the key factors in ensuring that energy consumption is controlled and kept to a minimum is to utilise building energy modelling at the very early stages of design, so building modelling and simulation is a growing discipline. During the design phase of construction, modelling software can be used to estimate a building's projected energy consumption as well as its performance. The growth in the use of building modelling software packages opens the door for improvements in the design and also in the modelling itself, by introducing novel methods such as building information modelling-based software packages, which promote conventional building energy modelling into the digital building design process. To identify the most effective implementation tools, research projects should include elements of real-world experiments and not just rely on theoretical and simulated approaches. A review of related studies shows that they are mostly based on modelling and simulation, which can be attributed to various reasons, such as the more expensive and time-consuming nature of real-time data-based studies. Given the recent rise of building energy modelling software packages and the increasing number of studies utilising these methods, the accuracy and reliability of these packages have become even more crucial. The energy performance gap refers to the discrepancy between predicted energy savings and the realised actual savings, especially after buildings implement energy-efficient technologies. Many different software packages are available, either free or in commercial versions. 
In this study, IES VE (Integrated Environmental Solutions Virtual Environment) is used, as it is a common building energy modelling and simulation package in the UK. This paper describes a study that compares real-time monitored results with those of a virtual model to illustrate this gap. The subject of the study is a north-west (345°) facing, naturally ventilated conservatory within a domestic building in London, monitored during summer to capture real-time data. These results are then compared with the output of the IES VE virtual model. In this project, the effect of incorrectly positioned blinds on overheating is studied, providing new evidence of the Performance Gap, and the challenges of drawing the input of solar shading products in IES VE are also considered.
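As a hedged illustration of how such a performance gap can be quantified, the sketch below compares paired hourly measurements against model output using a simple mean-bias metric. The temperature values and the metric itself are assumptions for illustration, not the study's actual dataset or method.

```python
# Hypothetical sketch: quantifying the Energy Performance Gap between
# monitored (real-time) and simulated indoor temperatures.

def performance_gap(measured, simulated):
    """Mean bias and gap percentage between paired hourly readings."""
    if len(measured) != len(simulated) or not measured:
        raise ValueError("series must be non-empty and equal length")
    bias = sum(s - m for m, s in zip(measured, simulated)) / len(measured)
    mean_measured = sum(measured) / len(measured)
    gap_pct = 100.0 * bias / mean_measured
    return bias, gap_pct

# Illustrative afternoon temperatures (deg C) in the conservatory:
measured = [29.1, 30.4, 31.2, 30.8]   # logged sensor data
simulated = [27.5, 28.2, 28.9, 28.6]  # virtual model output

bias, gap = performance_gap(measured, simulated)
```

A negative bias here would mean the model under-predicts the monitored overheating, which is the direction of gap the study investigates.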

Keywords: building energy modelling and simulation, integrated environmental solutions virtual environment, IES VE, performance gap, real time data, solar shading products

Procedia PDF Downloads 129
208 A World Map of Seabed Sediment Based on 50 Years of Knowledge

Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès

Abstract:

Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches for aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This approach had already been initiated a century earlier, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts and then sediment maps of the continental shelves of Europe and North America. The current map of ocean sediments was initiated from UNESCO's general map of the deep ocean floor. This map was adapted using a unique sediment classification able to present all types of sediments: from beaches to the deep seabed and from glacial deposits to tropical sediments. In order to allow good visualization and to suit the different applications, only the granularity of sediments is represented. Published seabed maps are reviewed; if they present an interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large deep-ocean hydrographic surveys, which allow very high-quality mapping of areas that until then were represented as homogeneous. The third and principal source of data is the integration of regional maps produced specifically for this project. These regional maps are built using all the bathymetric and sedimentary data of a region; this step makes it possible to produce a regional synthesis map, with generalizations applied where the data are over-precise. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map.
This work is ongoing and yields a new digital version every two years, with the integration of new maps. This article describes the choices made in terms of sediment classification, the scale of source data, and the zonation of quality variability. This map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress made in seabed characterization over recent decades. Thus, the arrival of new seafloor classification systems has improved the recent seabed maps, and compiling these new maps with those previously published allows a gradual enrichment of the world sedimentary map. There is still much work to do, however, to enhance regions that are still based on data acquired more than half a century ago.
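As an illustration of the kind of granularity-only classification the map relies on, the sketch below bins a median grain size using Wentworth-style limits. These bins are a common textbook scale, not necessarily the specific classification Shom adopted for the world map.

```python
# Minimal sketch of a granularity-only sediment classification, using
# Wentworth-style grain-size limits (mm). Bins are illustrative.

WENTWORTH_BINS = [        # (upper grain-size limit in mm, class name)
    (0.0039, "clay"),
    (0.0625, "silt"),
    (2.0,    "sand"),
    (4.0,    "granule"),
    (64.0,   "pebble"),
    (256.0,  "cobble"),
    (float("inf"), "boulder"),
]

def classify(grain_size_mm):
    """Return the sediment class for a median grain size in mm."""
    for upper, name in WENTWORTH_BINS:
        if grain_size_mm < upper:
            return name
```

Transcribing a published map into the world map then amounts to remapping its legend onto bins like these before integration.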

Keywords: marine sedimentology, seabed map, sediment classification, world ocean

Procedia PDF Downloads 225
207 Observing Teaching Practices Through the Lenses of Self-Regulated Learning: A Study Within the String Instrument Individual Context

Authors: Marija Mihajlovic Pereira

Abstract:

Teaching and learning a musical instrument is challenging for both teachers and students. Teachers generally use diverse strategies to resolve students' particular issues in a one-to-one context. Considering individual sessions as a supportive educational context, the teacher can play a decisive role in stimulating and promoting self-regulated learning strategies, especially with beginning learners. Teachers who promote self-controlling behaviors, strategic monitoring, and regulation of actions toward goals can expect their students to practice more qualitatively and consciously. When encouraged to adopt self-regulation habits, students can benefit from greater productivity over the long term. Founded on Barry Zimmerman's cyclical model, which comprises three phases (forethought, performance, and self-reflection), this work aims to articulate self-regulated learning with music learning. Self-regulated learning appeals to the individual's attitude in planning, controlling, and reflecting on their performance. Furthermore, this study aimed to present an observation grid for perceiving teaching instructions that encourage students' cognitive control behaviors, in light of the belief that conscious promotion of self-regulation may motivate strategic actions toward goals in musical performance. The participants, two teachers and two students, have been involved in a social inclusion project in Lisbon (Portugal). The author and one independent inter-observer analyzed six video-recorded string instrument lessons. The data correspond to three sessions per teacher, each lectured to one (different) student. The violin (f) and violoncello (m) teachers hold a Master's degree in music education and approximately five years of experience. In their second year of learning an instrument, the students had by then acquired reasonable skills in musical reading, posture, and sound quality.
The students also manifest positive learning behaviors and interest in learning a musical instrument, although their study habits are still inconsistent. In-class rehearsal frames were coded using MAXQDA software, version 20, according to the grid's four categories (parent codes): self-regulated learning, teaching verbalizations, teaching strategies, and students' in-class performance. As a result, selected rehearsal frames qualitatively describe teaching instructions that might promote students' body and hearing awareness, such as "close the eyes while playing" or "sing to internalize the pitch." Another type of analysis, coding the short video events according to the observation grid's subcategories (child codes), made it possible to measure the time teachers dedicate to specific verbal or non-verbal strategies. Furthermore, a coding overlay analysis indicated that teachers tend to stimulate: (i) Forethought, by explaining tasks, offering feedback, and ensuring that students identify a goal; (ii) Performance, by teaching study strategies and encouraging students to sing and use vocal abilities to ensure inner audition; (iii) Self-reflection, by frequent inquiring and encouraging students to verbalize their perception of performance. Although developed in the context of individual string instrument lessons, this classroom observation grid brings together essential variables in a one-to-one lesson. It may find utility in a broader context of music education due to the possibility of organizing, observing, and evaluating teaching practices. Besides that, this study contributes to cognitive development research by suggesting a practical approach to fostering self-regulated learning.
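A minimal sketch of the time-per-code tally that this kind of video coding produces is shown below: each coded event carries a child code and start/end times, and durations are summed per code. The code names and timings are invented for illustration, not taken from the study's MAXQDA project.

```python
# Sketch: summing coded video-event durations per child code.
from collections import defaultdict

events = [                       # (child code, start_s, end_s)
    ("explain_task", 0, 35),
    ("sing_model", 35, 60),
    ("inquire", 60, 80),
    ("explain_task", 80, 95),
]

def seconds_per_code(events):
    """Sum the duration of every coded event per child code."""
    totals = defaultdict(int)
    for code, start, end in events:
        totals[code] += end - start
    return dict(totals)

totals = seconds_per_code(events)
```

Comparing such totals across sessions is what lets the analysis say how much time each teacher dedicates to a given verbal or non-verbal strategy.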

Keywords: music education, observation grid, self-regulated learning, string instruments, teaching practices

Procedia PDF Downloads 90
206 Telemedicine Services in Ophthalmology: A Review of Studies

Authors: Nasim Hashemi, Abbas Sheikhtaheri

Abstract:

Telemedicine is the use of telecommunication and information technologies to provide health care services, often not consistently available in distant rural communities, to people in these remote areas. Teleophthalmology is a branch of telemedicine that delivers eye care through digital medical equipment and telecommunications technology. Thus, teleophthalmology can overcome geographical barriers and improve the quality, access, and affordability of eye health care services. Since teleophthalmology has been widely applied in recent years, the aim of this study was to determine its different applications around the world. To this end, three bibliographic databases (Medline, ScienceDirect, Scopus) were comprehensively searched with these keywords: eye care, eye health care, primary eye care, diagnosis, detection, and screening of different eye diseases, in conjunction with telemedicine, telehealth, teleophthalmology, e-services, and information technology. All types of papers were included, with no time restriction; the search covered publications up to 2015. Finally, 70 articles were surveyed. We classified the results based on the 'type of eye problems covered' and 'the type of telemedicine services'. From the 'perspective of health care levels', there are three levels of eye health care: primary, secondary, and tertiary eye care. From the 'perspective of eye care services', the main application of teleophthalmology in primary eye care was the diagnosis of different eye diseases such as diabetic retinopathy, macular edema, strabismus, and age-related macular degeneration. The main application in secondary and tertiary eye care was the screening of eye problems, i.e., diabetic retinopathy, astigmatism, and glaucoma screening.
Teleconsultation between health care providers and ophthalmologists, as well as education and training sessions for patients, were other types of teleophthalmology services worldwide. Real-time, store-and-forward, and hybrid methods were the main forms of communication from the perspective of 'teleophthalmology mode', chosen based on the IT infrastructure between sending and receiving centers. For specialists, early detection of serious age-related ophthalmic disease in the population, screening of eye disease processes, consultation in emergency cases, and comprehensive eye examination were the most important benefits of teleophthalmology. The cost-effectiveness of teleophthalmology projects, resulting from reduced transportation and accommodation costs, access to affordable eye care services, and receiving specialist opinions, was also a main advantage for patients. Teleophthalmology brings valuable secondary and tertiary care to remote areas. Applying teleophthalmology for detection, treatment, and screening purposes, and expanding its use in new applications such as eye surgery, will be a key tool to promote public health and integrate eye care into primary health care.
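The classification scheme the review applies, eye problem, service type, and communication mode, can be sketched as a small data model; the field names and example studies below are assumptions for illustration, not entries from the actual survey.

```python
# Sketch: classifying surveyed teleophthalmology papers by the three
# communication modes the review identifies.
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    REAL_TIME = "real time"
    STORE_AND_FORWARD = "store-and-forward"
    HYBRID = "hybrid"

@dataclass
class Study:
    title: str
    eye_problem: str   # e.g. "diabetic retinopathy", "glaucoma"
    service: str       # "diagnosis", "screening", "teleconsultation", ...
    mode: Mode

def count_by_mode(studies):
    """Tally surveyed papers by communication mode."""
    counts = {m: 0 for m in Mode}
    for s in studies:
        counts[s.mode] += 1
    return counts
```

A per-mode tally like this is the kind of aggregate a structured review reports when summarising how the surveyed services communicate.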

Keywords: applications, telehealth, telemedicine, teleophthalmology

Procedia PDF Downloads 359
205 Promoting 21st Century Skills through Telecollaborative Learning

Authors: Saliha Ozcan

Abstract:

Technology has become an integral part of our lives, aiding individuals in accessing higher-order competencies such as global awareness, creativity, collaborative problem solving, and self-directed learning. Students need to acquire these competencies, often referred to as 21st century skills, in order to adapt to a fast-changing world. Today, an ever-increasing number of schools are exploring how engagement through telecollaboration can support language learning and promote 21st century skill development in classrooms. However, little is known regarding how telecollaboration may influence the way students acquire 21st century skills. In this paper, we aim to shed light on the potential implications of telecollaborative practices for the acquisition of 21st century skills. In our context, telecollaboration, which might be carried out in a variety of settings either synchronously or asynchronously, is considered as the process of communicating and working together with other people or groups from different locations, through online digital tools or offline activities, to co-produce a desired work output. The study presented here will describe and analyse the implementation of a telecollaborative project between two high school classes, one in Spain and the other in Sweden. The students in these classes were asked to carry out some joint activities, including creating an online platform, aimed at raising awareness of the situation of the Syrian refugees. We conducted a qualitative study in order to explore how language, culture, communication, and technology merge into the co-construction of knowledge, as well as how they support the attainment of the 21st century skills needed for network-mediated communication. To this end, we collected a significant amount of audio-visual data, including video recordings of classroom interaction and external Skype meetings.
By analysing this data, we verify whether the initial pedagogical design and intended objectives of the telecollaborative project coincide with what emerges from the actual implementation of the tasks. Our findings indicate that, as well as planned activities, unplanned classroom interactions may lead to acquisition of certain 21st century skills, such as collaborative problem solving and self-directed learning. This work is part of a wider project (KONECT, EDU2013-43932-P; Spanish Ministry of Economy and Finance), which aims to explore innovative, cross-competency based teaching that can address the current gaps between today’s educational practices and the needs of informed citizens in tomorrow’s interconnected, globalised world.

Keywords: 21st century skills, telecollaboration, language learning, network mediated communication

Procedia PDF Downloads 117
204 Forensic Investigation: The Impact of Biometric-Based Solution in Combatting Mobile Fraud

Authors: Mokopane Charles Marakalala

Abstract:

Research shows that mobile fraud grew exponentially in South Africa during the lockdown caused by the COVID-19 pandemic. According to the South African Banking Risk Information Centre (SABRIC), fraudulent online banking and transactions resulted in a sharp increase in cybercrime since the beginning of the lockdown, resulting in huge losses to the banking industry in South Africa. While the Financial Intelligence Centre Act, 38 of 2001, regulates financial transactions, it is evident that criminals are using technology to their advantage. Money laundering ranks among the major crimes, not only in South Africa but worldwide. This paper focuses on the impact of biometric-based solutions in combatting mobile fraud at the South African Banking Risk Information Centre. SABRIC has faced the challenge of successful mobile fraud: cybercriminals can hijack a mobile device and use it to gain access to sensitive personal data and accounts. Cybercriminals constantly scour the depths of cyberspace in search of victims to attack. Millions of people worldwide use online banking to do their regular bank-related transactions quickly and conveniently, and SABRIC has regularly highlighted incidents of mobile fraud, corruption, and maladministration; customers who fail to secure their online banking are vulnerable to falling prey to scams such as mobile fraud. Criminals have made use of digital platforms ever since the development of the technology. In 2017, 13,438 incidents involving banking apps, internet banking, and mobile banking caused the sector to suffer gross losses of more than R250,000,000, and the affected parties are often forced to point fingers at one another while the fraudster makes off with the money. Non-probability (purposive) sampling was used to select participants, and data were collected through telephone calls and virtual interviews.
The results indicate that there is a relationship between remote online banking and the increase in money laundering, as the system allows transactions to take place with limited verification processes. This paper highlights the significance of developing prevention mechanisms, capacity development, and strategies for both financial institutions and law enforcement agencies in South Africa to reduce crimes such as money laundering. The researcher recommends that awareness among bank staff be increased through the provision of requisite and adequate training.

Keywords: biometric-based solution, investigation, cybercrime, forensic investigation, fraud, combatting

Procedia PDF Downloads 79
203 Modeling Landscape Performance: Evaluating the Performance Benefits of the Olmsted Brothers’ Proposed Parkway Designs for Los Angeles

Authors: Aaron Liggett

Abstract:

This research focuses on the visionary proposal made by the Olmsted Brothers landscape architecture firm in the 1920s for a network of interconnected parkways in Los Angeles. The envisioned parkways aimed to address environmental and cultural strains by providing green space for recreation, wildlife habitat, and stormwater management while serving as multimodal transportation routes. Although the parkways were never constructed, through an evidence-based approach, this research presents a framework for evaluating their potential functionality and success by modeling and visualizing their quantitative and qualitative landscape performance and benefits. Historical documents and innovative digital modeling tools support detailed analysis, modeling, and visualization of the parkway designs. A set of 1928 construction documents is used to analyze and interpret the design intent of the parkways. Grading plans are digitized in CAD and modeled in SketchUp to produce 3D visualizations of the parkway. Drainage plans are digitized to model stormwater performance. Planting plans are analyzed to model urban forestry and biodiversity. The EPA's Storm Water Management Model (SWMM) is used to predict runoff quantity and quality, and USDA Forest Service tools are used to evaluate carbon sequestration and air quality. Spatial and overlay analysis techniques are employed to assess urban connectivity and the spatial impacts of the parkway designs. The study reveals how the integration of blue infrastructure, green infrastructure, and transportation infrastructure within the parkway design creates a multifunctional landscape capable of offering alternative spatial and temporal uses. The analysis demonstrates the potential for multiple functional, ecological, aesthetic, and social benefits to be derived from the proposed parkways.
The Olmsted Brothers' proposed Los Angeles parkways, which predated contemporary ecological design and resiliency practices, demonstrate the potential for urban designs to deliver multiple functional, ecological, aesthetic, and social benefits at once. The findings highlight the importance of integrated blue, green, and transportation infrastructure in creating a multifunctional landscape that serves multiple purposes simultaneously. The research contributes new methods for modeling and visualizing landscape performance benefits, providing insights and techniques for informing future designs and sustainable development strategies.
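The stormwater side of this analysis runs through EPA SWMM; as a hedged back-of-envelope stand-in, the rational method below estimates peak runoff for a hypothetical parkway sub-catchment. The runoff coefficients, rainfall intensity, and area are illustrative assumptions, not values from the study.

```python
# Rational method: Q = C * i * A / 360, with i in mm/h, A in ha, Q in m3/s.
# A rough proxy for the runoff comparison SWMM performs in full detail.

def peak_runoff_m3s(c, intensity_mm_h, area_ha):
    """Peak runoff (m3/s) for runoff coefficient c, intensity i, area A."""
    return c * intensity_mm_h * area_ha / 360.0

# Vegetated parkway strip vs. an equivalent paved surface:
q_green = peak_runoff_m3s(0.25, 50.0, 2.0)  # ~0.25 typical for turf
q_paved = peak_runoff_m3s(0.90, 50.0, 2.0)  # ~0.90 typical for asphalt
```

The contrast between the two estimates is the quantitative core of the parkways' claimed stormwater benefit: the vegetated section sheds a fraction of the peak flow of pavement.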

Keywords: landscape architecture, ecological urban design, greenway, landscape performance

Procedia PDF Downloads 113
202 MusicTherapy for Actors: An Exploratory Study Applied to Students from University Theatre Faculty

Authors: Adriana De Serio, Adrian Korek

Abstract:

Aims: This experiential research presents a Group-MusicTherapy-Theatre-Plan (MusThePlan) that the authors carried out to support actors. MusicTherapy gives rise to individual psychophysical feedback and influences the emotional centres of the brain and the subconscious. The authors therefore underline the effectiveness of the preventive, educational, and training goals of the MusThePlan: to lead theatre students and actors to deal with anxiety and to overcome psychophysical weaknesses, shyness, and emotional stress in stage performances; to increase flexibility, awareness of one's identity, and resources for positive self-development and psychophysical health; and to develop and strengthen social bonds, increasing a network of subjects working for social inclusion and the reduction of stigma. Materials-Methods: Thirty students from the University Theatre Faculty participated in weekly music therapy sessions for two months; each session lasted 120 minutes. MusThePlan: Each session began with a free group rhythmic-sonorous-musical production by body percussion, voice and singing, and instruments, to stimulate communication. Then, a synchronized, structured bodily-rhythmic-sonorous-musical production also involved acting, dances, movements of the hands and arms, hearing and other sensorial perceptions, and speech, to balance motor skills and muscular tone. Each student could be the director-leader of the group, indicating a story to inspire the group's musical production. The third step involved the students in rhythmic speech and singing drills and in vocal exercises focusing on musical pitch to improve intonation, and on diction to improve articulation and increase intelligibility.
At the end of each musictherapy session and of the two months, the Musictherapy Assessment Document was drawn up through analysis of observation protocols and two indices devised by the authors: the Patient-Environment-Music-Index (time t0 - tn), to estimate the evolution of behavior, and the Somatic Pattern Index, to monitor the subject's eye, mouth, and limb motility and perspiration before, during, and after musictherapy sessions. Results: After the first month, the students (non-musicians) had learned to play percussion instruments and formed a musical band that played classical and modern music on percussion with the musictherapist/pianist/conductor in a public concert. At the end of the second month, the students performed a public musical theatre show, acting, dancing, singing, and playing percussion instruments. The students highlighted the importance of the playful aspects of the group musical production in achieving emotional contact and harmony within the group. They reported improved kinetic and vocal skills, all the skills useful for acting, and the nourishment of bodily and emotional balance. Conclusions: The MusThePlan makes use of specific MusicTherapy methodological models, techniques, and strategies useful for actors. It can break down the individual "mask" and can be useful when verbal language is unable to undermine the defense mechanisms of the subject. The MusThePlan improves the actor's psychophysical activation, motivation, gratification, knowledge of one's own possibilities, and quality of life. It could therefore support targeted interventions for actors with characteristics of repeatability, objectivity, and predictability of results. Furthermore, it would be useful to plan a university course/master's in "MusicTherapy for the Theatre".

Keywords: musictherapy, sonorous-musical energy, quality of life, theatre

Procedia PDF Downloads 58
201 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU

Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais

Abstract:

Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), the 'sins' of opacity and unfairness must be addressed for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination against some groups can be exponentiated, and a hetero-personalized identity can be imposed on the individual(s) affected. Autonomous CWA also sometimes lacks transparency when black-box models are used. However, for this intended purpose, human analysts 'on-the-loop' might not be the best remedy consumers are looking for in credit. This study explores the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of EU Directive 2023/2225, of 18 October, which takes effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA's Art. 2. Consequently, engineering the law of consumers' CWA is imperative. The proposed MAS framework consists of several layers arranged in sequence: first, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose neural network model is trained using k-fold cross-validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step Agents; and, lastly, the Oversight Layer has a 'Bottom Stop' allowing analysts to intervene in a timely manner.
From the analysis, one can see that a vital component of this software is the XAI layer. It acts as a transparent curtain over the AI's decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion: it uses lawful alternative sources, such as share of wallet, to search for more advantageous solutions to incomplete evaluation appraisals, based on genetic programming. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, once put into service, credit analysts no longer exert full control over the data-driven entities programmers have given 'birth' to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be inherently vilified; the issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.
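To make the SHAP-style attribution concrete without presuming the authors' implementation, the sketch below uses the known closed form for linear models: for a linear score the exact Shapley value of feature i is w_i * (x_i - mean_i). The features, weights, means, and applicant are invented for illustration only.

```python
# Illustrative SHAP-style attribution for a linear scoring model.
# For linear models, Shapley values have the exact closed form below.

WEIGHTS = {"income": 0.40, "debt_ratio": -0.55, "years_employed": 0.25}
MEANS   = {"income": 3.0, "debt_ratio": 0.35, "years_employed": 6.0}

def shap_values(applicant):
    """Per-feature contribution to the score vs. the average applicant."""
    return {f: WEIGHTS[f] * (applicant[f] - MEANS[f]) for f in WEIGHTS}

applicant = {"income": 2.0, "debt_ratio": 0.60, "years_employed": 2.0}
contrib = shap_values(applicant)
# The analyst 'on-the-loop' can inspect which feature pushed the score
# down most, the kind of signal the Oversight Layer's 'Bottom Stop' needs.
```

An attribution table like `contrib` is what would let a discrimination check ask whether a protected attribute, or a proxy for one, is driving the decision.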

Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking

Procedia PDF Downloads 25
200 Pixel Façade: An Idea for Programmable Building Skin

Authors: H. Jamili, S. Shakiba

Abstract:

Today, one of the main concerns of human beings is facing unpleasant changes in the environment. Buildings are responsible for a significant amount of natural resource consumption and carbon emission production. In such a situation, the idea arises of turning each building into a phenomenon that benefits the environment: a change whereby each building functions as an element that supports the environment, and construction, in addition to answering human needs, is encouraged the way planting a tree is, no longer seen as a threat to living beings and the planet. Prospect: Today, different ideas for developing materials that can function smartly are being realized. For instance, Programmable Materials can respond appropriately to different conditions and offer modification of shape, size, and physical properties, as well as restoration and repair qualities. Studies are progressing with the purpose of designing these materials so that they are easily available, without the need for expensive materials and high technologies. In these cases, the physical attributes of materials take on the role of sensors, wires, and actuators; the materials themselves become robots. In fact, we experience robotics without robots. In recent decades, AI and technological advances have dramatically improved the performance of materials. These achievements combine software optimizations with physical production techniques such as multi-material 3D printing. These capabilities enable us to program materials to change shape, appearance, and physical properties in order to interact with different situations. It is expected that further achievements, such as Memory Materials and Self-learning Materials, will be added to the Smart Materials family: affordable, available, and useful for a variety of applications and industries.
From the architectural standpoint, this research focuses on the building skin, given the noticeable surface area building skins occupy in urban space. The purpose of this research is to find a way for programmable materials to be used in the building skin with the aim of an effective and positive interaction. A Pixel Façade would be a solution for programming a building skin. The Pixel Façade includes components that contain a series of attributes that help buildings meet their needs according to environmental criteria. A PIXEL combines a series of smart materials and digital controllers. It not only exploits its physical properties, such as controlling the amount of sunlight and heat, but also enhances building performance by providing a list of features depending on situational criteria. The features vary by location and function differently during the daytime and across seasons. The primary role of a PIXEL FAÇADE can be defined as filtering pollution (both inside and outside the buildings) and providing clean energy, as well as interacting with other PIXEL FAÇADES to estimate better reactions.
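A hedged sketch of a single PIXEL's control logic is shown below: given local sensing, the pixel picks a shading state. The sensor names, thresholds, and state labels are hypothetical assumptions for illustration; no real programmable-material API is implied.

```python
# Hypothetical control loop for one facade PIXEL: choose a shading
# state from local irradiance and indoor temperature readings.

def pixel_state(irradiance_w_m2, indoor_temp_c,
                glare_limit=600.0, comfort_max_c=26.0):
    """Return the shading state a facade pixel should adopt."""
    if irradiance_w_m2 > glare_limit and indoor_temp_c > comfort_max_c:
        return "opaque"        # block both glare and heat gain
    if irradiance_w_m2 > glare_limit:
        return "diffuse"       # cut glare but keep daylight
    if indoor_temp_c > comfort_max_c:
        return "reflective"    # reject heat while staying clear
    return "transparent"
```

In the envisioned façade, decisions like this would run per pixel and could be shared with neighbouring pixels so the skin reacts as a coordinated whole.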

Keywords: building skin, environmental crisis, pixel facade, programmable materials, smart materials

Procedia PDF Downloads 81
199 Personality Based Tailored Learning Paths Using Cluster Analysis Methods: Increasing Students' Satisfaction in Online Courses

Authors: Orit Baruth, Anat Cohen

Abstract:

Online courses have become common in many learning programs and various learning environments, particularly in higher education. The social distancing forced in response to the COVID-19 pandemic has increased the demand for these courses. Yet, despite the frequency of use, online learning is not free of limitations and may not suit all learners. Hence, the growth of online learning alongside learners' diversity raises the question: does online learning, as currently offered, meet the needs of each learner? Fortunately, today's technology makes it possible to produce tailored learning platforms, namely, personalization. Personality influences a learner's satisfaction and therefore has a significant impact on learning effectiveness. A better understanding of personality can lead to a greater appreciation of learning needs, as well as assist educators in ensuring that an optimal learning environment is provided. In the context of online learning and personality, research on learning design according to personality traits is lacking. This study explores the relations between personality traits (using the 'Big Five' model) and students' satisfaction with five techno-pedagogical learning solutions (TPLS): discussion groups, digital books, online assignments, surveys/polls, and media, in order to provide an online learning process to students' satisfaction. The satisfaction level and personality of 108 students who participated in a fully online course at a large, accredited university were measured. Cluster analysis methods (k-means) were applied to identify learners' clusters according to their personality traits, and correlation analysis was performed to examine the relations between the obtained clusters and satisfaction with the offered TPLS. Findings suggest that learners associated with the 'Neurotic' cluster showed low satisfaction with all TPLS compared to learners associated with the 'Non-neurotic' cluster.
Learners associated with the 'Conscientious' cluster were satisfied with all TPLS except discussion groups, and those in the 'Open-Extroverts' cluster were satisfied with assignments and media. All clusters except 'Neurotic' were highly satisfied with the online course in general. According to the findings, dividing learners into four clusters based on personality traits may help define tailored learning paths for them, combining various TPLS to increase their satisfaction. As personality comprises a set of traits, several TPLS may be offered in each learning path. For the neurotics, however, an extended selection may be more suitable; alternatively, they may be offered the TPLS they dislike least. The study findings clearly indicate that personality plays a significant role in a learner's satisfaction level. Consequently, personality traits should be considered when designing personalized learning activities. The current research seeks to bridge the theoretical gap in this specific research area. Establishing the assumption that different personalities need different learning solutions may contribute towards a better design of online courses, leaving no learner behind, whether or not they like online learning.
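The clustering step described in the abstract can be sketched with a plain implementation of Lloyd's k-means algorithm. The Big-Five score matrix below is a hypothetical illustration, not data from the study, and the deterministic centroid initialization is a simplification of the usual random seeding:

```python
import numpy as np

def kmeans(X, init_idx, n_iter=100):
    """Lloyd's k-means on an (n_learners, n_traits) score matrix.
    init_idx selects the rows used as initial centroids."""
    centroids = X[init_idx].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assign each learner to the nearest centroid (squared Euclidean distance).
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        new_centroids = np.array([X[labels == j].mean(axis=0)
                                  for j in range(len(centroids))])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Hypothetical Big-Five profiles (openness, conscientiousness, extraversion,
# agreeableness, neuroticism), scaled 0-1, for six illustrative learners.
X = np.array([
    [0.8, 0.4, 0.9, 0.5, 0.2],   # open-extrovert
    [0.7, 0.5, 0.8, 0.6, 0.1],   # open-extrovert
    [0.2, 0.9, 0.3, 0.7, 0.2],   # conscientious
    [0.3, 0.8, 0.2, 0.6, 0.3],   # conscientious
    [0.4, 0.3, 0.4, 0.4, 0.9],   # neurotic
    [0.5, 0.2, 0.3, 0.5, 0.8],   # neurotic
])
labels, centroids = kmeans(X, init_idx=[0, 2, 4])
```

Each learner ends up in the cluster whose centroid is closest in trait space; per-cluster satisfaction with each TPLS can then be examined by correlation analysis, as the study does.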

Keywords: online learning, personality traits, personalization, techno-pedagogical learning solutions

Procedia PDF Downloads 91
198 Modern Cardiac Surgical Outcomes in Nonagenarians: A Multicentre Retrospective Observational Study

Authors: Laurence Weinberg, Dominic Walpole, Dong-Kyu Lee, Michael D’Silva, Jian W. Chan, Lachlan F. Miles, Bradley Carp, Adam Wells, Tuck S. Ngun, Siven Seevanayagam, George Matalanis, Ziauddin Ansari, Rinaldo Bellomo, Michael Yii

Abstract:

Background: There have been multiple recent advancements in the selection, optimization and management of cardiac surgical patients. However, there is limited data regarding the outcomes of nonagenarians undergoing cardiac surgery, despite this vulnerable cohort increasingly receiving these interventions. This study describes the patient characteristics, management and outcomes of a group of nonagenarians undergoing cardiac surgery in the context of contemporary peri-operative care. Methods: A retrospective observational study was conducted of patients 90 to 99 years of age (i.e., nonagenarians) who had undergone cardiac surgery requiring a classic median sternotomy (i.e., open-heart surgery). All operative indications were included. Patients who underwent minimally invasive surgery, transcatheter aortic valve implantation and thoracic aorta surgery were excluded. Data were collected from four hospitals in Victoria, Australia, over an 8-year period (January 2012 – December 2019). The primary objective was to assess six-month mortality in nonagenarians undergoing open-heart surgery and to evaluate the incidence and severity of postoperative complications using the Clavien-Dindo classification system. The secondary objective was to provide a detailed description of the characteristics and peri-operative management of this group. Results: A total of 12,358 adult patients underwent cardiac surgery at the study centers during the observation period, of whom 18 nonagenarians (0.15%) fulfilled the inclusion criteria. The median (IQR) [min-max] age was 91 years (90.0:91.8) [90-94] and 14 patients (78%) were men. Cardiovascular comorbidities, polypharmacy and frailty were common. The median (IQR) predicted in-hospital mortality by EuroSCORE II was 6.1% (4.1-14.5). All patients were optimized preoperatively by a multidisciplinary team of surgeons, cardiologists, geriatricians and anesthetists. All index surgeries were performed on cardiopulmonary bypass. 
Isolated coronary artery bypass grafting (CABG) and CABG with aortic valve replacement were the most common surgeries, performed in four and five patients, respectively. Half the study group underwent surgery involving two or more major procedures (e.g., CABG and valve replacement). Surgery was undertaken emergently in 44% of patients. All patients except one experienced at least one postoperative complication. The most common complications were acute kidney injury (72%), new atrial fibrillation (44%) and delirium (39%). The highest Clavien-Dindo complication grade was IIIb, occurring once each in three patients. Clavien-Dindo grade IIIa complications occurred in only one patient. The median (IQR) postoperative length of stay was 11.6 days (9.8:17.6). One patient was discharged home and all others to an inpatient rehabilitation facility. Three patients had an unplanned readmission within 30 days of discharge. All patients were followed up to at least six months after surgery, and mortality over this period was zero. The median (IQR) duration of follow-up was 11.3 months (6.0:26.4), with no deaths observed within the available follow-up records. Conclusion: In this group of nonagenarians undergoing cardiac surgery, postoperative six-month mortality was zero. Complications were common but generally of low severity. These findings support carefully selected nonagenarian patients being offered cardiac surgery in the context of contemporary, multidisciplinary perioperative care. Further studies are needed to assess longer-term mortality and functional and quality of life outcomes in this vulnerable surgical cohort.
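The severity grading and the median (IQR) summaries used in this abstract can be sketched as follows. The complication lists and lengths of stay are hypothetical examples, and the quartile method (inclusive) is an assumption, since the abstract does not state how the IQR was computed:

```python
from statistics import quantiles

# Clavien-Dindo grades ordered from least to most severe.
CLAVIEN_ORDER = ["I", "II", "IIIa", "IIIb", "IVa", "IVb", "V"]

def highest_grade(complications):
    """Most severe Clavien-Dindo grade recorded for one patient."""
    return max(complications, key=CLAVIEN_ORDER.index)

def median_iqr(values):
    """Summarise a sample as median and (Q1, Q3), as in 'median (IQR)'."""
    q1, q2, q3 = quantiles(values, n=4, method="inclusive")
    return q2, q1, q3

# Hypothetical patient: AKI graded II, reoperation graded IIIb, delirium graded I.
worst = highest_grade(["II", "IIIb", "I"])

# Hypothetical postoperative lengths of stay, in days.
med, q1, q3 = median_iqr([9, 10, 11, 12, 13])
```

Reporting the per-patient maximum, rather than every complication, is what lets the abstract state a single "highest Clavien-Dindo grade" per patient.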

Keywords: cardiac surgery, mortality, nonagenarians, postoperative complications

Procedia PDF Downloads 110
197 International Students into the Irish Higher Education System: Supporting the Transition

Authors: Tom Farrelly, Yvonne Kavanagh, Tony Murphy

Abstract:

The sharp rise in international students into Ireland has provided colleges with a number of opportunities but also a number of challenges, both at an institutional and individual lecturer level, and of course for the incoming student. Previously, Ireland's population, particularly its higher education student population, was largely homogenous, drawn mostly from its own shores and thus reflecting the ethnic, cultural and religious demographics of the day. However, over the past twenty years Ireland has witnessed considerable economic growth, downturn and subsequent growth, all of which has resulted in an Ireland that has changed both culturally and demographically. Propelled by Ireland's economic success up to the late 2000s, one of the defining features of this change was an unprecedented rise in the number of migrants, both academic and economic. In 2013, Ireland's National Forum for the Enhancement of Teaching and Learning in Higher Education (hereafter the National Forum) invited proposals for inter-institutional collaborative projects aimed at different student groups transitioning into or out of higher education. Clearly, both as a country and a higher education sector, we want incoming students to have a productive and enjoyable time in Ireland. One of the ways the sector can help students make a successful transition is by developing strategies and policies that are well informed and student driven. This abstract outlines the research undertaken by the five colleges (the Institutes of Technology of Carlow, Cork, Tralee and Waterford, and University College Cork) in Ireland that constitute the Southern cluster, aimed at helping international students transition into the Irish higher education system. The aim of the Southern cluster's project was to develop a series of online learning units that can be accessed by prospective incoming international students prior to coming to Ireland and by Irish-based lecturing staff. 
However, in order to make the units as relevant and informed as possible there was a strong research element to the project. As part of the southern cluster’s research strategy a large-scale online survey using SurveyMonkey was undertaken across the five colleges drawn from their respective international student communities. In total, there were 573 responses from students coming from over twenty different countries. The results from the survey have provided some interesting insights into the way that international students interact with and understand the Irish higher education system. The research and results will act as a model for consistent practice applicable across institutional clusters, thereby allowing institutions to minimise costs and focus on the unique aspects of transitioning international students into their institution.

Keywords: digital, international, support, transitions

Procedia PDF Downloads 274
196 Multi-Labeled Aromatic Medicinal Plant Image Classification Using Deep Learning

Authors: Tsega Asresa, Getahun Tigistu, Melaku Bayih

Abstract:

Computer vision is a subfield of artificial intelligence that allows computers and systems to extract meaning from digital images and video. It is used in a wide range of fields, including self-driving cars, video surveillance, medical diagnosis, manufacturing, law, agriculture, quality control, health care, facial recognition, and military applications. Aromatic medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, essential oils, decoration, cleaning, and other natural health products for therapeutic and aromatic culinary purposes. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but are also exported, earning valuable foreign currency. In Ethiopia, there is a lack of technologies for the classification and identification of aromatic medicinal plant parts and of the disease types treated by aromatic medicinal plants. Farmers, industry personnel, academicians, and pharmacists find it difficult to identify plant parts and the disease types treated by plants before ingredient extraction in the laboratory. Manual plant identification is a time-consuming, labor-intensive, and lengthy process. Only a few studies have been conducted in the area to address these issues. One way to overcome these problems is to develop a deep learning model for efficient identification of aromatic medicinal plant parts together with their corresponding disease types. The objective of the proposed study is to identify aromatic medicinal plant parts and classify their disease types using computer vision technology. Therefore, this research developed a model for the classification of aromatic medicinal plant parts and their disease types by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants. Leaves are the most widely used parts of plants, besides roots, flowers, fruits, and latex. 
For this study, the researchers used RGB leaf images with a size of 128×128×3. The researchers trained five cutting-edge models: a convolutional neural network, Inception V3, a Residual Neural Network (ResNet), MobileNet, and a Visual Geometry Group (VGG) network. Those models were chosen after a comprehensive review of the best-performing models. An 80/20 percentage split was used to evaluate the models, and classification metrics were used to compare them. The pre-trained Inception V3 model performed best, with training and validation accuracy of 99.8% and 98.7%, respectively.
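The 80/20 evaluation protocol mentioned above can be sketched as follows. This is a generic illustration of the split and of the accuracy metric used to compare models, not the authors' training code; the sample count and labels are hypothetical:

```python
import numpy as np

def train_test_split_80_20(n_samples, seed=42):
    """Shuffle sample indices and split them 80% train / 20% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    cut = int(0.8 * n_samples)
    return idx[:cut], idx[cut:]

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the ground-truth labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

# Hypothetical dataset of 100 leaf images: indices only, since the actual
# 128x128x3 RGB arrays are not reproduced here.
train_idx, test_idx = train_test_split_80_20(100)
```

Training would run on `train_idx` only, with the reported validation accuracy computed on the held-out 20%.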

Keywords: aromatic medicinal plant, computer vision, convolutional neural network, deep learning, plant classification, residual neural network

Procedia PDF Downloads 170
195 God, The Master Programmer: The Relationship Between God and Computers

Authors: Mohammad Sabbagh

Abstract:

Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands through words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything in six days, just as we can program a virtual world on the computer. GOD did mention in the Quran that one day, where GOD's throne is, equals 1,000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6,000 years of what we count and gave everything its function, attributes, class, methods and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because any input, whether physical, spiritual or by thought, that is entered by any of HIS creatures already has a programmed answer. Any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to 'processor' or 'calculator'. 
If you were to create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, in 2022 you would require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that. In other words, the ability to perform quadrillions of floating-point operations per second (one petaFLOPS is 10¹⁵ such operations). A number a human cannot even fathom. To put it more in perspective, GOD is calculating while the computer is going through those 50 petaFLOPS of calculations per second, and HE is also calculating all the physics of every atom, and of what is smaller than that, in the actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from whatever event you observed can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us to keep up with what the computer is doing, and for us to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD in The Quran said, 'WE used to copy what you used to do.' Essentially, as the world is running, think of it as an interactive movie being played out in front of you, in a fully immersive, non-virtual reality setting. GOD is recording it, from every angle to every thought, to every action. This brings the idea of how scary the Day of Judgment will be, when one might realize that it is going to be a fully immersive video when we receive and read our book.

Keywords: programming, the Quran, object orientation, computers and humans, GOD

Procedia PDF Downloads 98
194 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model

Authors: T. Thein, S. Kalyar Myo

Abstract:

Humans use visual information for understanding speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise and cross-talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is finding a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera settings, and the inherently low luminance and chrominance contrast between lip and non-lip regions. Several researchers have been developing methods to overcome these problems; one of them is lip reading. Moreover, it is well known that visual information about speech obtained through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding underlying speech by processing the movement of the lips. A lip reading system is therefore one of several supportive technologies for hearing-impaired or elderly people, and it is an active research area. The need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching system for hearing-impaired persons in Myanmar that shows how to pronounce words precisely by identifying the features of lip movement. The proposed research will develop a lip reading system for Myanmar consonants: one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah)) and two-syllable consonants (က (Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe)၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi)). 
The proposed system has three subsystems: the first is the lip localization subsystem, which localizes the lips in the digital input; the next is the feature extraction subsystem, which extracts features of lip movement suitable for visual speech recognition; and the final one is the classification subsystem. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with an Active Contour Model (ACM) will be used for extracting lip movement features. A Support Vector Machine (SVM) classifier is used for finding the class parameters and class number in the training and testing sets. Experiments will then be carried out on the recognition accuracy of Myanmar consonants using only the visual information on lip movements. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system can serve hearing-impaired persons as a language learning application; it can also be useful for normal-hearing persons in noisy environments or conditions where they can find out what was said by other people without hearing the voice.
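The 2D-DCT stage of the feature extraction subsystem can be sketched with a small orthonormal DCT-II. The lip-region input and the choice of keeping an 8×8 block of low-frequency coefficients are illustrative assumptions, not the study's exact configuration:

```python
import numpy as np

def _dct_matrix(N):
    """Orthonormal DCT-II basis matrix of size N x N."""
    n = np.arange(N)
    D = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    D[0] *= np.sqrt(1.0 / N)
    D[1:] *= np.sqrt(2.0 / N)
    return D

def dct2(block):
    """Separable 2-D DCT-II: transform along rows, then along columns."""
    D = _dct_matrix(block.shape[0])
    E = _dct_matrix(block.shape[1])
    return D @ block @ E.T

def dct_features(lip_roi, k=8):
    """Keep the k x k low-frequency coefficients as the feature vector."""
    return dct2(lip_roi)[:k, :k].ravel()

# A flat (constant) 16x16 'lip region' concentrates all energy in the DC term.
coeffs = dct2(np.ones((16, 16)))
```

Because the DCT compacts image energy into the low-frequency corner, a short feature vector like this can then be reduced further with LDA and fed to the SVM classifier.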

Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)

Procedia PDF Downloads 273
193 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids

Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje

Abstract:

A Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in the heavy oil field. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed experimentally and through a Computational Fluid Dynamics (CFD) approach, using the DCAB031 model in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump's rotational speed and power input were controlled using an Invertek Optidrive E3 frequency drive. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated for mesh validation, obtaining a value of 2.5%. In this case, three different rotational speeds were evaluated (200, 300, 400 rpm), showing a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate. 
The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water, and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oils tested, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% in pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, between fluids there is a diminution due to viscosity.
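The Grid Convergence Index reported above is conventionally computed with Roache's three-grid procedure. The sketch below uses made-up solution values and a refinement ratio of 2, purely to illustrate the formula, not the study's actual grids:

```python
import math

def gci_fine(f1, f2, f3, r, Fs=1.25):
    """Grid Convergence Index on the fine grid (Roache's method).

    f1, f2, f3: a solution quantity (e.g. pressure rise) on the fine,
    medium and coarse grids; r: constant grid refinement ratio;
    Fs: safety factor (1.25 is typical for three-grid studies).
    Returns (GCI as a fraction, observed order of convergence p).
    """
    # Observed order of convergence from the three solutions.
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)
    # Relative error between the two finest grids.
    e21 = abs((f2 - f1) / f1)
    return Fs * e21 / (r ** p - 1.0), p

# Illustrative values only: fine/medium/coarse pressure rises for one case.
gci, p = gci_fine(1.00, 1.10, 1.30, r=2.0)
```

A GCI of 2.5%, as reported in the abstract, would correspond to a returned value of 0.025 here, indicating that further mesh refinement changes the solution very little.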

Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise

Procedia PDF Downloads 118
192 A Literature Review on the Use of Information and Communication Technology within and between Emergency Medical Teams during a Disaster

Authors: Badryah Alshehri, Kevin Gormley, Gillian Prue, Karen McCutcheon

Abstract:

In a disaster event, sharing patient information between pre-hospital Emergency Medical Services (EMS) and hospital Emergency Departments (ED) is a complex process during which important information may be altered or lost due to poor communication. The aim of this study was to critically discuss the current evidence base in relation to communication between pre-hospital EMS and ED professionals through the use of Information and Communication Technology (ICT). The study followed a systematic approach: six electronic databases (CINAHL, Medline, Embase, PubMed, Web of Science, and IEEE Xplore Digital Library) were comprehensively searched in January 2018, and a second search was completed in April 2020 to capture more recent publications. The study selection process was undertaken independently by the study authors. Both qualitative and quantitative studies were chosen that focused on factors positively or negatively associated with coordinated communication between pre-hospital EMS and ED teams in a disaster event. These studies were assessed for quality, and the data were analysed according to the key screening themes which emerged from the literature search. Twenty-two studies were included: eleven employed quantitative methods, seven used qualitative methods, and four used mixed methods. Four themes emerged on communication between EMTs (pre-hospital EMS and ED staff) in a disaster event using ICT. (1) Disaster preparedness plans and coordination. This theme reported that disaster plans are in place in hospitals, and in some cases, there are interagency agreements with pre-hospital services and relevant stakeholders. However, the findings showed that the disaster plans highlighted in these studies lacked information regarding coordinated communications within and between pre-hospital and hospital teams. (2) Communication systems used in the disaster. 
This theme highlighted that although various communication systems are used within and between hospitals and pre-hospital services, technical issues have hindered communication between teams during disasters. (3) Integrated information management systems. This theme suggested the need for an integrated health information system which can help pre-hospital and hospital staff to record patient data and ensure the data are shared. (4) Disaster training and drills. While some studies analysed disaster drills and training, the majority focused on hospital departments other than EMTs. These studies suggest the need for simulation-based disaster training and drills that include EMTs. This review demonstrates that considerable gaps remain in the understanding of communication between EMS and ED hospital staff in relation to disaster response. The review shows that although different types of ICT are used, various issues remain which affect coordinated communication among the relevant professionals.

Keywords: communication, emergency communication services, emergency medical teams, emergency physicians, emergency nursing, paramedics, information and communication technology, communication systems

Procedia PDF Downloads 79
191 Lessons Learned from a Chronic Care Behavior Change Program: Outcome to Make Physical Activity a Habit

Authors: Doaa Alhaboby

Abstract:

Behavior change is a complex process that often requires ongoing support and guidance. Telecoaching programs have emerged as effective tools in facilitating behavior change by providing personalized support remotely. This abstract explores the lessons learned from a randomized controlled trial (RCT) evaluation of a telecoaching program focused on behavior change for diabetics and discusses strategies for implementing these lessons to overcome the challenge of making physical activity a habit. The telecoaching program involved participants engaging in regular coaching sessions delivered via phone calls. These sessions aimed to address various aspects of behavior change, including goal setting, self-monitoring, problem-solving, and social support. Over the course of the program, participants received personalized guidance tailored to their unique needs and preferences. One of the key lessons learned from the RCT was the importance of engagement, readiness to change, and the use of technology. Participants who set specific, measurable, attainable, relevant, and time-bound (SMART) goals were more likely to make sustained progress toward behavior change. Additionally, regular self-monitoring of behavior and progress was found to be instrumental in promoting accountability and motivation. Moving forward, implementing the lessons learned from the RCT can help individuals overcome the hardest part of behavior change: making physical activity a habit. One strategy is to prioritize consistency and establish a regular routine for physical activity. This may involve scheduling workouts at the same time each day or week and treating them as non-negotiable appointments. Additionally, integrating physical activity into daily life routines, while anticipating the main challenges that can disrupt this integration, can help make it more habitual. 
Furthermore, leveraging technology and digital tools can enhance adherence to physical activity goals. Mobile apps, wearable activity trackers, and online fitness communities can provide ongoing support, motivation, and accountability. These tools can also facilitate self-monitoring of behavior and progress, allowing individuals to track their activity levels and adjust their goals as needed. In conclusion, telecoaching programs offer valuable insights into behavior change and provide strategies for overcoming challenges, such as making physical activity a habit. By applying the lessons learned from these programs and incorporating them into daily life, individuals can cultivate sustainable habits that support their long-term health and well-being.

Keywords: lifestyle, behavior change, physical activity, chronic conditions

Procedia PDF Downloads 46
190 The French-Ekang Ethnographic Dictionary: The Quantum Approach

Authors: Henda Gnakate Biba, Ndassa Mouafon Issa

Abstract:

Dictionaries modeled on the Western model [stress-accent languages] are not suitable for and do not account for tonal languages phonologically, which is why the [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows a non-speaker of this language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, according to Jean-Jacques Rousseau's 1776 hypothesis: 'To say and to sing were once the same thing.' Each word in the French dictionary finds its corresponding word in ekaη, and each ekaη word is written on a musical staff. This ethnographic dictionary is an inventive, original and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When you apply this theory to any text of a folk song in a tonal language, you not only piece together the exact melody, rhythm, and harmonies of that song, as if you knew it in advance, but also the exact speaking of this language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. 
The experimentation confirming the theorization led to a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
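As a deliberately simplified illustration of the text-to-melody idea described above, tone-marked syllables can be mapped to pitches. The tone categories, MIDI pitch choices, and glide handling here are entirely hypothetical and far cruder than the dictionary's prosodic model:

```python
# Hypothetical mapping from tone categories to MIDI pitches; rising and
# falling tones become two-note glides.
TONE_TO_PITCH = {
    "H": 67,        # high tone -> G4
    "M": 64,        # mid tone  -> E4
    "L": 60,        # low tone  -> C4
    "R": (60, 67),  # rising    -> C4 gliding up to G4
    "F": (67, 60),  # falling   -> G4 gliding down to C4
}

def syllables_to_melody(syllables):
    """Turn (syllable, tone) pairs into a flat MIDI pitch sequence."""
    melody = []
    for _syllable, tone in syllables:
        pitch = TONE_TO_PITCH[tone]
        melody.extend(pitch if isinstance(pitch, tuple) else (pitch,))
    return melody

melody = syllables_to_melody([("ba", "H"), ("na", "R"), ("mi", "L")])
```

The point of the sketch is only that a tone-marked transcription determines a pitch contour deterministically, which is the property the dictionary's musical-staff notation exploits.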

Keywords: music, language, entanglement, science, research

Procedia PDF Downloads 55