353 Towards Creative Movie Title Generation Using Deep Neural Models
Authors: Simon Espigolé, Igor Shalyminov, Helen Hastie
Abstract:
Deep machine learning techniques, including deep neural networks (DNN), have been used to model language and dialogue for conversational agents, both to perform tasks such as giving technical support and for general chit-chat. They have been shown to be capable of generating long, diverse and coherent sentences in end-to-end dialogue systems and natural language generation. However, these systems tend to imitate the training data and will only generate the concepts and language within the scope of what they have been trained on. This work explores how deep neural networks can be used in a task that would normally require human creativity, whereby a human would read the movie description and/or watch the movie and come up with a compelling, interesting movie title. This task differs from simple summarization in that the movie title may not necessarily be derivable from the content or semantics of the movie description. Here, we train a type of DNN called a sequence-to-sequence model (seq2seq) that takes as input a short textual movie description and some additional information, e.g. the genre of the movie, and learns to output a movie title. The idea is that the DNN will learn certain techniques and approaches that human movie titlers deploy but that may not be immediately obvious to the human eye. To give an example of a generated movie title, for the movie synopsis ‘A hitman concludes his legacy with one more job, only to discover he may be the one getting hit.’, the original, true title is ‘The Driver’ and the one generated by the model is ‘The Masquerade’. A human evaluation was conducted in which the DNN output was compared to the true human-generated title, as well as a number of baselines, on three 5-point Likert scales: ‘creativity’, ‘naturalness’ and ‘suitability’. Subjects were also asked which of the two systems they preferred. The scores of the DNN model were comparable to the scores of the human-generated movie title, with means m=3.11 and m=3.12, respectively.
There is room for improvement in these models, as they were rated significantly less ‘natural’ and ‘suitable’ than the human title. In addition, the human-generated title was preferred overall 58% of the time when pitted against the DNN model. These results, however, are encouraging given the comparison with a highly considered, well-crafted human-generated movie title. Movie titles go through a rigorous process of assessment by experts and focus groups who have watched the movie. This process is in place due to the large amount of money at stake and the importance of creating an effective title that captures the audience’s attention. Our work shows progress towards automating this process, which in turn may lead to a better understanding of creativity itself.
Keywords: creativity, deep machine learning, natural language generation, movies
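The abstract does not describe the authors' implementation; as a hedged illustration, one common way to condition a seq2seq model on side information such as genre is to prepend a special genre token to the input sequence. A minimal sketch of that data preparation (all identifiers and tokens below are assumptions, not from the paper):

```python
# Sketch: building (input, target) training pairs for a title-generation
# seq2seq model, conditioning on genre by prepending a special token.
# Everything here is illustrative, not the authors' code.

def make_pair(description: str, genre: str, title: str) -> tuple[str, str]:
    """Prepend a genre token so the decoder can condition on it."""
    source = f"<{genre.lower()}> {description.strip()}"
    target = title.strip()
    return source, target

examples = [
    ("A hitman concludes his legacy with one more job, only to discover "
     "he may be the one getting hit.", "Thriller", "The Driver"),
]

pairs = [make_pair(d, g, t) for d, g, t in examples]
print(pairs[0][0][:10])  # prints "<thriller>"
```

These pairs would then feed any off-the-shelf encoder-decoder; the genre token is a cheap alternative to a separate conditioning input.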
Procedia PDF Downloads 327
352 Considering Uncertainties of Input Parameters on Energy, Environmental Impacts and Life Cycle Costing by Monte Carlo Simulation in the Decision Making Process
Authors: Johannes Gantner, Michael Held, Matthias Fischer
Abstract:
The refurbishment of the building stock in terms of energy supply and efficiency is one of the major challenges of the German turnaround in energy policy. As the building sector accounts for 40% of Germany’s total energy demand, additional insulation is key for energy-efficient refurbished buildings. Nevertheless, despite the energetic benefits, the environmental and economic performance of insulation materials is often questioned. The methods of Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) can form the standardized basis for answering these doubts and are becoming more and more important for material producers due to efforts such as the Product Environmental Footprint (PEF) or Environmental Product Declarations (EPD). Due to the increasing use of LCA and LCC information for decision support, the robustness and resilience of the results become crucial, especially in support of decision and policy makers. LCA and LCC results are based on respective models which depend on technical parameters like efficiencies, material and energy demand, product output, etc. Nevertheless, the influence of parameter uncertainties on lifecycle results is usually not considered, or studied only superficially. Yet the effect of parameter uncertainties cannot be neglected: based on the example of an exterior wall, the overall lifecycle results vary by a factor of more than three. As a result, the simple best-case/worst-case analyses used in practice are not sufficient. These analyses allow for a first rough view of the results but do not take effects such as error propagation into account. Thereby, LCA practitioners cannot provide further guidance for decision makers. Probabilistic analyses enable LCA practitioners to gain a deeper understanding of the LCA and LCC results and provide better decision support.
Within this study, the environmental and economic impacts of an exterior wall system over its whole lifecycle are illustrated, and the effects of different uncertainty analyses on the interpretation in terms of resilience and robustness are shown. Hereby, the approaches of error propagation and Monte Carlo simulation are applied and combined with statistical methods in order to allow for a deeper understanding and interpretation. All in all, this study emphasizes the need for a deeper and more detailed probabilistic evaluation based on statistical methods. Only in this way can misleading interpretations be avoided and the results used for resilient and robust decisions.
Keywords: uncertainty, life cycle assessment, life cycle costing, Monte Carlo simulation
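The abstract contrasts best-case/worst-case bounding with Monte Carlo simulation but gives no model details. The following minimal sketch (the toy cost model, parameter ranges, and sample size are assumptions, not the study's data) shows the general pattern of propagating parameter uncertainty through a lifecycle-cost function and why the probabilistic interval is more informative than the naive bounds:

```python
import random

# Toy lifecycle-cost model for an insulated wall (purely illustrative):
# cost = material cost per m2 * area + energy price * annual demand * years.
def lifecycle_cost(material_eur_m2, energy_eur_kwh, demand_kwh_a,
                   area=100.0, years=30):
    return material_eur_m2 * area + energy_eur_kwh * demand_kwh_a * years

# Assumed uncertainty ranges for the uncertain input parameters.
RANGES = {
    "material_eur_m2": (40.0, 60.0),
    "energy_eur_kwh": (0.20, 0.40),
    "demand_kwh_a": (3000.0, 5000.0),
}

def monte_carlo(n=10_000, seed=42):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}
        samples.append(lifecycle_cost(**params))
    return samples

best = lifecycle_cost(40.0, 0.20, 3000.0)    # all parameters at their minimum
worst = lifecycle_cost(60.0, 0.40, 5000.0)   # all parameters at their maximum
mc = sorted(monte_carlo())
p5, p95 = mc[len(mc) // 20], mc[-len(mc) // 20]
# The 90% interval is far narrower than the best/worst span, because the
# naive analysis assumes all parameters hit their extremes simultaneously.
print(f"best={best:.0f} worst={worst:.0f} p5={p5:.0f} p95={p95:.0f}")
```

A real LCA/LCC study would replace the uniform draws with fitted distributions and correlate parameters where appropriate; the sampling loop itself stays the same.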
351 Interpreter Scholarship Program That Improves Language Services in New South Wales: A Participatory Action Research Approach
Authors: Carly Copolov, Rema Nazha, Sahba C. Delshad, George Bisas
Abstract:
In New South Wales (NSW), Australia, we speak more than 275 languages and dialects. Interpreters play an indispensable role in our multicultural society by ensuring the people of NSW all enjoy the same opportunities. The NSW Government offers scholarships to enable people who speak in-demand and high-priority languages to become eligible to practice as interpreters. The NSW Interpreter Scholarship Program was launched in January 2019, targeting priority languages from new and emerging as well as existing language communities. The program offers fully-funded scholarships to study at Technical and Further Education (TAFE), receive National Accreditation Authority for Translators and Interpreters (NAATI) certification, and be mentored and gain employment with the interpreter panel of Multicultural NSW. A Participatory Action Research approach was employed to challenge the current system for people to become practicing interpreters in NSW. There were over 800 metro Sydney applications and close to 200 regional applications. Three courses were run through TAFE NSW (two in metro Sydney and one in regional NSW). Thirty-nine students graduated from the program in 2019. The first metro Sydney location had 18 graduates complete the course in Assyrian, Burmese, Chaldean, Kurdish-Kurmanji, Nepali, and Tibetan. The second metro Sydney location had nine graduates complete the course in Tongan, Kirundi, Mongolian, and Italian. The regional location had 12 graduates who completed the course from new and emerging language communities such as Kurdish-Kurmanji, Burmese, Zomi Chin, Hakha Chin, and Tigrinya. The findings showed that students were very positive about the program: the large majority said they were satisfied with the course content, felt prepared for the NAATI test at the conclusion of the course, and would definitely recommend the program to their friends. Also, 18 students from the 2019 cohort signed up to receive further mentoring by experienced interpreters.
In 2020, it is anticipated that three courses will be run through TAFE NSW (two in regional NSW and one in metro Sydney) to reflect the needs of new and emerging language communities settling in regional areas. In conclusion, it has been demonstrated that the NSW Interpreter Scholarship Program improves the supply, quality, and use of language services in NSW, Australia, so that people who speak in-demand and high-priority languages are ensured better access to crucial government services.
Keywords: interpreting, emerging communities, scholarship program, Sydney
350 Virtual Reality and Other Real-Time Visualization Technologies for Architecture Energy Certifications
Authors: Román Rodríguez Echegoyen, Fernando Carlos López Hernández, José Manuel López Ujaque
Abstract:
Interactive management of energy certification ratings has remained on the sidelines of the evolution of virtual reality (VR) despite related advances in architecture in other areas such as BIM and real-time working programs. This research studies to what extent VR software can help the stakeholders to better understand energy efficiency parameters in order to obtain reliable ratings assigned to the parts of the building. To evaluate this hypothesis, the methodology has included the construction of a software prototype. Current energy certification systems do not follow an intuitive data entry system; neither do they provide a simple or visual verification of the technical values included in the certification by manufacturers or other users. This software, by means of real-time visualization and a graphical user interface, proposes different improvements to the current energy certification systems that ease the understanding of how the certification parameters work in a building. Furthermore, the difficulty of using current interfaces, which are not friendly or intuitive for the user, means that untrained users usually get a poor idea of the grounds for certification and how the program works. In addition, the proposed software allows users to add further information, such as financial and CO₂ savings, energy efficiency, and an explanatory analysis of results for the least efficient areas of the building through a new visual mode. The software also helps the user to evaluate whether or not an investment to improve the materials of an installation is worth the cost of the different energy certification parameters. The evaluated prototype (named VEE-IS) shows promising results when it comes to representing in a more intuitive and simple manner the energy rating of the different elements of the building. Users can also personalize all the inputs necessary to create a correct certification, such as floor materials, walls, installations, or other important parameters. 
Working in real-time through VR allows for efficiently comparing, analyzing, and improving the rated elements, as well as the parameters that we must enter to calculate the final certification. The prototype also allows for visualizing the building in efficiency mode, which lets us move over the building to analyze thermal bridges or other energy efficiency data. This research also finds that the visual representation of energy efficiency certifications makes it easy for the stakeholders to examine improvements progressively, which adds value to the different phases of design and sale.
Keywords: energetic certification, virtual reality, augmented reality, sustainability
349 Understanding the Benefits of Multiple-Use Water Systems (MUS) for Smallholder Farmers in the Rural Hills of Nepal
Authors: Raj Kumar G.C.
Abstract:
There are tremendous opportunities to maximize smallholder farmers’ income from small-scale water resource development through micro irrigation and multiple-use water systems (MUS). MUS are an improved water management approach, developed and tested successfully by iDE, that pipes water to a community both for domestic use and for agriculture using efficient micro irrigation. Different MUS models address different landscape constraints, water demand, and users’ preferences. MUS are complemented by micro irrigation kits, which were developed by iDE to enable farmers to grow high-value crops year-round and to use limited water resources efficiently. Over the last 15 years, iDE’s promotion of the MUS approach has encouraged government and other key stakeholders to invest in MUS for better planning of scarce water resources. Currently, about 60% of the cost of MUS construction is covered by the government and community. Based on iDE’s experience, a gravity-fed MUS costs approximately $125 USD per household to construct, and it can increase household income by $300 USD per year. A key element of the MUS approach is keeping farmers well linked to input supply systems and local produce collection centers, which helps to ensure that the farmers can produce a sufficient quantity of high-quality produce that earns a fair price. This process in turn creates an enabling environment for smallholders to invest in MUS and micro irrigation. Therefore, MUS should be seen as an integrated package of interventions (the end users, water sources, technologies, and the marketplace) that together enhance technical, financial, and institutional sustainability. Communities are trained to participate in sustainable water resource management as a part of the MUS planning and construction process. The MUS approach is cost-effective, improves community governance of scarce water resources, helps smallholder farmers to improve rural health and livelihoods, and promotes gender equity.
MUS systems are simple to maintain, and communities are trained to ensure that they can undertake minor maintenance procedures themselves. All in all, the iDE Nepal MUS offers multiple benefits and represents a practical and sustainable model of the MUS approach. Moreover, there is a growing national consensus that rural water supply systems should be designed for multiple uses, acknowledging that substantial work remains in developing national-level and local capacity and policies for scale-up.
Keywords: multiple-use water systems, small-scale water resources, rural livelihoods, practical and sustainable model
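The abstract quotes a construction cost of about $125 USD per household and an added income of about $300 USD per year. As a quick illustrative check (the simple payback formula below ignores maintenance, financing, and crop variability, which the abstract does not quantify), those figures imply a payback period of under half a year:

```python
# Simple payback check using only the figures quoted in the abstract.
# Maintenance, financing, and yield variability are deliberately ignored.
construction_cost_usd = 125.0      # gravity-fed MUS cost per household
added_income_usd_per_year = 300.0  # reported income increase per household

payback_years = construction_cost_usd / added_income_usd_per_year
payback_months = payback_years * 12
print(f"payback = {payback_months:.0f} months")  # prints "payback = 5 months"
```

Even doubling the cost to account for the roughly 60% government/community share still yields a payback well inside a single year, which is consistent with the abstract's case for smallholder investment.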
348 Creating Futures: Using Fictive Scripting Methods for Institutional Strategic Planning
Authors: Christine Winberg, James Garraway
Abstract:
Many key university documents, such as vision and mission statements and strategic plans, are aspirational and future-oriented. There is a wide range of future-oriented methods that are used in planning applications, ranging from mathematical modelling to expert opinions. Many of these methods have limitations, and planners using these tools might, for example, make the technical-rational assumption that their plans will unfold in a logical and inevitable fashion, thus underestimating the many complex forces that are at play in planning for an unknown future. This is the issue that this study addresses. The overall project aim was to assist a new university of technology in developing appropriate responses to its social responsibility, graduate employability and research missions in its strategic plan. The specific research question guiding the research activities and approach was: how might the use of innovative future-oriented planning tools enable or constrain a strategic planning process? The research objective was to engage collaborating groups in the use of an innovative tool to develop and assess future scenarios, for the purpose of developing deeper understandings of possible futures and their challenges. The scenario planning tool chosen was ‘fictive scripting’, an analytical technique derived from Technology Forecasting and Innovation Studies. Fictive scripts are future projections that also take into account the present shape of the world and current developments. The process thus began with a critical diagnosis of the present, highlighting its tensions and frictions. The collaborative groups then developed fictive scripts, each group producing a future scenario that foregrounded different institutional missions, their implications and possible consequences. The scripts were analyzed with a view to identifying their potential contribution to the university’s strategic planning exercise. 
The unfolding fictive scripts revealed a number of insights in terms of unexpected benefits, unexpected challenges, and unexpected consequences. These insights were not evident in previous strategic planning exercises. The contribution that this study offers is to show how better choices can be made and potential pitfalls avoided through a systematic foresight exercise. When universities develop strategic planning documents, they are looking into the future. In this paper, it is argued that the use of appropriate tools for future-oriented exercises can help planners to understand more fully what achieving desired outcomes might entail, what challenges might be encountered, and what unexpected consequences might ensue.
Keywords: fictive scripts, scenarios, strategic planning, technological forecasting
347 Analysis of Determinants of Growth of Small and Medium Enterprises in Kwara State, Nigeria
Authors: Hussaini Tunde Subairu
Abstract:
The Small and Medium Enterprises (SME) sector serves as a catalyst for employment generation, national growth, poverty reduction, and economic development in developing and developed countries. However, in Nigeria, despite a plethora of government policies and stimulus schemes directed at SMEs, the sector is still characterized by a high rate of failure and discontinuities. This study therefore investigated owners'/managers' profiles, firm characteristics, and external factors as possible determinants of SME growth in selected SMEs in Kwara State. Primary data were sourced from 200 SME respondents registered with the National Association of Small and Medium Enterprises (NASMES) in the Kwara State Central Senatorial District. Multiple Regression Analysis (MRA) was used to analyze the relationship between the dependent and independent variables, and pairwise correlation was employed to examine the relationships among the independent variables. Analysis of Variance (ANOVA) was employed to test the overall significance of the model. The ANOVA put the value of the F-statistic at 420.45 with a p-value of 0.000, which is significant. The values of R² and adjusted R² of 0.9643 and 0.9620, respectively, suggest that 96 percent of the variation in employment growth is explained by the explanatory variables.
The level of technical and managerial education has a t-value of 24.14 and p-value of 0.001; length of the manager's/owner's experience in a similar trade, a t-value of 21.37 and p-value of 0.001; age of the manager/owner, a t-value of 42.98 and p-value of 0.001; firm age, a t-value of 25.91 and p-value of 0.001; number of firms in a cluster, a t-value of 7.20 and p-value of 0.001; access to formal finance, a t-value of 5.56 and p-value of 0.001; firm technology innovation, a t-value of 25.32 and p-value of 0.01; institutional support, a t-value of 18.89 and p-value of 0.01; globalization, a t-value of 9.78 and p-value of 0.01; and infrastructure, a t-value of 10.75 and p-value of 0.01. The results also indicated that initial size has a t-value of -1.71 and p-value of 0.090, which is consistent with Gibrat’s Law. The study concluded that owners'/managers' profiles, firm-specific characteristics, and external factors substantially influenced the employment growth of SMEs in the study area. Policy should therefore enhance the human capital development of SME owners/managers and strengthen fiscal policy, including the tariff regime, to minimize the effect of globalization. Governments at all levels must radically support SME growth, enhance institutional support for it, and significantly upgrade key infrastructure such as roads, rail, telecommunications, water, and power.
Keywords: external factors, firm-specific characteristics, owners'/managers' profile, small and medium enterprises
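The abstract reports per-coefficient t-values, an F-statistic, and R² from a multiple regression. As a hedged illustration of how such statistics are computed (the toy data below are invented, not the study's survey data, and a single predictor is used for brevity), here is a minimal ordinary-least-squares fit:

```python
import math

# Toy data (invented): a predictor vs. employment growth, n = 6 firms.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]
n = len(x)

mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

slope = sxy / sxx                  # OLS coefficient estimate
intercept = my - slope * mx

residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
sse = sum(r ** 2 for r in residuals)
sst = sum((yi - my) ** 2 for yi in y)
r_squared = 1.0 - sse / sst                 # share of variation explained
se_slope = math.sqrt(sse / (n - 2) / sxx)   # standard error of the slope
t_value = slope / se_slope                  # as reported per coefficient

print(f"slope={slope:.3f} R2={r_squared:.3f} t={t_value:.2f}")
```

The study's MRA extends this to many predictors at once, but each reported t-value has exactly this form: the coefficient divided by its standard error, with the p-value read from the t distribution with the appropriate degrees of freedom.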
346 Lessons Learned from Push-Plus Implementation in Northern Nigeria
Authors: Aisha Giwa, Mohammed-Faosy Adeniran, Olufunke Femi-Ojo
Abstract:
Four decades ago, the World Health Organization (WHO) launched the Expanded Programme on Immunization (EPI). The EPI blueprint laid out the technical and managerial functions necessary to routinely vaccinate children with a limited number of vaccines, providing protection against diphtheria, tetanus, whooping cough, measles, polio, and tuberculosis, and to prevent maternal and neonatal tetanus by vaccinating women of childbearing age with tetanus toxoid. Despite global efforts, Routine Immunization (RI) coverage in two of the WHO regions, the African Region and the South-East Asia Region, still falls short of its targets. As a result, the WHO Regional Director for Africa declared 2012 the year for intensifying RI in these regions, which also coincided with the declaration of polio as a programmatic emergency by the WHO Executive Board. In order to intensify routine immunization, the National Routine Immunization Strategic Plan (2013-2015) stated that its core priority is to ensure 100% adequacy and availability of vaccines for safe immunization. To achieve 100% availability, the “PUSH System” and then “Push-Plus” were adopted for vaccine distribution, replacing the inefficient “PULL” method. The NPHCDA plays the key role in coordinating activities in the areas of advocacy, capacity building, and engagement of third-party logistics (3PL) providers for the states, as well as monitoring and evaluation of the vaccine delivery process. eHealth Africa (eHA) is a 3PL service provider engaged by State Primary Health Care Boards (SPHCDB) to ensure vaccine availability through the Vaccine Direct Delivery (VDD) project, which is essential to successful routine immunization services. The VDD project ensures the availability and adequate supply of high-quality vaccines and immunization-related materials to last-mile facilities.
eHA’s commitment to the VDD project prompted an assessment of overall project performance, an evaluation of the process to identify necessary improvements, and a review of the general impact across Kano State (where eHA has transitioned delivery to the state), Bauchi State (where eHA currently manages delivery to all LGAs except three managed by the state), Sokoto State (where eHA currently covers all LGAs), and Zamfara State (currently in-sourced and managed solely by the state).
Keywords: cold chain logistics, health supply chain system strengthening, logistics management information system, vaccine delivery traceability and accountability
345 Corrosion Analysis of a 3-1/2” Production Tubing of an Offshore Oil and Gas Well
Authors: Suraj Makkar, Asis Isor, Jeetendra Gupta, Simran Bareja, Maushumi K. Talukdar
Abstract:
During the exploratory testing phase of an offshore oil and gas well, when the tubing string was pulled out after production testing, visible corrosion/pitting was observed in a few of the 3-1/2” API 5CT L-80 grade tubings. The area of corrosion was at the same location in all the tubing, i.e., just above the pin end. Since the corrosion was observed within two months of installation, it was a matter of concern, as it could lead to premature failures resulting in leakages and production loss, thus affecting the integrity of the asset. Therefore, the tubing was analysed to ascertain the mechanism of the corrosion occurring on its surface. During the visual inspection, it was observed that the corrosion was entirely external, near the pin end, and no significant internal corrosion was observed. The chemical compositional analysis and mechanical properties (tensile and impact) show that the tubing material conformed to API 5CT L-80 specifications. The metallographic analysis of the tubing revealed a tempered martensitic microstructure. The grain size at the pin end was observed to differ from that of the base metal. The microstructure of the corroded area near the threads reveals an oriented structure. The clearly oriented microstructure of the cold-worked zone near the threads, together with the difference in microstructure from the base metal, indicates inappropriate heat treatment after cold work. This was substantiated by the hardness test results as well, which show higher hardness at the pin end in comparison to the base metal. Scanning Electron Microscope (SEM) analysis revealed the presence of round, deep pits and cracks on the corroded surface of the tubing. The cracks were stress corrosion cracks in a corrosive environment, arising from the residual stress that was not relieved after cold working, as mentioned above.
Energy Dispersive Spectroscopy (EDS) analysis indicated the presence of mainly Fe₂O₃, chlorides, sulphides, and silica in the corroded part, indicating interaction of the tubing with the well completion fluid and wellbore environment. Thus, it was concluded that residual stress from the cold working of the male pins during threading and the corrosive environment acted in synergy to cause this pitting corrosion attack on the highly stressed zone along the circumference of the tubing just below the threaded area. Accordingly, the following recommendations were given to avoid the recurrence of such corrosion problems in the wells: (i) after any kind of hot or cold work, tubing should be normalized over its full length to achieve a uniform microstructure throughout; (ii) heat treatment requirements (as per API 5CT) should be part of the technical specifications at the procurement stage.
Keywords: pin end, microstructure, grain size, stress corrosion cracks
344 MARISTEM: A COST Action Focused on Stem Cells of Aquatic Invertebrates
Authors: Arzu Karahan, Loriano Ballarin, Baruch Rinkevich
Abstract:
Marine invertebrates, highly diverse phyla of multicellular organisms, exhibit phenomena that are either not found or highly restricted in the vertebrates. These include budding, fission, fusion of ramets, and high regenerative power, such as the ability to create whole new organisms from a tiny parental fragment, many of which are controlled by totipotent, pluripotent, and multipotent stem cells. Thus, there is much that can be learned from these organisms on the practical and evolutionary levels, recalling the words often attributed to Darwin: “It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change”. The ‘stem cell’ notion highlights a cell that has the ability to continuously divide and differentiate into various progenitors and daughter cells. In vertebrates, adult stem cells are rare cells defined as lineage-restricted (multipotent at best), with tissue- or organ-specific activities, that are located in defined niches and regulate the machinery of homeostasis, repair, and regeneration. They are usually categorized by their morphology, tissue of origin, plasticity, and potency. This description does not always hold when comparing the vertebrates with marine invertebrates’ stem cells, which display wider ranges of plasticity and diversity at the taxonomic and cellular levels. While marine/aquatic invertebrate stem cells (MISC) have recently attracted more scientific interest, the know-how still lags behind the attention they deserve. MISC are not only highly potent but, in many cases, abundant (e.g., up to 1/3 of the entire animal's cells), do not reside in permanent niches, and participate in delayed-aging and whole-body regeneration phenomena, knowledge of which can be clinically relevant. Moreover, they hold massive untapped potential for the discovery of new bioactive molecules that can be used for human health (antitumor, antimicrobial) and biotechnology.
The MARISTEM COST Action (Stem Cells of Marine/Aquatic Invertebrates: From Basic Research to Innovative Applications) aims to connect the fragmented European MISC community. Under this scientific umbrella, the Action develops the concept of adult stem cells that do not share many properties with the vertebrates’ stem cells, organizes meetings, summer schools, and workshops, stimulates young researchers, supplies technical and advisory support via short-term scientific studies, and builds new bridges between the MISC community and the biomedical disciplines.
Keywords: aquatic/marine invertebrates, adult stem cell, regeneration, cell cultures, bioactive molecules
343 Automatic Moderation of Toxic Comments in the Face of Local Language Complexity in Senegal
Authors: Edouard Ngor Sarr, Abel Diatta, Serigne Mor Toure, Ousmane Sall, Lamine Faty
Abstract:
Thanks to Web 2.0, we are witnessing a form of democratization of the spoken word: an exponential increase in the number of users on the web, but also, and above all, the accumulation of a daily flow of content that is becoming, at times, uncontrollable. Added to this is the rise of a violent social fabric characterised by hateful and racist comments, insults, and other content that contravenes social rules and the platforms' terms of use. Consequently, managing and regulating this mass of new content is proving increasingly difficult, requiring substantial human, technical, and technological resources. Without regulation, and with the complicity of anonymity, this toxic content can pollute discussions and make these online spaces highly conducive to abuse, which very often has serious consequences for certain internet users, ranging from anxiety, depression, or withdrawal to suicide. The toxicity of a comment is defined as anything that is rude, disrespectful, or likely to cause someone to leave a discussion or to take violent action against a person or a community. Two levels of measures are needed to deal with this deleterious situation. The first comes from governments, through draft laws with a dual objective: (i) to punish the perpetrators of these abuses and (ii) to make online platforms accountable for the mistakes made by their users. The second comes from the platforms themselves. By assessing the content left by users, they can set up filters to block and/or delete content, or decide to suspend the user in question for good. However, the speed of discussions and the volume of data involved mean that platforms are unable to properly monitor the moderation of content produced by Internet users. That is why they use human moderators, either through recruitment or outsourcing.
Moderating comments on the web means assessing and monitoring users’ comments on online platforms in order to strike the right balance between protection against abuse and users’ freedom of expression. It makes it possible to determine which publications and users are allowed to remain online and which are deleted or suspended, how authorised publications are displayed, and what actions accompany content deletions. In this study, we look at the problem of automatic moderation of toxic comments in the face of local African languages, focusing more specifically on social network comments in Senegal. We review the state of the art, highlighting the different approaches, algorithms, and tools for moderating comments. We also study the issues and challenges of moderation in web ecosystems with lesser-known languages, such as local languages.
Keywords: moderation, local languages, Senegal, toxic comments
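The abstract surveys moderation approaches without committing to one. As a hedged, minimal illustration of the classification step such systems rely on (the toy labelled comments below are invented English examples, and a corpus this small is nowhere near adequate for a real low-resource-language setting), here is a simple Naive Bayes toxicity scorer:

```python
import math
from collections import Counter

# Toy labelled comments (invented). A real system for Senegalese social
# networks would need large annotated corpora in Wolof, Pulaar, etc.
train = [
    ("you are an idiot and a liar", 1),        # toxic
    ("i hate you leave this forum", 1),        # toxic
    ("thanks for the helpful answer", 0),      # clean
    ("great discussion see you tomorrow", 0),  # clean
]

toxic, clean = Counter(), Counter()
n_toxic = n_clean = 0
for text, label in train:
    words = text.split()
    if label == 1:
        toxic.update(words); n_toxic += len(words)
    else:
        clean.update(words); n_clean += len(words)

vocab = set(toxic) | set(clean)

def toxicity_log_odds(text: str) -> float:
    """Log-odds of the toxic class, with add-one (Laplace) smoothing."""
    score = 0.0
    for w in text.split():
        p_t = (toxic[w] + 1) / (n_toxic + len(vocab))
        p_c = (clean[w] + 1) / (n_clean + len(vocab))
        score += math.log(p_t / p_c)
    return score

print(toxicity_log_odds("you idiot") > 0)       # prints True: leans toxic
print(toxicity_log_odds("helpful answer") > 0)  # prints False: leans clean
```

The log-odds score maps naturally onto moderation actions (auto-delete above one threshold, route to a human moderator in a grey zone); the hard part the abstract points to is obtaining labelled data and tokenizers for lesser-known local languages, not the classifier itself.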
342 Integration of Technology into Nursing Education: A Collaboration between College of Nursing and University Research Center
Authors: Lori Lioce, Gary Maddux, Norven Goddard, Ishella Fogle, Bernard Schroer
Abstract:
This paper presents the integration of technologies into nursing education. The collaborative effort includes the College of Nursing (CoN) at the University of Alabama in Huntsville (UAH) and the UAH Systems Management and Production Center (SMAP). The faculty at the CoN conducts needs assessments to identify education and training requirements. A team of CoN faculty and SMAP engineers then prioritize these requirements and establish improvement/development teams. The development teams consist of nurses, who evaluate the models and provide feedback, and of undergraduate engineering students and their senior staff mentors from SMAP. The SMAP engineering staff develops and creates the physical models using 3D printing, silicone molds and specialized molding mixtures and techniques. The collaboration has focused on developing teaching and training, or clinical, simulators. In addition, the onset of the Covid-19 pandemic intensified this relationship, as 3D printing capacity shifted to supplying personal protective equipment (PPE) to local health care providers. A secondary collaboration has been introducing students to clinical benchmarking through the UAH Center for Management and Economic Research. As a result of these successful collaborations, the Model Exchange & Development of Nursing & Engineering Technology (MEDNET) has been established. MEDNET seeks to extend and expand the linkage between engineering and nursing by connecting K-12 schools, technical schools and medical facilities in the region to the resources available from the CoN and SMAP. As an example, stereolithography (STL) files of the 3D printed models, along with the specifications to fabricate the models, are available on the MEDNET website. Ten 3D printed models have been developed and are currently in use by the CoN. The following additional training simulators are currently under development: 1) suture pads, 2) gelatin wound models and 3) printed wound tattoos.
Specification sheets have been written for these simulators that describe the use, fabrication procedures and parts list. These specifications are available for viewing and download on MEDNET. Included in this paper are 1) descriptions of CoN, SMAP and MEDNET, 2) the collaborative process used in product improvement/development, 3) 3D printed models of training and teaching simulators, 4) training simulators under development with specification sheets, 5) family care practice benchmarking, 6) integrating the simulators into the nursing curriculum, 7) utilizing MEDNET as a pandemic response, and 8) conclusions and lessons learned.
Keywords: 3D printing, nursing education, simulation, trainers
341 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting
Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas
Abstract:
The paper presents the results and industrial applications of production setup period estimation based on industrial data inherited from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation and painting industries. Typically, 20% of these parts are new work, which means that every five years almost the entire product portfolio is replaced in their low series manufacturing environment. Consequently, it requires a flexible production system, where the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters have been studied and grouped to create an adequate training information set for an artificial neural network as a base for the estimation of the individual setup periods. In the first group, product information is collected, such as the product name and number of items. The second group contains material data like material type and colour. In the third group, surface quality and tolerance information are collected, including the finest surface and tightest (or narrowest) tolerance. The fourth group contains the setup data, like machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner's estimations were based previously. The artificial neural network model was trained on several thousands of real industrial data records.
The mean estimation accuracy of the setup periods' lengths was improved by 30%, while at the same time the deviation of the prognosis was also improved by 50%. Furthermore, the influence of the mentioned parameter groups with respect to the manufacturing order was also investigated. The paper also highlights the manufacturing introduction experiences and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week, more than 100 real industrial setup events take place, and the related data are collected.
Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation
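As a rough illustration of the modelling approach described above, the sketch below trains a minimal one-hidden-layer network on synthetic setup records. The feature encoding, the target weights, and all data are invented for illustration; they are not the industrial partner's data, and the paper's actual network architecture is not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numeric encoding of the four parameter groups described above
# (product, material, surface/tolerance, setup data), one column per feature.
X = rng.uniform(0.0, 1.0, size=(200, 6))
# Synthetic "setup period length" (minutes), loosely tied to two of the features
y = 30 + 40 * X[:, 5] + 25 * X[:, 3] + rng.normal(0, 2, 200)
y_std = (y - y.mean()) / y.std()          # standardise the target for stable training

# One hidden tanh layer, trained with plain batch gradient descent on MSE
W1 = rng.normal(0, 0.5, (6, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

_, pred = forward(X)
mse_before = np.mean((pred - y_std) ** 2)

for _ in range(2000):
    h, pred = forward(X)
    err = (pred - y_std)[:, None] * (2.0 / len(y))   # d(MSE)/d(pred)
    gW2 = h.T @ err;  gb2 = err.sum(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)                 # backpropagate through tanh
    gW1 = X.T @ dh;   gb1 = dh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
mse_after = np.mean((pred - y_std) ** 2)
print(f"standardised MSE: {mse_before:.3f} -> {mse_after:.3f}")
```

In practice one would use a library such as scikit-learn or a deep learning framework rather than hand-rolled gradient descent; the point here is only the shape of the estimation problem: grouped setup features in, setup period length out.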
340 Perceived Restorativeness Scale-6: A Short Version of the Perceived Restorativeness Scale for Mixed (or Mobile) Devices
Authors: Sara Gallo, Margherita Pasini, Margherita Brondino, Daniela Raccanello, Roberto Burro, Elisa Menardo
Abstract:
Most of the studies on the ability of environments to restore people's cognitive resources have been conducted in the laboratory using simulated environments (e.g., photographs, videos, or virtual reality), based on the implicit assumption that exposure to simulated environments has the same effects as exposure to real environments. However, the technical characteristics of simulated environments, such as the dynamic or static nature of the stimulus, critically affect their perception. Measuring perceived restorativeness in situ rather than in the laboratory could increase the validity of the obtained measurements. Personal mobile devices could be useful because they allow immediate access to online surveys when people are directly exposed to an environment. At the same time, it becomes important to develop short and reliable measuring instruments that allow a quick assessment of the restorative qualities of environments. One of the frequently used self-report measures to assess perceived restorativeness is the 'Perceived Restorativeness Scale' (PRS), based on Attention Restoration Theory. Many different versions have been proposed and used according to different research purposes and needs, without studying their validity. This longitudinal study reports some preliminary validation analyses of a short version of the original scale, the PRS-6, developed to be quick and mobile-friendly. It is composed of 6 items assessing fascination and being-away. 102 Italian university students participated in the study, 84% female, with ages ranging from 18 to 47 (M = 20.7; SD = 2.9). Data were obtained through an online survey that asked them to report the perceived restorativeness of the environment they were in (and the kind of environment) and their positive emotions (Positive and Negative Affect Schedule, PANAS) once a day for seven days. Cronbach's alpha and item-total correlations were used to assess reliability and internal consistency.
Confirmatory Factor Analysis (CFA) models were run to study the factorial structure (construct validity). Correlation analyses between PRS and PANAS scores were used to check discriminant validity. Finally, multigroup CFA models were used to study measurement invariance (configural, metric, scalar, strict) between different mobile devices and between days of assessment. On the whole, the PRS-6 showed good psychometric properties, similar to those of the original scale, and invariance across devices and days. These results suggest that the PRS-6 could be a valid alternative for assessing perceived restorativeness when researchers need a brief and immediate evaluation of the recovery quality of an environment.
Keywords: restorativeness, validation, short scale development, psychometric properties
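For readers reproducing the reliability analysis, Cronbach's alpha for a 6-item scale can be computed directly from the item-score matrix. The sketch below uses invented Likert responses, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 6-item, 1-5 Likert responses from five respondents
scores = np.array([
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 5, 5, 4, 5, 5],
    [3, 3, 2, 3, 3, 3],
    [4, 4, 4, 5, 4, 4],
])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```

For the full CFA and measurement-invariance models one would turn to dedicated software (e.g., lavaan in R), but this basic reliability check needs nothing beyond NumPy.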
339 Building the Professional Readiness of Graduates from Day One: An Empirical Approach to Curriculum Continuous Improvement
Authors: Fiona Wahr, Sitalakshmi Venkatraman
Abstract:
Industry employers require new graduates to bring with them a range of knowledge, skills and abilities, which means these new employees can immediately make valuable work contributions. These will be a combination of discipline and professional knowledge, skills and abilities which give graduates the technical capabilities to solve practical problems whilst interacting with a range of stakeholders. Underpinning the development of these discipline and professional knowledge, skills and abilities are 'enabling' knowledge, skills and abilities which assist students to engage in learning. These academic and learning skills are essential both as a common starting point for students entering the course and as the foundation for the fully developed graduate knowledge, skills and abilities. This paper reports on a project created to introduce and strengthen these enabling skills in the first semester of a Bachelor of Information Technology degree at an Australian polytechnic. The project uses an action research approach in the context of ongoing continuous improvement of the course to enhance the overall learning experience, learning sequencing, graduate outcomes, and, most importantly in the first semester, student engagement and retention. The focus is on implementing the new curriculum in first semester subjects of the course with the aim of developing the 'enabling' learning skills, such as literacy, research and numeracy based knowledge, skills and abilities (KSAs). The approach used for the introduction and embedding of these KSAs, as both enablers of learning and as underpinnings of graduate attribute development, is presented.
Building on previous publications which reported different aspects of this longitudinal study, this paper recaps the rationale for the curriculum redevelopment and then presents the quantitative findings on entering students' reading literacy and numeracy knowledge and skill levels, as well as their perceived research ability. The paper presents the methodology and findings for this stage of the research. Overall, the cohort exhibits mixed KSA levels in these areas, with a relatively low aggregated score. In addition, the paper describes the considerations for adjusting the design and delivery of the new subjects with a targeted learning experience, in response to the feedback gained through continuous monitoring. Such a strategy is aimed at accommodating the changing learning needs of the students and serves to support them towards achieving the enabling learning goals starting from day one of their higher education studies.
Keywords: enabling skills, student retention, embedded learning support, continuous improvement
338 Analysis of Sea Waves Characteristics and Assessment of Potential Wave Power in Egyptian Mediterranean Waters
Authors: Ahmed A. El-Gindy, Elham S. El-Nashar, Abdallah Nafaa, Sameh El-Kafrawy
Abstract:
The generation of energy from marine sources has become one of the most preferable options, since it is a clean source and friendly to the environment. Egypt has long shores along the Mediterranean, with important cities that need energy resources, and significant wave energy. No detailed studies have been done on wave energy distribution in the Egyptian waters. The objective of this paper is to assess the wave power available in the Egyptian waters for the choice of the most suitable devices to be used in this area. This paper deals with the characteristics and power of the offshore waves in the Egyptian waters. Since field observations of waves are not frequent and need much technical work, the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis data in the Mediterranean, with a grid size of 0.75 degrees, which is a relatively coarse grid, are considered in the present study for a preliminary assessment of sea wave characteristics and power. The data used cover the period from 2012 to 2014 and comprise significant wave height (swh), mean wave period (mwp) and wave direction, taken at six-hourly intervals at seven chosen stations and at grid points covering the Egyptian waters. The wave power (wp) formula was used to calculate the energy flux. Descriptive statistical analysis was performed, including monthly means and standard deviations of the swh, mwp, and wp. The percentiles of wave heights and their corresponding power are given, as a tool for choosing the technology best suited to the site. Surfer software is used to show spatial distributions of wp. The analysis of data at the seven chosen stations determined the potential of wp off important Egyptian cities. Offshore of Al Saloum and Marsa Matruh, the highest wp occurred in January and February (16.93-18.05) ± (18.08-22.12) kW/m, while the lowest occurred in June and October (1.49-1.69) ± (1.45-1.74) kW/m.
In front of Alexandria and Rashid, the highest wp occurred in January and February (16.93-18.05) ± (18.08-22.12) kW/m, while the lowest occurred in June and September (1.29-2.01) ± (1.31-1.83) kW/m. In front of Damietta and Port Said, the highest wp occurred in February (14.29-17.61) ± (21.61-27.10) kW/m and the lowest occurred in June (0.94-0.96) ± (0.71-0.72) kW/m. In winter, the probabilities of waves higher than 0.8 m, in percentage, were, at Al Saloum and Marsa Matruh, (76.56-80.33) ± (11.62-12.05); at Alexandria and Rashid, (73.67-74.79) ± (16.21-18.59); and at Damietta and Port Said, (66.28-68.69) ± (17.88-17.90). In spring, the probabilities were, at Al Saloum and Marsa Matruh, (48.17-50.92) ± (5.79-6.56); at Alexandria and Rashid, (39.38-43.59) ± (9.06-9.34); and at Damietta and Port Said, (31.59-33.61) ± (10.72-11.25). In summer, the probabilities were, at Al Saloum and Marsa Matruh, (57.70-66.67) ± (4.87-6.83); at Alexandria and Rashid, (59.96-65.13) ± (9.14-9.35); and at Damietta and Port Said, (46.38-49.28) ± (10.89-11.47). In autumn, the probabilities were, at Al Saloum and Marsa Matruh, (58.75-59.56) ± (2.55-5.84); at Alexandria and Rashid, (47.78-52.13) ± (3.11-7.08); and at Damietta and Port Said, (41.16-42.52) ± (7.52-8.34).
Keywords: distribution of sea waves energy, Egyptian Mediterranean waters, waves characteristics, waves power
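The abstract does not spell out the wave power formula it applies; assuming it is the standard deep-water energy flux expression P = ρg²H²T/(64π), the per-metre power can be sketched as below. Using the mean wave period (mwp) in place of the energy period, and the example inputs, are simplifying assumptions for illustration only.

```python
import math

RHO = 1025.0   # seawater density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def wave_power_kw_per_m(swh: float, period: float) -> float:
    """Deep-water wave energy flux P = rho * g^2 * H^2 * T / (64 * pi), in kW/m.

    swh: significant wave height (m); period: wave period (s). Substituting
    the mean wave period for the energy period is a rough approximation.
    """
    return RHO * G ** 2 * swh ** 2 * period / (64 * math.pi) / 1000.0

# Example: a 2 m significant wave height with a 6 s period
print(f"{wave_power_kw_per_m(2.0, 6.0):.2f} kW/m")
```

Applying this to the gridded swh and mwp fields at each six-hourly time step, then averaging by month, reproduces the kind of statistics reported above.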
337 The Diversity of Contexts within Which Adolescents Engage with Digital Media: Contributing to More Challenging Tasks for Parents and a Need for Third Party Mediation
Authors: Ifeanyi Adigwe, Thomas Van der Walt
Abstract:
Digital media has been integrated into the social and entertainment life of young children, and its impact appears to affect young people of all ages; it is believed that this will continue to shape the world of young children. Since technological advancement presents adolescents with diverse contexts, platforms and avenues to engage with digital media outside the home environment and away from parental supervision, a wide range of new challenges has further complicated the already difficult tasks of parents and altered the landscape of parenting. Although adolescents now have access to a wide range of digital media technologies both at home and in the learning environment, parenting practices such as active, restrictive, co-use, participatory and technical mediation are important in mitigating the online risks adolescents may encounter as a result of digital media use. However, these mediation practices focus only on the home environment, including the digital media present in the home, and may not necessarily transcend other learning environments where adolescents use digital media for school work and other activities. This poses the question of who mediates adolescents' digital media use outside the home environment. The learning environment can be a 'loose platform' where an adolescent can maximise digital media use, considering the fact that there is no restriction in terms of content or the time allotted to using digital media during school hours. That is to say, an adolescent can play the 'bad boy' online in school, because there is little or no restriction of digital media use, and be exposed to online risks, yet play the 'good boy' at home because of 'heavy' parental mediation.
This is the reason why parental mediation practices have been ineffective: a parent may not be able to track an adolescent's digital media use given the diversity of contexts, platforms and avenues through which adolescents use digital media. This study argues that, due to the diverse nature of digital media technology, parents may not be able to monitor the 'whereabouts' of their children in the digital space, because adolescent digital media usage is not confined to the home environment but extends to other learning environments such as schools. This calls for urgent attention on the part of teachers to understand the intricacies of how digital media continue to shape the world in which young children are developing and learning. It is, therefore, imperative for parents to liaise with their children's schools to mediate digital media use during school hours. The implications of parent-teacher mediation practices are discussed. The article concludes by suggesting that third party mediation by teachers in schools and other learning environments should be encouraged, and that future research needs to consider the emergent strategy of a teacher-child mediation approach and its implications for policy in both the home and learning environments.
Keywords: digital media, digital age, parent mediation, third party mediation
336 Psychological Variables Predicting Academic Achievement in Argentinian Students: Scales Development and Recent Findings
Authors: Mercedes Fernandez Liporace, Fabiana Uriel
Abstract:
Academic achievement in high school and college students is currently a matter of concern. National and international assessments show high schoolers as low achievers, and local statistics indicate alarming dropout percentages at this educational level. Even so, 80% of those students intend to attend higher education. On the other hand, applications to Public National Universities are free and non-selective, with no examination procedures. Though initial registrations are massive (307,894 students), only 50% of freshmen pass their first year classes, and 23% achieve a degree. Low performance tends to be a common problem. Hence, freshmen adaptation, their adjustment, dropout and low academic achievement arise as topics on the agenda. Besides, the hinge between high school and college must be examined in depth, in order to achieve an integrated and successful path from one educational stratum to the other. Psychological research addresses the situation through two main lines of work. The first concerns psychometric scales: designing and/or adapting tests, examining their technical properties and their theoretical validity (e.g., academic motivation, learning strategies, learning styles, coping, perceived social support, parenting styles and parental consistency, paradoxical personality as correlated with creative skills, psychopathological symptomatology). The second research line emphasizes relationships among the variables measured by the former scales, facing the formulation and testing of predictive models of academic achievement, and establishing differences by sex, age, educational level (high school vs college), and career. Pursuing these goals, several studies were carried out in recent years, reporting findings and producing assessment technology useful for detecting students academically at risk as well as good achievers.
Multiple samples were analysed, totalling more than 3500 participants (2500 from college and 1000 from high school), including descriptive, correlational, group-difference and explicative designs. A summary of the most relevant results is presented. Providing information to design specific interventions according to every learner's features and his/her educational environment comes up as a mid-term accomplishment. Furthermore, that information might be helpful to adapt curricula by career, as well as for implementing special didactic strategies differentiated by sex and personal characteristics.
Keywords: academic achievement, higher education, high school, psychological assessment
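As a sketch of the kind of predictive model tested in the second research line, the following fits an ordinary least-squares model of an achievement score on three scale scores. All variable names, weights, and data are synthetic assumptions for illustration; they are not the study's measures or findings.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical standardized scale scores (stand-ins for the instruments above)
motivation = rng.normal(0, 1, n)
learning_strategies = rng.normal(0, 1, n)
perceived_support = rng.normal(0, 1, n)

# Synthetic achievement outcome (e.g., a GPA z-score) with assumed weights
achievement = (0.5 * motivation + 0.3 * learning_strategies
               + 0.2 * perceived_support + rng.normal(0, 0.5, n))

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), motivation, learning_strategies, perceived_support])
coef, *_ = np.linalg.lstsq(X, achievement, rcond=None)

pred = X @ coef
ss_res = ((achievement - pred) ** 2).sum()
ss_tot = ((achievement - achievement.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
print("coefficients:", np.round(coef, 2), " R^2:", round(r2, 2))
```

Group differences by sex, age, or educational level would then be tested by adding indicator variables or fitting the model separately per group.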
335 Reducing Pressure Drop in Microscale Channel Using Constructal Theory
Authors: K. X. Cheng, A. L. Goh, K. T. Ooi
Abstract:
The effectiveness of microchannels in enhancing heat transfer has been demonstrated in the semiconductor industry. In order to tap the microscale heat transfer effects in macro geometries, overcoming the cost and technological constraints, microscale passages were created in macro geometries machined using conventional fabrication methods. A cylindrical insert was placed within a pipe, and geometrical profiles were created on the outer surface of the insert to enhance heat transfer under steady-state single-phase liquid flow conditions. However, while heat transfer coefficient values above 10 kW/m²·K were achieved, the heat transfer enhancement was accompanied by an undesirable pressure drop increment. Therefore, this study aims to address the high pressure drop issue using Constructal theory, a universal design law for both animate and inanimate systems. Two designs based on Constructal theory were developed to study the effectiveness of Constructal features in reducing the pressure drop increment as compared to parallel channels, which are commonly found in microchannel fabrication. The hydrodynamic and heat transfer performance of the Tree insert and Constructal fin (Cfin) insert were studied using experimental methods, and the underlying mechanisms were substantiated by numerical results. In technical terms, the objective is to achieve at least a comparable increment in both heat transfer coefficient and pressure drop, if not a higher increment in the former parameter. Results show that the Tree insert improved the heat transfer performance by more than 16 percent at low flow rates, as compared to the Tree-parallel insert. However, the heat transfer enhancement reduced to less than 5 percent at high Reynolds numbers. On the other hand, the pressure drop increment stayed almost constant at 20 percent. This suggests that the Tree insert has better heat transfer performance in the low Reynolds number region.
More importantly, the Cfin insert displayed improved heat transfer performance along with favourable hydrodynamic performance, as compared to the Cfin-parallel insert, at all flow rates in this study. At 2 L/min, the enhancement of heat transfer was more than 30 percent, with a 20 percent pressure drop increment, as compared to the Cfin-parallel insert. Furthermore, comparable increments in both heat transfer coefficient and pressure drop were observed at 8 L/min. In other words, the Cfin insert successfully achieved the objective of this study. Analysis of the results suggests that bifurcation of flows is effective in reducing the increment in pressure drop relative to heat transfer enhancement. Optimising the geometries of the Constructal fins is therefore a potential future study towards achieving a bigger stride in energy efficiency at much lower costs.
Keywords: constructal theory, enhanced heat transfer, microchannel, pressure drop
334 Volunteered Geographic Information Coupled with Wildfire Fire Progression Maps: A Spatial and Temporal Tool for Incident Storytelling
Authors: Cassandra Hansen, Paul Doherty, Chris Ferner, German Whitley, Holly Torpey
Abstract:
Wildfire is a natural and inevitable occurrence, yet changing climatic conditions have increased the severity, frequency, and risk to human populations in the wildland/urban interface (WUI) of the Western United States. Rapid dissemination of accurate wildfire information is critical to both the Incident Management Team (IMT) and the affected community. With the advent of increasingly sophisticated information systems, GIS can now be used as a web platform for sharing geographic information in new and innovative ways, such as virtual story map applications. Crowdsourced information can be extraordinarily useful when coupled with authoritative information. Information abounds in the form of social media, emergency alerts, radio, and news outlets, yet many of these resources lack a spatial component when first distributed. In this study, we describe how twenty-eight volunteer GIS professionals across nine Geographic Area Coordination Centers (GACCs) sourced, curated, and distributed Volunteered Geographic Information (VGI) from authoritative social media accounts focused on disseminating information about wildfires and public safety. The combination of fire progression maps with VGI incident information helps answer three critical questions about an incident: where the fire started, how and why the fire behaved in an extreme manner, and how we can learn from the fire incident's story to respond to and prepare for future fires in this area. By adding a spatial component to that shared information, this team has been able to visualize shared information about wildfire starts in an interactive map that answers these three critical questions in a more intuitive way. Additionally, long-term social and technical impacts on communities are examined in relation to situational awareness of the disaster through map layers and agency links, the number of views in a particular region of a disaster, and community involvement and sharing of this critical resource.
Combined with a GIS platform and disaster VGI applications, this workflow and information become invaluable to communities within the WUI and bring spatial awareness for disaster preparedness, response, mitigation, and recovery. This study highlights progression maps as the ultimate storytelling mechanism through incident case studies and demonstrates how VGI and sophisticated applied cartographic methodology make this an indispensable resource for authoritative information sharing.
Keywords: storytelling, wildfire progression maps, volunteered geographic information, spatial and temporal
333 High-Performance Thin-layer Chromatography (HPTLC) Analysis of Multi-Ingredient Traditional Chinese Medicine Supplement
Authors: Martin Cai, Khadijah B. Hashim, Leng Leo, Edmund F. Tian
Abstract:
Analysis of traditional Chinese medicinal (TCM) supplements has always been a laborious task, particularly in the case of multi-ingredient formulations. Traditionally, herbal extracts are analysed using one or a few marker compounds. In recent years, however, pharmaceutical companies have been introducing health supplements of TCM active ingredients to cater to the needs of consumers in today's fast-paced society. As such, new problems arise in the aspects of composition identification as well as quality analysis. In most products or supplements formulated with multiple TCM herbs, the chemical composition and nature of each raw material differ greatly from those of the others in the formulation. This results in a requirement for individual analytical processes in order to identify the marker compounds in the various botanicals. Thin-layer Chromatography (TLC) is a simple, cost-effective, yet well-regarded method for the analysis of natural products, both as a Pharmacopeia-approved method for identification and authentication of herbs, and as a great analytical tool for the discovery of chemical compositions in herbal extracts. Recent technical advances introduced High-Performance TLC (HPTLC), where, with the help of automated equipment and improvements in the chromatographic materials, both the quality and reproducibility are greatly improved, allowing for highly standardised analysis with greater detail. Here we report an industrial consultancy project with ONI Global Pte Ltd for the analysis of LAC Liver Protector, a TCM formulation aimed at improving liver health. The aim of this study was to identify 4 key components of the supplement using HPTLC, following protocols derived from Chinese Pharmacopeia standards.
By comparing the TLC profiles of the supplement to the extracts of the herbs reported on the label, this project proposes a simple and cost-effective analysis of the presence of the 4 marker compounds in the multi-ingredient formulation by using 4 different HPTLC methods. With the increasing trend of small and medium-sized enterprises (SMEs) bringing natural products and health supplements into the market, it is crucial that the qualities of both raw materials and end products be well-assured for the protection of consumers. With the technology of HPTLC, science can be incorporated to help SMEs with their quality control, thereby ensuring product quality.
Keywords: traditional Chinese medicine supplement, high performance thin layer chromatography, active ingredients, product quality
332 The Protection of Artificial Intelligence (AI)-Generated Creative Works Through Authorship: A Comparative Analysis Between the UK and Nigerian Copyright Experience to Determine Lessons to Be Learnt from the UK
Authors: Esther Ekundayo
Abstract:
The nature of AI-generated works makes it difficult to identify an author. Some scholars have suggested that all the players involved in a work's creation should be allocated authorship according to their respective contributions: from the programmer who creates and designs the AI, to the investor who finances the AI, to the user of the AI who most likely ends up creating the work in question. Others have suggested that this issue may be resolved by the UK computer-generated works (CGW) provision under Section 9(3) of the Copyright, Designs and Patents Act 1988. However, under UK and Nigerian copyright law, only human-created works are recognised. This is usually assessed based on their originality, which simply means that the work must have been created as a result of its author's creative and intellectual abilities and not copied. Such works are literary, dramatic, musical and artistic works, and are those that have recently been a topic of discussion with regard to generative artificial intelligence (generative AI). Unlike Nigeria, the UK CDPA recognises computer-generated works and vests authorship in the human who made the necessary arrangements for the work's creation. However, 'making the necessary arrangements' was interpreted in Nova Productions Ltd v Mazooma Games Ltd similarly to the traditional authorship principle, which requires the skills of the creator to prove originality. Some argue that computer-generated works complicate this issue, and that AI-generated works should enter the public domain, as authorship cannot be allocated to the AI itself. Additionally, the UKIPO, recognising these issues in line with the growing AI trend, launched a public consultation in 2022 that considered whether computer-generated works should be protected at all and why, and, if not, whether a new right with a different scope and term of protection should be introduced.
However, it concluded that the issue of computer-generated works would be revisited, as AI was still in its early stages. Conversely, due to recent developments in this area with regard to generative AI systems such as ChatGPT, Midjourney, DALL-E and AIVA, amongst others, which can produce human-like copyright creations, it is important to examine the relevant issues, which have the possibility of altering traditional copyright principles as we know them. Considering that the UK and Nigeria are both common law jurisdictions but with slightly differing approaches to this area, this research therefore seeks to answer the following questions by comparative analysis: 1) Who is the author of an AI-generated work? 2) Is the UK's CGW provision worthy of emulation by Nigerian law? 3) Would a sui generis law be capable of protecting AI-generated works and their authors under both jurisdictions? This research further examines possible barriers to the implementation of a new law in Nigeria, such as limited technical expertise and lack of awareness by policymakers, amongst others.
Keywords: authorship, artificial intelligence (AI), generative AI, computer-generated works, copyright, technology
Procedia PDF Downloads 102
331 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps
Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo
Abstract:
With the advent of WebGL, Web apps can now provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser, running on a CPU, issues GL commands, which render the images displayed by the currently running Web app, to the GPU, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience is determined by either CPU processing or GPU processing, depending on which of the two is the more demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience, because the experience is already deteriorated by the retarded execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes a bottleneck in the execution flow. Based on this observation, we propose a power management scheme specialized for the Web app runtime environment. This approach poses two technical challenges: identification of the bottleneck resource, and determination of the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to tell which resource, if any, is the bottleneck. The Window Manager draws the final screen using the processed results delivered from the GPU. Thus, the Window Manager is on the critical path that determines the quality of user experience, and it is executed purely by the CPU.
The proposed scheme uses a weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measure frame-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides user experience when the Window Manager utilization is above 90%; consequently, the proposed scheme decreases the performance level of the GPU, the unsaturated resource, by one step. On the contrary, when its utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the CPU. Even for the processing unit that is not on the critical path, an excessive performance drop can occur and may adversely affect the user experience. Therefore, our scheme lowers the frequency gradually until it finds an appropriate level, by periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, while worsening FPS by only 1.07% on average.
Keywords: interactive applications, power management, QoS, Web apps, WebGL
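The governor logic described in this abstract can be sketched in a few lines. The 90%/60% thresholds come from the abstract; the smoothing weight, the step actions, and all names below are hypothetical illustrations, not the authors' implementation. Following the scheme's premise, the unit that is not the bottleneck is the one slowed down, one step at a time.

```python
# Minimal sketch of the bottleneck-aware governor. Thresholds follow the
# abstract; the 0.7 smoothing weight and all names are assumptions.

def smooth(prev_avg, sample, weight=0.7):
    """Weighted moving average of Window Manager CPU utilization,
    damping excessive sensitivity and fluctuation."""
    return weight * prev_avg + (1.0 - weight) * sample

def choose_action(wm_util_avg):
    """Pick which unit to slow by one frequency step, if any."""
    if wm_util_avg > 0.90:           # Window Manager (CPU) saturated:
        return "lower_gpu_one_step"  # CPU is the bottleneck, slow the GPU
    if wm_util_avg < 0.60:           # Window Manager starved of GPU results:
        return "lower_cpu_one_step"  # GPU is the bottleneck, slow the CPU
    return "hold"                    # neither resource clearly saturated

# Periodic sampling loop with hypothetical utilization samples.
avg = 0.90
for sample in (0.95, 0.97, 0.96):
    avg = smooth(avg, sample)
print(choose_action(avg))  # -> lower_gpu_one_step
```

Repeating this decision every sampling period yields the gradual, step-by-step frequency lowering the abstract describes, rather than a single large drop that could hurt user experience.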
330 Children and Communities Benefit from Mother-Tongue Based Multi-Lingual Education
Authors: Binay Pattanayak
Abstract:
Jharkhand, a multilingual state, is home to more than 19 tribal and regional languages, used by more than 33 communities. The state has declared 12 of these languages official state languages. However, schools in the state do not recognize any of these community languages, even in the early grades. Children, who speak their mother tongues at home, in the local market and on the playground, find it very difficult to understand their teacher and textbooks in school, and they fail to acquire basic literacy and numeracy skills in the early grades. Out of frustration due to lack of comprehension, the majority of children leave school; Jharkhand sees the highest early-grade dropout rate in India. To address this, the state, under the guidance of the author, designed a mother-tongue-based pre-school education programme named Bhasha Puliya, along with bilingual picture dictionaries in 9 tribal and regional mother tongues. This contributed significantly to children's school readiness. Following this, the state designed a mother-tongue-based multilingual education programme (MTB-MLE) for its multilingual context. The author guided textbook development in 5 tribal (Santhali, Mundari, Ho, Kurukh and Kharia) and two regional (Odia and Bangla) languages. Teachers and community members were trained for MTB-MLE in around 1,000 schools in the concerned language pockets. Community resource groups were constituted, along with their academic calendars, in each school to promote story-telling, singing, painting, dancing, riddles, etc. with community support. This, on the one hand, created rich learning environments for children. On the other hand, the communities have discovered great potential in the process of developing a wide variety of learning materials for children in their own mother tongues, using their local stories, songs, riddles, paintings, idioms, skits, etc., as a process of literary, cultural and technical enrichment.
The majority of children are acquiring strong early-grade reading skills (basic literacy and numeracy) in grades I-II, thereby becoming well prepared for higher studies. In a phased manner, they learn Hindi and English after 4-5 years of MTB-MLE, building on these foundational language learning skills. Community members have started designing new books and audio-visual learning materials in their mother tongues, seeing great potential for cultural and technological rejuvenation.
Keywords: community resource groups, MTB-MLE, multilingual, socio-linguistic survey, learning
329 Building Information Modelling: A Solution to the Limitations of Prefabricated Construction
Authors: Lucas Peries, Rolla Monib
Abstract:
The construction industry plays a vital role in the global economy, contributing billions of dollars annually. However, the industry has struggled with persistently low productivity levels for years, unlike other sectors that have shown significant improvements. Modular and prefabricated construction methods have been identified as potential solutions to boost productivity in the construction industry, offering time advantages over traditional construction methods. Despite their potential benefits, modular and prefabricated construction methods face hindrances and limitations that are not present in traditional building systems. Building information modelling (BIM) has the potential to address some of these hindrances, but barriers are preventing its widespread adoption in the construction industry. This research aims to enhance understanding of the shortcomings of modular and prefabricated building systems and to develop BIM-based solutions that alleviate or eliminate these hindrances. The research objectives include identifying and analysing the key issues hindering the use of modular and prefabricated building systems, investigating the current state of BIM adoption in the construction industry and the factors affecting its successful implementation, proposing BIM-based solutions to address the issues associated with modular and prefabricated building systems, and assessing the effectiveness of the developed solutions in removing barriers to their use. The research methodology involves a critical literature review to identify the key issues and challenges in modular and prefabricated construction and BIM adoption. Additionally, an online questionnaire is used to collect primary data from construction industry professionals, allowing for feedback on and evaluation of the proposed BIM-based solutions.
The data collected will be analysed to evaluate the effectiveness of the solutions and their potential impact on the adoption of modular and prefabricated building systems. The main findings of the research indicate that the issues identified in the literature review align with the opinions of industry professionals, and that the proposed BIM-based solutions are considered effective in addressing the challenges associated with modular and prefabricated construction. However, the research has limitations, such as a small sample size and the need to assess the feasibility of implementing the proposed solutions. In conclusion, this research contributes to enhancing the understanding of the limitations of modular and prefabricated building systems and proposes BIM-based solutions to overcome them. The findings are valuable to construction industry professionals and BIM software developers, providing insights into the challenges and potential solutions for implementing modular and prefabricated construction systems in future projects. Further research should focus on addressing the limitations and assessing the feasibility of implementing the proposed solutions from technical and legal perspectives.
Keywords: building information modelling, modularisation, prefabrication, technology
328 Design, Prototyping and Testing of Manually Operated Teff Seed Cum Fertilizer Drill for Ethiopian Farmers
Authors: Fentahun Ayu Muche, Yonas Mitiku Degu
Abstract:
Ethiopian farmers traditionally sow Teff seeds using the broadcasting method. However, row sowing offers higher grain yields compared to broadcasting. Despite being introduced to row sowing techniques, many farmers prefer broadcasting due to its simplicity; without proper technology, row sowing is time-consuming, labor-intensive, and physically demanding. The use of suitable row Teff seeder technologies can save time, reduce labor requirements, facilitate weed control, and increase productivity. Unfortunately, previously promoted technologies have not gained significant acceptance due to various limitations. The Agricultural Bureau of the Amhara Region, Ethiopia, has confirmed that row sowing technology significantly improves productivity, yielding results up to twice as high as traditional sowing methods. This innovative approach offers a feasible solution for enhancing Teff production in Ethiopia, contributing to greater precision and efficiency in farming practices. This research aims to design, fabricate, and test a Teff seed-cum-fertilizer drill while addressing the shortcomings of earlier technologies. During the conceptual design phase, eight alternatives were proposed, with the rail-type row Teff seed-cum-fertilizer drill selected for its technical and economic feasibility. The chosen design features five rows with adjustable spacing between 15 cm and 25 cm. It also includes an interchangeable metering mechanism for seeding rates of 5 kg/hectare and 10 kg/hectare. A key focus was placed on the metering mechanism to eliminate power transmission via ground traction, thereby mitigating performance issues caused by wheel skidding. The new design uses pinions that roll over two parallel racks suspended by four posts to transmit motion to the metering unit. Detailed analysis of the selected concept and working mechanism was conducted, and the prototype was manufactured according to specifications from the detailed design. 
Laboratory and field tests of the fabricated prototype demonstrated good metering efficiency, with no significant differences between rows. However, the performance of the Teff seed-cum-fertilizer drill is highly sensitive to the seed level in the hopper; maintaining the recommended seed level is therefore crucial for ensuring uniform seed distribution during farm operations.
Keywords: row teff planter, disc metering, scoop metering, rack and pinion, fertilizer applicator, seed drill
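As a rough illustration of the metering requirement implied by the figures above (seeding rates of 5 or 10 kg/hectare and row spacings of 15-25 cm), the seed mass the drill must deliver per metre of row follows directly from the rate and spacing. The back-of-the-envelope calculation below is a sketch for illustration, not part of the authors' design.

```python
# Seed mass per metre of row implied by a seeding rate and row spacing.
# At spacing s (metres), one hectare contains 10,000 / s metres of row,
# so the per-metre mass is rate * s / 10,000 (converted here to grams).

SQ_M_PER_HECTARE = 10_000.0

def seed_mass_per_metre(rate_kg_per_ha, row_spacing_m):
    """Seed mass (grams) the drill must meter per metre of row."""
    row_metres_per_ha = SQ_M_PER_HECTARE / row_spacing_m
    return rate_kg_per_ha * 1000.0 / row_metres_per_ha

# At 5 kg/ha with 20 cm rows the drill must deliver about 0.1 g of seed
# per metre of travel.
print(round(seed_mass_per_metre(5, 0.20), 3))  # -> 0.1
```

The tiny per-metre mass (a fraction of a gram of Teff seed) shows why the metering mechanism, rather than the frame or furrow openers, was the design's critical focus.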
327 Water Crisis or Crisis of Water Management: Assessing Water Governance in Iran
Authors: Sedigheh Kalantari
Abstract:
Like many countries in the arid and semi-arid belt, Iran experiences a natural limitation in the availability of water resources. However, rapid socioeconomic development has created a serious water crisis in a nation that was once one of the world's pioneers in sustainable water management, thanks to the Persians' contribution to hydraulic engineering inventions – the Qanat – throughout history. Exogenous issues such as the changing climate, frequent droughts, and international sanctions are only catalysts of the crisis, not its main cause; a resilient water management system should be capable of coping with these periodic external pressures. The current dramatic water security issues in Iran are rooted in managerial, political, and institutional challenges rather than engineering and technical issues, and the country suffers from challenges in water governance. Instead of rigorous water conservation efforts, the country is still focused on a supply-driven approach, technological and centralized methods, and structural solutions that aim to increase water supply, while the effectiveness of water governance and management has often been neglected. To solve these issues, it is necessary to assess the present situation and its evolution over time. In this respect, establishing water governance assessment mechanisms is a significant aspect of this paper. The research framework is a conceptual framework for assessing the governance performance of Iran, critically diagnosing problematic issues and areas, proffering empirically based solutions, and determining the best possible steps towards transformational processes. This concept aims to measure the adequacy of current solutions and strategies designed to ameliorate these problems and then to develop and prescribe adequate future solutions.
Thus, the analytical framework developed in this paper seeks to provide insights into the key factors influencing water governance in Iranian cities, the institutional frameworks for managing water across scales and authorities, and multi-level management gaps and policy responses, through an evidence-based approach and good practices to drive reform toward sustainability and water resource conservation. The findings of this paper show that the current structure of the water governance system in Iran, coupled with the lack of a comprehensive understanding of the root causes of the problem, leaves minimal hope for developing sustainable solutions to Iran's deepening water crisis. In order to follow sustainable development approaches, Iran needs to replace symptom management with problem prevention.
Keywords: governance, Iran, sustainable development, water management, water resources
326 Knowledge Graph Development to Connect Earth Metadata and Standard English Queries
Authors: Gabriel Montague, Max Vilgalys, Catherine H. Crawford, Jorge Ortiz, Dava Newman
Abstract:
There has never been so much publicly accessible atmospheric and environmental data. The possibilities of these data are exciting, but the sheer volume of available datasets represents a new challenge for researchers: the task of identifying and working with a new dataset has become more difficult with the amount and variety of available data. Datasets are often documented in ways that differ substantially from the common English used to describe the same topics. This presents a barrier not only for new scientists, but also for researchers looking to make comparisons across multiple datasets, or for specialists from other disciplines hoping to collaborate. This paper proposes a method for addressing this obstacle: creating a knowledge graph to bridge the gap between everyday English and the technical language surrounding these datasets. Knowledge graph generation is already a well-established field, although working with Earth data poses some unique challenges. One is the sheer size of the databases – it would be infeasible to replicate or analyze all the data stored by an organization like the National Aeronautics and Space Administration (NASA) or the European Space Agency. Instead, this approach identifies topics from the metadata available for datasets in NASA's Earthdata database, which can then be used to directly request and access the raw data from NASA. By starting with a single metadata standard, this paper establishes an approach that can be generalized to different databases, but leaves the challenge of metadata harmonization for future work. Topics generated from the metadata are then linked to topics from a collection of English queries through a variety of standard and custom natural language processing (NLP) methods. The results from this method are then compared to a baseline of Elasticsearch applied to the metadata.
This comparison shows the benefits of the proposed knowledge graph system over existing methods, particularly in interpreting natural language queries and topics in metadata. For the research community, this work introduces an application of NLP to the ecological and environmental sciences, expanding the possibilities of how machine learning can be applied in this discipline. Perhaps more importantly, it establishes the foundation for a platform through which common English can access knowledge that previously required considerable effort and experience. By making these public data accessible to the broader public, this work has the potential to transform environmental understanding, engagement, and action.
Keywords: earth metadata, knowledge graphs, natural language processing, question-answer systems
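As a toy illustration of the query-to-metadata linking step, the sketch below scores a plain-English query against metadata summaries by bag-of-words cosine similarity. The dataset identifiers and summary text are invented for the example, and the system described above uses a knowledge graph built with a mix of standard and custom NLP methods, not this simple lexical match.

```python
# Toy lexical linker: match an everyday-English query to the most
# similar metadata summary. All metadata entries here are hypothetical.

import math
from collections import Counter

def tokens(text):
    """Lowercase alphabetic tokens only."""
    return [t for t in text.lower().split() if t.isalpha()]

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

METADATA = {  # invented summaries keyed by invented dataset ids
    "MOD11": "land surface temperature thermal infrared daily global",
    "GPM_3IMERG": "precipitation rain rate merged satellite estimates",
}

def best_dataset(query):
    q = tokens(query)
    return max(METADATA, key=lambda d: cosine(q, tokens(METADATA[d])))

print(best_dataset("how hot is the ground surface"))  # -> MOD11
```

A knowledge graph improves on this baseline by also linking synonymous and related topics ("hot" to "temperature", for instance) rather than relying on shared words, which is where the reported gains over an Elasticsearch-style lexical baseline come from.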
325 Applying Push Notifications with Behavioral Change Strategies in Fitness Applications: A Survey of User's Perception Based on Consumer Engagement
Authors: Yali Liu, Maria Avello Iturriagagoitia
Abstract:
Background: Fitness applications (apps) are among the most popular mobile health (mHealth) apps. These apps can help prevent or control health issues such as obesity, one of the most serious public health challenges in the developed world in recent decades. Compared with traditional interventions such as face-to-face treatment, it is cheaper and more convenient to use fitness apps to intervene in physical activity and healthy behaviors. Nevertheless, fitness apps tend to have high abandonment rates and low levels of user engagement, so maintaining users' continued usage is challenging. Previous research shows a variety of strategies (goal-setting, self-monitoring, coaching, etc.) for promoting fitness and health behavior change. These strategies can influence users' perseverance and self-monitoring of the program, as well as favoring their adherence to routines that involve long-term behavioral change. However, commercial fitness apps rarely incorporate these strategies into their design, leading to a lack of engagement with the apps. Most of today's mobile services and brands engage their users proactively via push notifications: visual or auditory alerts that inform mobile users about a wide range of topics and constitute an effective and personal means of communication between the app and the user. One of the research purposes of this article is to implement behavior change strategies through push notifications. Purpose: This study aims to better understand the influence that the effective use of push notifications, combined with behavioral change strategies, has on users' engagement with fitness apps. The secondary objectives are 1) to discuss sociodemographic differences in the utilization of push notifications in fitness apps and 2) to determine the impact of each strategy on customer engagement.
Methods: The study uses a model combining Consumer Engagement Theory and UTAUT2 to conduct an online survey among current users of fitness apps. The questionnaire assessed attitudes toward each behavioral change strategy as well as sociodemographic variables. Findings: Results show the positive effect of push notifications on the generation of consumer engagement, and the differing impact of each strategy on customer engagement across population groups. Conclusions: Fitness apps with behavior change strategies have a positive impact on increasing users' usage time and customer engagement. Theoretical experts can participate in designing fitness applications, along with technical designers.
Keywords: behavioral change, customer engagement, fitness app, push notification, UTAUT2
324 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks
Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba
Abstract:
The ability of vehicles to communicate with other vehicles (V2V), with the physical (V2I) and network (V2N) infrastructures, with pedestrians (V2P), etc. – collectively known as V2X (Vehicle-to-Everything) – will enable a broad and growing set of applications and services within the intelligent transport domain, improving road safety, alleviating traffic congestion and supporting autonomous driving. The telecommunication research and industry communities and standardization bodies (notably 3GPP) have approved, in Release 14, cellular communications connectivity to support V2X communication (known as LTE-V2X). An LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface with direct device-to-device (D2D) communications. For V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks, and the nature of most V2X applications, which involve human safety, make it important to protect V2X messages from attacks that can result in catastrophically wrong decisions or actions, including ones affecting road safety. Attack vectors include impersonation, modification, masquerading, replay, man-in-the-middle (MitM), and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network.
However, this protocol suffers from technical challenges, such as high signaling overhead, lack of synchronization, handover delay and potential control-plane signaling overloads, as well as privacy preservation issues, and therefore cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways in which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, allowing security operations with minimal overhead cost, which is desirable for V2X services. The proposed architecture can ensure fast, secure and robust V2X services under the LTE network while meeting V2X security requirements.
Keywords: authentication, long term evolution, security, vehicle-to-everything
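To make the role of AKA-style mutual authentication concrete, the following is a deliberately simplified challenge-response sketch using a pre-shared key and HMACs: both sides prove knowledge of the key without sending it. This is not the 3GPP AKA protocol, which additionally involves sequence numbers, the AUTN token structure, and key derivation; all function and variable names here are illustrative assumptions.

```python
# AKA-flavoured challenge-response sketch: network and UE each prove
# knowledge of a pre-shared key K. Omits sequence numbers, key
# derivation, and everything else real 3GPP AKA requires.

import hmac, hashlib, os

def mac(key, *parts):
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

def network_challenge(key):
    rand = os.urandom(16)           # RAND sent to the vehicle's UE
    autn = mac(key, b"AUTN", rand)  # network's proof of knowing K
    return rand, autn

def ue_respond(key, rand, autn):
    # The UE authenticates the network first, then answers the challenge.
    if not hmac.compare_digest(autn, mac(key, b"AUTN", rand)):
        raise ValueError("network authentication failed")
    return mac(key, b"RES", rand)   # RES returned to the network

def network_verify(key, rand, res):
    return hmac.compare_digest(res, mac(key, b"RES", rand))

K = os.urandom(32)                  # pre-shared subscriber key
rand, autn = network_challenge(K)
res = ue_respond(K, rand, autn)
print(network_verify(K, rand, res))  # -> True
```

Even in this stripped-down form, the round trip per authentication hints at the signaling overhead the paper criticizes: every attach or handover repeats the exchange with the core network, which is what motivates distributed alternatives.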