Search results for: OpenFOAM mesh format
144 An Effective Approach to Knowledge Capture in Whole Life Costing in Construction Projects
Authors: Ndibarafinia Young Tobin, Simon Burnett
Abstract:
Despite the benefits of implementing whole life costing as a valuable approach for comparing alternative building designs, allowing operational cost benefits to be evaluated against any initial cost increases, and as part of procurement in the construction industry, its adoption has been relatively slow. This is due to the lack of tangible evidence and of 'know-how' skills and knowledge of the practice, i.e. the lack of professionals in many establishments with knowledge and training in the use of the whole life costing technique. The situation is compounded by the absence of available whole life costing data from relevant projects, the lack of data collection mechanisms, and so on. This has proved very challenging to those who have shown some willingness to employ the technique in a construction project. The knowledge generated from a project can be considered as best practices learned on how to carry out tasks in a more efficient way, or as negative lessons which have led to losses and slowed down the progress and performance of the project. Knowledge management in whole life costing practice can enhance the execution of whole life costing analysis in a construction project, as lessons learned from one project can be carried over to future projects, resulting in continuous improvement and providing knowledge that can be used in the operation and maintenance phases of an asset's life span. Purpose: The purpose of this paper is to report an effective approach that can be utilised in capturing knowledge in whole life costing practice in a construction project. Design/methodology/approach: An extensive literature review was first conducted on the concepts of knowledge management and whole life costing. This was followed by semi-structured interviews to explore existing and good-practice knowledge management in whole life costing practice in a construction project. The data gathered from the semi-structured interviews were analysed using content analysis and used to structure an effective knowledge-capturing approach. Findings: The results obtained in the study show that project review is the common method used in capturing knowledge; it should be undertaken in an organised and accurate manner, and results should be presented as instructions or in a checklist format, forming short and precise insights. The approach developed advises that, irrespective of how effective the approach to knowledge capture is, the absence of an environment for sharing knowledge would render the approach ineffective. An open culture and resources are critical for providing a knowledge-sharing setting, and leadership has to sustain whole life costing knowledge capture, giving full support for its implementation. The knowledge-capturing approach has been evaluated by practitioners who are experts in the area of whole life costing practice. The results have indicated that the approach to knowledge capture is suitable and efficient.
Keywords: whole life costing, knowledge capture, project review, construction industry, knowledge management
143 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation (LFR) Models
Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi
Abstract:
Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centres of gravity and angles of attack have to be investigated and proved to be safe. Nonetheless, with this method a worst-case flight condition can easily be missed, and missing it would lead to a critical situation. It is impossible to analyse a model at every point of its flight envelope, since the envelope contains an infinite number of cases, which would require more time and therefore more design cost. Therefore, in industry, the technique of meshing the flight envelope is commonly used: for each point of the flight envelope, simulation of the associated model shows whether or not the specifications are satisfied. In order to perform fast, comprehensive and effective analysis, varying-parameter models were developed by incorporating variations, or uncertainties, into the nominal models; these are known as Linear Fractional Representation (LFR) models, and they describe the aircraft dynamics while taking uncertainties over the flight envelope into account. In this paper, the LFR models are developed using speed and altitude as the varying parameters; the models were built from several flying conditions expressed in terms of speeds and altitudes. This method has attracted great interest from aeronautical companies, which see a promising future for it in modelling, and particularly in the design and certification of control laws. This research paper focuses on the open-loop stability analysis of the Cessna Citation X. The data are provided by a Level D Research Aircraft Flight Simulator, corresponding to the highest level of flight dynamics certification; this simulator was developed by CAE Inc., and its development was based on the research requirements of the LARCASE laboratory. These data were used to develop a linear model of the airplane in its longitudinal and lateral motions, and further to create the LFR models for 12 XCG/weight conditions, and thus the whole flight envelope, using a friendly graphical user interface developed during this study. The LFR models are then analysed using an interval analysis method based on a Lyapunov function, as well as the 'stability and robustness analysis' toolbox. The results are presented in the form of graphs; they offer good readability and are easily exploitable. The weakness of this method lies in its relatively long computation time, about four hours for the entire flight envelope.
Keywords: flight control clearance, LFR, stability analysis, robustness analysis
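The envelope-meshing step described above lends itself to a simple illustration. Below is a minimal sketch, assuming a hypothetical placeholder longitudinal model (not the Citation X or LARCASE simulator data), of how each point of a speed-altitude grid can be screened for open-loop stability from the eigenvalues of the linearized state matrix:

```python
# A minimal sketch of grid-based open-loop stability screening over a flight
# envelope: at each (speed, altitude) point, a linearized state matrix A is
# evaluated and the point is declared stable when every eigenvalue has a
# negative real part. longitudinal_A() is a hypothetical placeholder model,
# not Cessna Citation X data.
import numpy as np

def longitudinal_A(V, h):
    """Hypothetical linearized longitudinal dynamics at speed V (m/s), altitude h (m)."""
    rho = 1.225 * np.exp(-h / 8500.0)   # crude exponential atmosphere
    q = 0.5 * rho * V**2                # dynamic pressure, Pa
    return np.array([[-0.02 * q / 1e4, 0.1, 0.0, -9.81],
                     [-0.10, -2.0, 0.9 * V / 100.0, 0.0],
                     [0.01, -5.0, -3.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0]])

speeds = np.linspace(120, 280, 9)       # true airspeed grid, m/s
altitudes = np.linspace(0, 12000, 7)    # altitude grid, m

for h in altitudes:
    for V in speeds:
        eig = np.linalg.eigvals(longitudinal_A(V, h))
        if not np.all(eig.real < 0):
            print(f"unstable point: V={V:.0f} m/s, h={h:.0f} m, "
                  f"max Re(lambda) = {eig.real.max():.3f}")
```

A full LFR-based analysis would replace this point-by-point eigenvalue check with an interval/Lyapunov test on the uncertain model, which is what removes the risk of missing a worst case between grid points.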
142 Adaptation of the Scenario Test for Greek-speaking People with Aphasia: Reliability and Validity Study
Authors: Marina Charalambous, Phivos Phylactou, Thekla Elriz, Loukia Psychogios, Jean-Marie Annoni
Abstract:
Background: Evidence-based practices for the evaluation and treatment of people with aphasia (PWA) in Greek are mainly impairment-based. Functional and multimodal communication is usually under-assessed and neglected by clinicians. This study explores the adaptation and psychometric testing of the Greek (GR) version of The Scenario Test. The Scenario Test assesses the everyday functional communication of PWA in an interactive multimodal communication setting with the support of an active communication facilitator. Aims: To define the reliability and validity of The Scenario Test-GR and discuss its clinical value. Methods & Procedures: The Scenario Test-GR was administered to 54 people with chronic stroke (6+ months post-stroke): 32 PWA and 22 people with stroke without aphasia. Participants were recruited from Greece and Cyprus. All measures were performed in an interview format. Standard psychometric criteria were applied to evaluate the reliability (internal consistency, test-retest, and interrater reliability) and validity (construct and known-groups validity) of The Scenario Test-GR. Video analysis was performed for the qualitative examination of the communication modes used. Outcomes & Results: The Scenario Test-GR shows high levels of reliability and validity. High scores of internal consistency (Cronbach's α = .95), test-retest reliability (ICC = .99), and interrater reliability (ICC = .99) were found. Interrater agreement on scores for individual items fell between good and excellent levels of agreement. Correlations with a tool measuring language function in aphasia (the Aphasia Severity Rating Scale of the Boston Diagnostic Aphasia Examination), a measure of functional communication (the Communicative Effectiveness Index), and two instruments examining the psychosocial impact of aphasia (the Stroke and Aphasia Quality of Life questionnaire and the Aphasia Impact Questionnaire) revealed good convergent validity (all ps < .05). Results showed good known-groups validity (Mann-Whitney U = 96.5, p < .001), with significantly higher scores for participants without aphasia compared to those with aphasia. Conclusions: The psychometric qualities of The Scenario Test-GR support the reliability and validity of the tool for the assessment of the functional communication of Greek-speaking PWA. The Scenario Test-GR can be used to assess multimodal functional communication, orient aphasia rehabilitation goal setting towards the activity and participation level, and serve as an outcome measure of everyday communication. Future studies will focus on measuring sensitivity to change in PWA with severe non-fluent aphasia.
Keywords: The Scenario Test-GR, functional communication assessment, people with aphasia (PWA), tool validation
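For readers unfamiliar with the statistics reported above, the following sketch (with made-up item scores, not the study's data) shows how the internal-consistency and known-groups figures are typically computed:

```python
# A sketch of the reliability and known-groups statistics reported above:
# Cronbach's alpha over item scores, and a Mann-Whitney U test comparing
# total scores for participants with vs. without aphasia. All numbers below
# are invented for illustration.
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """items: (n_subjects, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
scores_pwa = rng.integers(0, 4, size=(32, 18))    # 32 PWA, 18 items, 0-3 scale (assumed)
scores_ctrl = rng.integers(2, 4, size=(22, 18))   # 22 controls, scoring higher

print("Cronbach's alpha (PWA):", round(cronbach_alpha(scores_pwa), 2))
u, p = stats.mannwhitneyu(scores_ctrl.sum(axis=1), scores_pwa.sum(axis=1))
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4g}")
```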
141 Making Beehives More 'Intelligent': The Case of Capturing, Reducing, and Managing Bee Pest Infestation in Hives through Modification of Hive Entrance Holes and the Installation of Multiple In-Hive Bee Pest Traps
Authors: Prince Amartey
Abstract:
Bees are clever creatures; thus, capturing bees implies that the hives are intelligent in the sense that they provide all of the conditions required to attract and keep the bees. If the hive goes above and beyond to keep the bees in the hive and to keep the activities of in-hive pests to a minimum so that the bees can develop to their maximum potential, the hive is becoming, or is, more 'intelligent'. Some bee pests, such as small hive beetles, are endemic to Africa; at present, the practice of extracting honey by cutting off the combs and pressing them limits the spread of these insect enemies of the bees. However, as commercialisation is explored and freshly harvested combs are returned to the hives following the adoption of the frame and other systems, strategies need to be put in place to manage the pest problems that come with unprotected combs. Techniques for making hives more 'intelligent' are therefore all the more important now, given that the African apicultural business does not wish to encourage the use of pesticides in hives. These include modifying the hive's entrance holes to improve the bees' own mechanism for defending the entry points, as well as capturing pests by setting external and in-hive traps to prevent pest infiltration into the hives by any means feasible. Materials and Methods: The following mechanisms are proposed to make the hives more 'intelligent': i. the use of modified frames with five (5) beetle traps positioned horizontally on the vertical 'legs' to catch beetles along the surfaces of the combs (multiple in-hive beetle traps); ii. baited bioelectric frame traps, in which both vertical sections of the frame are covered with a 3 mm mesh that allows pest entry but not bees; the pests are attracted by strips of honeycomb, open brood and pollen on metal plates inserted horizontally on the vertical 'legs' of the frames, and an electrical 'mine' system electrocutes the pests as they step on the wires to enter the frame trap; iii. ten rounded hive entry holes, arranged in two rows with one row on top of the other, adapted so that the bees are able to police the entrance and prevent the entry of pests. Results, Discussion and Conclusions: The techniques implemented decrease pest ingress, while the in-hive traps capture those pests that evade entry controls. Furthermore, the stand alteration traps larvae and stops their development into adults. As the commercialisation of beekeeping grows throughout Africa, these initiatives will minimise pest infestation in hives and thereby enhance honey output.
Keywords: bee pests, modified frames, multiple beetle trap, baited bioelectric frame traps
140 Electromagnetic-Mechanical Stimulation on PC12 for Enhancement of Nerve Axonal Extension
Authors: E. Nakamachi, K. Matsumoto, K. Yamamoto, Y. Morita, H. Sakamoto
Abstract:
Recently, electromagnetic and mechanical stimulation have been recognised as effective extracellular environment stimulation techniques for enhancing the regeneration of damaged peripheral nerve tissue. In this study, we developed a new hybrid bioreactor by adopting 50 Hz uniform alternating current (AC) magnetic stimulation and 4% strain mechanical stimulation. The guide tube for nerve regeneration is a mesh-structured tube made of a biodegradable polymer such as polylactic acid (PLA). However, when neural damage is large, there is a possibility that the peripheral nerve undergoes necrosis, so it is quite important to accelerate nerve tissue regeneration by enhancing the rate of nerve axonal extension. Therefore, we designed and fabricated a system that can simultaneously apply uniform AC magnetic field stimulation and stretch stimulation to cells for the enhancement of nerve axonal extension, and we evaluated the system's performance and the effectiveness of each stimulation for rat adrenal pheochromocytoma cells (PC12). First, we designed and fabricated the uniform AC magnetic field system and the stretch stimulation system. For the AC magnetic stimulation system, we focused on the use of a pole piece structure to allow in-situ microscopic observation. We designed an optimum pole piece structure using magnetic field finite element analyses and the response surface methodology, and fabricated the uniform AC magnetic field stimulation system as a bioreactor by adopting the analytically determined design specifications. We measured the magnetic flux density generated by the system and confirmed that the measured values show good agreement with the analytical results, with a uniform magnetic field observed. Second, we fabricated the cyclic stretch stimulation device under particular strain conditions, with the chamber made of polyoxymethylene (POM). We measured strains in the PC12 cell culture region to confirm the uniformity of the strain and found values slightly different from the target strain; we concluded that these differences were allowable for this mechanical stimulation system. Finally, we evaluated the effectiveness of each stimulation in enhancing nerve axonal extension using PC12 cells. We confirmed that the average axonal extension length of PC12 under uniform AC magnetic stimulation was increased by 16% at 96 h in our bioreactor. We could not confirm axonal extension enhancement under the stretch stimulation condition, where exfoliation of cells was found. The hybrid stimulation, in contrast, enhanced axonal extension, because the magnetic stimulation inhibits the exfoliation of cells. We therefore concluded that the enhancement of PC12 axonal extension is due to the magnetic stimulation rather than the mechanical stimulation, and confirmed the effectiveness of uniform AC magnetic field stimulation for nerve axonal extension using PC12 cells.
Keywords: nerve cell PC12, axonal extension, nerve regeneration, electromagnetic-mechanical stimulation, bioreactor
139 Mikrophonie I (1964) by Karlheinz Stockhausen - Between Idea and Auditory Image
Authors: Justyna Humięcka-Jakubowska
Abstract:
1. Background in music analysis: Traditionally, when we think about a composer's sketches, the chances are that we are thinking in terms of the working out of detail rather than the evolution of an overall concept. Since music is a 'time art', questions of form cannot be entirely detached from considerations of time. One could say that composers tend to regard time either as a place filled gradually and partly intuitively, or they look for a specific strategy to occupy it. In my opinion, one thing that sheds light on Stockhausen's compositional thinking is his frequent use of 'form schemas', often a single-page representation of the entire structure of a piece. 2. Background in music technology: Sonic Visualiser is a program used to study a musical recording. It is an open-source application for viewing, analysing, and annotating music audio files. It contains a number of visualisation tools designed with useful default parameters for musical analysis. Additionally, the Vamp plugin format supported by Sonic Visualiser provides analyses such as structural segmentation. 3. Aims: The aim of my paper is to show how Sonic Visualiser may be used to obtain a better understanding of a specific musical work, and how the compositional strategy impacts musical structures and musical surfaces. I want to show that 'traditional' music-analytic methods do not allow one to indicate the interrelationships between the musical surface (which is perceived) and the underlying musical/acoustical structure. 4. Main contribution: Stockhausen dealt with the most diverse musical problems by the most varied methods. A characteristic that he never ceased to place at the centre of his thought and works was the quest for a new balance founded upon an acute connection between speculation and intuition. In the case of Mikrophonie I (1964) for tam-tam and 6 players, Stockhausen makes a distinction between the 'connection scheme', which indicates the ground rules underlying all versions, and the form scheme, which is associated with a particular version. The preface to the published score includes both the connection scheme and a single instance of a 'form scheme', which is what one can hear on the CD recording. In the current study, the insight into the compositional strategy chosen by Stockhausen is compared with the auditory image, that is, with the perceived musical surface. Stockhausen's musical work is analysed in terms of both melodic/voice and timbre evolution. 5. Implications: The current study shows how musical structures determine the musical surface. My general assumption is that while listening to music we can extract basic kinds of musical information from musical surfaces. It is shown that an interactive strategy of musical structure analysis can offer a very fruitful way of looking directly into certain structural features of music.
Keywords: automated analysis, composer's strategy, Mikrophonie I, musical surface, Stockhausen
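Sonic Visualiser itself is a desktop application, but the kind of structural segmentation it hosts through Vamp plugins can be approximated in a few lines. The sketch below is a rough, hypothetical analogue using the librosa library (not Sonic Visualiser), with an assumed file name and segment count:

```python
# A rough, hypothetical analogue of Vamp-style structural segmentation,
# using librosa rather than Sonic Visualiser. The file name and the number
# of segments (k=8) are assumptions for illustration.
import librosa

y, sr = librosa.load("mikrophonie_I.wav", mono=True)   # hypothetical recording
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)     # timbre-oriented features
bounds = librosa.segment.agglomerative(mfcc, k=8)      # frame indices of boundaries
times = librosa.frames_to_time(bounds, sr=sr)
for i, t in enumerate(times):
    print(f"segment {i} starts at {t:7.2f} s")
```

MFCCs are chosen here because the piece's surface is driven by timbre rather than pitch; a pitch-centred work would more naturally use chroma features.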
138 Instruction Program for Human Factors in Maintenance, Addressed to the People Working in Colombian Air Force Aeronautical Maintenance Area to Strengthen Operational Safety
Authors: Rafael Andres Rincon Barrera
Abstract:
Safety plays a preponderant role in global aviation, in organisations that seek to avoid accidents in an attempt to preserve their most precious assets (the people and the machines). Human factors-based programs have been shown to be effective in managing human-generated risks. The importance of training on human factors in maintenance has not been lost on the Colombian Air Force (COLAF). This research, which has a mixed quantitative, qualitative and descriptive approach, deals with the COLAF's lack of a structured instruction program in human factors in aeronautical maintenance, which could serve as a tool to improve operational safety in its military air units. The research shows the trends and evolution of human factors programs in aeronautical maintenance through the analysis of a data matrix with 33 sources taken from different databases concerning the incorporation of this type of program in the aeronautical industry over the last 20 years, as well as the improvements in operational safety observed after their implementation. Likewise, it compiles the normative guides in force from world aeronautical authorities for training in these programs, establishing a matrix of methodologies that may be applicable to developing a training program in human factors in maintenance. Subsequently, it illustrates the design, validation, and application of an instrument for measuring human factors knowledge in maintenance at the COLAF, covering topics on human factors (HF), the safety management system (SMS), and aeronautical maintenance regulations at the COLAF. With the information obtained, a statistical analysis was performed showing the aspects of knowledge to be strengthened among the staff in preparing the instruction program. Data triangulation based on the applicable methods and the weakest aspects found among the maintenance personnel yields a colour-coded variable cross-matrix, indicating the contents of a training program for human factors in aeronautical maintenance, adjusted according to the competencies that are expected to be developed with the staff in a curricular format established by the COLAF. Among the most important findings are: that the different authors dealing with human factors in maintenance agree that there is no standard model for its instruction and implementation, and that it must instead be adapted to the needs of the organisation; that the safety culture in the companies that incorporated human factors in maintenance programs improved; and that, from the data obtained with the knowledge measurement instrument, the level of knowledge of human factors in maintenance is MEDIUM-LOW, with a score of 61.79%. Finally, there is an opportunity to improve operational safety at the COLAF through the implementation of the human factors in maintenance training program for the technicians working in this area.
Keywords: Colombian Air Force, human factors, safety culture, safety management system, triangulation
137 Attitude in Academic Writing (CAAW): Corpus Compilation and Annotation
Authors: Hortènsia Curell, Ana Fernández-Montraveta
Abstract:
This paper presents the creation, development, and analysis of a corpus designed to study the presence of attitude markers and authorial stance in research articles in two different areas of linguistics (theoretical linguistics and sociolinguistics). These two disciplines are expected to behave differently in this respect, given the disparity in their discursive conventions. Attitude markers in this work are understood as the linguistic elements (adjectives, nouns and verbs) used to convey the writer's stance towards the content presented in the article, and they are crucial in understanding writer-reader interaction and the writer's position. These attitude markers are divided into three broad classes: assessment, significance, and emotion. In addition to them, we also consider first-person singular and plural pronouns and possessives, modal verbs, and passive constructions, which are other linguistic elements expressing the author's stance. The corpus, the Corpus of Attitude in Academic Writing (CAAW), comprises a collection of 21 articles collected from six journals indexed in the JCR. These articles were originally written in English by a single native-speaker author from the UK or USA and were published between 2022 and 2023. The total number of words in the corpus is approximately 222,400, with 106,422 from theoretical linguistics journals (Lingua, Linguistic Inquiry and Journal of Linguistics) and 116,022 from sociolinguistics journals (International Journal of the Sociology of Language, Language in Society and Journal of Sociolinguistics). Together with the corpus, we present the tool created for its compilation and storage, along with a tool for automatic annotation. The steps followed in the compilation of the corpus are as follows. First, the articles were selected according to the parameters explained above. Second, they were downloaded and converted to txt format. Finally, examples, direct quotes, section titles and references were eliminated, since they do not involve the author's stance. The resulting texts were the input for the annotation of the linguistic features related to stance. As for the annotation, two articles (one from each subdiscipline) were annotated manually by the two researchers. An existing list was used as a baseline, and other attitude markers were identified, together with the other elements mentioned above. Once a consensus was reached, the rest of the articles were annotated automatically using the tool created for this purpose. The annotated corpus will serve as a resource for scholars working in discourse analysis (both in linguistics and communication) and related fields, since it offers new insights into the expression of attitude. The tools created for the compilation and annotation of the corpus will be useful for studying authorial attitude and stance in articles from any academic discipline: new data can be uploaded and the list of markers can be enlarged. Finally, the tool can be expanded to other languages, which will allow cross-linguistic studies of authorial stance.
Keywords: academic writing, attitude, corpus, English
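The automatic annotation step lends itself to a compact illustration. The sketch below is a simplified stand-in for the authors' tool, with a tiny invented marker list; the real baseline list and tag set would be far larger:

```python
# A simplified sketch of marker-list-based automatic annotation: a baseline
# list of attitude markers plus closed-class stance items (first-person
# pronouns, modals) is matched against a cleaned .txt article. The marker
# list here is an invented stand-in, not the study's baseline list.
import re
from collections import Counter

ATTITUDE = {"important": "significance", "striking": "emotion",
            "adequate": "assessment", "crucial": "significance"}
PRONOUNS = {"i", "we", "my", "our"}
MODALS = {"may", "might", "must", "should", "could"}

def annotate(text):
    """Return (offset, token, label) triples for every stance item found."""
    tags = []
    for m in re.finditer(r"[A-Za-z']+", text):
        w = m.group().lower()
        if w in ATTITUDE:
            tags.append((m.start(), w, ATTITUDE[w]))
        elif w in PRONOUNS:
            tags.append((m.start(), w, "first-person"))
        elif w in MODALS:
            tags.append((m.start(), w, "modal"))
    return tags

sample = "We argue that this distinction is crucial, and it may prove striking."
for offset, token, label in annotate(sample):
    print(f"{offset:4d}  {token:10s} {label}")
print(Counter(label for _, _, label in annotate(sample)))
```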
136 Recycling of Sintered NdFeB Magnet Waste via Oxidative Roasting and Selective Leaching
Authors: W. Kritsarikan, T. Patcharawit, T. Yingnakorn, S. Khumkoa
Abstract:
Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in various applications such as electrical and medical devices, and account for 13.5% of the permanent magnet market. Their typical composition of 29-32% Nd, 64.2-68.5% Fe and 1-1.2% B contains a significant amount of rare earth metals, which will be subject to shortages in the future. Domestic NdFeB magnet waste recycling should therefore be developed in order to reduce social and environmental impacts and move toward a circular economy. Most research works focus on recycling magnet wastes, both from the manufacturing process and at end of life. Each type of waste has different characteristics and compositions, which directly affect recycling efficiency as well as the types and purity of the recyclable products. This research therefore focused on recycling manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production; the waste contained 23.6% Nd, 60.3% Fe and 0.261% B. The aim was to recover high-purity neodymium oxide (Nd₂O₃) using a hybrid metallurgical process combining oxidative roasting and selective leaching. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550-800 °C, to enable selective leaching of neodymium in the subsequent leaching step using 2.5 M H₂SO₄ over 24 h. The leachate was then subjected to drying and roasting at 700-800 °C prior to precipitation with oxalic acid and calcination to obtain neodymium oxide as the recycled product. According to XRD analyses, increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe₂O₃) as the main phase, with a smaller amount of magnetite (Fe₃O₄) found. Peaks of neodymium oxide (Nd₂O₃) were also observed in a lesser amount. Furthermore, neodymium iron oxide (NdFeO₃) was present, and its XRD peaks became more pronounced at higher oxidative roasting temperatures. After acid leaching and drying, iron sulfate and neodymium sulfate were mainly obtained. After the roasting step prior to water leaching, the iron sulfate was converted to hematite as the main compound, while the neodymium sulfate remained; however, a small amount of magnetite was still detected by XRD. A higher roasting temperature of 800 °C resulted in a greater Fe₂O₃ to Nd₂(SO₄)₃ ratio, indicating a more effective roasting temperature. The iron oxides were subsequently removed by water leaching and filtration, while the solution contained mainly neodymium sulfate. Therefore, a low oxidative roasting temperature not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum conditions for the subsequent precipitation and calcination steps to finally achieve neodymium oxide.
Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching
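As a rough plausibility check on the recovery route, one can compute the theoretical Nd₂O₃ yield from the stated waste composition by simple stoichiometry. The back-of-the-envelope calculation below is our illustration, not a figure from the paper:

```python
# Back-of-the-envelope mass balance (our illustration, not from the paper):
# theoretical Nd2O3 recoverable per kilogram of waste containing 23.6% Nd,
# assuming complete leaching, precipitation and calcination.
M_ND = 144.24                      # g/mol, neodymium
M_O = 16.00                        # g/mol, oxygen
M_ND2O3 = 2 * M_ND + 3 * M_O       # 336.48 g/mol

nd_mass = 1000.0 * 0.236                       # g of Nd in 1 kg of waste
nd2o3_mass = nd_mass * M_ND2O3 / (2 * M_ND)    # g of Nd2O3 at 100% recovery
print(f"theoretical yield: {nd2o3_mass:.0f} g Nd2O3 per kg of waste")  # ~275 g
```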
135 Comparing the Effectiveness of the Crushing and Grinding Route of Comminution to That of the Mine to Mill Route in Terms of the Percentage of Middlings Present in Processed Lead-Zinc Ore Samples
Authors: Chinedu F. Anochie
Abstract:
The presence of gangue particles in recovered metal concentrates has been a serious challenge to ore-dressing engineers. Middlings lower the quality of concentrates and, in most cases, drastically affect smelter terms, owing to the exorbitant amounts paid by mineral processing industries as treatment charges. Models which encourage the optimisation of liberation operations have been utilised in most ore beneficiation industries to reduce the presence of locked particles in valuable concentrates. Moreover, methods such as the incorporation of regrind mills and scavenger, rougher and cleaner cells into the milling and flotation plants have been widely employed to tackle these concerns and to optimise the grade-recovery relationship of metal concentrates. This work compared the crushing and grinding method of liberation to the mine-to-mill route by evaluating the proportion of middlings present in selectively processed complex Pb-Zn ore samples. To establish the effect of size reduction operations on the percentage of locked particles present in recovered concentrates, two similar samples of complex Pb-Zn ores were processed. Following the blasting operation, the first ore sample was ground directly in a ball mill (the mine-to-mill route of comminution), while the other sample was manually crushed and subsequently ground in the ball mill (the crushing and grinding route of comminution). The two samples were separately sieved in a mesh to obtain the desired representative particle sizes. An equal amount of each sample to be processed in the flotation circuit was then obtained with the aid of a weighing balance. These weighed fine particles were processed in parallel in the flotation circuit using the selective flotation technique. Sodium cyanide, methyl isobutyl carbinol, sodium ethyl xanthate, copper sulphate, sodium hydroxide, lime and isopropyl xanthate were the reagents used to effect differential flotation of the two ore samples. Analysis and calculations showed that the degree of liberation obtained for the ore sample which went through the conventional crushing and grinding route of comminution was higher than that of the directly milled run-of-mine (ROM) ore. Similarly, the proportion of middlings obtained from the separated galena (PbS) and sphalerite (ZnS) concentrates was lower for the crushed and ground ore sample. Concise data have thus been established proving that the mine-to-mill method of size reduction is not the most ideal technique for the recovery of quality metal concentrates.
Keywords: comminution, degree of liberation, middlings, mine to mill
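A toy version of the liberation bookkeeping implied above may help. In the sketch below, the working definitions and the particle counts are assumptions for illustration, not the paper's measurements:

```python
# A toy calculation of degree of liberation and middlings percentage from
# particle counts (e.g. from microscopy of the processed Pb-Zn samples).
# Definitions and counts below are assumptions, not the paper's data.
def liberation_stats(free_valuable, locked_middlings, free_gangue):
    total = free_valuable + locked_middlings + free_gangue
    degree_of_liberation = 100.0 * free_valuable / (free_valuable + locked_middlings)
    middlings_pct = 100.0 * locked_middlings / total
    return degree_of_liberation, middlings_pct

# hypothetical counts: (crush + grind) vs (mine-to-mill, direct milling)
for label, counts in {"crush+grind": (420, 55, 25),
                      "mine-to-mill": (350, 115, 35)}.items():
    dl, mid = liberation_stats(*counts)
    print(f"{label:12s} degree of liberation = {dl:5.1f}%, middlings = {mid:4.1f}%")
```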
133 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics
Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee
Abstract:
Recent advances in generative artificial intelligence (GenAI), in particular large language models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called 'hallucinations', the generation of outputs that are not grounded in the input data, which hinders their adoption into production. A common practice to mitigate the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground the LLM's response in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts based on vector similarity between the user's query and the documents, and then generates a response that is based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG approach is not suitable for tabular data and subsequent data analysis tasks, for multiple reasons such as information loss, data format, and the retrieval mechanism. In this study, we explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, and then convert them into executable segments of code. In the final step, the complete response is generated from the output of the executed code. When a beta version was deployed on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results: it was able to serve market insight and data visualisation needs with high accuracy and extensive coverage, abstracting the complexities away from real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding without the need for programming skills. The implications extend beyond immediate analytics, paving the way for a new era in the real estate industry characterised by efficiency and advanced data utilisation.
Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, DataSense, PropertyGuru
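The planner-coder-executor-responder loop described above can be sketched compactly. In the outline below, the llm() function is a placeholder for any chat-completion endpoint, and the prompts and DataFrame columns are assumptions, not PropertyGuru's actual DataSense implementation:

```python
# A skeletal sketch of the planning-and-execution plus code-generation
# pattern described above. llm() is a placeholder that must be wired to a
# real chat-completion call; the prompts and data are assumptions.
import pandas as pd

def llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completion endpoint)."""
    raise NotImplementedError

def answer(question: str, df: pd.DataFrame) -> str:
    # 1) Planner: decompose the analytical task into ordered sub-tasks.
    plan = llm(f"Break this analysis into numbered steps over a DataFrame "
               f"with columns {list(df.columns)}:\n{question}")
    # 2) Coder: turn the plan into executable pandas code.
    code = llm(f"Write Python that executes these steps on a DataFrame `df` "
               f"and stores the answer in a variable `result`:\n{plan}")
    # 3) Executor: run the generated code in a restricted namespace.
    scope = {"df": df, "pd": pd}
    exec(code, scope)          # in production: sandbox + output validation
    # 4) Responder: ground the final answer in the computed result only.
    return llm(f"Answer '{question}' using only this result: {scope['result']}")
```

Grounding the final response in the executed code's output, rather than in the model's parametric knowledge, is what distinguishes this pattern from plain RAG over tabular data.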
132 To Compare the Visual Outcome, Safety and Efficacy of Phacoemulsification and Small-Incision Cataract Surgery (SICS) at CEITC, Bangladesh
Authors: Rajib Husain, Munirujzaman Osmani, Mohammad Shamsal Islam
Abstract:
Purpose: To compare the safety, efficacy and visual outcome of phacoemulsification vs. manual small-incision cataract surgery (SICS) for the treatment of cataract in Bangladesh. Objectives: 1. To assess the visual outcome after cataract surgery; 2. To understand the post-operative complications and early rehabilitation; 3. To identify which surgical procedure is more attractive to patients; 4. To identify which surgical procedure incurs fewer complications; 5. To find out the socio-economic and demographic characteristics of the study patients. Setting: Chittagong Eye Infirmary and Training Complex (CEITC), Chittagong, Bangladesh. Design: Retrospective, randomised comparison of 300 patients with visually significant cataracts. Method: The present study was designed as retrospective hospital-based research. The sample size was 300, the study period ran from July 2012 to July 2013, and patients were assigned randomly to receive either phacoemulsification or manual small-incision cataract surgery (SICS). Preoperative and post-operative data were collected through a well-designed collection format. Three follow-ups were done: i) at discharge; ii) at 1-3 weeks; and iii) at 4-11 weeks postoperatively. All preoperative and surgical complications, uncorrected and best-corrected visual acuity (BCVA) and astigmatism were taken into consideration for the comparison of outcomes. Results: Nearly 95% of patients were more than 40 years of age. About 52% of patients were female and 48% were male. 52% (N=157) of patients came to have their first eye operated on, while 48% (N=143) returned to have their second eye operated on. Postoperatively, for phacoemulsification, five eyes (3.33%) developed corneal oedema with >10 Descemet's folds and six eyes (4%) had corneal oedema with <10 Descemet's folds. For SICS, seven eyes (4.66%) developed corneal oedema with >10 Descemet's folds and eight eyes (5.33%) had corneal oedema with <10 Descemet's folds. However, both the uncorrected and corrected (4-11 weeks) visual acuities were better in the eyes that had phacoemulsification (p=0.02 and p=0.03), and there was less astigmatism (p=0.001) at 4-11 weeks in the eyes that had phacoemulsification. For best-corrected visual acuity (BCVA) at the final follow-up, 95% (N=253) had a good outcome, 3.10% (N=40) were borderline, and 1.6% (N=7) had a poor outcome. The individual surgeon outcomes were close: 95% (BCVA) for SICS and 96% (BCVA) for phacoemulsification at the 4-11 weeks follow-up. Conclusion: The outcomes of both phacoemulsification and SICS cataract surgery at CEITC were satisfactory according to WHO norms. Both phacoemulsification and manual small-incision cataract surgery (SICS) show excellent visual outcomes with low complication rates and good rehabilitation. Phacoemulsification is a significantly faster, modern technology-based surgical procedure for cataract treatment.
Keywords: phacoemulsification, SICS, cataract, Bangladesh, visual outcome of SICS
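The reported p-values correspond to standard two-group comparisons. As a sketch only, with invented counts rather than the study's data, such a comparison of outcome proportions between the two techniques could be run as:

```python
# A sketch of a two-proportion comparison of "good outcome" rates after
# phacoemulsification vs. manual SICS. The counts below are invented for
# illustration, not the study's data.
from scipy.stats import chi2_contingency

#        good outcome   borderline/poor
table = [[144, 6],    # phacoemulsification (150 eyes, hypothetical)
         [139, 11]]   # manual SICS        (150 eyes, hypothetical)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```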
131 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model
Authors: Mohammad Zamani, Ramin Mansouri
Abstract:
Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at the time of flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the finite volume method. The PISO scheme was applied for the velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power law scheme was used for the discretisation of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three computational grids (coarse, intermediate, and fine) were used to discretise the simulation domain. In order to simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, in order to find the best wall function, two types, the standard wall function and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the morning-glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were nearly identical, and the standard wall function produced better results than the non-equilibrium wall function. Thus, for the remaining simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet. The results show that the fine computational grid, a velocity condition at the flow inlet boundary, and a pressure condition at the boundaries in contact with air provide the best possible results. As the jet gets closer to the end of the basin, the difference between the computational and experimental results increases. The mesh with 10602 nodes, the standard k-ε turbulence model and the standard wall function provide the best results for modelling the flow in a vertical circular spillway, and there was good agreement between the numerical and experimental results for the upper and lower nappe profiles. In the study of water level over the crest and discharge, at low water levels the numerical results agree well with the experimental ones, but as the water level increases, the difference between the numerical and experimental discharge grows. In the study of the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.
Keywords: circular vertical spillway, numerical model, boundary conditions
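The grid-sensitivity conclusion above can be illustrated with a small post-processing sketch. The profiles below are invented stand-ins, not the study's data; the point is the RMS comparison of computed nappe trajectories against experiment used to select the mesh and turbulence model:

```python
# An illustrative post-processing sketch (all data invented): comparing
# computed jet trajectory profiles from three grids against an experimental
# nappe profile by RMS error, the kind of check used to select the
# 10602-node mesh and the standard k-epsilon model.
import numpy as np

x = np.linspace(0.0, 1.0, 21)            # distance along the basin, m
exp_profile = 0.5 - 0.45 * x**2          # mock experimental nappe profile, m

profiles = {
    "coarse": 0.5 - 0.40 * x**2,
    "intermediate": 0.5 - 0.43 * x**2,
    "fine (10602 nodes)": 0.5 - 0.447 * x**2,
}

for name, z in profiles.items():
    rmse = np.sqrt(np.mean((z - exp_profile) ** 2))
    print(f"{name:20s} RMSE vs experiment = {rmse * 1000:5.1f} mm")
```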
130 Knowledge Transfer through Entrepreneurship: From Research at the University to the Consolidation of a Spin-off Company
Authors: Milica Lilic, Marina Rosales Martínez
Abstract:
Academic research cannot be oblivious to social problems and needs, so projects that have the capacity for transformation and impact should have the opportunity to go beyond University circles and bring benefit to society. Apart from patents and R&D research contracts, this opportunity can be achieved through entrepreneurship, one of the most direct tools to turn knowledge into a tangible product. Thus, as an example of good practice, this paper analyses the case of an institutional entrepreneurship program carried out at the University of Seville, aimed at researchers interested in assessing the business opportunity of their research and expanding their knowledge of procedures for the commercialisation of technologies used in academic projects. The program is based on three pillars: training, teamwork sessions and networking. The training includes aspects such as product-client fit, the technical-scientific and economic-financial feasibility of a spin-off, institutional organisation and decision-making, public and private fundraising, and making the spin-off visible in the business world (social networks, key contacts, corporate image and ethical principles). The teamwork sessions are guided by a mentor and aimed at identifying research results with potential and clarifying the financial needs and procedures required to obtain the resources for the consolidation of the spin-off. This part of the program is considered crucial for the participants to convert their academic findings into a business model. Finally, the networking part is oriented to workshops about the digital transformation of a project, the accurate communication of the product or service a spin-off offers to society, and the development of the transferable skills necessary for managing a business. This blended program culminates in a final stage where each team, in an elevator-pitch format, presents its research turned into a business model to an experienced jury. The awarded teams receive starting capital for their enterprise and enjoy the opportunity of formally consolidating their spin-off company at the University. Studying the results of the program has shown that many researchers have little or no knowledge of entrepreneurship skills or of the different ways to turn their research results into a business model with a direct impact on society. Therefore, the described program has been used as an example to highlight the importance of knowledge transfer at the University and the role that this institution should play in providing the tools to promote entrepreneurship within it. Keeping in mind that the University is defined by three main activities (teaching, research and knowledge transfer), it is safe to conclude that the latter, and entrepreneurship as an expression of it, is crucial for the other two to fulfil their purpose.
Keywords: good practice, knowledge transfer, spin-off company, university
129 Subjective Realities of Neoliberalized Social Media Natives: Trading Affect for Effect
Authors: Rory Austin Clark
Abstract:
This primary research represents an ongoing two-year inductive mixed-methods project endeavouring to unravel the subjective reality of hyperconnected young adults in Western societies who have come of age with social media and smartphones. It is to be presented, as well as analysed and contextualised, through a written master's thesis and a documentary/mockumentary meshed with a Web 2.0 app providing the capacity for prosumer, 'audience 2.0' functionality. The media component explores not only thematic issues, via real-life research interviews and fictional narrative, but also technical issues within the format relating to the quest for intimate, authentic connection, as well as the compelling dissemination of scholarly knowledge in an age of ubiquitous personalised daily digital media creation and consumption. The overarching hypothesis is that the aforementioned individuals process and make sense of their world, find shared meaning, and formulate notions of self in ways drastically different from those of the pre-2007 era, via hyper-mediation of self and surroundings. In this pursuit, the research questions have progressed from examining how young adult digital natives understand their use of social media, to notions relating to the potential functionality of Web 2.0 for prosocial and altruistic engagement, on and offline, through the eyes of these individuals, no longer understood simply as digital natives but as social media natives and, at the conclusion of that phase of research, as 'neoliberalized social media natives' (NSMN). These represent the two most potent macro factors in the paradigmatic shift in NSMNs' worldview: they are children not just of social media but of the palpable shift to neoliberal ways of thinking and being in Western socio-cultures since the 1980s, two phenomena that have a reflexive æffective relationship with their perception of figure and ground. This phase also resulted in the working hypothesis of 'social media comparison anxiety' and a nascent understanding of NSMNs' habitus and habitation in a subjective reality of fully converged online/offline worlds, where any phenomenon originating in one realm is, or at the very least can be, re-presented or have effect in the other, creating hyperreal reception. This might also be understood through a 'society as symbolic cyborg' model, in which individuals have a 'digital essence' (the entirety of online content that references a single person) as an auric living, breathing cathedral, museum, gallery, and archive of self, with infinite permutations and rhizomatic entry and exit points.
Keywords: affect, hyperreal, neoliberalism, postmodernism, social media native, subjective reality, Web 2.0
128 COVID Medical Imaging Trial: Utilising Artificial Intelligence to Identify Changes on Chest X-Ray of COVID
Authors: Leonard Tiong, Sonit Singh, Kevin Ho Shon, Sarah Lewis
Abstract:
Investigation into the use of artificial intelligence in radiology continues to develop at a rapid rate. During the coronavirus pandemic, the combination of an exponential increase in chest x-rays and unpredictable staff shortages placed a huge strain on radiology departments' workloads. The World Health Organisation estimates that two-thirds of the global population does not have access to diagnostic radiology. Therefore, there could be demand for a program that can detect acute imaging changes compatible with infection to assist with screening. We generated a convolutional neural network and tested its efficacy in recognising changes compatible with coronavirus infection. Following ethics approval, a deidentified set of 77 normal and 77 abnormal chest x-rays from patients with confirmed coronavirus infection was used to generate an algorithm that could train, validate and then test itself. DICOM and PNG image formats were selected due to their lossless file formats. The model was trained with 100 images (50 positive, 50 negative), validated against 28 samples (14 positive, 14 negative), and tested against 26 samples (13 positive, 13 negative). The initial training of the model involved teaching the convolutional neural network what constituted a normal study and what constituted changes on x-rays compatible with coronavirus infection. The weightings were then modified, and the model was executed again. The training samples were processed in batch sizes of 8 over 25 epochs of training. The results trended towards an 85.71% true positive/true negative detection rate and an area under the curve trending towards 0.95, indicating approximately 95% accuracy in detecting changes on chest x-rays compatible with coronavirus infection. Study limitations include access to only a small dataset and no specificity in the diagnosis. Following a discussion with our programmer, there are areas where modifications in the weighting of the algorithm can be made in order to improve the detection rates. Given the high detection rate of the program and the potential ease of implementation, it would be effective in assisting staff not trained in radiology to detect otherwise subtle changes that might not be appreciated on imaging. Limitations include the lack of a differential diagnosis and of the application of appropriate clinical history, although this may be less of a problem in day-to-day clinical practice. It is nonetheless our belief that implementing this program, and widening its scope to detecting multiple pathologies such as lung masses, would greatly assist both the radiology department and our colleagues by improving workflow and detection rates.
Keywords: artificial intelligence, COVID, neural network, machine learning
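The training setup described (binary classifier, batch size 8, 25 epochs) corresponds to only a few lines in a modern framework. The sketch below uses Keras; the architecture, image size, and directory layout are assumptions, not the authors' exact network:

```python
# A minimal Keras sketch consistent with the setup described above: a binary
# chest x-ray classifier trained with batch size 8 for 25 epochs. The layer
# stack, image size, and "cxr/train" / "cxr/val" folders are assumptions.
import tensorflow as tf

img = (224, 224)
train = tf.keras.utils.image_dataset_from_directory(
    "cxr/train", image_size=img, batch_size=8, label_mode="binary")
val = tf.keras.utils.image_dataset_from_directory(
    "cxr/val", image_size=img, batch_size=8, label_mode="binary")

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),               # normalize pixel values
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.fit(train, validation_data=val, epochs=25)
```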
127 A Comparison of Proxemics and Postural Head Movements during Pop Music versus Matched Music Videos
Authors: Harry J. Witchel, James Ackah, Carlos P. Santos, Nachiappan Chockalingam, Carina E. I. Westling
Abstract:
Introduction: Proxemics is the study of how people perceive and use space. It is commonly proposed that when people like or engage with a person/object, they will move slightly closer to it, often quite subtly and subconsciously. Music videos are known to add entertainment value to a pop song. Our hypothesis was that adding an appropriately matched video to a pop song would lead to a net approach of the head toward the monitor screen, compared to simply listening to an audio-only version of the song. Methods: We presented two musical stimuli in counterbalanced order to 27 participants (ages 21.00 ± 2.89, 15 female) seated in front of a 47.5 x 27 cm monitor; all stimuli were based on music videos by the band OK Go: Here It Goes Again (HIGA, boredom ratings (0-100) = 15.00 ± 4.76, mean ± SEM, standard error of the mean) and Do What You Want (DWYW, boredom ratings = 23.93 ± 5.98), which did not differ in the boredom elicited (P = 0.21, rank-sum test). Each participant experienced each song only once, one song (counterbalanced) as audio-only and the other as a music video. Movement was measured by video tracking using Kinovea 0.8, recording from a lateral aspect; before beginning, each participant had a reflective motion-tracking marker placed on the outer canthus of the left eye. Analysis of the Kinovea X-Y coordinate output in comma-separated-values format was performed in Matlab, as were the non-parametric statistical tests. Results: We found that the audio-only stimuli (combined for both HIGA and DWYW, mean ± SEM, 35.71 ± 5.36) were significantly more boring than the music video versions (19.46 ± 3.83, P = 0.0066, Wilcoxon signed-rank test (WSRT), Cohen's d = 0.658, N = 28). We also found that participants' heads moved around twice as much during the audio-only versions (speed = 0.590 ± 0.095 mm/sec) as during the video versions (0.301 ± 0.063 mm/sec, P = 0.00077, WSRT). However, the participants' mean head-to-screen distances were not detectably smaller (i.e. heads closer to the screen) during the music videos (74.4 ± 1.8 cm) than during the audio-only stimuli (73.9 ± 1.8 cm, P = 0.37, WSRT); if anything, during the audio-only condition they were slightly closer. Interestingly, the ranges of the head-to-screen distances were smaller during the music videos (8.6 ± 1.4 cm) than during audio-only (12.9 ± 1.7 cm, P = 0.0057, WSRT), the standard deviations were also smaller (P = 0.0027, WSRT), and head height differed by 7 mm (video 116.1 ± 0.8 vs. audio-only 116.8 ± 0.8 cm above the floor, P = 0.049, WSRT). Discussion: As predicted, sitting and listening to experimenter-selected pop music was more boring than when the music was accompanied by a matched, professionally made video. However, we did not find that the proxemics of the situation led to approaching the screen. Instead, adding video led to efforts to hold the head in a more central and upright viewing position and to suppress head fidgeting.
Keywords: boredom, engagement, music videos, posture, proxemics
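The Kinovea-to-Matlab pipeline described above is easy to mirror in code. The sketch below reproduces the analysis logic in Python with an invented CSV layout and invented per-participant numbers, purely for illustration (the authors used Matlab):

```python
# A sketch of the tracking post-processing: head speed and head-to-screen
# distance from Kinovea x-y coordinates, then a Wilcoxon signed-rank test
# across conditions. The CSV column names, screen position, and the summary
# numbers below are all invented assumptions.
import numpy as np
import pandas as pd
from scipy.stats import wilcoxon

def head_metrics(csv_path, fps=25.0, screen_x_mm=0.0):
    """Mean head speed (mm/s) and mean head-to-screen distance from one recording."""
    df = pd.read_csv(csv_path)                # assumed columns: x_mm, y_mm
    dx, dy = np.diff(df["x_mm"]), np.diff(df["y_mm"])
    speed = np.hypot(dx, dy) * fps            # mm per second, per frame step
    dist = np.abs(df["x_mm"] - screen_x_mm)   # head-to-screen distance
    return speed.mean(), dist.mean()

# per-participant mean head speeds for each condition (invented numbers)
audio_only = np.array([0.61, 0.55, 0.72, 0.48, 0.66])
video = np.array([0.30, 0.28, 0.41, 0.22, 0.35])
stat, p = wilcoxon(audio_only, video)         # paired non-parametric test
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")
```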
126 A Systematic Review on the Whole-Body Cryotherapy versus Control Interventions for Recovery of Muscle Function and Perceptions of Muscle Soreness Following Exercise-Induced Muscle Damage in Runners
Authors: Michael Nolte, Iwona Kasior, Kala Flagg, Spiro Karavatas
Abstract:
Background: Cryotherapy has been used as a post-exercise recovery modality for decades. Whole-body cryotherapy (WBC) is an intervention which involves brief exposures to extremely cold air in order to induce therapeutic effects. It is currently being investigated for its effectiveness in treating certain exercise-induced impairments. Purpose: The purpose of this systematic review was to determine whether WBC as a recovery intervention is more, less, or equally effective compared with other interventions at reducing perceived levels of muscle soreness and promoting recovery of muscle function after exercise-induced muscle damage (EIMD) from running. Methods: A systematic review of the current literature was performed utilising the following MeSH terms: cryotherapy, whole-body cryotherapy, exercise-induced muscle damage, muscle soreness, muscle recovery, and running. The databases utilised were PubMed, CINAHL, EBSCO Host, and Google Scholar. Articles were included if they were published within the last ten years, had a CEBM level of evidence of IIb or higher, had a PEDro scale score of 5 or higher, studied runners as primary subjects, and utilised both perceived levels of muscle soreness and recovery of muscle function as dependent variables. Articles were excluded if subjects did not include runners, if the interventions included partial-body cryotherapy (PBC) instead of WBC, or if both muscle performance and perceived muscle soreness were not assessed within the study. Results: Two of the four articles revealed that WBC was significantly more effective than treatment interventions such as far-infrared radiation and passive recovery at reducing perceived levels of muscle soreness and restoring muscle power and endurance following simulated trail runs and high-intensity interval running, respectively. One of the four articles revealed no significant difference between WBC and passive recovery in terms of reducing perceived muscle soreness and restoring muscle power following sprint intervals. One of the four articles revealed that WBC had a harmful effect compared to cold-water immersion (CWI) and passive recovery on both perceived muscle soreness and recovery of muscle strength and power following a marathon. Discussion/Conclusion: Though there was no consensus on WBC's effectiveness at treating exercise-induced muscle damage from running compared to other interventions, WBC seems to have at least a time-dependent positive effect on muscle soreness and recovery following high-intensity interval runs and endurance running, marathons excluded. More research needs to be conducted to determine the most effective way to implement WBC as a recovery method for exercise-induced muscle damage, including the optimal temperature, timing, duration, and frequency of treatment.
Keywords: cryotherapy, physical therapy intervention, physical therapy, whole body cryotherapy
Procedia PDF Downloads 240
125 Problem Based Learning and Teaching by Example in Dimensioning of Mechanisms: Feedback
Authors: Nicolas Peyret, Sylvain Courtois, Gaël Chevallier
Abstract:
This article outlines the development of Project Based Learning (PBL) at the final-year Bachelor's level. The objective of this form of pedagogy is to involve the students more fully from the beginning of the module: the theoretical content is introduced during the project, in the course of solving a technological problem. The module in question is the mechanical dimensioning module at Supméca, a French engineering school that issues a Master's Degree. While the teaching methods used in primary and secondary education are frequently renewed in France at the instigation of teachers and inspectors, higher education remains relatively traditional in its practices. Recently, some colleagues have felt the need to put application back at the heart of their theoretical teaching, a need induced by the difficulty of covering all the knowledge deductively before its application. It is therefore tempting to make the students 'learn by doing', even if doing so does not cover some parts of the theoretical knowledge. The other argument supporting this type of learning is the students' lack of motivation for lecture-based courses. Role-play allowed scenarios favoring interaction between students and teachers. However, this pedagogical form, known as 'pedagogy by project', is difficult to apply in the first years of university studies because of the low level of autonomy and individual responsibility of the students. The question of what the student actually learns from the initial program, as well as the evaluation of the competences acquired by the students in this type of pedagogy, also remains an open problem. We therefore propose to add to the pedagogy-by-project format a gradually receding element of teacher intervention based on pedagogy by example. This pedagogical scenario is based on cognitive load theory and Bruner's constructivist theory. It has been built around the six points of the scaffolding process defined by Bruner, with a concrete objective: to allow the students to go beyond the basic skills of dimensioning and acquire the more global skills of engineering. The implementation of project-based teaching coupled with pedagogy by example makes it possible to compensate for the lack of experience and autonomy of first-year students, while strongly involving them within the first few minutes of the module. In this project, students have been confronted with real dimensioning problems and are able to understand the links and influences between parameter variations and dimensioning, an objective that we did not reach in classical teaching. It is this form of pedagogy that accelerates the mastery of basic skills and so frees more time for the engineering skills, namely the convergence of each dimensioning in order to obtain a validated mechanism. A self-evaluation of the project skills acquired by the students will also be presented.
Keywords: Bruner's constructivist theory, mechanisms dimensioning, pedagogy by example, problem based learning
Procedia PDF Downloads 190
124 Flipped Classroom in a European Public Health Program: The Need for Students' Self-Directness
Authors: Nynke de Jong, Inge G. P. Duimel-Peeters
Abstract:
The flipped classroom, an instructional strategy and a type of blended learning that reverses the traditional learning environment by delivering instructional content, off- and online, in- and outside the classroom, has been implemented in a 4-week module on ageing in Europe at Maastricht University. The main aim in organizing this module was to implement flipped-classroom principles in order to create meaningful learning opportunities, with educational technologies used to deliver content outside of the classroom. The technologies used in this module were an online interactive real-time lecture from England, two interactive face-to-face lectures with visual support, one group session including role plays, and team-based learning meetings. The 2015-2016 cohort, which used these educational technologies, was compared with the 2014-2015 cohort, which had studied the same content but according to the problem-based educational strategy that forms the educational basis of Maastricht University, on module evaluations such as the organization and instructiveness of the module. The 2015-2016 cohort, with its specific organization, was also evaluated in more depth on outcomes such as (1) the duration of the lectures as experienced by students, (2) the experienced content of the lectures, (3) the experienced extent of interaction, and (4) the format of lecturing. It was important to know how students reflected on duration and content, taking into account their background knowledge so far, in order to distinguish between material that built sufficiently on prior knowledge and was therefore challenging, and material that did not fit the course. For the evaluation, a structured online questionnaire was used, in which the topics mentioned above were scored on a 4-point Likert scale. At the end, there was room for narrative feedback so that respondents could express in more detail, if they wished, what they experienced as good or not regarding the content of the module and its organizational parts. The response rate of the evaluation was lower than expected (54%); however, on the basis of the written feedback and exam scores, we dare to state that it gives a good and reliable overview that encourages further work. The response rate may be explained by the fact that resit students were included as well, and that there may be too much evaluation at some time points in the program. Overall, students were enthusiastic about the organization and content of the module, but the level of self-directed behavior necessary for this kind of educational strategy was too low. Students need more training in self-directness; the module will therefore be simplified in 2016-2017, with fewer and clearer topics and extra guidance (a step-by-step procedure). More specific information regarding the technologies used, as well as the outcomes (minimum and maximum rankings, means and standard deviations), will be presented at the congress.
Keywords: blended learning, flipped classroom, public health, self-directness
Procedia PDF Downloads 219
123 Co-Creation of Content with the Students in Entrepreneurship Education to Capture Entrepreneurship Phenomenon in an Innovative Way
Authors: Prema Basargekar
Abstract:
Facilitating the subject 'Entrepreneurship Education' in higher education, such as management studies, can be exhilarating as well as challenging. It is a multi-disciplinary and ever-evolving subject. Capturing entrepreneurship as a phenomenon in a holistic manner is a daunting task, as it requires covering dimensions such as new idea generation, entrepreneurial traits, scanning of business opportunities, the role of policymakers, and value creation, to name a few. Implicit entrepreneurship theory and effectuation are two theories that focus on engaging participants to create content from their own experiences, perceptions, and belief systems; this helps in understanding the phenomenon holistically. The assumption here is that all of us are part of the entrepreneurial ecosystem and that effective learning can come through active engagement and peer learning by all the participants together. The present study is an attempt to use these theories in a class assignment, given to students at the beginning of the course, to build the course content and to understand entrepreneurship as a phenomenon in a better way through peer learning. The assignment was given to three batches of post-graduate MBA students at a private business school in India. The subject 'Entrepreneurship Management' is facilitated in the third trimester of the first year. At the beginning of the course, the students were asked to submit a brief write-up, collage, picture, poem, or any other format answering the question 'What does entrepreneurship mean to you?'. They were asked to give their candid opinions on entrepreneurship as a phenomenon as they perceive it. In all, 156 post-graduate MBA students submitted the assignment. These assignments were then used to answer two research questions: 1) Are students able to use divergent and innovative forms to express their opinions, such as poetry, illustrations, and videos? 2) What dimensions of entrepreneurship emerge that aid understanding of the phenomenon? The study uses the Braun and Clarke framework of reflective thematic analysis for the qualitative analysis. The study finds that students responded to this assignment enthusiastically and expressed their thoughts in multiple ways, such as poetry, illustration, personal narrative, and video. The content analysis revealed seven dimensions along which entrepreneurship can be viewed as a phenomenon: 1) entrepreneurial traits, 2) entrepreneurship as a journey, 3) value creation by entrepreneurs in terms of economic and social value, 4) entrepreneurial role models, 5) new business ideas and innovations, 6) personal entrepreneurial experiences and aspirations, and 7) the entrepreneurial ecosystem. The study concludes that an implicit approach to facilitating entrepreneurship education helps in understanding it as a live phenomenon. It also encourages students to apply divergent and convergent thinking, and it can trigger new business ideas or stimulate the entrepreneurial aspirations of the students. The significance of the study lies in the application of implicit theories in the classroom to make higher education more engaging and effective.
Keywords: co-creation of content, divergent thinking, entrepreneurship education, implicit theory
Procedia PDF Downloads 74
122 Coping with Geological Hazards during Construction of Hydroelectric Projects in Himalaya
Authors: B. D. Patni, Ashwani Jain, Arindom Chakraborty
Abstract:
The world's highest mountain range has been forming since the collision of the Indian Plate with the Asian Plate 40-50 million years ago. The Indian subcontinent has been driving deeper and deeper into the rest of Asia, resulting in the uplift of the Himalaya and the Tibetan Plateau. This complex domain has become a major challenge for the construction of hydroelectric projects. The Himalayas are geologically complex and seismically active. The northward shift of the Indian Plate continues to increase the stresses in this fragile domain, leading to deformation in the form of folds, faults, and uplift. It is difficult to carry out extensive geological investigations to ascertain the geological problems to be encountered during construction: inaccessibility of the terrain, high rock cover, and unpredictable groundwater conditions are the main constraints. Hydroelectric projects located in the Himalayas have faced many geological and geo-hydrological problems during the construction of surface and subsurface works. Based on this experience, efforts have been made to identify the geological problems expected during and after construction of such projects. These have been classified into surface and subsurface problems, which include the existence of inhomogeneous deep overburden in the river bed or buried valleys, abrupt changes in bedrock profile, occurrences of fault zones, shear zones, and fractured rock in dam foundations, and slope instability in the abutments. The tunneling difficulties are many, such as squeezing ground conditions, popping, rock bursting, high temperature gradients, heavy ingress of water, the existence of shear seams and shear zones, and the emission of obnoxious gases. However, these problems were mitigated by adopting suitable remedial measures according to site requirements. The support systems include shotcrete, wire mesh, rock bolts, steel ribs, fore-poling, pre-grouting, pipe-roofing, MAI anchors, toe walls, retaining walls, reinforced concrete dowels, drainage drifts, anchorage-cum-drainage shafts, soil nails, concrete cladding, and shear keys. Controlled drilling and blasting, heading and benching, and proper drainage and ventilation systems are other remedial measures adopted to overcome such adverse situations. The paper highlights the geological uncertainties and their remedial measures in the Himalaya, based on the analysis and evaluation of 20 hydroelectric projects during construction.
Keywords: geological problems, shear seams, slope, drilling & blasting, shear zones
Procedia PDF Downloads 402
121 Regional Analysis of Freight Movement by Vehicle Classification
Authors: Katerina Koliou, Scott Parr, Evangelos Kaisar
Abstract:
The surface transportation of freight is particularly vulnerable to storm and hurricane disasters, while at the same time it is the primary transportation mode for delivering medical supplies, fuel, water, and other essential goods. To better plan for commercial vehicles during an evacuation, it is necessary to understand how these vehicles travel during an evacuation and to determine whether this travel differs from that of the general public. While the literature on auto-based evacuations is extensive, consideration of freight travel is lacking. The goal of this research was to investigate the movement of vehicles by classification, with an emphasis on freight, during two major evacuation events: hurricanes Irma (2017) and Michael (2018). The research used Florida's statewide continuous-count station traffic volumes, which were compared between years to identify locations where traffic was moving differently during the evacuation and days on which traffic was significantly different between years. The methodology of the research was divided into three phases: data collection and management, spatial analysis, and temporal comparisons. The data collection and management phase obtained continuous-count station data from the state of Florida for both 2017 and 2018 by vehicle classification; the data were then processed into a manageable format. The second phase used geographic information systems (GIS) to display where and when traffic varied across the state. The third and final phase was a quantitative investigation into which vehicle classifications were statistically different, and on which dates, statewide. This phase used a two-sample, two-tailed t-test to compare sensor volumes by classification on similar days between years, as sketched below. Overall, increases in freight movement between years prevented a more precise paired analysis. This research sought to identify where and when different classes of vehicles were traveling leading up to hurricane landfall and during post-storm reentry. Among the more significant findings, the results showed that commercial-use vehicles may have underutilized rest areas during the evacuation, or perhaps these rest areas were closed; this may suggest that truckers were driving longer distances, and possibly longer hours, before hurricanes. Another significant finding was that changes in traffic patterns for commercial-use vehicles occurred earlier and lasted longer than changes for personal-use vehicles, suggesting that commercial vehicles evacuate in a fashion different from personal-use vehicles. This paper may serve as the foundation for future research into commercial travel during evacuations and explore additional factors that may influence freight movements during evacuations.
Keywords: evacuation, freight, travel time
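The phase-three comparison lends itself to a short sketch. The daily volume arrays below are hypothetical placeholders for one vehicle class at one continuous-count station, and the unequal-variance (Welch) form of the two-sample, two-tailed t-test is an implementation choice assumed here, not stated in the abstract.

```python
# Sketch: two-sample, two-tailed t-test on daily volumes for one vehicle class,
# 2017 vs. 2018, at a single continuous-count station. Data are illustrative.
import numpy as np
from scipy.stats import ttest_ind

volumes_2017 = np.array([412, 398, 440, 385, 420, 405, 431])   # matched days, 2017
volumes_2018 = np.array([510, 545, 498, 530, 562, 505, 540])   # same station/class, 2018

t, p = ttest_ind(volumes_2017, volumes_2018, equal_var=False)  # Welch variant, two-tailed
verdict = 'significant' if p < 0.05 else 'not significant'
print(f'Change in class volume between years is {verdict} (t = {t:.2f}, p = {p:.4f})')
```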
Procedia PDF Downloads 68
120 Modeling of Tsunami Propagation and Impact on West Vancouver Island, Canada
Authors: S. Chowdhury, A. Corlett
Abstract:
Large tsunamis strike the British Columbia coast every few hundred years. The Cascadia Subduction Zone, which extends along the Pacific coast from Vancouver Island to Northern California, is one of the most seismically active regions in Canada. Significant earthquakes have occurred in this region, including the 1700 Cascadia Earthquake, with an estimated magnitude of 9.2. Based on geological records, experts have predicted that a 'great earthquake' of similar magnitude within this region may happen at any time. Such an earthquake is expected to generate a large tsunami that could impact the coastal communities of Vancouver Island. Since many of these communities are in remote locations, they are more likely to be vulnerable, as post-earthquake relief efforts would be hampered by damage to critical road infrastructure. To assess the coastal vulnerability of these communities, a hydrodynamic model has been developed using the MIKE-21 software. We considered a 500-year probabilistic earthquake design criterion, including subsidence, in this model. The bathymetry information was collected from the Canadian Hydrographic Service (CHS) and the National Oceanic and Atmospheric Administration (NOAA). An aerial survey of the communities was conducted using a Cessna 172 aircraft, and the information was then converted into a topographic digital elevation map. Both survey datasets were incorporated into the model, and the domain size of the model was about 1000 km x 1300 km. The model was calibrated against the tsunami that occurred off the west coast of Moresby Island on October 28, 2012: the water levels from the model were compared with two tide gauge stations close to Vancouver Island, and the model output showed satisfactory agreement (see the sketch below). For this study, the design water level was taken as the High Water Level plus the sea level rise projected for the year 2100. Hourly wind speeds from eight directions were collected from different wind stations, and a 200-year return period wind speed was used in the model for storm events. The regional model was set up for a 12-hour simulation period, and one simulation takes more than 16 hours to complete on a dual Xeon E7 CPU computer with a K80 GPU. The boundary information for the local model was generated from the regional model. The local model was developed using a high-resolution mesh to estimate the coastal flooding for the communities. It was observed from this study that many communities will be affected by the Cascadia tsunami, and inundation maps were developed for the communities. The infrastructure inside the coastal inundation areas was identified. Coastal vulnerability planning and resilient design solutions will be implemented to significantly reduce the risk.
Keywords: tsunami, coastal flooding, coastal vulnerability, earthquake, Vancouver, wave propagation
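As a sketch of the calibration check mentioned above, the snippet below computes simple agreement metrics (bias and root-mean-square error) between modeled and observed water levels at a tide gauge. The series are hypothetical placeholders, not the actual gauge records or model output.

```python
# Sketch: goodness-of-fit between observed tide-gauge water levels and model output.
import numpy as np

def calibration_metrics(observed, modeled):
    """Bias and RMSE (m) between observed and modeled water-level series."""
    observed, modeled = np.asarray(observed), np.asarray(modeled)
    bias = float(np.mean(modeled - observed))
    rmse = float(np.sqrt(np.mean((modeled - observed) ** 2)))
    return bias, rmse

gauge_obs = [0.12, 0.35, 0.61, 0.48, 0.22, -0.05]   # observed anomalies (m), illustrative
model_out = [0.10, 0.38, 0.55, 0.52, 0.25, -0.02]   # modeled levels at the gauge, illustrative

bias, rmse = calibration_metrics(gauge_obs, model_out)
print(f'bias = {bias:+.3f} m, RMSE = {rmse:.3f} m')
```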
Procedia PDF Downloads 131
119 Computational Code for Solving the Navier-Stokes Equations on Unstructured Meshes Applied to the Leading Edge of the Brazilian Hypersonic Scramjet 14-X
Authors: Jayme R. T. Silva, Paulo G. P. Toro, Angelo Passaro, Giannino P. Camillo, Antonio C. Oliveira
Abstract:
An in-house C++ code has been developed at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute of Advanced Studies (Brazil) to estimate the aerothermodynamic properties around the Hypersonic Vehicle Integrated to the Scramjet. In the future, this code will be applied to the design of the Brazilian Scramjet Technological Demonstrator 14-X B. The first step towards accomplishing this objective is to apply the in-house C++ code to the leading edge of a flat plate, simulating the leading edge of the 14-X Hypersonic Vehicle and making it possible to analyze the wave phenomena of the oblique shock and the boundary layer. The development of modern hypersonic space vehicles requires knowledge of the characteristics of hypersonic flows in the vicinity of the leading edge of lifting surfaces. The strong interaction between a shock wave and a boundary layer in a high supersonic, Mach number 4, viscous flow close to the leading edge of the plate, considering the no-slip condition, is numerically investigated; the small slip region is neglected. The study consists of solving the fluid flow equations on unstructured meshes, applying the SIMPLE algorithm within the Finite Volume Method. The unstructured meshes are generated by the in-house software 'Modeler', developed at the Virtual Engineering Laboratory of the Institute of Advanced Studies, initially for Finite Element problems and, in this work, adapted to the solution of the Navier-Stokes equations based on the SIMPLE pressure-correction scheme for all-speed flows. The in-house C++ code solves the two-dimensional Navier-Stokes equations for non-steady flow, with no body forces, no volumetric heating, and no mass diffusion. Air is treated as a calorically perfect gas, with a constant Prandtl number and Sutherland's law for the viscosity (illustrated below). Solutions of the flat plate problem at Mach number 4 include pressure, temperature, density, and velocity profiles, as well as 2-D contours. The boundary layer thickness, boundary conditions, and mesh configurations are also presented. The same problem has been solved with an academic license of the software Ansys Fluent and with another in-house C++ code, which solves the fluid flow equations on structured meshes applying the MacCormack Finite Difference Method, and the results will be compared.
Keywords: boundary layer, scramjet, SIMPLE algorithm, shock wave
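Of the physical models named above, Sutherland's law is compact enough to show directly. This is a minimal sketch; the reference constants are the standard values for air and are assumed here rather than taken from the paper.

```python
# Sketch: Sutherland's law for the dynamic viscosity of air,
# mu(T) = mu_ref * (T/T_ref)^(3/2) * (T_ref + S) / (T + S).
def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Dynamic viscosity of air (Pa*s) at temperature T (K); constants are standard air values."""
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

# Example: temperatures spanning a cold free stream to a hot boundary layer at Mach 4
for T in (150.0, 300.0, 600.0, 1200.0):
    print(f'T = {T:6.1f} K  ->  mu = {sutherland_viscosity(T):.3e} Pa*s')
```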
Procedia PDF Downloads 490
118 Modelling of Meandering River Dynamics in Colombia: A Case Study of the Magdalena River
Authors: Laura Isabel Guarin, Juliana Vargas, Philippe Chang
Abstract:
The analysis and study of open-channel flow dynamics for river applications has been based on flow modelling using discrete numerical models built on the hydrodynamic equations. The overall spatial characteristics of rivers, i.e. their length-to-depth-to-width ratios, generally allow one to correctly disregard processes occurring in the vertical or transverse dimensions, thus imposing hydrostatic pressure conditions and considering solely a 1D flow model along the river length (see the sketch below). Through a calibration process, an accurate flow model may thus be developed, allowing for channel study and extrapolation of various scenarios. The Magdalena River in Colombia is a large river draining the country from south to north along 1550 km, with an average slope of 0.0024 and an average width of 275 m. The river displays high water level fluctuations and is characterized by a series of meanders. The city of La Dorada has been affected over the years by serious flooding in the rainy and dry seasons. As the meander evolves at a steady pace, repeated flooding has endangered a number of neighborhoods. This study has been undertaken to correctly model the flow characteristics of the river in this region in order to evaluate various scenarios and provide decision makers with erosion control options and a forecasting tool. Two field campaigns were completed over the dry and rainy seasons, including extensive topographical and channel surveys using a Topcon GR5 DGPS and a RiverSurveyor ADCP. In order to characterize the erosion process occurring through the meander, extensive suspended sediment and river bed samples were retrieved, along with soil perforations on the banks. Based on the DEM ground digital mapping survey and the field data, a 2DH flow model was prepared using the Iber freeware, which is based on the finite volume method in an unstructured mesh environment. The calibration process was carried out by comparison with available historical data from a nearby hydrologic gauging station. Although the model was able to effectively predict overall flow processes in the region, its spatial characteristics and limitations related to the pressure conditions did not allow for an accurate representation of the erosion processes occurring over specific bank areas and dwellings; notably, a significant helical flow has been observed through the meander. Furthermore, the rapidly changing channel cross-section, a consequence of severe erosion, has hindered the model's ability to provide decision makers with a valid, up-to-date planning tool.
Keywords: erosion, finite volume method, flow dynamics, flow modelling, meander
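To make the 1D hydrostatic simplification concrete, here is a small sketch using Manning's equation, the canonical 1D open-channel relation; it is an illustration only, since the study itself used a 2DH Iber model. The reach width and slope come from the abstract, while the depth values and roughness coefficient n are assumed.

```python
# Sketch: Manning's equation Q = (1/n) * A * R^(2/3) * sqrt(S) for a wide
# rectangular section, using the Magdalena reach figures quoted in the abstract.
import math

def manning_discharge(width_m, depth_m, slope, n=0.030):
    """Discharge (m^3/s) through a rectangular section; n is an assumed roughness."""
    area = width_m * depth_m                 # flow area A
    perimeter = width_m + 2.0 * depth_m      # wetted perimeter P
    R = area / perimeter                     # hydraulic radius
    return (1.0 / n) * area * R ** (2.0 / 3.0) * math.sqrt(slope)

for depth in (2.0, 4.0, 6.0):                # assumed depths (m)
    Q = manning_discharge(275.0, depth, 0.0024)
    print(f'depth = {depth:.1f} m  ->  Q ~ {Q:,.0f} m^3/s')
```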
Procedia PDF Downloads 319
117 Computerized Adaptive Testing for Ipsative Tests with Multidimensional Pairwise-Comparison Items
Authors: Wen-Chung Wang, Xue-Lan Qiu
Abstract:
Ipsative tests have been widely used in vocational and career counseling (e.g., the Jackson Vocational Interest Survey). Pairwise-comparison items are a typical item format for ipsative tests. When the two statements in a pairwise-comparison item measure two different constructs, the item is referred to as a multidimensional pairwise-comparison (MPC) item. A typical MPC item would be: Which activity do you prefer? (A) playing with young children, or (B) working with tools and machines. These two statements aim at the constructs of social interest and investigative interest, respectively. Recently, new item response theory (IRT) models for ipsative tests with MPC items have been developed. Among them, the Rasch ipsative model (RIM) deserves special attention because it has good measurement properties. In the RIM, the log-odds of preferring statement A to statement B are defined as a competition between two parts: the sum of the person's level on the latent trait measured by statement A and statement A's utility, against the sum of the person's level on the latent trait measured by statement B and statement B's utility. The RIM has been extended to polytomous responses, such as strongly preferring statement A, preferring statement A, preferring statement B, and strongly preferring statement B. To promote these new initiatives, in this study we developed computerized adaptive testing algorithms for MPC items and evaluated their performance using simulations and two real tests. Both the RIM and its polytomous extension are multidimensional, which calls for multidimensional computerized adaptive testing (MCAT). A particular issue in MCAT for MPC items is within-person statement exposure (WPSE); that is, a respondent may keep seeing the same statement (e.g., 'my life is empty') many times, which is certainly annoying. In this study, we implemented two methods to control the WPSE rate. In the first control method, items are frozen when their statements have been administered more than a prespecified number of times; in the second, a random component is added to control the contribution of the information at different stages of the MCAT (see the sketch below). The second control method was found to outperform the first in our simulation studies. In addition, we investigated four item selection methods: (a) random selection (as a baseline), (b) the maximum Fisher information method without WPSE control, (c) the maximum Fisher information method with the first control method, and (d) the maximum Fisher information method with the second control method. These four methods were applied to two real tests: a work survey with dichotomous MPC items, and a career interest survey with polytomous MPC items. There were three dependent variables: the bias and root mean square error across person measures, and measurement efficiency, defined as the number of items needed to achieve the same degree of test reliability. Both applications indicated that the proposed MCAT algorithms were successful, with no loss in measurement efficiency when the control methods were implemented; among the four methods, the last performed best.
Keywords: computerized adaptive testing, ipsative tests, item response theory, pairwise comparison
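The dichotomous RIM and the first exposure-control rule can be sketched briefly. In the snippet below, the item dictionaries, trait values, and exposure cap are illustrative assumptions; only the logit structure (trait plus utility for A against trait plus utility for B) and the freeze-on-overexposure rule follow the description above.

```python
# Sketch: RIM choice probability and maximum-information item selection with the
# first WPSE control method (freeze statements administered too many times).
import math

def rim_prob_prefer_A(theta_a, theta_b, util_A, util_B):
    """P(prefer A) under the RIM: logit = (theta_a + util_A) - (theta_b + util_B)."""
    logit = (theta_a + util_A) - (theta_b + util_B)
    return 1.0 / (1.0 + math.exp(-logit))

def select_item(items, theta, exposure, max_uses=3):
    """Return the administrable item with maximal Fisher information p * (1 - p)."""
    best, best_info = None, -1.0
    for item in items:
        # WPSE control: skip items whose statements have hit the exposure cap
        if (exposure.get(item['stmt_A'], 0) >= max_uses or
                exposure.get(item['stmt_B'], 0) >= max_uses):
            continue
        p = rim_prob_prefer_A(theta[item['dim_A']], theta[item['dim_B']],
                              item['util_A'], item['util_B'])
        info = p * (1.0 - p)                  # binary-response Fisher information
        if info > best_info:
            best, best_info = item, info
    return best

# Illustrative usage: two MPC items over the social/investigative dimensions
items = [
    {'stmt_A': 'children', 'stmt_B': 'tools', 'dim_A': 'social',
     'dim_B': 'investigative', 'util_A': 0.4, 'util_B': -0.1},
    {'stmt_A': 'teaching', 'stmt_B': 'lab', 'dim_A': 'social',
     'dim_B': 'investigative', 'util_A': -0.2, 'util_B': 0.3},
]
theta = {'social': 0.2, 'investigative': -0.5}   # interim trait estimates (assumed)
print(select_item(items, theta, exposure={}))
```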
Procedia PDF Downloads 246
116 Evaluating the Knowledge and Skill of Final Year Pharmacy Students in Maternal and Child Health at a University in South Africa
Authors: E. O. Egieyeh, N. Butler, R. Coetzee, M. Van Huyssteen, A. Bheekie
Abstract:
Background: High rates of maternal and child mortality are a global concern. Nationally, maternal and child health constitutes one part of South Africa's quadruple burden of disease. Pharmacists have a crucial role in maternal and child health care delivery and as such should be equipped with the knowledge and skill required to contribute to maternal and child well-being. The International Pharmaceutical Federation's statement of policy (2013) outlines pharmacist-led interventions in accordance with the World Health Organisation's interventions in maternal, new-born, and child health care. The South African Pharmacy Council's guideline on Good Pharmacy Practice (2010) also stipulates the minimum standards required to participate in reproductive, maternal, and child care. Pharmacy schools are obliged to train pharmacy students to meet the priority health needs of the population so that graduates are 'fit for purpose'. The purpose of this study was to evaluate the knowledge and skill of final year pharmacy students at a university in South Africa, to determine their preparedness to contribute effectively to maternal and child health care. Method: A quantitative, descriptive, non-randomized baseline study was conducted among the final year students at the School of Pharmacy. Data were collected using a questionnaire designed in sections to assess knowledge of contraception and of maternal and child health, directed at the primary care level and framed within the scope of practice required of an entry-level generalist pharmacist. Participants' skill in infant growth assessment was assessed in a section of the questionnaire in a written format. Participants ticked the topics they had been exposed to on a curriculum content assessment tool, which was not graded. A pilot study examined the clarity and suitability of the question items and the time needed to complete the questionnaire. A score of 50% in each section of the questionnaire indicated a pass. The questionnaire was delivered in a campus lecture venue. Results: Of the 102 students in the final year, 53 (52%) consented to participate in the study. Only 13.2% of participants scored above 50% in every section. Forty-five (85%) participants scored above 50% in the contraception section, while 40 (75%) scored less than 50% in the skills assessment. Less than half (45.3%) of the participants had a total score above 50%. Being a parent or working part-time as a pharmacist's assistant did not have any influence on the performance of the participants. Evaluation of participants' curriculum content exposure showed differences in exposure to the various topics, with contraception teaching receiving the most recognition. Conclusion: The maternal and child health curriculum content should be reviewed at the university to enhance the knowledge and skill of pharmacy graduates.
Keywords: final year pharmacy students, knowledge and skill, maternal and child health, South Africa
Procedia PDF Downloads 152
115 The Affordances and Challenges of Online Learning and Teaching for Secondary School Students
Authors: Hahido Samaras
Abstract:
In many cases, and especially with the pandemic playing a major role in fast-tracking the growth of the digital industry, online learning has become a necessity or even a standard educational model, reliably overcoming barriers such as location, time, and cost, and frequently being combined with a face-to-face format (e.g., in blended learning). This being the case, students in many parts of the world, as well as their parents, will increasingly need to become aware of the pros and cons of online versus traditional courses. This fast-growing mode of learning, accelerated during the years of the pandemic, presents an abundance of exciting options. It is especially well matched to the large number of secondary school students in remote places of the world where access to stimulating educational settings and a variety of learning alternatives is scarce, adding advantages such as flexibility, affordability, engagement, flow, and personalization of the learning experience. However, online learning can also present several challenges, such as a lack of student motivation, fewer social interactions in natural settings, demands on digital literacy, and technical issues, to name a few. Educational researchers will therefore need to conduct further studies focusing on the benefits and weaknesses of online versus traditional learning, while instructional designers propose ways of enhancing student motivation and engagement in virtual environments. Similarly, teachers will be required to become ever more technology-capable, while also developing their knowledge of their students' particular characteristics and needs so as to match them with the affordances the technology offers. And, of course, schools, education programs, and policymakers will have to invest in powerful tools and advanced courses for online instruction. By developing digital courses that incorporate intentional opportunities for community-building and interaction in the learning environment, and by taking care to include design principles and strategies that align learning outcomes with assignments, activities, and assessment practices, rewarding academic experiences can be created for all students. This paper raises various issues regarding the effectiveness of online learning for students by reviewing a large number of research studies on the usefulness and impact of online learning following the COVID-19-induced shift to digital education. It also discusses what students, teachers, decision-makers, and parents have reported about this mode of learning to date. Best practices are proposed for parties involved in the development of online learning materials, particularly for secondary school students, as educators and developers need to be increasingly concerned about the impact of virtual learning environments on student learning and wellbeing.
Keywords: blended learning, online learning, secondary schools, virtual environments
Procedia PDF Downloads 100