Search results for: ontology engineering methodology
7381 Applications of Digital Tools, Satellite Images and Geographic Information Systems in Data Collection of Greenhouses in Guatemala
Authors: Maria A. Castillo H., Andres R. Leandro, Jose F. Bienvenido B.
Abstract:
During the last 20 years, the globalization of economies, population growth, and the increase in the consumption of fresh agricultural products have generated greater demand for ornamentals, flowers, fresh fruits, and vegetables, mainly from tropical areas. This market situation has demanded greater competitiveness and control over production, with more efficient protected-agriculture technologies, which provide greater productivity and make it possible to guarantee the required quality and quantity in a constant and sustainable way. Guatemala, located in the north of Central America, is one of the largest exporters of agricultural products in the region and exports fresh vegetables, flowers, fruits, ornamental plants, and foliage, most of which are grown in greenhouses. Although there are no official agricultural statistics on greenhouse production, several theses and congress reports have presented consistent estimates. A wide range of protection structures and roofing materials are used, from the most basic and simple ones for rain control to highly technical and automated structures connected with remote sensors for monitoring and control of crops. With this breadth of technological models, it is necessary to analyze georeferenced data related to the cultivated area, the different existing models, and the covering materials, integrated with altitude, climate, and soil data. The georeferenced registration of production units, data collection with digital tools, the use of satellite images, and geographic information systems (GIS) provide reliable tools for elaborating more complete, agile, and dynamic information maps. This study details a proposed methodology for gathering georeferenced data on high protection structures (greenhouses) in Guatemala, structured in four phases: diagnosis of available information, definition of the geographic frame, selection of satellite images, and integration with a geographic information system (GIS).
It takes particular account of the current lack of complete data needed for a reliable decision-making system; this gap is addressed through the proposed methodology. A summary of the results is presented for each phase, and finally, an evaluation with some improvements and tentative recommendations for further research is added. The main contribution of this study is to propose a methodology that reduces the gap in georeferenced data on protected agriculture in this specific area, where data is not generally available, and to provide data of better quality, traceability, accuracy, and certainty for strategic agricultural decision-making, applicable to other crops, production models, and similar/neighboring geographic areas.
Keywords: greenhouses, protected agriculture, GIS, Guatemala, satellite image, digital tools, precision agriculture
Procedia PDF Downloads 194
7380 Orbit Determination from Two Position Vectors Using Finite Difference Method
Authors: Akhilesh Kumar, Sathyanarayan G., Nirmala S.
Abstract:
A novel approach is developed to determine the orbits of satellites/space objects. Orbit determination is treated as a boundary value problem and has been solved using the finite difference method (FDM). Only the positions of the satellites/space objects are known at two end times, taken as boundary conditions. The finite difference technique is used to calculate the orbit between the end times. In this approach, the governing equation is the satellite's equation of motion with a perturbed acceleration. Using the finite difference method, the governing equations and boundary conditions are discretized, and the resulting system of algebraic equations is solved using the Tri-Diagonal Matrix Algorithm (TDMA) until convergence is achieved. The methodology has been tested and evaluated using all GPS satellite orbits from the National Geospatial-Intelligence Agency (NGA) precise product for DOY 125, 2023. Twelve two-hour sets were considered, with only the positions at the end times of each set taken as boundary conditions. The algorithm was applied to all GPS satellites, and the results achieved using FDM were compared with the NGA precise orbits. The maximum RSS error is 0.48 [m] for position and 0.43 [mm/sec] for velocity. The algorithm was also applied to the IRNSS satellites for DOY 220, 2023, giving maximum RSS errors of 0.49 [m] for position and 0.28 [mm/sec] for velocity. Next, a simulation was done for a highly elliptical orbit for DOY 63, 2023, over a duration of 6 hours. The RSS of the difference in position is 0.92 [m] and in velocity 1.58 [mm/sec] for orbital speeds above 5 km/sec, whereas the RSS of the difference in position is 0.13 [m] and in velocity 0.12 [mm/sec] for orbital speeds below 5 km/sec. The results show that the newly created method is reliable and accurate.
Further applications of the developed methodology include missile and spacecraft targeting, orbit design (mission planning), space rendezvous and interception, space debris correlation, and navigation solutions.
Keywords: finite difference method, grid generation, NavIC system, orbit perturbation
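The discretized boundary value problem described in this abstract reduces, at each iteration, to a tridiagonal linear system. A minimal sketch of the Tri-Diagonal Matrix Algorithm (the Thomas algorithm) is given below; the function name and the example system are illustrative only, not the paper's implementation.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d, where a is the sub-diagonal,
    b the main diagonal, c the super-diagonal (a[0] and c[-1] unused)."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # forward sweep: eliminate the sub-diagonal
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8] has solution [1,2,3]
x = thomas_solve([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [4.0, 8.0, 8.0])
```

The algorithm runs in O(n) time and memory, which is what makes a dense finite-difference discretization of the whole arc tractable.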
Procedia PDF Downloads 84
7379 Hydroinformatics of Smart Cities: Real-Time Water Quality Prediction Model Using a Hybrid Approach
Authors: Elisa Coraggio, Dawei Han, Weiru Liu, Theo Tryfonas
Abstract:
Water is one of the most important resources for human society. The world is currently undergoing a wave of urban growth, and pollution problems have a great impact. Monitoring water quality is a key task for the future of the environment and the human species. In recent times, researchers have been using Smart City technologies to try to mitigate the problems generated by population growth in urban areas. The availability of huge amounts of data collected by a pervasive urban IoT can increase the transparency of decision making. Several services have already been implemented in Smart Cities, and more will be involved in the future. Water quality monitoring can successfully be implemented in the urban IoT: the combination of water quality sensors, cloud computing, smart city infrastructure, and IoT technology can lead to a bright future for environmental monitoring. In the past decades, much effort has been put into monitoring and predicting water quality using traditional approaches based on manual collection and laboratory-based analysis, which are slow and laborious. The present study proposes a methodology for implementing a water quality prediction model using artificial intelligence techniques and compares the results obtained with different algorithms. Furthermore, a 3D numerical model will be created using the software D-Water Quality, and simulation results will be used as a training dataset for the artificial intelligence algorithm. This study derives the methodology and demonstrates its implementation based on information and data collected at the floating harbour in the city of Bristol (UK). The city of Bristol is blessed with the Bristol-Is-Open infrastructure, which includes a Wi-Fi network and virtual machines; it was also named the UK's smartest city in 2017.
Keywords: artificial intelligence, hydroinformatics, numerical modelling, smart cities, water quality
Procedia PDF Downloads 187
7378 Human Factors Integration of Chemical, Biological, Radiological and Nuclear Response: Systems and Technologies
Authors: Graham Hancox, Saydia Razak, Sue Hignett, Jo Barnes, Jyri Silmari, Florian Kading
Abstract:
In the event of a Chemical, Biological, Radiological and Nuclear (CBRN) incident, rapidly gaining situational awareness is of paramount importance, and advanced technologies have an important role to play in improving detection, identification, monitoring (DIM) and patient tracking. Understanding how these advanced technologies can fit into current response systems is essential to ensure they are optimally designed, usable, and meet end-users' needs. For this reason, Human Factors (Ergonomics) methods have been used within an EU Horizon 2020 project (TOXI-Triage), firstly to describe (map) the hierarchical structure of a CBRN response with adapted Accident Map (AcciMap) methodology. Secondly, Hierarchical Task Analysis (HTA) has been used to describe and review the sequence of steps (sub-tasks) in a CBRN scenario response as a task system. HTA methodology was then used to map one advanced technology, 'Tag and Trace', which tags an element (people, samples, and equipment) with a Near Field Communication (NFC) chip in the Hot Zone to allow tracing (monitoring) of, for example, casualty progress through the response. This HTA mapping of the Tag and Trace system showed how the provider envisaged the technology being used, allowing for review and fit with current CBRN response systems. These methodologies have been found to be very effective in promoting and supporting a dialogue between end-users and technology providers. The Human Factors methods have given clear diagrammatic (visual) representations of how providers see their technology being used and how end-users would actually use it in the field, allowing for a more user-centred approach to the design process. For CBRN events, usability is critical, as sub-optimal design of technology could add to responders' workload in what is already a chaotic, ambiguous, and safety-critical environment.
Keywords: AcciMap, CBRN, ergonomics, hierarchical task analysis, human factors
Procedia PDF Downloads 222
7377 The Touristic Development of the Archaeological and Heritage Areas in Alexandria City, Egypt
Authors: Salma I. Dwidar, Amal A. Abdelsattar
Abstract:
Alexandria is one of the greatest cities in the world. It hosted different civilizations throughout the ages due to its special geographical location and climate, which left many archaeological areas of great heritage (Ptolemaic, Greek, Roman, especially the sunken monuments, Coptic, Islamic, and finally modern). Alexandria also contains areas with different patterns of urban planning, both Hellenistic and compact, which accounts for the diversity in its planning. Despite the magnitude of this city, which contains all the elements of tourism, it has not been properly included in the tourism map of Egypt compared with similar cities in Egypt. This paper discusses the importance of the heritage areas in Alexandria and the relationship between heritage areas and modern buildings. It highlights the absence of a methodology for dealing with heritage areas as touristic areas. The paper also aims to develop multiple touristic routes to visit archaeological areas and other sights of significance in Alexandria. The research methodology is divided into two main frameworks. The first framework is a historical study of the urban development of Alexandria and the most important remaining monuments throughout the ages, as well as an analytical study of the sunken monuments and their importance in increasing the rate of tourism; it also covers the importance of the Library of Alexandria and its effect on the international profile of the city. The second framework focuses on the proposal of tourism routes to visit the heritage areas, archaeological monuments, sunken monuments, and sights of Alexandria. The study concludes with the proposal of three tourism routes. The first route, the longest, passes by all the famous monuments of the city as well as its modern sights. The second route passes through the heritage areas, the sunken monuments, and the Library of Alexandria. The third route includes the sunken monuments and the Library of Alexandria.
These three tourism routes will ensure the touristic development of the city, which leads to economic growth for the city and the country.
Keywords: archaeological buildings, heritage buildings, heritage tourism, planning of Islamic cities
Procedia PDF Downloads 142
7376 Study of Objectivity, Reliability and Validity of Pedagogical Diagnostic Parameters Introduced in the Framework of a Specific Research
Authors: Emiliya Tsankova, Genoveva Zlateva, Violeta Kostadinova
Abstract:
The challenges modern education faces undoubtedly require reforms and innovations aimed at the reconceptualization of existing educational strategies and the introduction of new concepts and novel techniques and technologies related to recasting the aims of education and remodeling its content and methodology, which would guarantee the alignment of our education with basic European values. Aim: The aim of the current research is the development of a didactic technology for assessing the applicability and efficacy of game techniques in pedagogic practice, calibrated to specific content and the age specificity of learners, as well as for evaluating the efficacy of such approaches in facilitating the acquisition of biological knowledge at a higher theoretical level. Results: In this research, we examine the objectivity, reliability, and validity of two newly introduced diagnostic parameters for assessing the durability of acquired knowledge. A pedagogic experiment has been carried out to verify the hypothesis that the introduction of game techniques in biological education leads to an increase in the quantity, quality, and durability of the knowledge acquired by students. To monitor the effect of the game-based pedagogical technique on the durability of the acquired knowledge, a test-based examination was administered to students from a control group (CG) and an experimental group (EG) on the same content after a six-month period. The analysis is based on: 1. A study of the statistical significance of the differences between the CG and EG tests applied after a six-month period, which, however, is not indicative of the presence or absence of a marked effect from the applied pedagogic technique when the entry levels of the two groups differ.
2. For a more reliable comparison, independent of each group's entry level, another parameter, an "indicator of the efficacy of game techniques for the durability of knowledge", has been used to assess the achievement results and durability of this methodology of education. Monitoring the studied parameters as they unfold dynamically in different age groups of learners unquestionably reveals a positive effect of introducing game techniques in education with respect to the durability and permanence of acquired knowledge. Methods: The current research employed the following battery of research and diagnostic methods and techniques: theoretical analysis and synthesis; a pedagogical experiment; a questionnaire; didactic testing; and mathematical and statistical methods. The data obtained have been used for the qualitative and quantitative analysis of the results, which reflect the efficacy of the applied methodology. Conclusion: The didactic model of the parameters researched in the framework of a specific study of pedagogic diagnostics is based on a general, interdisciplinary approach. The enhanced durability of the acquired knowledge proves the transition of that knowledge from short-term into long-term memory in pupils and students, which justifies the conclusion that didactic play has beneficial effects on learners' cognitive skills. The innovations in teaching enhance motivation, creativity, and independent cognitive activity in the process of acquiring the material taught. The innovative methods allow for untraditional means of assessing the level of knowledge acquisition. This makes possible the timely discovery of knowledge gaps and the introduction of compensatory techniques, which in turn leads to deeper and more durable acquisition of knowledge.
Keywords: objectivity, reliability, validity, pedagogical diagnostic parameters
Procedia PDF Downloads 393
7375 Probabilistic Building Life-Cycle Planning as a Strategy for Sustainability
Authors: Rui Calejo Rodrigues
Abstract:
Building refurbishing and maintenance is a major area of knowledge ultimately dispensed according to user/occupant criteria. Optimizing the service life of a building requires special background, as it is one of those concepts that needs proficiency to be implemented. ISO 15686-2 (Buildings and constructed assets - Service life planning - Part 2: Service life prediction procedures) states a factorial method based on deterministic data for building components' life spans. A deterministic approach has major consequences because users/occupants are not in a position to recognize the end of a component's life span and so simply act on deterministic periods; as a result, costly and resource-consuming solutions do not meet global targets of planet sustainability. If the estimated 2 thousand million conventional buildings in the world were submitted to a probabilistic method for service life planning rather than a deterministic one, an immense amount of resources would be saved. Since 1989, the research team, today the CEES (Center for Building in Service Studies), has developed a methodology based on the Monte Carlo method for a probabilistic approach to the life span of building components, costs, and service life care time spans. The research question deals with the importance of a probabilistic approach to building life planning compared with deterministic methods. The mathematical model developed for the probabilistic life-span approach to buildings is presented, and experimental data are obtained for comparison with deterministic data. Assuming that a building's life cycle depends largely on component replacement, this methodology allows conclusions on the global impact of fixed-replacement methodologies such as those resulting from the use of deterministic models.
Major conclusions based on conventional building estimates are presented and evaluated under a sustainability perspective.
Keywords: building components life cycle, building maintenance, building sustainability, Monte Carlo simulation
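The contrast between deterministic and probabilistic replacement planning described in this abstract can be illustrated with a minimal Monte Carlo simulation. The normally distributed service life, the function name, and the numbers below are illustrative assumptions, not the CEES model itself.

```python
import random

def simulate_replacement_cost(mean_life, sd_life, unit_cost,
                              horizon_years, n_runs=10000, seed=42):
    """Monte Carlo estimate of the expected replacement cost of one building
    component over a planning horizon, assuming (illustratively) a normally
    distributed service life truncated at 1 year."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        t, cost = 0.0, 0.0
        while True:
            life = max(rng.gauss(mean_life, sd_life), 1.0)
            t += life  # time of the component's end of life
            if t >= horizon_years:
                break  # component outlives the remaining horizon
            cost += unit_cost  # replacement needed inside the horizon
        total += cost
    return total / n_runs

# With sd_life = 0 this collapses to the deterministic plan:
# a 10-year component over a 50-year horizon costs 4 replacements.
deterministic = simulate_replacement_cost(10.0, 0.0, 1.0, 50.0, n_runs=10)
probabilistic = simulate_replacement_cost(10.0, 2.0, 1.0, 50.0)
```

Comparing `deterministic` with `probabilistic` shows how life-span uncertainty shifts the expected number of replacements, which is the kind of difference a fixed-period plan cannot capture.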
Procedia PDF Downloads 205
7374 Using the Ecological Analysis Method to Justify the Environmental Feasibility of Biohydrogen Production from Cassava Wastewater Biogas
Authors: Jonni Guiller Madeira, Angel Sanchez Delgado, Ronney Mancebo Boloy
Abstract:
In recent years, the use of bioenergy has become a good alternative for reducing the emission of polluting gases. Several Brazilian and foreign companies are conducting studies related to waste management as an essential tool in the search for energy efficiency, taking the ecological aspect into consideration as well. Brazil is one of the largest cassava producers in the world, and cassava sub-products are the food base of millions of Brazilians. The repertoire of results about the ecological impact of producing biohydrogen, by steam reforming, from cassava wastewater biogas is very limited because, in general, this commodity is more common in underdeveloped countries. This hydrogen, produced from cassava wastewater, appears as an alternative to fossil fuels since it comes from a low-cost carbon source. This paper evaluates the environmental impact of biohydrogen production, by steam reforming, from cassava wastewater biogas. The ecological efficiency methodology developed by Cardu and Baica was used as a benchmark in this study. The methodology mainly assesses the emissions of equivalent carbon dioxide (CO₂, SOₓ, CH₄, and particulate matter). As a result, environmental parameters such as equivalent carbon dioxide emissions, the pollution indicator, and the ecological efficiency are evaluated, given their importance to energy production. The average values of the environmental parameters among different biogas compositions (different concentrations of methane) were calculated; the average pollution indicator was 10.11 kgCO₂e/kgH₂, with an average ecological efficiency of 93.37%. In conclusion, bioenergy production using biohydrogen from a cassava wastewater treatment plant is a good option from the environmental feasibility point of view.
This can be justified by determining and comparing the environmental parameters of hydrogen production via steam reforming from different types of fuels.
Keywords: biohydrogen, ecological efficiency, cassava, pollution indicator
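The Cardu-Baica indicators named in this abstract are often quoted in the literature as a pollution indicator Πg = (CO₂)e / LHV and an ecological efficiency ε = [0.204 · LHV/(LHV + Πg) · ln(135 − Πg)]^0.5, where (CO₂)e weights SOx, NOx, and particulate matter into a CO₂ equivalent. The sketch below implements that commonly quoted form; the equivalence weights, units, and example values are assumptions for illustration and do not reproduce the paper's exact calculation.

```python
import math

def pollution_indicator(co2, sox, nox, pm, lhv):
    """Pollution indicator Pi_g [kg CO2e / MJ]: specific emissions
    [kg per kg fuel] weighted into a CO2 equivalent (weights 80 for SOx,
    50 for NOx, 67 for PM are the commonly quoted assumptions), divided
    by the fuel's lower heating value LHV [MJ/kg]."""
    co2e = co2 + 80.0 * sox + 50.0 * nox + 67.0 * pm
    return co2e / lhv

def ecological_efficiency(pi_g, lhv):
    """Cardu-Baica ecological efficiency (dimensionless); normalized so a
    fuel with zero equivalent emissions gives approximately 1.0."""
    return math.sqrt(0.204 * lhv / (lhv + pi_g) * math.log(135.0 - pi_g))

# Illustrative check: a perfectly clean fuel scores ~1.0 (100%).
eff_clean = ecological_efficiency(0.0, 120.0)  # LHV of H2 ~ 120 MJ/kg
```

The 0.204 constant normalizes the expression: with Πg = 0, ε = (0.204 · ln 135)^0.5 ≈ 1, so the efficiency reads directly as a fraction of a pollution-free fuel.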
Procedia PDF Downloads 199
7373 Application of Systems Engineering Tools and Methods to Improve Healthcare Delivery Inside the Emergency Department of a Mid-Size Hospital
Authors: Mohamed Elshal, Hazim El-Mounayri, Omar El-Mounayri
Abstract:
An emergency department (ED) can be considered a complex system of interacting entities: patients, human resources, software and hardware systems, interfaces, and other systems. This paper presents research on implementing a detailed Systems Engineering (SE) approach in a mid-size hospital in central Indiana. The methodology will be applied by the Initiative for Product Lifecycle Innovation (IPLI) at Indiana University to study and solve the crowding problem, with the aim of increasing patient throughput and enhancing the treatment experience; therefore, the nature of the crowding problem needs to be investigated along with all the other problems that lead to it. The SE methods presented are workflow analysis and systems modeling, where SE tools such as Microsoft Visio are used to construct a group of system-level diagrams that demonstrate: patient workflow, documentation and communication flow, data systems, human resources workflow and requirements, the leadership involved, and integration between the ED's different systems. Finally, the ultimate goal is to manage the process through implementation of an executable model using commercial software tools, which will identify bottlenecks, improve documentation flow, and help make the process faster.
Keywords: systems modeling, ED operation, workflow modeling, systems analysis
Procedia PDF Downloads 181
7372 The Impact of Rising Architectural Façade in Improving Terms of the Physical Urban Ambience Inside the Free Space for Urban Fabric - the Street- Case Study the City of Biskra
Authors: Rami Qaoud, Alkama Djamal
Abstract:
This research asks how rising architectural façades improve the physical urban ambience inside the free space of the urban fabric, which can be seen as bringing life, cultural values, and civilization back to these cities. This is the theme of this study. We studied the relationship between the empty and built-up portions of the urban fabric in terms of construction density and the architectural elevation of façades relative to the street. Within this framework, the research methodology adopted was a field study, conducted according to three types of street geometry (H≥2W, H=W, H≤0.5W), in which values of the physical ambience were measured along three main axes. The first axis is thermal ambience, for which air temperature, relative humidity, wind speed, and surface temperatures (outer wall and ground) were collected. The second axis is visual ambience, for which natural lighting levels were recorded during the daytime. The third axis is acoustic ambience, for which sound values were taken throughout the day. The measurements lasted three consecutive days at six measuring stations, one station for each type of street geometry in each of two different streets. By comparing the obtained values, we noticed differences between the three types of street geometry: air temperature differed by up to 4°C; the duration of direct natural lighting differed by up to six hours; and sound levels differed by up to 15 dB.
This difference in values indicates the impact of rising architectural façades in improving the physical urban ambience within the free space (the street) of the urban fabric.
Keywords: street, physical urban ambience, rising architectural façade, urban fabric
Procedia PDF Downloads 289
7371 Investigation of a Natural Convection Heat Sink for LEDs Based on Micro Heat Pipe Array-Rectangular Channel
Authors: Wei Wang, Yaohua Zhao, Yanhua Diao
Abstract:
The exponential growth of the lighting industry has rendered traditional thermal technologies inadequate for addressing the thermal management challenges inherent to high-power light-emitting diode (LED) technology. To enhance the thermal management of LEDs, this study proposes a heat sink configuration that integrates a micro heat pipe array (MHPA), based on phase change technology, with rectangular channels. The thermal performance of the heat sink was evaluated through experimental testing, and the results demonstrated that at input powers of 100 W, 150 W, and 200 W, the temperatures of the LED substrate were 47.64℃, 56.78℃, and 69.06℃, respectively. Additionally, the maximum temperature difference of the MHPA in the vertical direction was observed to be 0.32℃, 0.30℃, and 0.30℃, respectively. The results demonstrate that the heat sink not only effectively dissipates the heat generated by the LEDs but also exhibits excellent temperature uniformity. In light of the experimental measurements, a corresponding numerical model was developed as part of this study. Following model validation, the effect of the heat sink's structural parameters on its heat dissipation efficacy was examined using response surface methodology (RSM). The rectangular channel width, channel height, channel length, number of channel cross-sections, and channel cross-section spacing were selected as input parameters, while the LED substrate temperature and the total mass of the heat sink were taken as the response variables. The responses were then subjected to an analysis of variance (ANOVA), which yielded a regression model that predicts the responses from the input variables. This offers some direction for the design of the heat sink.
Keywords: light-emitting diodes, heat transfer, heat pipe, natural convection, response surface methodology
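At the core of the response surface methodology step described in this abstract is a least-squares fit of a second-order polynomial model to the measured responses. The sketch below shows that fitting step for a single factor via the normal equations; the study's actual model has five factors, and the function name and data here are illustrative only.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of a second-order (response surface) model
    y = b0 + b1*x + b2*x^2, solved via the normal equations."""
    n = len(xs)
    # design matrix columns: [1, x, x^2]
    X = [[1.0, x, x * x] for x in xs]
    # normal equations A b = r, with A = X^T X and r = X^T y
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(3)]
         for i in range(3)]
    r = [sum(X[k][i] * ys[k] for k in range(n)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda rw: abs(A[rw][col]))
        A[col], A[piv] = A[piv], A[col]
        r[col], r[piv] = r[piv], r[col]
        for rw in range(col + 1, 3):
            f = A[rw][col] / A[col][col]
            for j in range(col, 3):
                A[rw][j] -= f * A[col][j]
            r[rw] -= f * r[col]
    # back substitution
    b = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        b[i] = (r[i] - sum(A[i][j] * b[j] for j in range(i + 1, 3))) / A[i][i]
    return b

# Fitting exactly quadratic data y = 2 + 3x + x^2 recovers the coefficients.
coeffs = fit_quadratic([0.0, 1.0, 2.0, 3.0, 4.0], [2.0, 6.0, 12.0, 20.0, 30.0])
```

In a full RSM study the same fit is done over several factors (including interaction terms), and ANOVA then tests which coefficients are statistically significant.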
Procedia PDF Downloads 35
7370 Organizational Innovativeness: Motivation in Employee’s Innovative Work Behaviors
Authors: P. T. Ngan
Abstract:
Purpose: The study aims to answer the question of which motivational conditions have the greatest influence on employees' innovative work behaviors, by investigating the case of SATAMANKULMA/Anya Productions Ky in Kuopio, Finland. Design/methodology: The main methodology was qualitative single-case study research; analysis was conducted with an adapted thematic content analysis procedure applied to empirical material collected through interviews, observation, and document review. Findings: The paper highlights the significance of combining relevant, synergistic extrinsic and intrinsic motivations into the organizational motivation system. The findings show that intrinsic drives are essential for the initiation phases, while extrinsic drives are more important for the implementation phases of innovative work behaviors. The study also offers the IDEA motivation model (Interpersonal relationships and networks, Development opportunities, Economic constituents, and Application supports) as a tool to optimize business performance. Practical limitations/implications: The research was conducted only from the perspective of SATAMANKULMA/Anya Productions Ky, with five interviews, a few observations, and several reviewed documents; further research is required to include other stakeholders, such as customers and partner companies. The study also does not offer statistical validity of the findings; an extensive case study or a qualitative multiple-case study is suggested to compare the findings and indicate whether the IDEA model is relevant in other types of firms. Originality/value: Neither the innovation nor the human resource management field provides a detailed overview of the specific motivational conditions that might be used to stimulate the innovative work behaviors of individual employees. This paper fills that void.
Keywords: employee innovative work behaviors, extrinsic motivation, intrinsic motivation, organizational innovativeness
Procedia PDF Downloads 267
7369 Total Synthesis of Natural Cyclic Depsi Peptides by Convergent SPPS and Macrolactonization Strategy for Anti-Tb Activity
Authors: Katharigatta N. Venugopala, Fernando Albericio, Bander E. Al-Dhubiab, T. Govender
Abstract:
Recent years have witnessed a renaissance in the field of peptides obtained from various natural sources such as bacteria, fungi, plants, seaweeds, vertebrates, and invertebrates, which have been reported to possess various pharmacological properties, including anti-TB, anticancer, antimalarial, anti-inflammatory, anti-HIV, antibacterial, antifungal, and antidiabetic activities. In view of the pharmacological significance of natural peptides, the serious research efforts of many scientific groups and pharmaceutical companies have consequently focused on exploring the possibility of developing their potential analogues as therapeutic agents. Solid-phase and solution-phase peptide synthesis are the two methodologies currently available for the synthesis of natural or synthetic, linear or cyclic depsipeptides. From a synthetic point of view, there is no doubt that the solid-phase methodology has advantages over the solution-phase methodology in terms of simplicity, purity of the compound, and the speed with which peptides can be synthesised. In the present study, the total synthesis, purification, and structural elucidation of analogues of natural anti-TB cyclic depsipeptides such as depsidomycin, the massetolides, and viscosin have been attempted by the solid-phase method using standard Fmoc protocols, followed by off-resin cyclization in solution phase. In the case of depsidomycin, synthesis of the linear peptide entirely on solid phase could not be achieved because of two turn-inducing amino acids in the peptide sequence, but total synthesis was achieved by convergent solid-phase peptide synthesis followed by cyclization in solution phase. The title compounds were obtained in good yields and characterized by NMR and HRMS. Anti-TB results revealed that the most potent title compound exhibited promising activity at 4 µg/mL against H37Rv and 16 µg/mL against MDR strains of tuberculosis.
Keywords: total synthesis, cyclic depsipeptides, anti-TB activity, tuberculosis
Procedia PDF Downloads 62
7368 Real-Time Generative Architecture for Mesh and Texture
Abstract:
In the evolving landscape of physics-based machine learning (PBML), particularly within fluid dynamics and its applications in electromechanical engineering, robot vision, and robot learning, achieving precision and alignment with researchers' specific needs presents a formidable challenge. In response, this work proposes a methodology that integrates neural transformation with a modified smoothed particle hydrodynamics model for generating transformed 3D fluid simulations. This approach is useful for nanoscale science, where the unique and complex behaviors of viscoelastic media demand accurate neurally-transformed simulations for materials understanding and manipulation. In electromechanical engineering, the method enhances the design and functionality of fluid-operated systems, particularly microfluidic devices, contributing to advancements in nanomaterial design, drug delivery systems, and more. The proposed approach also aligns with the principles of PBML, offering advantages such as multi-fluid stylization and consistent particle attribute transfer. This capability is valuable in various fields where the interaction of multiple fluid components is significant. Moreover, the application of neurally-transformed hydrodynamical models extends to manufacturing processes, such as the production of microelectromechanical systems, enhancing efficiency and cost-effectiveness. The system's ability to perform neural transfer on 3D fluid scenes using a deep learning algorithm alongside physical models further adds a layer of flexibility, allowing researchers to tailor simulations to specific needs across scientific and engineering disciplines.
Keywords: physics-based machine learning, robot vision, robot learning, hydrodynamics
Procedia PDF Downloads 66
7367 Selection of Social and Sustainability Criteria for Public Investment Project Evaluation in Developing Countries
Authors: Pintip Vajarothai, Saad Al-Jibouri, Johannes I. M. Halman
Abstract:
Public investment projects are primarily aimed at achieving development strategies to increase national economies of scale and overall improvement in a country. However, experience shows that public projects, particularly in developing countries, struggle or fail to fulfill the immediate needs of local communities. In many cases, the reason is that projects are selected in a subjective manner, and a major part of the problem is related to the evaluation criteria and techniques used. The evaluation process is often based on broad strategic economic effects rather than the real benefits of projects to society, or on the various needs from different levels (e.g. national, regional, local) and conditions (e.g. long-term and short-term requirements). In this paper, an extensive literature review of the types of criteria used in the past by various researchers in the project evaluation and selection process is carried out, and the effectiveness of such criteria and techniques is discussed. The paper proposes substitute social and project sustainability criteria to improve the conditions of local people, and in particular the disadvantaged groups of the communities. Furthermore, it puts forward a way of modelling the interaction between the selected criteria and the achievement of the social goals of the affected community groups. The described work is part of developing a broader decision model for public investment project selection by integrating various aspects and techniques into a practical methodology. The paper uses Thailand as a case to review which evaluation techniques are currently used and how, and how to improve the project evaluation and selection process related to social and sustainability issues in the country. The paper also uses an example to demonstrate how to test the feasibility of various criteria and how to model the interaction between projects and communities.
The proposed model could be applied to other developing and developed countries in the project evaluation and selection process to improve its effectiveness in the long run.
Keywords: evaluation criteria, developing countries, public investment, project selection methodology
Procedia PDF Downloads 276
7366 Standardization of a Methodology for Quantification of Antimicrobials Used for the Treatment of Multi-Resistant Bacteria Using Two Types of Biosensors and Production of Anti-Antimicrobial Antibodies
Authors: Garzon V., Bustos R., Salvador J. P., Marco M. P., Pinacho D. G.
Abstract:
Bacterial resistance to antimicrobial treatment has increased significantly in recent years, making it a public health problem. Large numbers of bacteria are resistant to all or nearly all known antimicrobials, creating the need for the development of new types of antimicrobials or the use of "last line" antimicrobial drug therapies for the treatment of multi-resistant bacteria. Some of the chemical groups of antimicrobials most used for the treatment of infections caused by multiresistant bacteria in the clinic are glycopeptides (vancomycin), polymyxins (colistin), lipopeptides (daptomycin), and carbapenems (meropenem), molecules that require therapeutic drug monitoring (TDM). Due to the above, a nanobiotechnology-based methodology built on optical and electrochemical biosensors is being developed, which allows the evaluation of the plasma levels of antimicrobials such as glycopeptides, polymyxins, lipopeptides, and carbapenems quickly, at low cost, and with high specificity and sensitivity, and which can be implemented in the future in public and private hospitals. For this, the project was divided into four steps: i) design of specific anti-drug antibodies, produced in rabbits for each of the types of antimicrobials, evaluating the results by means of an immunoassay analysis (ELISA); ii) quantification by means of an electrochemical biosensor that allows quantification of the reference antimicrobials with high sensitivity and selectivity; iii) comparison of antimicrobial quantification with an optical-type biosensor; iv) validation of the methodologies used with each biosensor by means of an immunoassay. The results show that it is possible to quantify antibiotics by means of the optical and electrochemical biosensors at concentrations of, on average, 1,000 ng/mL, the antibodies being sensitive and specific for each of the antibiotic molecules; these results were compared with immunoassays and HPLC chromatography.
This contributes to the safe use of these drugs commonly used in clinical practice and of new antimicrobial drugs.
Keywords: antibiotics, electrochemical biosensor, optical biosensor, therapeutic drug monitoring
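The biosensor readouts described above are ultimately mapped to drug concentrations through a calibration curve. A minimal sketch of that step, using the four-parameter logistic (4PL) model commonly fitted to immunoassay data, is shown below; the parameter values are invented for illustration and are not taken from the study.

```python
# Sketch of a 4PL immunoassay calibration and its inverse, used to turn a
# measured biosensor signal into a drug concentration. All parameter
# values are hypothetical, chosen only to place the working range near
# the ~1,000 ng/mL concentrations mentioned in the abstract.

def four_pl(x, a, b, c, d):
    """Signal at concentration x; a = max signal, d = min, c = IC50, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def concentration(y, a, b, c, d):
    """Inverse 4PL: concentration that produces signal y."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Hypothetical calibration with IC50 near 1,000 ng/mL
a, b, c, d = 2.0, 1.2, 1000.0, 0.1
signal = four_pl(500.0, a, b, c, d)
back = concentration(signal, a, b, c, d)   # round-trips to ~500 ng/mL
```

In practice the four parameters would be fitted to standards of known concentration before unknown samples are interpolated.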
Procedia PDF Downloads 82
7365 Black-Hole Dimension: A Distinct Methodology of Understanding Time, Space and Data in Architecture
Authors: Alp Arda
Abstract:
Inspired by Nolan's ‘Interstellar’, this paper delves into speculative architecture, asking, ‘What if an architect could traverse time to study a city?’ It unveils the ‘Black-Hole Dimension,’ a groundbreaking concept that redefines urban identities beyond traditional boundaries. Moving past linear time narratives, this approach draws from the gravitational dynamics of black holes to enrich our understanding of urban and architectural progress. By envisioning cities and structures as influenced by black hole-like forces, it enables an in-depth examination of their evolution through time and space. The Black-Hole Dimension promotes a temporal exploration of architecture, treating spaces as narratives of their current state interwoven with historical layers. It advocates for viewing architectural development as a continuous, interconnected journey molded by cultural, economic, and technological shifts. This approach not only deepens our understanding of urban evolution but also empowers architects and urban planners to create designs that are both adaptable and resilient. Echoing themes from popular culture and science fiction, this methodology integrates the captivating dynamics of time and space into architectural analysis, challenging established design conventions. The Black-Hole Dimension champions a philosophy that welcomes unpredictability and complexity, thereby fostering innovation in design. In essence, the Black-Hole Dimension revolutionizes architectural thought by emphasizing space-time as a fundamental dimension. It reimagines our built environments as vibrant, evolving entities shaped by the relentless forces of time, space, and data. 
This groundbreaking approach heralds a future in architecture where the complexity of reality is acknowledged and embraced, leading to the creation of spaces that are both responsive to their temporal context and resilient against the unfolding tapestry of time.
Keywords: black-hole, timeline, urbanism, space and time, speculative architecture
Procedia PDF Downloads 73
7364 A Mathematical Model to Select Shipbrokers
Authors: Y. Smirlis, G. Koronakos, S. Plitsos
Abstract:
Shipbrokers assist ship companies in chartering or selling and buying vessels, acting as intermediaries between them and the market. They facilitate deals, providing their expertise, negotiating skills, and knowledge about ship market bargains. Their role is very important, as it affects the profitability and market position of a shipping company. Due to their significant contribution, shipping companies have to employ systematic procedures to evaluate shipbrokers' services in order to select the best and, consequently, to achieve the best deals. Towards this, in this paper, we consider shipbrokers as financial service providers, and we formulate the problem of evaluating and selecting shipbrokers' services as a multi-criteria decision making (MCDM) procedure. The proposed methodology comprises a first normalization step to adjust different scales and orientations of the criteria and a second step that includes the mathematical model to evaluate the performance of the shipbrokers' services involved in the assessment. The criteria along which the shipbrokers are assessed may refer to their size and reputation, the potential efficiency of the services, the terms and conditions imposed, the expenses (e.g., commission/brokerage), the expected time to accomplish a chartering or selling/buying task, etc., and according to our modelling approach, these criteria may be assigned different importance. The mathematical programming model performs a comparative assessment and estimates, for the shipbrokers involved in the evaluation, a relative score that ranks them in terms of their potential performance. To illustrate the proposed methodology, we present a case study in which a shipping company evaluates and selects the most suitable among a number of sale and purchase (S&P) brokers.
Acknowledgment: This study is supported by the OptiShip project, implemented within the framework of the National Recovery and Resilience Plan "Greece 2.0" and funded by the European Union – NextGenerationEU programme.
Keywords: shipbrokers, multi-criteria decision making, mathematical programming, service-provider selection
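The two-step procedure described in the abstract (normalisation across differently scaled and oriented criteria, then a score-based ranking) can be sketched as a plain weighted-sum illustration. The broker names, criteria, weights, and data below are invented, and the paper's actual model is a mathematical programme rather than a simple weighted sum.

```python
# Illustrative MCDM sketch: (1) min-max normalisation that also flips
# cost-type criteria ("orientation"), (2) a weighted aggregate score
# used to rank the brokers. All data are hypothetical.

def normalise(values, benefit=True):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if benefit else [1.0 - s for s in scaled]

def rank_brokers(brokers, criteria, weights):
    # criteria: {name: (is_benefit, [value per broker])}
    norm = {c: normalise(vals, benefit) for c, (benefit, vals) in criteria.items()}
    scores = [(b, sum(weights[c] * norm[c][i] for c in criteria))
              for i, b in enumerate(brokers)]
    return sorted(scores, key=lambda t: t[1], reverse=True)

brokers = ["Broker A", "Broker B", "Broker C"]
criteria = {
    "reputation": (True,  [8, 6, 9]),        # score, higher is better
    "commission": (False, [1.25, 1.0, 1.5]), # % of deal, lower is better
    "lead_time":  (False, [30, 45, 20]),     # days, lower is better
}
weights = {"reputation": 0.5, "commission": 0.3, "lead_time": 0.2}
ranking = rank_brokers(brokers, criteria, weights)
```

Changing the weights re-ranks the brokers, which is how differing criterion importance enters the assessment.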
Procedia PDF Downloads 88
7363 Tribal Food Security Assessment and Its Measurement Index: A Study of Tribes and Particularly Vulnerable Tribal Groups in Jharkhand, India
Authors: Ambika Prasad Gupta, Harshit Sosan Lakra
Abstract:
Food security is an important issue that has been widely discussed in the literature. However, there is a lack of research on the specific food security challenges faced by tribal communities. Tribal food security refers to the ability of indigenous or tribal communities to consistently access and afford an adequate and nutritious supply of food. These communities often have unique cultural, social, and economic contexts that can impact their food security. The study aims to assess the food security status of all thirty-two major tribes, including Particularly Vulnerable Tribal Groups (PVTGs), living in various blocks of Jharkhand State. The methodology of this study focuses on measuring the food security index of indigenous people by developing and redefining a new Tribal Food Security Index (TFSI) based on the indigenous community-level indicators identified by the Global Food Security Index and other indicators relevant to food security. Affordability, availability, quality and safety, and natural resources were the dimensions used to calculate the overall Tribal Food Security Index. A survey was conducted for primary data collection of tribes and PVTGs at the household level in various districts of Jharkhand with a considerable tribal population. The results show that, due to the transition from rural to urban areas, there is a considerable change in TFSI and a decrease in the forest dependency of tribal communities. Socioeconomic factors like occupation and household size had a significant correlation with TFSI. Tribal households living in forests have a higher food security index than tribal households residing in urban transition areas. The study also shows that an alternative methodology adopted to measure community-specific food security yields a significantly greater impact than commonly used indices.
Keywords: indigenous people, tribal food security, particularly vulnerable tribal groups, Jharkhand
Procedia PDF Downloads 81
7362 A Convolution Neural Network Approach to Predict Pes-Planus Using Plantar Pressure Mapping Images
Authors: Adel Khorramrouz, Monireh Ahmadi Bani, Ehsan Norouzi, Morvarid Lalenoor
Abstract:
Background: Plantar pressure distribution measurement has long been used to assess foot disorders. Plantar pressure is an important component affecting foot and ankle function, and changes in plantar pressure distribution can indicate various foot and ankle disorders. Morphologic and mechanical properties of the foot may be important factors affecting the plantar pressure distribution. Accurate and early measurement may help to reduce the burden of pes planus. With recent developments in technology, new techniques such as machine learning have been used to assist clinicians in identifying patients with foot disorders. Significance of the study: This study proposes a neural-network-based flat foot classification methodology using static foot pressure distribution. Methodology: Data were collected from 895 patients who were referred to a foot clinic due to foot disorders. Patients with pes planus were labeled by an experienced physician based on clinical examination. Then all subjects (with and without pes planus) were evaluated for static plantar pressure distribution. Patients who were diagnosed with flat foot in both feet were included in the study. In the next step, the leg length was normalized, and the network was trained on the plantar pressure mapping images. Findings: From a total of 895 image data, 581 were labeled as pes planus. A convolutional neural network (CNN) was run to evaluate the performance of the proposed model. The prediction accuracy of the basic CNN-based model was evaluated, and the prediction model was derived through the proposed methodology. In the basic CNN model, the training accuracy was 79.14%, and the test accuracy was 72.09%. Conclusion: This model can be easily and simply used by patients with pes planus and doctors to predict the classification of pes planus and prescreen for possible musculoskeletal disorders related to this condition.
However, more models need to be considered and compared for higher accuracy.
Keywords: foot disorder, machine learning, neural network, pes planus
Procedia PDF Downloads 360
7361 Effect of Environmental Parameters on the Water Solubility of the Polycyclic Aromatic Hydrocarbons and Derivatives using Taguchi Experimental Design Methodology
Authors: Pranudda Pimsee, Caroline Sablayrolles, Pascale De Caro, Julien Guyomarch, Nicolas Lesage, Mireille Montréjaud-Vignoles
Abstract:
The MIGR’HYCAR research project was initiated to provide decisional tools for risks connected to oil spill drifts in continental waters. These tools aim to serve in the decision-making process once oil spill pollution occurs and/or as reference tools to study scenarios of potential impacts of pollution on a given site. This paper focuses on the study of the distribution of polycyclic aromatic hydrocarbons (PAHs) and derivatives from oil spills in water as a function of environmental parameters. Eight petroleum oils covering a representative range of commercially available products were tested. 41 PAHs and derivatives, among them the 16 EPA priority pollutants, were studied by dynamic tests at laboratory scale. The chemical profile of the water-soluble fraction differed from the parent oil profile due to the varying water solubility of oil components. Semi-volatile compounds (naphthalenes) constitute the major part of the water-soluble fraction. A large variation in the composition of the water-soluble fraction was highlighted depending on oil type. Moreover, four environmental parameters (temperature, suspended solid quantity, salinity, and oil:water surface ratio) were investigated with the Taguchi experimental design methodology. The results showed that the oils are divided into three groups: the solubility of domestic fuel and Jet A1 presented a high sensitivity to the parameters studied, meaning they must be taken into account. For gasoline (SP95-E10) and diesel fuel, a medium sensitivity to the parameters was observed. The four other oils showed low sensitivity to the parameters studied. Finally, three parameters were found to be significant towards the water-soluble fraction.
Keywords: monitoring, PAHs, water soluble fraction, SBSE, Taguchi experimental design
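Screening four parameters at three levels each would require 3^4 = 81 full-factorial runs; a Taguchi design reduces this with an orthogonal array. The sketch below maps the standard L9(3^4) array onto the four environmental parameters named above; the level values chosen for each parameter are illustrative assumptions, not the project's actual settings.

```python
# Taguchi L9 orthogonal array sketch for a 4-factor, 3-level study.
# Each row is one experimental run; entries are level indices (0, 1, 2).
# The array is balanced: every level appears exactly 3 times per column.

L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

factors = {  # hypothetical level settings for each parameter
    "temperature_C":    [5, 15, 25],
    "suspended_solids": [0, 50, 100],      # mg/L
    "salinity_gL":      [0, 17, 35],
    "oil_water_ratio":  [0.01, 0.05, 0.1],
}

def build_plan(array, factors):
    names = list(factors)
    return [{n: factors[n][row[j]] for j, n in enumerate(names)}
            for row in array]

plan = build_plan(L9, factors)   # 9 runs instead of 3**4 = 81
```

Main effects are then estimated by averaging the measured response over the three runs at each level of a factor.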
Procedia PDF Downloads 325
7360 Quality Determinants of Client Satisfaction: A Case Study of ACE-Australian Consulting Engineers, Sydney, Australia
Authors: Elham S. Hasham, Anthony S. Hasham
Abstract:
The construction industry is one of Australia's fastest growing industries, and its success is a result of a firm's client satisfaction, with a focus on product determinants such as price and quality. Ensuring quality at every phase is a must, and building rapport with the client will go a long way. To capitalise on the growing demand for Engineering Consulting Firms (ECFs), we should "redefine the bottom line by allowing client satisfaction, high-quality standards, and profits to be the top priorities". Consequently, the emphasis should be on improving employee skills through various training provisions. Clients seek consistency and thus expect that all services should be similar in respect to quality and the ability of the service to meet their needs. This calls for empowerment and comfortable work conditions to motivate employees and give them an incentive to deliver quality and excellent output. The methodology utilized is triangulation, a combination of quantitative and qualitative research. The case study firm, Australian Consulting Engineers (ACE), was established in 1995 and has operations throughout Australia, the Philippines, Europe, U.A.E., K.S.A., and Lebanon. ACE is affiliated with key agencies and support organizations in the engineering industry, with International Organization for Standardization (ISO) certifications in Safety and Quality Management. The objective of this study is significant, as it sheds light on employee motivation and client satisfaction as imperative determinants of the success of an organization.
Keywords: leadership, motivation, organizational behavior, satisfaction
Procedia PDF Downloads 65
7359 Efficient Prediction of Surface Roughness Using Box Behnken Design
Authors: Ajay Kumar Sarathe, Abhinay Kumar
Abstract:
Production of the quality products required for specific engineering applications is an important issue. The roughness of the surface plays an important role in the quality of the product; appropriate machining parameters help eliminate wastage due to over-machining. To increase the quality of the surface, the optimum machining parameter setting is crucial during the machining operation. The effect of the key machining parameters (spindle speed, feed rate, and depth of cut) on surface roughness has been evaluated. Experimental work was carried out using a High Speed Steel tool and AISI 1018 as the workpiece material. In this study, a predictive model has been developed using the Box-Behnken Design (BBD). An experimental investigation was carried out using a BBD for three factors, and it was observed that the Ra values predicted by the model are close to the measured values, with a marginal error of 2.8648%. The developed model establishes a correlation between the selected key machining parameters and the surface roughness of AISI 1018.
Keywords: ANOVA, BBD, optimisation, response surface methodology
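A three-factor Box-Behnken design like the one used above consists of the 12 edge-midpoint runs (every pair of factors at their ±1 levels while the third is held at the centre) plus centre-point replicates. The sketch below generates those coded design points; the factor names and actual speed/feed/depth levels of the study are not reproduced here.

```python
# Minimal generator for a Box-Behnken design in coded units (-1, 0, +1).
# For 3 factors this yields 12 edge points plus the centre replicates.

from itertools import combinations, product

def box_behnken(n_factors, n_center=3):
    runs = []
    for i, j in combinations(range(n_factors), 2):   # each factor pair
        for a, b in product((-1, 1), repeat=2):       # its 4 corner levels
            point = [0] * n_factors                   # other factors at centre
            point[i], point[j] = a, b
            runs.append(tuple(point))
    runs += [(0,) * n_factors] * n_center             # centre-point replicates
    return runs

design = box_behnken(3)   # 12 edge points + 3 centre runs = 15 runs
```

Each coded point is then mapped to real settings, e.g. -1/0/+1 on spindle speed corresponding to the low/middle/high rpm levels chosen for the experiment.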
Procedia PDF Downloads 159
7358 Machine Learning Algorithms for Rocket Propulsion
Authors: Rômulo Eustáquio Martins de Souza, Paulo Alexandre Rodrigues de Vasconcelos Figueiredo
Abstract:
In recent years, there has been a surge of interest in applying artificial intelligence techniques, particularly machine learning algorithms. Machine learning is a data-analysis technique that automates the creation of analytical models, making it especially useful for modelling complex situations. As a result, this technology aids in reducing human intervention while producing accurate results. This methodology is also extensively used in aerospace engineering, since this is a field that encompasses several high-complexity operations, such as rocket propulsion. Rocket propulsion is a high-risk operation in which engine failure could result in the loss of life. As a result, it is critical to use computational methods capable of precisely representing the spacecraft's analytical model to guarantee its security and operation. Thus, this paper describes the use of machine learning algorithms for rocket propulsion to aid the realization that this technique is an efficient way to deal with challenging and restrictive aerospace engineering activities. The paper focuses on three machine-learning-aided rocket propulsion applications: set-point control of an expander-bleed rocket engine, supersonic retro-propulsion of a small-scale rocket, and leak detection and isolation on rocket engine data. This paper describes the data-driven methods used for each implementation in depth and presents the obtained results.
Keywords: data analysis, modeling, machine learning, aerospace, rocket propulsion
Procedia PDF Downloads 115
7357 Improving Security Features of Traditional Automated Teller Machines-Based Banking Services via Fingerprint Biometrics Scheme
Authors: Anthony I. Otuonye, Juliet N. Odii, Perpetual N. Ibe
Abstract:
The obvious challenges faced by most commercial bank customers while using the services of ATMs (Automated Teller Machines) across developing countries have triggered the need for an improved system with better security features. Current ATM systems are password-based, and research has proved the vulnerabilities of these systems to heinous attacks and manipulations. We have discovered through research that the security of current ATM-assisted banking services in most developing countries is easily broken and maneuvered by fraudsters, largely because it is quite difficult for these systems to distinguish an impostor with privileged access from the authentic bank account owner. Again, PIN (Personal Identification Number) code passwords are easily guessed, to mention just a few of the obvious limitations of traditional ATM operations. In this research work, we have developed a system of fingerprint biometrics with PIN code authentication that seeks to improve the security features of traditional ATM installations as well as other banking services. The aim is to ensure better security at all ATM installations and raise the confidence of bank customers. It is hoped that our system will overcome most of the challenges of the current password-based ATM operation if properly applied. The researchers made use of the OOADM (Object-Oriented Analysis and Design Methodology), a software development methodology that assures proper system design using modern design diagrams. Implementation and coding were carried out using Visual Studio 2010 together with other software tools. Results obtained show a working system that provides two levels of security on the client's side, using a fingerprint biometric scheme combined with the existing 4-digit PIN code to guarantee the confidence of bank customers across developing countries.
Keywords: fingerprint biometrics, banking operations, verification, ATMs, PIN code
Procedia PDF Downloads 42
7356 Modern Seismic Design Approach for Buildings with Hysteretic Dampers
Authors: Vanessa A. Segovia, Sonia E. Ruiz
Abstract:
The use of energy dissipation systems for seismic applications has increased worldwide; thus, it is necessary to develop practical and modern criteria for their optimal design. Here, a direct displacement-based seismic design approach for frame buildings with hysteretic energy dissipation systems (HEDS) is applied. The building is composed of two individual structural systems: 1) a main elastic structural frame designed for service loads, and 2) a secondary system, corresponding to the HEDS, that controls the effects of lateral loads. The procedure implies controlling two design parameters: a) the stiffness ratio (α = K_frame/K_(total system)), and b) the strength ratio (γ = V_damper/V_(total system)). The proposed damage-controlled approach contributes to the design of a more sustainable and resilient building because the structural damage is concentrated in the HEDS. The reduction of the design displacement spectrum is done by means of a recently published damping factor for elastic structural systems with HEDS located in Mexico City. Two limit states are verified: serviceability and near collapse. Instead of the traditional trial-and-error approach, a procedure that allows the designer to establish the preliminary sizes of the structural elements of both systems is proposed. The design methodology is applied to an 8-story steel building with buckling-restrained braces, located in the soft soil of Mexico City. With the aim of choosing the optimal design parameters, a parametric study is developed considering different values of α and γ. The simplified methodology is for the preliminary sizing, design, and evaluation of the effectiveness of HEDS, and it constitutes a modern and practical tool that enables the structural designer to select the best design parameters.
Keywords: damage-controlled buildings, direct displacement-based seismic design, optimal hysteretic energy dissipation systems, hysteretic dampers
Procedia PDF Downloads 483
7355 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics
Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee
Abstract:
Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called "hallucinations", the generation of outputs that are not grounded in the input data, which hinders their adoption into production. A common practice to mitigate the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground LLMs' responses in ground-truth documents. RAG converts the grounding documents into embeddings, retrieves the relevant parts using vector similarity between the user's query and the documents, and then generates a response that is based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG system is not suitable for tabular data and subsequent data analysis tasks for multiple reasons, such as information loss, data format, and the retrieval mechanism. In this study, we have explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, and then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When the beta version was deployed on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results, as it was able to provide market insights and data visualization with high accuracy and extensive coverage, abstracting the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding and enhancement without the need for programming skills.
The implication extends beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, DataSense, PropertyGuru
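The plan-generate-execute-compose loop described in the abstract can be sketched in a highly simplified form. Here `call_llm` is a stub standing in for a real LLM API, and the "generated" code and sub-task list are canned; in the actual system each step would be a real model call operating on real tabular data.

```python
# Toy sketch of the planning-and-execution + code-generation pattern.
# call_llm is a hypothetical stub, not a real model or API.

def call_llm(prompt):
    # Stub: a real system would call a hosted model here.
    if "plan" in prompt:
        return ["load listings table", "compute average price"]  # canned plan
    return "result = sum(prices) / len(prices)"                  # canned code

def analyse(task, prices):
    plan = call_llm(f"plan: {task}")           # 1. decompose the task
    code = call_llm(f"code for: {plan[-1]}")   # 2. generate code per sub-task
    scope = {"prices": prices}
    exec(code, scope)                          # 3. execute the generated code
    return scope["result"]                     # 4. compose the final answer

answer = analyse("average condo price", [300_000, 500_000, 400_000])
```

A production system would add sandboxing around the execution step and feed the code's output back to the model to phrase the final response.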
Procedia PDF Downloads 87
7354 Simplified Measurement of Occupational Energy Expenditure
Authors: J. Wicks
Abstract:
Aim: To develop a simple methodology that allows heart rate (HR) data collected from inexpensive wearable devices to be expressed in a suitable format (METs) to quantitate occupational (and recreational) activity. Introduction: Assessment of occupational activity is commonly done by utilizing questionnaires in combination with prescribed MET levels for a vast range of previously measured activities. However, for any individual, the intensity of performing a specific activity can vary significantly. Ideally, objective measurement of individual activity is preferred. Though there is a wide range of HR recording devices, there is a distinct lack of methodology to allow processing of collected data to quantitate energy expenditure (EE). The HR index equation expresses METs in relation to relative HR, i.e., the ratio of activity HR to resting HR. The use of this equation provides a simple utility for objective measurement of EE. Methods: During a typical occupational work period of approximately 8 hours, HR data were recorded using a Polar RS 400 wrist monitor. Recorded data were downloaded to a Windows PC, and non-HR data were stripped from the ASCII file using 'Notepad'. The HR data were exported to a spreadsheet program and sorted by HR range into a histogram format. Three HRs were determined: a resting HR (the HR delimiting the lowest 30 minutes of recorded data), a mean HR, and a peak HR (the HR delimiting the highest 30 minutes of recorded data). HR indices were calculated (mean index equals mean HR/rest HR; peak index equals peak HR/rest HR), with mean and peak indices being converted to METs using the HR index equation. Conclusion: Inexpensive HR recording devices can be utilized to make reasonable estimates of occupational (or recreational) EE, suitable for large-scale demographic screening, by utilizing the HR index equation.
The intrinsic value of the HR index equation is that it is independent of factors that influence absolute HR, namely fitness, smoking, and beta-blockade.
Keywords: energy expenditure, heart rate histograms, heart rate index, occupational activity
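The sort-and-delimit procedure described in the Methods can be sketched directly: order the minute-by-minute HR samples, take the lowest and highest 30 minutes as rest and peak, and convert indices to METs with the published HR index equation (METs = 6 × HRindex − 5). The sample data and minute-level sampling below are simplified assumptions.

```python
# Sketch of the HR-index workflow: resting/mean/peak HR from sorted
# minute samples, then MET conversion via METs = 6 * (HR / HRrest) - 5.

def hr_summary(samples, window=30):
    """samples: one HR reading per minute. Returns (rest, mean, peak) HR."""
    ordered = sorted(samples)
    rest = sum(ordered[:window]) / window    # lowest 30 minutes of data
    peak = sum(ordered[-window:]) / window   # highest 30 minutes of data
    mean = sum(ordered) / len(ordered)
    return rest, mean, peak

def mets_from_index(hr, hr_rest):
    return 6.0 * (hr / hr_rest) - 5.0        # HR index equation

# Synthetic ~8 h work day: 200 min near rest, 250 min light work, 30 min hard
day = [72] * 200 + [95] * 250 + [130] * 30
rest, mean, peak = hr_summary(day)
mean_mets = mets_from_index(mean, rest)      # ≈ 2.3 METs for this synthetic day
peak_mets = mets_from_index(peak, rest)
```

Note that by construction `mets_from_index(hr_rest, hr_rest)` is exactly 1 MET, consistent with resting metabolism.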
Procedia PDF Downloads 296
7353 The Effect of Electromagnetic Stirring during Solidification of Nickel Based Alloys
Authors: Ricardo Paiva, Rui Soares, Felix Harnau, Bruno Fragoso
Abstract:
Nickel-based alloys are materials well suited for service in extreme environments subjected to pressure and heat. Some industrial applications for nickel-based alloys are aerospace and jet engines, oil and gas extraction, pollution control and waste processing, and the automotive and marine industries. It is generally recognized that grain refinement is an effective methodology for improving the quality of cast parts. Conventional grain refinement techniques involve the addition of inoculation substances, the control of solidification conditions, or thermomechanical treatment with recrystallization. However, such methods often lead to non-uniform grain size distribution and the formation of hard phases, which are detrimental to both wear performance and biocompatibility. Stirring of the melt by electromagnetic fields has been widely used in continuous casting, with success in grain refinement, solute redistribution, and surface quality improvement. Despite these advantages, little attention has yet been paid to the use of this approach in functional castings such as investment casting. Furthermore, the effect of electromagnetic stirring (EMS) fields on nickel-based alloys is not known. In line with these gaps in the state of the art, the present research work aims to promote new advances in controlling the grain size and morphology of investment-cast nickel-based alloys. For this purpose, a set of experimental tests was conducted. A high-frequency induction furnace with vacuum and a controlled atmosphere was used to cast the Inconel 718 alloy in ceramic shells. A coil surrounded the casting chamber in order to induce electromagnetic stirring during solidification. To assess the effect of the electromagnetic stirring on Ni alloys, the samples were subjected to microstructural analysis and mechanical tests.
The results show that electromagnetic stirring can be an effective methodology to modify the grain size and mechanical properties of investment-cast parts.
Keywords: investment casting, grain refinement, electromagnetic stirring, nickel alloys
Procedia PDF Downloads 133
7352 Analysis of the Discursive Dynamics of Preservice Physics Teachers in a Context of Curricular Innovation
Authors: M. A. Barros, M. V. Barros
Abstract:
The aim of this work is to analyze the discursive dynamics of preservice teachers during the implementation of a didactic sequence on topics of Quantum Mechanics for High School. Our research methodology was qualitative, of the case study type, in which we selected two prospective teachers from the Physics Teacher Training Course of the Sao Carlos Institute of Physics, at the University of Sao Paulo, Brazil. The modes of communication analyzed were the intentions and interventions of the teachers, the communicative approach established, and the patterns and contents of the interactions between teachers and students. Data were collected through video recording, interviews, and questionnaires conducted before and after an 8-hour mini-course, which was offered to a group of 20 secondary students. As a teaching strategy, we used an active learning methodology called Peer Instruction. The episodes showed that both future teachers used interactive dialogic and authoritative communicative approaches to mediate the discussion between peers. In the interactive dialogic dimension, the communication pattern was predominantly I-R-F (initiation-response-feedback), in which the future teachers assisted the students in the discussion by providing feedback to their initiations and contributing to the progress of the discussions between peers. Although the interactive dialogic dimension was preferred during the use of the Peer Instruction method, the authoritative communicative approach was also employed. In the authoritative dimension, future teachers predominantly used the I-R-E (initiation-response-evaluation) communication pattern, asking the students several questions and leading them to the correct answer.
Among its main implications, the work contributes to improving the practices of future teachers who apply active learning methodologies in the classroom, by identifying the types of communicative approaches and communication patterns used, as well as to research on curricular innovation in high school physics.
Keywords: curricular innovation, high school, physics teaching, discursive dynamics
Procedia PDF Downloads 181