Search results for: Visualization
75 Bringing German History to Tourists
Authors: Gudrun Görlitz, Christian Schölzel, Alexander Vollmar
Abstract:
“Sites of Jewish Life in Berlin 1933-1945. Between Persecution and Self-assertion” was realized in a project funded by the European Regional Development Fund. A smartphone app and an associated website enable tourists and other participants of this educational offer to learn more, in a serious way, about the life of Jews in the German capital during the Nazi era. Texts, photos, video and audio recordings communicate the historical content. Interactive maps (both current and historical) make it possible to use predefined or self-combined routes. One of the manifold challenges was to create a broad-ranging guide in which all detailed information is well linked together. This enables heterogeneous groups of potential users to find a wide range of specific information corresponding to their particular wishes and interests. The multitude of potential ways to navigate through the diversified information will (hopefully) lead users to return to the app and website a second or third time with continued interest. Therefore, 90 locations, many of them situated in Berlin’s city centre, have been chosen. For all of them, text, picture and/or audio/video material gives extensive information. Suggested combinations of several of these “site stories” lead to the offer of detailed excursion routes. Events and biographies are also presented. A few of the implemented biographies are especially enriched with source material concerning the (forced) migration of these persons during the Nazi era. All this was done in close and fruitful interdisciplinary cooperation between computer scientists and historians. The suggested conference paper aims to show the challenges of shaping complex source material for practical use by different user groups in a proper technical and didactic way. Based on the historical research in archives, museums, libraries and digital resources, the quantitative dimension of the project can be sized as follows. The paper focuses on the following historiographical and technical aspects:
- Shaping the text material didactically for use in new media, especially a smartphone app running on differing platforms;
- Geo-referencing of the sites on historical and current map material;
- Overlay of old and new maps to present and find the sites;
- Using Augmented Reality technologies to re-visualize destroyed buildings;
- Visualization of black-and-white picture material;
- Presentation of historical footage and the resulting problem of excessive storage requirements;
- Financial and legal aspects of obtaining the copyrights to present archival material.
Keywords: smartphone app, history, tourists, German
Procedia PDF Downloads 373
74 Faculty Use of Geospatial Tools for Deep Learning in Science and Engineering Courses
Authors: Laura Rodriguez Amaya
Abstract:
Advances in science, technology, engineering, and mathematics (STEM) are viewed as important to countries’ national economies and their capacities to be competitive in the global economy. However, many countries experience low numbers of students entering these disciplines. To strengthen the professional STEM pipelines, it is important that students are retained in these disciplines at universities. Scholars agree that to retain students in universities’ STEM degrees, it is necessary that STEM course content shows the relevance of these academic fields to their daily lives. By increasing students’ understanding of the importance of these degrees and careers, students’ motivation to remain in these academic programs can also increase. An effective way to make STEM content relevant to students’ lives is the use of geospatial technologies and geovisualization in the classroom. The Geospatial Revolution, and the science and technology associated with it, has provided scientists and engineers with an incredible amount of data about Earth and Earth systems. This data can be used in the classroom to support instruction and make content relevant to all students. The purpose of this study was to find out how prevalent the use of geospatial technologies and geovisualization is as a teaching practice at a USA university. The Teaching Practices Inventory survey, which is a modified version of the Carl Wieman Science Education Initiative Teaching Practices Inventory, was selected for the study. Faculty in the STEM disciplines who participated in a summer learning institute at a 4-year university in the USA constituted the population selected for the study. One of the summer learning institute’s main purposes was to have an impact on the teaching of STEM courses, particularly the teaching of gateway courses taken by many STEM majors. The sample population for the study is 97.5 percent of the total number of summer learning institute participants. Basic descriptive statistics through the Statistical Package for the Social Sciences (SPSS) were performed to find out: 1) the percentage of faculty using geospatial technologies and geovisualization; 2) whether the faculty member's associated department impacted their use of geospatial tools; and 3) whether the number of years in a teaching capacity impacted their use of geospatial tools. Findings indicate that only 10 percent of respondents had used geospatial technologies, and 18 percent had used geospatial visualization. In addition, the use of geovisualization among faculty of different disciplines was broader than the use of geospatial technologies. The use of geospatial technologies concentrated in the engineering departments. The data seem to indicate a lack of incorporation of geospatial tools in STEM education. The use of geospatial tools is an effective way to engage students in deep STEM learning. Future research should look at the effect on student learning and retention in science and engineering programs when geospatial tools are used.
Keywords: engineering education, geospatial technology, geovisualization, STEM
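For illustration, the kind of descriptive breakdown reported above could be reproduced with a few lines of code. The sketch below is not the authors' SPSS workflow; the CSV file and column names are assumptions.

```python
# A minimal sketch of the descriptive breakdown described above, assuming a
# hypothetical survey export with columns "department", "years_teaching",
# "uses_geotech" and "uses_geoviz" (coded 0/1).
import pandas as pd

responses = pd.read_csv("teaching_practices_inventory.csv")  # hypothetical file

# 1) Overall percentage of faculty using geospatial technologies / geovisualization
for tool in ["uses_geotech", "uses_geoviz"]:
    pct = 100 * responses[tool].mean()
    print(f"{tool}: {pct:.1f}% of respondents")

# 2) Use of geospatial tools broken down by department
print(pd.crosstab(responses["department"], responses["uses_geotech"], normalize="index"))

# 3) Use of geospatial tools broken down by years in a teaching capacity
years_band = pd.cut(responses["years_teaching"], bins=[0, 5, 10, 20, 50])
print(pd.crosstab(years_band, responses["uses_geotech"], normalize="index"))
```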
Procedia PDF Downloads 250
73 Data Analysis Tool for Predicting Water Scarcity in Industry
Authors: Tassadit Issaadi Hamitouche, Nicolas Gillard, Jean Petit, Valerie Lavaste, Celine Mayousse
Abstract:
Water is a fundamental resource for industry. It is taken from the environment, either from municipal distribution networks or from various natural water sources such as the sea, ocean, rivers, aquifers, etc. Once used, water is discharged into the environment or reprocessed at the plant or at treatment plants. These withdrawals and discharges have a direct impact on natural water resources. The impacts can concern the quantity of water available, the quality of the water used, or effects that are more complex to measure and less direct, such as the health of the population downstream of the watercourse, for example. Based on the analysis of data (meteorological data, river characteristics, physicochemical substances), we wish to predict water stress episodes and anticipate prefectoral decrees, which can impact the performance of plants; propose improvement solutions; help industrialists in their choice of location for a new plant; visualize possible interactions between companies to optimize exchanges and encourage the pooling of water treatment solutions; and set up circular economies around the issue of water. The development of a system for the collection, processing, and use of data related to water resources requires the functional constraints specific to the latter to be made explicit. Thus, the system will have to be able to store a large amount of data from sensors (which is the main type of data in plants and their environment). In addition, manufacturers need 'near-real-time' processing of information in order to be able to make the best decisions (to be rapidly notified of an event that would have a significant impact on water resources). Finally, the visualization of data must be adapted to its temporal and geographical dimensions. In this study, we set up an infrastructure centered on the TICK application stack (Telegraf, InfluxDB, Chronograf, and Kapacitor), which is a set of loosely coupled but tightly integrated open-source projects designed to manage huge amounts of time-stamped information. The software architecture is coupled with the cross-industry standard process for data mining (CRISP-DM) methodology. The robust architecture and the methodology used have demonstrated their effectiveness on the case study of learning the level of a river with a 7-day horizon. The management of water and the activities within the plants, which depend on this resource, should be considerably improved thanks, on the one hand, to the learning that allows the anticipation of periods of water stress, and, on the other hand, to the information system that is able to warn decision-makers with alerts created from the formalization of prefectoral decrees.
Keywords: data mining, industry, machine learning, shortage, water resources
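As an illustration of the forecasting task described above (learning the level of a river with a 7-day horizon), the sketch below shows one minimal supervised-learning setup. It is not the project's actual TICK/CRISP-DM pipeline; the file name, column names, and model choice are assumptions.

```python
# A minimal sketch of predicting a river level 7 days ahead from sensor
# time series. Input file and columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

df = pd.read_csv("river_sensors.csv", parse_dates=["date"]).set_index("date")

# Target: river level 7 days in the future; features: current observations plus lags.
df["level_in_7d"] = df["river_level"].shift(-7)
for lag in (1, 3, 7):
    df[f"rain_lag{lag}"] = df["rainfall"].shift(lag)
df = df.dropna()

X = df.drop(columns=["level_in_7d"])
y = df["level_in_7d"]

# Time-ordered cross-validation to avoid leaking future information into the model.
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=TimeSeriesSplit(n_splits=5), scoring="r2")
print("R² per fold:", scores)
```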
Procedia PDF Downloads 121
72 An As-Is Analysis and Approach for Updating Building Information Models and Laser Scans
Authors: Rene Hellmuth
Abstract:
Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring of the factory building is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of product and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity & Ambiguity) lead to more frequent restructuring measures within a factory. A building information model (BIM) is the planning basis for rebuilding measures and becomes an indispensable data repository to be able to react quickly to changes. Use as a planning basis for restructuring measures in factories only succeeds if the BIM model has adequate data quality. Under this aspect and the industrial requirement, three data quality factors are particularly important for this paper regarding the BIM model: up-to-dateness, completeness, and correctness. The research question is: how can a BIM model be kept up to date with the required data quality, and which visualization techniques can be applied in a short period of time on the construction site during conversion measures? An as-is analysis is made of how BIM models and digital factory models (including laser scans) are currently being kept up to date. Industrial companies are interviewed, and expert interviews are conducted. Subsequently, the results are evaluated, and a procedure is conceived for how cost-effective and time-saving updating processes can be carried out. The availability of low-cost hardware and the simplicity of the process are of importance to enable service personnel from facility management to keep digital factory models (BIM models and laser scans) up to date. The approach includes the detection of changes to the building, the recording of the changed area, and the insertion into the overall digital twin. Finally, an overview of the possibilities for visualizations suitable for construction sites is compiled. An augmented reality application is created based on an updated BIM model of a factory and installed on a tablet. Conversion scenarios with costs and time expenditure are displayed. A user interface is designed in such a way that all relevant conversion information is available at a glance for the respective conversion scenario. A total of three essential research results are achieved: an as-is analysis of current update processes for BIM models and laser scans, the development of a time-saving and cost-effective update process, and the conception and implementation of an augmented reality solution for BIM models suitable for construction sites.
Keywords: building information modeling, digital factory model, factory planning, restructuring
Procedia PDF Downloads 112
71 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics
Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee
Abstract:
Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called "hallucinations", the generation of outputs that are not grounded in the input data, which hinders their adoption into production. A common practice to mitigate the hallucination problem is to utilize a Retrieval Augmented Generation (RAG) system to ground LLMs' responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts using vector similarity between the user's query and the documents, and then generates a response that is based not only on its pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG system is not suitable for tabular data and subsequent data analysis tasks due to multiple reasons such as information loss, data format, and retrieval mechanism. In this study, we have explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When a beta version was deployed on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results, as it was able to provide market insights and data visualization needs with high accuracy and extensive coverage by abstracting the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding in market understanding and enhancement without the need for programming skills. The implication extends beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru
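The planning-and-execution plus code-generation loop described above can be sketched generically as follows. This is not PropertyGuru's DataSense implementation; the llm() helper, prompts, and sandboxing are placeholders for whatever model provider and execution environment are actually used.

```python
# A generic sketch of a planning-and-execution + code-generation agent loop.
import io, contextlib
import pandas as pd

def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model; returns text."""
    raise NotImplementedError("wire this to your LLM provider")

def analyse(question: str, df: pd.DataFrame) -> str:
    # 1) Planning: break the analytical question into ordered sub-tasks.
    plan = llm(f"Break this data-analysis task into numbered sub-tasks:\n{question}")

    # 2) Code generation: turn the plan into executable pandas code over `df`.
    code = llm(
        "Write Python that computes the answer into a variable named `result`.\n"
        f"DataFrame columns: {list(df.columns)}\nPlan:\n{plan}"
    )

    # 3) Execution: run the generated code in a restricted namespace.
    namespace = {"df": df, "pd": pd}
    captured = io.StringIO()
    with contextlib.redirect_stdout(captured):
        exec(code, namespace)  # a real deployment would sandbox this step

    # 4) Response generation: summarise the executed output for the user.
    return llm(
        f"Question: {question}\nComputed result: {namespace.get('result')}\n"
        f"Printed output: {captured.getvalue()}\nWrite a short market-insight answer."
    )
```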
Procedia PDF Downloads 86
70 A World Map of Seabed Sediment Based on 50 Years of Knowledge
Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès
Abstract:
Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches of aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This original approach had already been initiated one century ago, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments of the French coasts and then sediment maps of the continental shelves of Europe and North America. The current map of ocean sediments presented here was initiated from UNESCO's general map of the deep ocean floor. This map was adapted using a unique sediment classification to present all types of sediments: from beaches to the deep seabed and from glacial deposits to tropical sediments. In order to allow good visualization and to be adapted to the different applications, only the granularity of sediments is represented. Published seabed maps are studied; if they present an interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery of large hydrographic surveys of the deep ocean. These allow a very high-quality mapping of areas that until then were represented as homogeneous. The third and principal source of data comes from the integration of regional maps produced specifically for this project. These regional maps are carried out using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with generalizations in the case of over-precise data. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is ongoing and permits a digital version every two years, with the integration of some new maps. This article describes the choices made in terms of sediment classification, the scale of source data, and the zonation of the variability of the quality. This map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million punctual and surface items of data, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress in knowledge made in the field of seabed characterization during the last decades. Thus, the arrival of new classification systems for the seafloor has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. But there is still a lot of work to do to enhance some regions, which are still based on data acquired more than half a century ago.
Keywords: marine sedimentology, seabed map, sediment classification, world ocean
Procedia PDF Downloads 231
69 Modeling Landscape Performance: Evaluating the Performance Benefits of the Olmsted Brothers’ Proposed Parkway Designs for Los Angeles
Authors: Aaron Liggett
Abstract:
This research focuses on the visionary proposal made by the Olmsted Brothers Landscape Architecture firm in the 1920s for a network of interconnected parkways in Los Angeles. Their envisioned parkways aimed to address environmental and cultural strains by providing green space for recreation, wildlife habitat, and stormwater management while serving as multimodal transportation routes. Although the parkways were never constructed, through an evidence-based approach, this research presents a framework for evaluating the potential functionality and success of the parkways by modeling and visualizing their quantitative and qualitative landscape performance and benefits. Historical documents and innovative digital modeling tools produce detailed analysis, modeling, and visualization of the parkway designs. A set of 1928 construction documents is used to analyze and interpret the design intent of the parkways. Grading plans are digitized in CAD and modeled in SketchUp to produce 3D visualizations of the parkway. Drainage plans are digitized to model stormwater performance. Planting plans are analyzed to model urban forestry and biodiversity. The EPA's Storm Water Management Model (SWMM) predicts runoff quantity and quality. USDA Forest Service tools evaluate carbon sequestration and air quality. Spatial and overlay analysis techniques are employed to assess urban connectivity and the spatial impacts of the parkway designs. The study reveals how the integration of blue infrastructure, green infrastructure, and transportation infrastructure within the parkway design creates a multifunctional landscape capable of offering alternative spatial and temporal uses. The analysis demonstrates the potential for multiple functional, ecological, aesthetic, and social benefits to be derived from the proposed parkways. The analysis of the Olmsted Brothers' proposed Los Angeles parkways, which predated contemporary ecological design and resiliency practices, demonstrates the potential for providing multiple functional, ecological, aesthetic, and social benefits within urban designs. The findings highlight the importance of integrated blue, green, and transportation infrastructure in creating a multifunctional landscape that simultaneously serves multiple purposes. The research contributes new methods for modeling and visualizing landscape performance benefits, providing insights and techniques for informing future designs and sustainable development strategies.
Keywords: landscape architecture, ecological urban design, greenway, landscape performance
Procedia PDF Downloads 128
68 A Visualization Classification Method for Identifying the Decayed Citrus Fruit Infected by Fungi Based on Hyperspectral Imaging
Authors: Jiangbo Li, Wenqian Huang
Abstract:
Early detection of fungal infection in citrus fruit is one of the major problems in the postharvest commercialization process. The automatic and nondestructive detection of infected fruits is still a challenge for the citrus industry. At present, the visual inspection of rotten citrus fruits is commonly performed by workers through ultraviolet-induced fluorescence technology or manual sorting in citrus packinghouses to remove fruits with fungal infection. However, the former entails a number of problems, because exposing people to this kind of lighting is potentially hazardous to human health, and the latter is very inefficient. Oranges are used as the research object. This study focuses on this problem and proposes an effective method based on Vis-NIR hyperspectral imaging in the wavelength range of 400-1000 nm with a spectroscopic resolution of 2.8 nm. In this work, three normalization approaches are applied prior to analysis to reduce the effect of sample curvature on spectral profiles, and it is found that mean normalization was the most effective pretreatment for decreasing spectral variability due to curvature. Then, principal component analysis (PCA) was applied to a dataset composed of average spectra from decayed and normal tissue to reduce the dimensionality of the data and observe the ability of Vis-NIR hyperspectra to discriminate data from the two classes. In this case, it was observed that normal and decayed spectra were separable along the resultant first principal component (PC1) axis. Subsequently, five wavelengths (bands) centered at 577, 702, 751, 808, and 923 nm were selected as the characteristic wavelengths by analyzing the loadings of PC1. A multispectral combination image was generated based on the five selected characteristic wavelength images. Based on the obtained multispectral combination image, the intensity-slicing pseudocolor image processing method is used to generate a 2-D visual classification image that enhances the contrast between normal and decayed tissue. Finally, an image segmentation algorithm for the detection of decayed fruit was developed based on the pseudocolor image coupled with a simple thresholding method. For the 238 investigated independent samples, including fruits infected by Penicillium digitatum and normal fruits, the total success rates are 100% and 97.5%, respectively. The proposed algorithm was also used to identify oranges infected by Penicillium italicum with 100% identification accuracy, indicating that the proposed multispectral algorithm is an effective method with the potential to be applied in the citrus industry.
Keywords: citrus fruit, early rotten, fungal infection, hyperspectral imaging
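The processing chain described above (mean normalization, PCA, characteristic-band selection, thresholding) can be sketched as follows. The hypercube file, the number of retained bands, and the threshold are placeholders, not the authors' calibrated values.

```python
# A simplified sketch of the hyperspectral workflow described above.
import numpy as np
from sklearn.decomposition import PCA

cube = np.load("orange_hypercube.npy")        # shape: (rows, cols, n_bands), 400-1000 nm
pixels = cube.reshape(-1, cube.shape[-1]).astype(float)

# Mean normalization to reduce the effect of fruit curvature on the spectra.
pixels_norm = pixels / pixels.mean(axis=1, keepdims=True)

# PCA on the spectra; in the study, PC1 separates normal from decayed tissue.
pca = PCA(n_components=3)
pca.fit(pixels_norm)
loadings_pc1 = pca.components_[0]

# Characteristic wavelengths ~ bands with the largest |PC1| loadings
# (the paper selected bands centred at 577, 702, 751, 808 and 923 nm).
top_bands = np.argsort(np.abs(loadings_pc1))[-5:]
combo_image = pixels_norm[:, top_bands].mean(axis=1).reshape(cube.shape[:2])

# Simple thresholding of the combination image to flag decayed tissue.
decay_mask = combo_image > np.percentile(combo_image, 95)   # placeholder threshold
print("suspected decayed pixels:", int(decay_mask.sum()))
```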
Procedia PDF Downloads 298
67 Numerical and Experimental Investigation of Air Distribution System of Larder Type Refrigerator
Authors: Funda Erdem Şahnali, Ş. Özgür Atayılmaz, Tolga N. Aynur
Abstract:
Almost all domestic refrigerators operate on the principle of the vapor compression refrigeration cycle, and the removal of heat from the refrigerator cabinets is done via one of two methods: natural convection or forced convection. In this study, airflow and temperature distributions inside a 375 L no-frost type larder cabinet, in which cooling is provided by forced convection, are evaluated both experimentally and numerically. Airflow rate, compressor capacity, and temperature distribution in the cooling chamber are known to be some of the most important factors that affect the cooling performance and energy consumption of a refrigerator. The objective of this study is to evaluate the original temperature distribution in the larder cabinet and to investigate better temperature distribution solutions throughout the refrigerator domain via system optimizations that could provide a uniform temperature distribution. The flow visualization and airflow velocity measurements inside the original refrigerator are performed via Stereoscopic Particle Image Velocimetry (SPIV). In addition, airflow and temperature distributions are investigated numerically with Ansys Fluent. In order to study the heat transfer inside the aforementioned refrigerator, forced convection theory covering the following case is applied: a closed rectangular cavity representing heat transfer inside the refrigerating compartment. The cavity volume has been represented with finite volume elements and is solved computationally with the appropriate momentum and energy equations (Navier-Stokes equations). The 3D model is analyzed as transient, with the k-ε turbulence model and SIMPLE pressure-velocity coupling for the turbulent flow situation. The results obtained with the 3D numerical simulations are in quite good agreement with the experimental airflow measurements using the SPIV technique. After the Computational Fluid Dynamics (CFD) analysis of the baseline case, the effects of three parameters (compressor capacity, fan rotational speed, and type of shelf, glass or wire) on energy consumption, pull-down time, and temperature distributions in the cabinet are studied. For each case, energy consumption based on experimental results is calculated. After the analysis, the main parameters affecting the temperature distribution inside the cabinet and the energy consumption based on the CFD simulation are determined, and the simulation results are supplied to Design of Experiments (DOE) as input data for optimization. The best configuration with minimum energy consumption that provides the minimum temperature difference between the shelves inside the cabinet is determined.
Keywords: air distribution, CFD, DOE, energy consumption, experimental, larder cabinet, refrigeration, uniform temperature
Procedia PDF Downloads 108
66 An Exploration of Policy-Related Documents on District Heating and Cooling in Flanders: A Slow and Bottom-Up Process
Authors: Isaura Bonneux
Abstract:
District heating and cooling (DHC) is increasingly recognized as a viable path towards sustainable heating and cooling. While some countries like Sweden and Denmark have a longstanding tradition of DHC, Belgium is lagging behind. The northern part of Belgium, Flanders, had a total of only 95 heating networks in July 2023. Nevertheless, it is increasingly exploring its possibilities to enhance the scope of DHC. DHC is a complex energy system, requiring a lot of collaboration between various stakeholders on various levels. Therefore, it is of interest to look closer at policy-related documents at the Flemish (regional) level, as these policies set the scene for DHC development in the Flemish region. This kind of analysis has not been undertaken so far. This paper has the following research question: “Who talks about DHC, and in which way and context is DHC discussed in Flemish policy-related documents?” To answer this question, the Overton policy database was used to search and retrieve relevant policy-related documents. Overton retrieves data from governments, think tanks, NGOs, and IGOs. In total, out of the 244 original results, 117 documents between 2009 and 2023 were analyzed. Every selected document included theme keywords, policymaking department(s), date, and document type. These elements were used for quantitative data description and visualization. Further, qualitative content analysis revealed patterns and main themes regarding DHC in Flanders. Four main conclusions can be drawn. First, it is obvious from the timeframe that DHC is a new topic in Flanders with still limited attention; 2014, 2016 and 2017 were the years with the most documents, yet this number is still only 12 documents. In addition, many documents talked about DHC but not in much depth and painted it as a future scenario with a lot of uncertainty around it. The largest part of the issuing government departments had a link to either energy or climate (e.g. the Flemish Environmental Agency) or policy (e.g. the Socio-Economic Council of Flanders). Second, DHC is mentioned most within an ‘Environment and Sustainability’ context, followed by ‘General Policy and Regulation’. This is intuitive, as DHC is perceived as a sustainable heating and cooling technique and this analysis comprises policy-related documents. Third, Flanders seems mostly interested in using waste or residual heat as a heating source for DHC. The harbors and waste incineration plants are identified as potential and promising supply sources. This approach tries to reconcile environmental and economic incentives. Last, local councils are assigned a central role, and the initiative is mostly taken by them. The policy documents and policy advice demonstrate that Flanders opts for a bottom-up organization. As DHC is very dependent on local conditions, this seems a logical step. Nevertheless, this can impede smaller councils from creating DHC networks and slow down the systematic and fast implementation of DHC throughout Flanders.
Keywords: district heating and cooling, Flanders, Overton database, policy analysis
Procedia PDF Downloads 44
65 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh
Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila
Abstract:
Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. Therefore, it is important for an organization to ensure that the data they use is of high quality. This is where the concept of data mesh comes in. Data mesh is an organizational and architectural decentralized approach to data management that can help organizations improve the quality of data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing the reliance on centralized data teams and allowing domain experts to take charge of their data. This paper intends to discuss how a set of elements, including data mesh, are tools capable of increasing data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata, thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity in responsibilities. With data mesh, domain experts are responsible for managing their own data, which can help provide clarity in roles and responsibilities and improve data quality. Additionally, data mesh can also contribute to a new form of organization that is more agile and adaptable. By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can help improve overall performance by allowing better insights into the business as an effect of better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps in identifying and addressing data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization. This can lead to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can help enhance data quality by automating many data-related tasks, such as data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts mentioned above are illustrated by AEKIDEN's experience feedback. AEKIDEN is an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.
Keywords: data culture, data-driven organization, data mesh, data quality for business success
Procedia PDF Downloads 133
64 Developing Confidence of Visual Literacy through Using MIRO during Online Learning
Authors: Rachel S. E. Lim, Winnie L. C. Tan
Abstract:
Visual literacy is about making meaning through the interaction of images, words, and sounds. Graphic communication students typically develop visual literacy through critique and production of studio-based projects for their portfolios. However, the abrupt switch to online learning during the COVID-19 pandemic has made it necessary to consider new strategies of visualization and planning to scaffold teaching and learning. This study, therefore, investigated how MIRO, a cloud-based visual collaboration platform, could be used to develop the visual literacy confidence of 30 Diploma in Graphic Communication students attending a graphic design course at a Singapore arts institution. Due to COVID-19, the course was taught fully online throughout a 16-week semester. Guided by Kolb’s Experiential Learning Cycle, the two lecturers developed students’ engagement with visual literacy concepts through different activities that facilitated concrete experiences, reflective observation, abstract conceptualization, and active experimentation. Throughout the semester, students created, collaborated, and centralized communication in MIRO with its infinite canvas, smart frameworks, a robust set of widgets (e.g., sticky notes, freeform pen, shapes, arrows, smart drawing, emoticons, etc.), and powerful platform capabilities that enable asynchronous and synchronous feedback and interaction. Students then drew upon these multimodal experiences to brainstorm, research, and develop their motion design project. A survey was used to examine students’ perceptions of engagement (E), confidence (C), and learning strategies (LS). Using multiple regression, it was found that the use of MIRO helped students develop confidence (C) with visual literacy, which predicted the performance score (PS) that was measured against their application of visual literacy to the creation of their motion design project. While students’ learning strategies (LS) with MIRO did not directly predict confidence (C) or performance score (PS), they fostered positive perceptions of engagement (E), which in turn predicted confidence (C). Content analysis of students’ open-ended survey responses about their learning strategies (LS) showed that MIRO provides organization and structure in documenting learning progress, in tandem with establishing standards and expectations as a preparatory ground for generating feedback. With the clarity and sequence of the mentioned conditions set in place, these prerequisites then lead to the next level of personal action for self-reflection, self-directed learning, and time management. The study results show that the affordances of MIRO can develop visual literacy and make up for the potential pitfalls of student isolation, communication, and engagement during online learning. The context of how MIRO could be used by lecturers to orientate students for learning in visual literacy and studio-based projects for future development is discussed.
Keywords: design education, graphic communication, online learning, visual literacy
Procedia PDF Downloads 111
63 The Invaluable Contributions of Radiography and Radiotherapy in Modern Medicine
Authors: Sahar Heidary
Abstract:
Radiography and radiotherapy have emerged as crucial pillars of modern medical practice, revolutionizing diagnostics and treatment for a myriad of health conditions. This abstract highlights the pivotal role of radiography and radiotherapy in healthcare and society. Radiography, a non-invasive imaging technique, has significantly advanced medical diagnostics by enabling the visualization of internal structures and abnormalities within the human body. With the advent of digital radiography, clinicians can obtain high-resolution images promptly, leading to faster diagnoses and informed treatment decisions. Radiography plays a pivotal role in detecting fractures, tumors, infections, and various other conditions, allowing for timely interventions and improved patient outcomes. Moreover, its widespread accessibility and cost-effectiveness make it an indispensable tool in healthcare settings worldwide. On the other hand, radiotherapy, a branch of medical science that utilizes high-energy radiation, has become an integral component of cancer treatment and management. By precisely targeting and damaging cancerous cells, radiotherapy offers a potent strategy to control tumor growth and, in many cases, leads to cancer eradication. Additionally, radiotherapy is often used in combination with surgery and chemotherapy, providing a multifaceted approach to combat cancer comprehensively. The continuous advancements in radiotherapy techniques, such as intensity-modulated radiotherapy and stereotactic radiosurgery, have further improved treatment precision while minimizing damage to surrounding healthy tissues. Furthermore, radiography and radiotherapy have demonstrated their worth beyond oncology. Radiography is instrumental in guiding various medical procedures, including catheter placement, joint injections, and dental evaluations, reducing complications and enhancing procedural accuracy. Radiotherapy, in turn, finds applications in non-cancerous conditions like benign tumors, vascular malformations, and certain neurological disorders, offering therapeutic options for patients who may not benefit from traditional surgical interventions. In conclusion, radiography and radiotherapy stand as indispensable tools in modern medicine, driving transformative improvements in patient care and treatment outcomes. Their ability to diagnose, treat, and manage a wide array of medical conditions underscores their value in medical practice. As technology continues to advance, radiography and radiotherapy will undoubtedly play an ever more significant role in shaping the future of healthcare, ultimately saving lives and enhancing the quality of life for countless individuals worldwide.
Keywords: radiology, radiotherapy, medical imaging, cancer treatment
Procedia PDF Downloads 68
62 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)
Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg
Abstract:
One-third of the world’s population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate ~10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride are responsible for the development of dental and crippling skeletal fluorosis. In poor urban and rural settings, the provision of drinking water free of geogenic contamination can be a major challenge. In order to efficiently apply limited resources in the testing of wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements using predictor variables of various soil, geological and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization’s (WHO) stipulated concentration limits of 10 µg/L or 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of predictor variables is generally 60 times greater. These same predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires that users upload their own concentration measurements and select the predictor variables that they wish to incorporate in their models. In addition, users can upload additional predictor variable datasets either as features or coverages. Such models can represent an improvement over the global models already supplied, since (a) users may be able to use their own, more detailed datasets of measured concentrations and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, thereby resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. There is also the option to share data in a secure environment as well as the possibility to collaborate in a secure environment through the creation of communities. In summary, GAP provides users with the means to reliably and efficiently produce models specific to their region of interest by making available the latest datasets of predictor variables along with the necessary modeling infrastructure.
Keywords: arsenic, fluoride, groundwater contamination, logistic regression
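The hazard-modelling step described above can be illustrated with a minimal logistic-regression sketch: the probability that a well exceeds the WHO arsenic limit of 10 µg/L given environmental predictors. The input file, predictor names, and model settings are assumptions, not the GAP implementation.

```python
# A minimal sketch of logistic-regression hazard modelling for arsenic exceedance.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

wells = pd.read_csv("well_measurements.csv")   # hypothetical input
wells["exceeds_who"] = (wells["arsenic_ug_per_l"] > 10).astype(int)

# Hypothetical soil/geology/climate predictors (assumed numeric here).
predictors = ["soil_ph", "clay_fraction", "aridity_index", "geology_class_code"]
X_train, X_test, y_train, y_test = train_test_split(
    wells[predictors], wells["exceeds_who"], test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
prob = model.predict_proba(X_test)[:, 1]       # probability of exceeding 10 µg/L
print("AUC:", roc_auc_score(y_test, prob))
```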
Procedia PDF Downloads 346
61 Exploring the Spatial Characteristics of Mortality Map: A Statistical Area Perspective
Authors: Jung-Hong Hong, Jing-Cen Yang, Cai-Yu Ou
Abstract:
The analysis of geographic inequality heavily relies on the use of location-enabled statistical data and quantitative measures to present the spatial patterns of the selected phenomena and analyze their differences. To protect the privacy of individual instances and link to administrative units, point-based datasets are spatially aggregated into area-based statistical datasets, where only the overall status for the selected levels of spatial units is used for decision making. The partition of the spatial units thus has a dominant influence on the outcomes of the analyzed results, well known as the Modifiable Areal Unit Problem (MAUP). A new spatial reference framework, the Taiwan Geographical Statistical Classification (TGSC), was recently introduced in Taiwan based on the spatial partition principles of homogeneous consideration of the number of population and households. Compared to the outcomes of the traditional township units, TGSC provides additional levels of spatial units with finer granularity for presenting spatial phenomena and enables domain experts to select the appropriate dissemination level for publishing statistical data. This paper compares the results of using TGSC and the township unit, respectively, on the mortality data and examines the spatial characteristics of their outcomes. For the mortality data of Taitung County between January 1st, 2008 and December 31st, 2010, the all-cause age-standardized death rate (ASDR) ranges from 571 to 1757 per 100,000 persons, whereas the 2nd dissemination area (TGSC) shows greater variation, ranging from 0 to 2222 per 100,000. The finer granularity of the spatial units of TGSC clearly provides better outcomes for identifying and evaluating geographic inequality and can be further analyzed with statistical measures from other perspectives (e.g., population, area, environment). The management and analysis of the statistical data referring to the TGSC in this research is strongly supported by the use of Geographic Information System (GIS) technology. An integrated workflow that consists of the tasks of processing death certificates, geocoding street addresses, quality assurance of geocoded results, automatic calculation of statistical measures, standardized encoding of measures, and geo-visualization of statistical outcomes is developed. This paper also introduces a set of auxiliary measures from a geographic distribution perspective to further examine the hidden spatial characteristics of mortality data and justify the analyzed results. With a common statistical area framework like TGSC, the preliminary results demonstrate promising potential for developing a web-based statistical service that can effectively access domain statistical data and present the analyzed outcomes in meaningful ways to avoid wrong decision making.
Keywords: mortality map, spatial patterns, statistical area, variation
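For reference, the all-cause age-standardized death rate (ASDR) used above can be computed by direct standardization, as in the minimal sketch below; the age bands, counts, and standard-population weights are illustrative only, not the study's data.

```python
# A minimal sketch of a directly age-standardized death rate (ASDR).
deaths_by_age = {"0-39": 12, "40-64": 48, "65+": 130}               # deaths in the area
population_by_age = {"0-39": 18000, "40-64": 9000, "65+": 3000}      # area population
standard_population = {"0-39": 55000, "40-64": 30000, "65+": 15000}  # reference weights

total_std = sum(standard_population.values())
asdr = sum(
    (deaths_by_age[age] / population_by_age[age]) * standard_population[age]
    for age in deaths_by_age
) / total_std * 100_000

print(f"ASDR: {asdr:.0f} deaths per 100,000 persons")
```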
Procedia PDF Downloads 258
60 Air–Water Two-Phase Flow Patterns in PEMFC Microchannels
Authors: Ibrahim Rassoul, A. Serir, E-K. Si Ahmed, J. Legrand
Abstract:
The acronym PEM refers to Proton Exchange Membrane or, alternatively, Polymer Electrolyte Membrane. Due to their high efficiency, low operating temperature (30–80 °C), and rapid evolution over the past decade, PEMFCs are increasingly emerging as a viable alternative clean power source for automobile and stationary applications. Before PEMFCs can be employed to power automobiles and homes, several key technical challenges must be properly addressed. One technical challenge is elucidating the mechanisms underlying water transport in and removal from PEMFCs. On one hand, sufficient water is needed in the polymer electrolyte membrane or PEM to maintain sufficiently high proton conductivity. On the other hand, too much liquid water present in the cathode can cause “flooding” (that is, pore space is filled with excessive liquid water) and hinder the transport of the oxygen reactant from the gas flow channel (GFC) to the three-phase reaction sites. The experimental transparent fuel cell used in this work was designed to represent the actual full-scale fuel cell geometry. According to the operating conditions, a number of flow regimes may appear in the microchannel: droplet flow, liquid water bridge/plug blockage (concave and convex forms), slug/plug flow, and film flow. Some of the flow patterns are new, while others have already been observed in PEMFC microchannels. An algorithm in MATLAB was developed to automatically determine the flow structure (e.g. slug, droplet, plug, and film) of detected liquid water in the test microchannels and yield information pertaining to the distribution of water among the different flow structures. A video processing algorithm was developed to automatically detect dynamic and static liquid water present in the gas channels and generate relevant quantitative information. The potential benefit of this software is that it allows the user to obtain measurements from images of small objects in a more precise and systematic way. The void fractions are also determined based on image analysis. The aim of this work is to provide a comprehensive characterization of two-phase flow in an operating fuel cell, which can be used towards the optimization of water management, informs design guidelines for gas delivery microchannels for fuel cells, and is essential in the design and control of diverse applications. The approach combines numerical modeling with experimental visualization and measurements.
Keywords: polymer electrolyte fuel cell, air-water two phase flow, gas diffusion layer, microchannels, advancing contact angle, receding contact angle, void fraction, surface tension, image processing
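The paper describes a MATLAB routine for classifying liquid-water structures and computing void fractions from channel images; the sketch below is a greatly simplified Python analogue of the void-fraction step only, with a placeholder intensity threshold rather than the authors' calibrated segmentation.

```python
# A simplified sketch: estimate the gas void fraction of a channel frame by
# thresholding a grayscale image (darker pixels assumed to be liquid water).
import numpy as np
from PIL import Image

frame = np.asarray(Image.open("channel_frame.png").convert("L"), dtype=float) / 255.0

liquid_mask = frame < 0.4                   # placeholder threshold for liquid pixels
void_fraction = 1.0 - liquid_mask.mean()    # fraction of channel area occupied by gas

print(f"gas void fraction: {void_fraction:.2f}")
```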
Procedia PDF Downloads 311
59 The Effect of Elapsed Time on the Cardiac Troponin-T Degradation and Its Utility as a Time Since Death Marker in Cases of Death Due to Burn
Authors: Sachil Kumar, Anoop K.Verma, Uma Shankar Singh
Abstract:
It is extremely important to study the postmortem interval in different causes of death, since it often greatly assists in forming an opinion on the exact cause of death following such an incident. With diligent knowledge of the interval, an expert can say that the cause of death is not feigned; hence, when evaluating such a death, there is a great need to have been at the crime scene before performing an autopsy on the body. The approach described here is based on analyzing the degradation or proteolysis of a cardiac protein in cases of death due to burns as a marker of time since death. Cardiac tissue samples were collected from (n=6) medico-legal autopsies (Department of Forensic Medicine and Toxicology, King George’s Medical University, Lucknow, India), after informed consent from the relatives, and post-mortem degradation was studied by incubation of the cardiac tissue at room temperature (20±2 °C) for different time periods (~7.30, 18.20, 30.30, 41.20, 41.40, 54.30, 65.20, and 88.40 hours). The cases included were subjects of burns without any prior history of disease who died in the hospital, and their exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE), and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using Gel Doc. As time postmortem progresses, the intact cTnT band degrades to fragments that are easily detected by the monoclonal antibodies. A decreasing trend in the level of cTnT (% of intact) was found as the PM hours increased. A significant difference was observed between <15 h and other PM hours (p<0.01). A significant difference in cTnT level (% of intact) was also observed between 16-25 h and 56-65 h and >75 h (p<0.01). Western blot data clearly showed the intact protein at 42 kDa, three major fragments (28 kDa, 30 kDa, 10 kDa), three additional minor fragments (12 kDa, 14 kDa, and 15 kDa), and the formation of low molecular weight fragments. Overall, both PMI and the cardiac tissue of the burned corpse had a statistically significant effect, where the greatest amount of protein breakdown was observed within the first 41.40 hours, after which the intact protein slowly disappears. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, then the percent intact cTnT shows a pseudo-first-order relationship when plotted against the time postmortem. A strong, significant correlation was found between cTnT and PM hours (r=0.87, p=0.0001). The regression analysis showed a good proportion of variability explained (R²=0.768). The post-mortem troponin-T fragmentation observed in this study reveals a sequential, time-dependent process with the potential for use as a predictor of PMI in cases of burning.
Keywords: burn, degradation, postmortem interval, troponin-T
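The pseudo-first-order interpretation mentioned above implies that ln(% intact cTnT) is linear in postmortem time, so a decay constant and an estimated PMI can be obtained by a simple least-squares fit, as in the sketch below; the densitometry values shown are illustrative only, not the study's data.

```python
# A minimal sketch of a pseudo-first-order fit: ln(% intact) vs postmortem hours.
import numpy as np

pm_hours = np.array([7.3, 18.2, 30.3, 41.2, 54.3, 65.2, 88.4])
percent_intact = np.array([92.0, 80.0, 65.0, 50.0, 38.0, 30.0, 18.0])  # placeholder values

# Linear least-squares fit of ln(% intact) vs time gives the decay constant k.
slope, intercept = np.polyfit(pm_hours, np.log(percent_intact), 1)
k = -slope
r = np.corrcoef(pm_hours, np.log(percent_intact))[0, 1]
print(f"decay constant k = {k:.4f} per hour, r = {r:.3f}")

# Estimated PMI for an observed percent-intact value:
observed = 45.0
print(f"estimated PMI: {(intercept - np.log(observed)) / k:.1f} hours")
```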
Procedia PDF Downloads 448
58 Origins: An Interpretive History of MMA Design Studio’s Exhibition for the 2023 Venice Biennale
Authors: Jonathan A. Noble
Abstract:
‘Origins’ is an exhibition designed and installed by MMA Design Studio at the 2023 Venice Biennale. The installation formed part of the ‘Dangerous Liaisons’ group exhibition at the Arsenale building. An immersive experience was created for those who visited, where video projection and the bodies of visitors interacted with the scene. Designed by South African architect Mphethi Morojele – founder and owner of MMA – the primary inspiration for ‘Origins’ was the recent discovery, by Professor Karim Sadr in 2019, of a substantial Tswana settlement. Situated in the present-day Suikerbosrand Nature Reserve, some 45 km south of Johannesburg, this precolonial city, named Kweneng, has been dated back to the fifteenth century. This remarkable discovery was achieved thanks to advanced aerial LiDAR scanning technology, which was used to capture the traces of Kweneng, spanning a terrain some 10 km long and 2 km wide. Discovered by light (LiDAR) and exhibited through light, Origins presents a simulated experience of Kweneng. The presentation of Kweneng was achieved primarily through video, with a circular projection onto the floor of an animated LiDAR data sequence, and onto the walls a filmed dance sequence choreographed to embody the architectural, spatial and symbolic significance of Kweneng. This paper documents the design process involved in the conceptualization, development and final realization of this noteworthy exhibition, with an elucidation of key social and cultural questions pertaining to precolonial heritage, reimagined histories and postcolonial identity. Periods of change and of social awakening sometimes spark an interest in questions of origin, of cultural lineage and belonging – which is certainly the case for contemporary, post-Apartheid South Africa. Researching this paper has required primary study of MMA Design Studio’s project archive, including various proposals and other design-related documents, conceptual design sketches, architectural drawings and photographs. This material is supported by the author's first-hand interviews with Morojele and others who were involved, especially with respect to the choreography of the interpretive dance, the LiDAR visualization techniques and the video production that informed the simulated, immersive experience at the exhibition. Presenting a ‘dangerous liaison’ between architecture and dance, Origins looks into the distant past to frame contemporary questions pertaining to intangible heritage, animism and embodiment through architecture and dance – considerations which are required “to survive the future”, says Morojele.
Keywords: architecture and dance, Kweneng, MMA design studio, origins, Venice Biennale
Procedia PDF Downloads 86
57 A Multifactorial Algorithm to Automate Screening of Drug-Induced Liver Injury Cases in Clinical and Post-Marketing Settings
Authors: Osman Turkoglu, Alvin Estilo, Ritu Gupta, Liliam Pineda-Salgado, Rajesh Pandey
Abstract:
Background: Hepatotoxicity can be linked to a variety of clinical symptoms and histopathological signs, posing a great challenge in the surveillance of suspected drug-induced liver injury (DILI) cases in the safety database. Additionally, the majority of such cases are rare, idiosyncratic, highly unpredictable, and tend to demonstrate unique individual susceptibility; these qualities, in turn, lead to a pharmacovigilance monitoring process that is often tedious and time-consuming. Objective: Develop a multifactorial algorithm to assist pharmacovigilance physicians in identifying high-risk hepatotoxicity cases associated with DILI from the sponsor’s safety database (Argus). Methods: Multifactorial selection criteria were established using Structured Query Language (SQL) and the TIBCO Spotfire® visualization tool, via a combination of word fragments, wildcard strings, and mathematical constructs, based on Hy’s law criteria and the pattern of injury (R-value). These criteria excluded non-eligible cases from monthly line listings mined from the Argus safety database. The capabilities and limitations of these criteria were verified by comparing a manual review of all monthly cases with system-generated monthly listings over six months. Results: On average, over a period of six months, the algorithm accurately identified 92% of DILI cases meeting the established criteria. The automated process easily compared liver enzyme elevations with baseline values, reducing the screening time to under 15 minutes, as opposed to the multiple hours exhausted using a cognitively laborious, manual process. Limitations of the algorithm include its inability to identify cases associated with non-standard laboratory tests, naming conventions, and/or incomplete or incorrectly entered laboratory values. Conclusions: The newly developed multifactorial algorithm proved to be extremely useful in detecting potential DILI cases, while heightening the vigilance of the drug safety department. Additionally, the application of this algorithm may be useful in identifying a potential signal for DILI in drugs not yet known to cause liver injury (e.g., drugs in the initial phases of development). This algorithm also carries the potential for universal application, due to its product-agnostic data and keyword mining features. Plans for the tool include improving it into a fully automated application, thereby completely eliminating the manual screening process.
Keywords: automation, drug-induced liver injury, pharmacovigilance, post-marketing
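The screening logic described above rests on the R-value and Hy's-law-style criteria, which can be expressed compactly as in the sketch below. The thresholds follow the commonly cited definitions, and the function and field names are assumptions rather than the actual Argus/SQL implementation.

```python
# A generic sketch of R-value and Hy's-law-style screening criteria.
def r_value(alt, alt_uln, alp, alp_uln):
    """Pattern of injury: (ALT/ULN) / (ALP/ULN)."""
    return (alt / alt_uln) / (alp / alp_uln)

def flags_potential_dili(alt, alt_uln, alp, alp_uln, bilirubin, bili_uln):
    r = r_value(alt, alt_uln, alp, alp_uln)
    # Commonly cited Hy's law screen: ALT >= 3x ULN, bilirubin >= 2x ULN,
    # without a predominant cholestatic picture (ALP < 2x ULN).
    hys_law = alt >= 3 * alt_uln and bilirubin >= 2 * bili_uln and alp < 2 * alp_uln
    pattern = "hepatocellular" if r >= 5 else "cholestatic" if r <= 2 else "mixed"
    return {"r_value": round(r, 1), "pattern": pattern, "hys_law_candidate": hys_law}

# Example case screened from a monthly line listing (illustrative values):
print(flags_potential_dili(alt=250, alt_uln=40, alp=100, alp_uln=120,
                           bilirubin=3.0, bili_uln=1.2))
```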
Procedia PDF Downloads 15056 CsPbBr₃@MOF-5-Based Single Drop Microextraction for in-situ Fluorescence Colorimetric Detection of Dechlorination Reaction
Authors: Yanxue Shang, Jingbin Zeng
Abstract:
Chlorobenzene homologues (CBHs) are a category of environmental pollutants that cannot be ignored. They can persist in the environment for long periods and are potentially carcinogenic. The traditional approach to degrading CBHs is dechlorination followed by sample preparation and analysis. This is not only time-consuming and laborious, but the detection and analysis steps also rely on large-scale instruments. As a result, rapid and low-cost detection cannot be achieved. Compared with traditional sensing methods, colorimetric sensing is simpler and more convenient. In recent years, chromaticity sensors based on fluorescence have attracted increasing attention. Compared with sensing methods based on changes in fluorescence intensity, changes in color gradient are easier to recognize with the naked eye. Accordingly, this work proposes the use of single drop microextraction (SDME) technology to solve the above problems. After the dechlorination reaction is completed, the organic droplet extracts Cl⁻ and simultaneously realizes fluorescence colorimetric sensing. This method integrates sample processing and visual in-situ detection, simplifying the detection process. As a fluorescence colorimetric sensor material, CsPbBr₃ was encapsulated in MOF-5 to construct a CsPbBr₃@MOF-5 fluorescence colorimetric composite. The fluorescence colorimetric sensor was then constructed by dispersing the composite in SDME organic droplets. When the Br⁻ in CsPbBr₃ exchanges with the Cl⁻ produced by the dechlorination reaction, it is converted into CsPbCl₃. The fluorescence color of the SDME droplet changes from green to blue emission, thereby enabling visual observation. Here, SDME enhances the concentration and enrichment of Cl⁻ and takes the place of sample pretreatment, while the fluorescence color change of CsPbBr₃@MOF-5 replaces instrument-based detection and achieves real-time, rapid detection. Owing to the adsorption ability of MOF-5, it not only improves the stability of CsPbBr₃ but also promotes the adsorption of Cl⁻, accelerating the exchange of Br⁻ and Cl⁻ in CsPbBr₃ and the detection of Cl⁻. The adsorption process was verified by density functional theory (DFT) calculations. This method exhibits exceptional linearity for Cl⁻ in the range of 10⁻² - 10⁻⁶ M (10000 μM - 1 μM) with a limit of detection of 10⁻⁷ M. The dechlorination reactions of different kinds of CBHs were also carried out with this method, and all showed satisfactory detection ability. The accuracy was also verified by gas chromatography (GC), and the SDME method developed in this work was found to be highly reliable. In summary, this in-situ visualization method for monitoring dechlorination reactions combines sample processing with fluorescence colorimetric sensing. The strategy researched herein therefore represents a promising method for the visual detection of dechlorination reactions and can be extended to applications in environments, chemical industries, and foods.Keywords: chlorobenzene homologues, colorimetric sensor, metal halide perovskite, metal-organic frameworks, single drop microextraction
Procedia PDF Downloads 14255 Overcoming Reading Barriers in an Inclusive Mathematics Classroom with Linguistic and Visual Support
Authors: A. Noll, J. Roth, M. Scholz
Abstract:
The importance of written language in a democratic society is non-controversial. Students with physical, learning, cognitive or developmental disabilities often have difficulties in understanding information which is presented in written language only. These students face obstacles in diverse domains. In order to reduce such barriers in educational as well as out-of-school areas, access to written information must be facilitated. Readability can be enhanced by linguistic simplifications such as the application of easy-to-read language. Easy-to-read language is intended to help people with disabilities to participate socially and politically in society. The guidelines state, for example, that only short, simple words should be used and that complex sentences should be avoided. So far, these guidelines have not been empirically validated. Another way to reduce reading barriers is the use of visual support, for example, symbols. A symbol conveys, in contrast to a photo, a single idea or concept. Little empirical data exists on the use of symbols to foster the readability of texts. Nevertheless, a positive influence can be assumed, e.g., because of the multimedia principle, which indicates that people learn better from words and pictures than from words alone. A qualitative interview and eye-tracking study conducted by the authors suggests that, besides the illustration of single words, the visualization of complete sentences may be helpful. Thus, the effect of photos that illustrate the content of complete sentences is also investigated in this study. This leads to the main research question: Does the use of easy-to-read language and/or enriching text with symbols or photos facilitate pupils’ comprehension of learning tasks? The sample consisted of students with learning difficulties (N = 144) and students without SEN (N = 159). The students worked on the tasks, which dealt with introducing fractions, individually. While experimental group 1 received a linguistically simplified version of the tasks, experimental group 2 worked with a variation which was linguistically simplified and in which, furthermore, the keywords of the tasks were visualized by symbols. Experimental group 3 worked on exercises which were simplified by easy-to-read language and in which the content of whole sentences was illustrated by photos. Experimental group 4 received a non-simplified version. The participants’ reading ability and IQ were assessed beforehand to build four comparable groups. There is a significant effect of the different settings on the students’ results, F(3,140) = 2.932; p = 0.036. A post-hoc analysis with multiple comparisons shows that this significance results from the difference between experimental groups 3 and 4. The students in the easy-to-read language plus photos group worked on the exercises significantly more successfully than the students in the group with no simplifications. Further results, which refer, among others, to the influence of the students’ reading ability, will be presented at ICERI 2018.Keywords: inclusive education, mathematics education, easy-to-read language, photos, symbols, special educational needs
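The reported group comparison can be illustrated with a minimal sketch of a one-way ANOVA followed by pairwise comparisons over four groups of task scores. The score vectors below are invented placeholders, not the study data, and the generic scipy workflow shown is an assumption, not the authors' exact procedure.

```python
# Minimal sketch of a one-way ANOVA with pairwise follow-up on invented data.
from itertools import combinations
from scipy import stats

groups = {
    "easy_to_read": [12, 14, 11, 15, 13, 12],
    "easy_plus_symbols": [13, 15, 14, 16, 12, 14],
    "easy_plus_photos": [16, 18, 17, 15, 19, 17],
    "not_simplified": [10, 11, 9, 12, 10, 11],
}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.3f}, p = {p_val:.3f}")

# Pairwise comparisons (a correction for multiple testing would be applied in practice)
for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    t, p = stats.ttest_ind(a, b)
    print(f"{name_a} vs {name_b}: p = {p:.3f}")
```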
Procedia PDF Downloads 15154 Preliminary Design, Production and Characterization of a Coral and Alginate Composite for Bone Engineering
Authors: Sthephanie A. Colmenares, Fabio A. Rojas, Pablo A. Arbeláez, Johann F. Osma, Diana Narvaez
Abstract:
The loss of functional tissue is a ubiquitous and expensive health care problem, with very limited treatment options for affected patients. The gold standard for large bone damage is cadaveric bone as an allograft with stainless steel support; however, this solution only applies to bones with simple morphologies (long bones), has a limited material supply and presents long-term problems regarding mechanical strength, integration, differentiation and induction of native bone tissue. Therefore, the fabrication of a scaffold with biological, physical and chemical properties similar to human bone, together with a fabrication method that allows morphology manipulation, is the focus of this investigation. Towards this goal, an alginate and coral matrix was created using two production techniques; the coral was chosen because of its chemical composition and the alginate due to its compatibility and mechanical properties. In order to construct the coral-alginate scaffold, the following methodology was employed: cleaning of the coral, its pulverization, scaffold fabrication and, finally, mechanical and biological characterization. The experimental design had two factors, milling method and proportion of alginate and coral, with two and three levels respectively, using 5 replicates. The coral was cleaned with sodium hypochlorite and hydrogen peroxide in an ultrasonic bath. It was then milled with both a horizontal mill and a ball mill in order to evaluate the morphology of the particles obtained. After this, using a combination of alginate and coral powder with water as a binder, scaffolds of 1 cm³ were printed with a Spectrum™ Z510 3D printer. This resulted in solid cubes that were resistant to small compression stresses. Then, using an ESQUIM DP-143 silicone mold, constructs used for the mechanical and biological assays were made. An INSTRON 2267® was used for the compression tests; the density and porosity were calculated with an analytical balance, and the biological tests were performed using cell cultures with VERO fibroblasts and scanning electron microscopy (SEM) as a visualization tool. The Young’s moduli were dependent on the pulverization method, the proportion of coral and alginate and the interaction between these factors. The maximum value was 5.4 MPa for the 50/50 proportion of alginate and horizontally milled coral. The biological assay showed more extracellular matrix in the scaffolds consisting of more alginate and less coral. The density and porosity were proportional to the amount of coral in the powder mix. These results showed that this composite has potential as a biomaterial, but its behavior is elastic with a small Young’s modulus, which leads to the conclusion that the application may not be for long bones but for tissues similar to cartilage.Keywords: alginate, biomaterial, bone engineering, coral, Porites astreoides, SEM
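As a minimal illustration of how a Young's modulus can be extracted from compression data of the kind reported, the sketch below fits the initial linear region of an assumed stress-strain curve. The values are invented for illustration and do not reproduce the scaffold measurements.

```python
# Minimal sketch: estimate Young's modulus as the slope of the elastic region
# of an invented compression stress-strain curve (not the scaffold data).
import numpy as np

strain = np.array([0.00, 0.01, 0.02, 0.03, 0.04, 0.05])   # dimensionless
stress = np.array([0.00, 0.055, 0.11, 0.16, 0.21, 0.27])  # MPa

# Linear fit over the elastic region: slope = Young's modulus (MPa)
E, intercept = np.polyfit(strain, stress, 1)
print(f"Estimated Young's modulus: {E:.2f} MPa")
```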
Procedia PDF Downloads 25353 A Quantitative Analysis of Rural to Urban Migration in Morocco
Authors: Donald Wright
Abstract:
The ultimate goal of this study is to reinvigorate the philosophical underpinnings of the study of urbanization with scientific data, with the aim of circumventing what seems an inevitable future clash between rural and urban populations. To that end, urban infrastructure must be sustainable economically, politically and ecologically over the course of several generations as cities continue to grow with the incorporation of climate refugees. Our research will provide data concerning the projected increase in population over the coming two decades in Morocco and the population shift from rural areas to urban centers during that period. As a result, urban infrastructure will need to be adapted, developed or built to meet the demand of future internal migrations from rural areas to urban centers in Morocco. This paper will also examine how past experiences of internally displaced people give insight into the challenges faced by future migrants and, beyond the gathering of data, how people react to internal migration. This study employs four different sets of research tools. First, a large part of this study is archival, which involves compiling the relevant literature on the topic and its complex history. This step also includes gathering data about migrations in Morocco from public data sources. Once the datasets are collected, the next part of the project involves populating the attribute fields and preprocessing the data to make it understandable and usable by machine learning algorithms. In tandem with the mathematical interpretation of the data and projected migrations, this study benefits from a theoretical understanding of the critical apparatus existing around urban development in the 20th and 21st centuries, which gives us insight into past infrastructure development and the rationale behind it. Once the data is ready to be analyzed, different machine learning algorithms will be tested (k-means clustering, support vector regression, random forest analysis) and the results compared for visualization of the data. The final computational part of this study involves analyzing the data and determining what we can learn from it. This paper helps us to understand future trends of population movements within and between regions of North Africa, which will have an impact on various sectors such as urban development, food distribution and water purification, not to mention the creation of public policy in the countries of this region. One of the strengths of this project is the multi-pronged and cross-disciplinary methodology applied to the research question, which enables an interchange of knowledge and experiences to facilitate innovative solutions to this complex problem. Multiple and diverse intersecting viewpoints allow an exchange of methodological models that provide fresh and informed interpretations of otherwise objective data.Keywords: climate change, machine learning, migration, Morocco, urban development
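A minimal sketch of the planned model comparison, assuming a synthetic stand-in for the migration dataset: regions are first clustered with k-means, then support vector regression and a random forest regressor are compared by cross-validation. Feature and target names are illustrative assumptions, not the project's actual attributes.

```python
# Minimal sketch of the clustering + regressor comparison on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))   # stand-ins for e.g. rainfall, employment, prices, services
y = X @ np.array([0.5, -1.0, 0.3, 0.8]) + rng.normal(scale=0.2, size=200)  # net out-migration

# Group regions into profiles, then compare regressors by 5-fold cross-validation
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Region cluster sizes:", np.bincount(clusters))

for name, model in [("SVR", SVR()), ("Random forest", RandomForestRegressor(random_state=0))]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {score:.2f}")
```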
Procedia PDF Downloads 14952 Real-Space Mapping of Surface Trap States in Cigse Nanocrystals Using 4D Electron Microscopy
Authors: Riya Bose, Ashok Bera, Manas R. Parida, Anirudhha Adhikari, Basamat S. Shaheen, Erkki Alarousu, Jingya Sun, Tom Wu, Osman M. Bakr, Omar F. Mohammed
Abstract:
This work reports the visualization of charge carrier dynamics on the surface of copper indium gallium selenide (CIGSe) nanocrystals in real space and time using four-dimensional scanning ultrafast electron microscopy (4D S-UEM) and correlates it with the optoelectronic properties of the nanocrystals. The surface of the nanocrystals plays a key role in controlling their applicability for light-emitting and light-harvesting purposes. Typically, for quaternary systems like CIGSe, which have many desirable attributes for optoelectronic applications, the relative abundance of surface trap states acting as non-radiative recombination centers for charge carriers remains a major bottleneck preventing further advancements and commercial exploitation of these nanocrystal devices. Although ultrafast spectroscopic techniques allow the presence of picosecond carrier trapping channels to be determined, because of the relatively large penetration depth of the laser beam, the information obtained comes mainly from the bulk of the nanocrystals. Selective mapping of such ultrafast dynamical processes on the surfaces of nanocrystals remains a key challenge, so far out of reach of purely optical, time-resolved laser probing techniques. In S-UEM, the optical pulse generated from a femtosecond (fs) laser system is used to generate electron packets from the tip of the scanning electron microscope, instead of the continuous electron beam used in the conventional setup. This pulse is synchronized with another optical excitation pulse that initiates carrier dynamics in the sample. The principle of S-UEM is to detect the secondary electrons (SEs) generated in the sample, which are emitted from the first few nanometers of the top surface. Constructed at different time delays between the optical and electron pulses, these SE images give direct and precise information about the carrier dynamics on the surface of the material of interest. In this work, we report selective mapping of the surface dynamics of CIGSe nanocrystals in real space and time by applying 4D S-UEM. We show that the trap states can be considerably passivated by ZnS shelling of the nanocrystals, and the carrier dynamics can be significantly slowed down. We also compare and discuss the S-UEM kinetics with the carrier dynamics obtained from conventional ultrafast time-resolved techniques. Additionally, a direct effect of the trap state removal can be observed in the enhanced photoresponse of the nanocrystals after shelling. Direct observation of surface dynamics will not only provide a profound understanding of the photo-physical mechanisms on nanocrystal surfaces but also make it possible to unlock their full potential for light-emitting and harvesting applications.Keywords: 4D scanning ultrafast microscopy, charge carrier dynamics, nanocrystals, optoelectronics, surface passivation, trap states
Procedia PDF Downloads 29351 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems
Authors: Georgi Y. Georgiev, Matthew Brouillet
Abstract:
This research applies the power of agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory: the self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP). This principle suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that, guided by the least action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that there will be an increase in system information, as more information is required to describe the developing structure. To test this, an agent-based model is developed simulating an ant colony's formation of a path between a food source and its nest. Utilizing the NetLogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and distribution of the ants within the simulated environment. External entropy production is also evaluated for information increase and efficiency improvements in the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants is observed. Notably, no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point. Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling the estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between the internal entropy decrease rate and the external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors like changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system. As such, it provides a foundation for further in-depth exploration of the complex behaviors of these systems and contributes to the development of more efficient self-organizing systems across various scientific fields.Keywords: complexity, self-organization, agent based modelling, efficiency
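A minimal sketch of the entropy bookkeeping described, under the assumption that ant positions exported from NetLogo are binned onto a grid: the Shannon entropy of the spatial distribution is computed at each tick, and the inflection point of the entropy-versus-time curve is located numerically. The ant positions below are random stand-ins, not simulation output.

```python
# Minimal sketch: Shannon entropy of an ant spatial distribution over time,
# with the inflection point taken as the tick of steepest entropy decrease.
import numpy as np

def spatial_entropy(xs, ys, bins=20, extent=(-16.0, 16.0)):
    """Shannon entropy (bits) of ant positions binned onto a square grid."""
    hist, _, _ = np.histogram2d(xs, ys, bins=bins, range=[extent, extent])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
entropy_series = []
for tick in range(200):
    # Synthetic stand-in: the ant cloud collapses onto a path around tick 100
    spread = 0.5 + 15.5 / (1.0 + np.exp((tick - 100) / 15.0))
    xs = rng.normal(0.0, spread, 500)
    ys = rng.normal(0.0, spread, 500)
    entropy_series.append(spatial_entropy(xs, ys))

entropy_series = np.array(entropy_series)
slope = np.gradient(entropy_series)
inflection_tick = int(np.argmin(slope))   # steepest entropy decrease = order transition
print(f"Inflection at tick {inflection_tick}; slope there: {slope[inflection_tick]:.3f} bits/tick")
print(f"Initial entropy {entropy_series[0]:.2f} bits, final entropy {entropy_series[-1]:.2f} bits")
```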
Procedia PDF Downloads 6750 Analyzing Concrete Structures by Using Laser Induced Breakdown Spectroscopy
Authors: Nina Sankat, Gerd Wilsch, Cassian Gottlieb, Steven Millar, Tobias Guenther
Abstract:
Laser-Induced Breakdown Spectroscopy (LIBS) is a combination of laser ablation and optical emission spectroscopy, which in principle can simultaneously analyze all elements on the periodic table. Materials can be analyzed in terms of chemical composition in a two-dimensional, time-efficient and minimally destructive manner. These advantages predestine LIBS as a monitoring technique in the field of civil engineering. The decreasing service life of concrete infrastructure is a continuously growing problem. A variety of intruding, harmful substances can damage the reinforcement or the concrete itself. To ensure a sufficient service life, regular monitoring of the structure is necessary. LIBS offers many applications for a successful examination of the condition of concrete structures. A selection of these applications includes the 2D evaluation of chlorine, sodium and sulfur concentrations, the identification of carbonation depths and the representation of the heterogeneity of concrete. LIBS obtains this information by using a pulsed laser emitting short pulses with energies of a few mJ, which is focused on the surface of the analyzed specimen; only optical access is needed. Because of the high power density (a few GW/cm²), a minimal amount of material is vaporized and transformed into a plasma. This plasma emits light depending on the chemical composition of the vaporized material. By analyzing the emitted light, information for every measurement point is gained. The chemical composition of the scanned area is visualized in a 2D map with spatial resolutions down to 0.1 mm x 0.1 mm. These 2D maps can be converted into classic depth profiles, as typically seen for the results of chloride concentration provided by chemical analysis such as potentiometric titration. However, the 2D visualization offers many advantages, such as illustrating chloride-carrying cracks, directly imaging the carbonation depth and, in general, allowing the separation of the aggregates from the cement paste. By calibrating the LIBS system, not only qualitative but also quantitative results can be obtained. These quantitative results can also be referenced to the cement paste, while excluding the aggregates. An additional advantage of LIBS is its mobility. By using the mobile system located at BAM, on-site measurements are feasible. The mobile LIBS system has already been used to obtain chloride, sodium and sulfur concentrations on site at parking decks, bridges and sewage treatment plants, even under difficult conditions such as ongoing construction work or rough weather. All these prospects make LIBS a promising method to secure the integrity of infrastructure in a sustainable manner.Keywords: concrete, damage assessment, harmful substances, LIBS
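As a minimal illustration of how a 2D LIBS element map can be converted into a classic depth profile, the sketch below averages an assumed chlorine map row by row, so each depth step yields one mean value. The array and pixel size are illustrative placeholders, not measured data.

```python
# Minimal sketch: collapse an invented 2D chlorine map into a depth profile.
import numpy as np

cl_map = np.array([            # rows = depth from exposed surface, columns = lateral position
    [0.9, 1.1, 1.0, 0.8],
    [0.6, 0.7, 0.5, 0.6],
    [0.3, 0.2, 0.4, 0.3],
    [0.1, 0.1, 0.0, 0.1],
])                             # e.g. wt.% chloride per pixel (invented values)

pixel_size_mm = 0.1
depth_profile = cl_map.mean(axis=1)   # average over the lateral direction
for i, c in enumerate(depth_profile):
    print(f"depth {i * pixel_size_mm:.1f} mm: {c:.2f} wt.%")
```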
Procedia PDF Downloads 17549 Dys-Regulation of Immune and Inflammatory Response in in vitro Fertilization Implantation Failure Patients under Ovarian Stimulation
Authors: Amruta D. S. Pathare, Indira Hinduja, Kusum Zaveri
Abstract:
Implantation failure (IF), even after good-quality embryo transfer (ET) into a physiologically normal endometrium, is the main obstacle in in vitro fertilization (IVF). Various microarray studies have been performed worldwide to elucidate the genes requisite for endometrial receptivity. These studies have included populations based on different phases of the menstrual cycle during natural and stimulated cycles in normally fertile women. Additionally, literature is available on recurrent implantation failure patients versus oocyte donors in the natural cycle. However, for the first time, we aim to study the genomics of endometrial receptivity in IF patients under controlled ovarian stimulation (COS), during which ET is generally practised in IVF. Endometrial gene expression profiles in IF patients (n=10) and oocyte donors (n=8) were compared during the window of implantation under COS by whole genome microarray (using the Illumina platform). Enrichment analysis of the microarray data was performed to determine dysregulated biological functions and pathways using the Database for Annotation, Visualization and Integrated Discovery, v6.8 (DAVID). Enrichment mapping was performed with the help of Cytoscape software. Microarray results were validated by real-time PCR. Localization of genes related to immune response (Progestagen-Associated Endometrial Protein (PAEP), Leukaemia Inhibitory Factor (LIF), Interleukin-6 Signal Transducer (IL6ST)) was detected by immunohistochemistry. The study revealed 418 genes downregulated and 519 genes upregulated in IF patients compared to healthy fertile controls. The gene ontology, pathway analysis and enrichment mapping revealed significant downregulation of the activation and regulation of immune and inflammatory responses in IF patients under COS. The lower expression of PAEP, LIF and IL6ST in cases compared to controls, shown by real-time PCR and immunohistochemistry, suggests the functional importance of these genes. The study proved useful in uncovering a probable reason for implantation failure, namely an imbalance of immune and inflammatory regulation in our group of subjects. Based on the present study findings, a panel of significantly dysregulated genes related to immune and inflammatory pathways needs to be further substantiated in larger cohorts in natural as well as stimulated cycles, after which these genes could be screened in IF patients during the window of implantation (WOI) before embryo transfer or any other immunological treatment. This would help to estimate the regulation of the specific immune response during the WOI in a patient. The appropriate treatment, either activation or suppression of the immune response, can then be attempted in IF patients to enhance the receptivity of the endometrium.Keywords: endometrial receptivity, immune and inflammatory response, gene expression microarray, window of implantation
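A minimal sketch of the kind of fold-change and significance filtering that typically precedes enrichment analysis in DAVID, assuming a log2 expression matrix for the two groups (n=10 patients, n=8 donors). The data are random stand-ins and the cut-offs are common defaults, not the study's actual criteria.

```python
# Minimal sketch: split probes into down- and up-regulated sets on random data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_genes = 1000
patients = rng.normal(8.0, 1.0, size=(n_genes, 10))   # log2 expression, IF patients (n=10)
controls = rng.normal(8.0, 1.0, size=(n_genes, 8))    # oocyte donors (n=8)

log2_fc = patients.mean(axis=1) - controls.mean(axis=1)
_, p_vals = stats.ttest_ind(patients, controls, axis=1)

down = np.where((log2_fc <= -1) & (p_vals < 0.05))[0]
up = np.where((log2_fc >= 1) & (p_vals < 0.05))[0]
print(f"{len(down)} probes down-regulated, {len(up)} up-regulated (on random stand-in data)")
```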
Procedia PDF Downloads 15348 The Problem of the Use of Learning Analytics in Distance Higher Education: An Analytical Study of the Open and Distance University System in Mexico
Authors: Ismene Ithai Bras-Ruiz
Abstract:
Learning Analytics (LA) is employed by universities not only as a tool but as a specialized field to support students and professors. However, not all academic programs apply LA with the same goal or use the same tools. In fact, LA comprises five main fields of study (academic analytics, action research, educational data mining, recommender systems, and personalized systems). These fields can help not only to inform academic authorities about the situation of a program, but also to detect at-risk students, professors with needs, or general problems. At the highest level, Artificial Intelligence techniques are applied to support learning practices. LA has adopted different techniques: statistics, ethnography, data visualization, machine learning, natural language processing, and data mining. It is expected that each academic program decides which field it wants to utilize on the basis of its academic interests, but also its capacities regarding professors, administrators, systems, logistics, data analysts, and academic goals. The Open and Distance University System (SUAYED in Spanish) of the National Autonomous University of Mexico (UNAM) has been working for forty years as an alternative to traditional programs; one of its main supports has been the use of new information and communication technologies (ICT). Today, UNAM has one of the largest networked higher education programs, with twenty-six academic programs in different faculties. This means that every faculty works with heterogeneous populations and academic problems. In this sense, every program has developed its own Learning Analytics techniques to improve academic issues. In this context, an investigation was carried out to assess the application of LA in the academic programs of the different faculties. The premise of the study was that not all the faculties have utilized advanced LA techniques and that they probably do not know which field of study is closest to their program goals. Consequently, not all the programs know about LA; but this does not mean they do not work with LA in a veiled, or less explicit, sense. It is very important to know the degree of knowledge about LA for two reasons: 1) it allows the administration's work to improve the quality of teaching to be appreciated, and 2) it shows whether it is possible to improve with other LA techniques. For this purpose, three instruments were designed to determine the experience and knowledge of LA. These were applied to ten faculty coordinators and their personnel; thirty members were consulted (academic secretary, systems manager or data analyst, and coordinator of the program). The final report showed that almost all the programs work with basic statistical tools and techniques. This helps the administration only to know what is happening inside the academic program, but they are not ready to move up to the next level, which means applying Artificial Intelligence or Recommender Systems to reach a personalized learning system. This situation is not related to knowledge of LA, but to the clarity of the long-term goals.Keywords: academic improvements, analytical techniques, learning analytics, personnel expertise
Procedia PDF Downloads 12747 Thermal Stress and Computational Fluid Dynamics Analysis of Coatings for High-Temperature Corrosion
Authors: Ali Kadir, O. Anwar Beg
Abstract:
Thermal barrier coatings are among the most popular methods for providing corrosion protection in high-temperature applications, including aircraft engine systems, external spacecraft structures, rocket chambers, etc. Many different materials are available for such coatings, of which ceramics generally perform the best. Motivated by these applications, the current investigation presents detailed finite element simulations of coating stress analysis for a 3-dimensional, 3-layered model of a test sample representing a typical gas turbine component scenario. Structural steel is selected for the main inner layer, titanium (Ti) alloy for the middle layer and silicon carbide (SiC) for the outermost layer. The model dimensions are 20 mm (width), 10 mm (height) and three 1 mm deep layers. ANSYS software is employed to conduct three types of analysis: static structural analysis, thermal stress analysis and computational fluid dynamics erosion/corrosion analysis (via ANSYS FLUENT). The specified geometry, which corresponds exactly to the corrosion test samples, is discretized using a body-sizing meshing approach, comprising mainly tetrahedron cells. Refinements were concentrated at the connection points between the layers to shift the focus towards the static effects dissipated between them. A detailed grid independence study is conducted to confirm the accuracy of the selected mesh densities. To recreate gas turbine scenarios in the stress analysis simulations, static loads of up to 1000 N and temperatures of up to 1000 K are imposed. The default solver was used to set the controls for the simulation, with one side of the model set as a fixed support while the opposite side was subjected to a tabular force of 500 and 1000 Newtons. Equivalent elastic strain, total deformation, equivalent stress and strain energy were computed for all cases. Each analysis was duplicated twice, removing one of the layers each time, to allow testing of the static and thermal effects with each of the coatings. An ANSYS FLUENT simulation was conducted to study the effect of corrosion on the model under similar thermal conditions. The momentum and energy equations were solved, and the viscous heating option was applied to better represent the thermal physics of heat transfer between the layers of the structure. A Discrete Phase Model (DPM) in ANSYS FLUENT was employed, which allows the injection of continuous, uniform air particles onto the model, thereby enabling the calculation of the corrosion factor caused by hot air injection (particles prescribed a velocity of 5 m/s and a temperature of 1273.15 K). Extensive visualization of results is provided. The simulations reveal interesting features associated with the coating response to realistic gas turbine loading conditions, including significantly different stress concentrations for different coatings.Keywords: thermal coating, corrosion, ANSYS FEA, CFD
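As a back-of-the-envelope companion to the finite element model, the sketch below treats the three 1 mm layers as a 1D series thermal resistance network and estimates the steady-state temperature at each interface. The conductivities are nominal handbook-style assumptions, and the boundary temperatures merely echo the hot-air condition; this is not the ANSYS set-up itself.

```python
# Minimal 1D steady-state conduction sketch through the SiC / Ti alloy / steel stack.
layers = [                      # (name, thickness in m, assumed conductivity in W/m·K)
    ("SiC coating", 1e-3, 120.0),
    ("Ti alloy",    1e-3, 7.0),
    ("Steel",       1e-3, 50.0),
]
T_hot, T_cold = 1273.15, 300.0   # K, outer (hot gas) and inner surface temperatures (assumed)

area = 1.0                                   # per unit area (m^2)
resistances = [t / (k * area) for _, t, k in layers]
q = (T_hot - T_cold) / sum(resistances)      # heat flux, W/m^2

T = T_hot
print(f"Heat flux: {q / 1e3:.0f} kW/m^2")
for (name, _, _), R in zip(layers, resistances):
    T -= q * R                               # temperature drop across this layer
    print(f"Temperature after {name}: {T:.0f} K")
```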
Procedia PDF Downloads 13446 Platform Virtual for Joint Amplitude Measurement Based in MEMS
Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana, Andres F. Ruiz-Olaya, Juan C. Alvarez
Abstract:
Motion capture (MC) is the construction of a precise and accurate digital representation of a real motion. Such systems have been used in recent years in a wide range of applications, from film special effects and animation, interactive entertainment and medicine, to high-level competitive sport, where maximum performance and low injury risk during training and competition are sought. This paper presents an inertial and magnetic sensor based technological platform, intended for joint amplitude monitoring and telerehabilitation processes, designed around an efficient compromise between cost and technical performance. The particularities of our platform offer high social impact possibilities by making telerehabilitation accessible to large population sectors in marginal socio-economic conditions, especially in underdeveloped countries where, in contrast to developed countries, specialists are scarce and high technology is unavailable or non-existent. This platform integrates high-resolution, low-cost inertial and magnetic sensors with adequate user interfaces and communication protocols to provide a diagnosis service over the web or other available communication networks. The amplitude information is generated by the sensors and then transferred to a computing device with adequate interfaces to make it accessible to inexperienced personnel, providing high social value. Amplitude measurements of the virtual platform system showed a good fit to the respective reference system. Analyzing the robotic arm results (estimation error RMSE 1 = 2.12° and estimation error RMSE 2 = 2.28°), it can be observed that during arm motion in either direction the estimation error is negligible; in fact, error appears only during reversal of direction, which can easily be explained by the nature of inertial sensors and their relation to acceleration. Inertial sensors present a time-constant delay which acts as a first-order filter, attenuating signals at large acceleration values, as is the case for a change of direction in motion. A damped response of the virtual platform can be seen in other images, where the error analysis shows that at maximum amplitude an underestimation of amplitude is present, whereas at minimum amplitude an overestimation is observed. This work presents and describes the virtual platform as a motion capture system suitable for telerehabilitation, with the cost-quality and precision-accessibility relations optimized. These particular characteristics, achieved by efficiently using state-of-the-art, accessible generic technology in sensors and hardware, together with adequate software for capture, transmission, analysis and visualization, provide the capacity to offer good telerehabilitation services, reaching large, more or less marginal populations where technologies and specialists are not available but basic communication networks are accessible.Keywords: inertial sensors, joint amplitude measurement, MEMS, telerehabilitation
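One common way such a platform can estimate a joint angle from inertial data is a complementary filter that blends the integrated gyroscope rate with the accelerometer-derived inclination; the sketch below implements this and reports an RMSE against a reference trajectory, mirroring the kind of comparison made with the robotic arm. The fusion method, signal models and noise levels are assumptions for illustration, not the platform's actual algorithm.

```python
# Minimal sketch: complementary-filter joint angle estimation and RMSE check
# against a synthetic reference trajectory (assumed signals, not platform data).
import numpy as np

def complementary_filter(gyro_dps, accel_angle_deg, dt, alpha=0.98):
    """Fuse gyroscope rate (deg/s) with accelerometer-derived angle (deg)."""
    angle = accel_angle_deg[0]
    estimates = []
    for w, a in zip(gyro_dps, accel_angle_deg):
        angle = alpha * (angle + w * dt) + (1 - alpha) * a
        estimates.append(angle)
    return np.array(estimates)

# Synthetic 5 s flexion-extension cycle sampled at 100 Hz
rng = np.random.default_rng(0)
dt, t = 0.01, np.arange(0, 5, 0.01)
true_angle = 45 * np.sin(2 * np.pi * 0.5 * t)                         # reference, deg
gyro = np.gradient(true_angle, dt) + rng.normal(0, 2, t.size)         # noisy rate, deg/s
accel_angle = true_angle + rng.normal(0, 3, t.size)                   # noisy inclination, deg

estimate = complementary_filter(gyro, accel_angle, dt)
rmse = np.sqrt(np.mean((estimate - true_angle) ** 2))
print(f"RMSE vs reference: {rmse:.2f} deg")
```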
Procedia PDF Downloads 259