Search results for: cadastral mapping
377 Beyond Informality: Relocation from a Traditional Village 'Mit Oqbah' to Masaken El-Barageel and the Role of ‘Urf in Governing Built Environment, Egypt
Authors: Sarah Eldefrawi, Maike Didero
Abstract:
In Egypt, residents’ urban interventions (colloquially named A’hali’s interventions) are routinely treated by the government, scholars, and the media as encroachment (taeadiyat), chaotic (a’shwa’i), or informal (gheir mokanan) practices. This paper argues that these interventions cannot simply be described as encroachment on public space or chaotic behaviour. We claim that they are rooted in traditional governing methods (‘Urf) that governed Arab cities for many decades. Through an in-depth field study conducted in a public housing project in the city of Giza called 'Masaken El-Barageel', we traced the urban transformations demonstrated in private and public spaces. To understand these transformations, we used a wide range of qualitative research methods, such as semi-guided and informal interviews, observations, and mapping of the built environment and the newly added interventions. The study was further strengthened by the authors' contributions in studying nine sectors developed by Ahali in six districts of Greater Cairo. The results indicate that a culturally and socially sensitive framework has to relate individual actions to the spatial and social structures as well as to culturally transmitted views and meanings connected with ‘Urf. The study traced three crucial principles in ‘Urf that influenced these interventions: the elimination of harm (Al-Marafiq wa Man’ al-Darar), the appropriation of space (Haqq el-Intefa’), and public interest (maslaha a’ma). Our findings open the discussion on the (il)legitimacy of a’hali governing methods in contemporary cities.
Keywords: Urf, urban governance, public space, public housing, encroachments, chaotic, Egyptian cities
Procedia PDF Downloads 134
376 A Global Perspective on Neuropsychology: The Multicultural Neuropsychological Scale
Authors: Tünde Tifordiána Simonyi, Tímea Harmath-Tánczos
Abstract:
The primary aim of the current research is to present the significance of a multicultural perspective in clinical neuropsychology and to introduce the test battery of the Multicultural Neuropsychological Scale (MUNS). The MUNS screening tool involves stimuli common to most cultures in the world. The test battery measures general cognitive functioning, focusing on five cognitive domains (memory, executive function, language, visual construction, and attention) tested with seven subtests that can be used across a wide age range (15-89) and with participants of both lower and higher education. The scale is sensitive to mild cognitive impairments. Our study presents the first results with the Hungarian translation of MUNS on a healthy sample. The education range was 4-25 years of schooling, and the Hungarian sample was recruited by snowball sampling. Within the investigated population (N=151), cognitive performance follows an inverted U-shaped curve across age, with a high load on memory. Age, reading fluency, and years of education significantly influenced test scores. The sample was tested twice within a 14-49 day interval to determine test-retest reliability, which proved satisfactory. Besides the findings of the study and the introduction of the test battery, the article also highlights its potential benefits for both research and clinical neuropsychological practice. The importance of adapting, validating, and standardizing the test in languages other than Hungarian is also stressed. This test battery could serve as a helpful tool for mapping general cognitive functions in psychiatric and neurological disorders regardless of the cultural background of the patients.
Keywords: general cognitive functioning, multicultural, MUNS, neuropsychological test battery
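A minimal sketch of how the reported test-retest reliability could be checked, assuming illustrative paired session scores and a simple Pearson correlation (the study may well have used an intraclass correlation instead); all values below are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical total MUNS scores for the same participants at two
# sessions 14-49 days apart (values are illustrative only).
session1 = np.array([92, 85, 78, 88, 95, 70, 81, 89])
session2 = np.array([90, 86, 80, 87, 93, 72, 79, 91])

# Pearson correlation as a simple test-retest reliability estimate.
r, p = stats.pearsonr(session1, session2)
print(f"test-retest r = {r:.2f} (p = {p:.3f})")
```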
Procedia PDF Downloads 109
375 A Literature Review on the Effect of Financial Knowledge toward Corporate Growth: The Important Role of Financial Risk Attitude
Authors: Risna Wijayanti, Sumiati, Hanif Iswari
Abstract:
This study aims to analyze the role of financial risk attitude as a mediator between financial knowledge and business growth. The ability of human resources to manage capital (financial literacy) can be a major milestone for a company's business to grow and build its competitive advantage. This study analyzed the important role of financial risk attitude in bringing financial knowledge to bear on corporate growth. Many discussions have argued that financial knowledge is one of the main abilities of corporate managers in determining the success of managing a company. However, other scholars have countered that financial knowledge does not have a significant influence on corporate growth. This study used a literature review to analyze whether another variable can mediate the effect of financial knowledge on corporate growth. Research mapping was conducted to analyze the concept of risk tolerance. This concept relates to people's risk-aversion effects when making decisions under risk and to the role of financial knowledge in changes in financial income. Understanding and managing risks and investments are complicated, in particular for corporate managers, who are always expected to maintain corporate growth. Substantial financial knowledge is needed to identify and obtain accurate information for corporate financial decision-making. By reviewing the literature, this study hypothesized that the financial knowledge of corporate managers would be meaningless without the managers' courage to bear risks in pursuing favorable business opportunities. Therefore, the level of risk aversion of corporate managers will determine corporate action, which is a reflection of corporate-level investment behavior leading to success or failure in achieving the company's expected growth rate.
Keywords: financial knowledge, financial risk attitude, corporate growth, risk tolerance
Procedia PDF Downloads 129
374 Analysis and Mapping of Climate and Spring Yield in Tanahun District, Nepal
Authors: Resham Lal Phuldel
Abstract:
This study is based on a bilateral development cooperation project funded by the governments of Nepal and Finland. The first phase of the project was completed in August 2012, and Phase II started in September 2013 and will end in September 2018. The project strengthens the capacity of local governments in 14 districts to deliver water supply, sanitation, and hygiene services in the Western and Mid-Western development regions of Nepal. In recent years, several spring sources across the country have dried out or are slowly decreasing in yield due to the changing character of rainfall, increasing evaporative losses, and man-made causes such as land use change and infrastructure development. To sustain the hill communities, the sources have to be able to provide sufficient water to serve the population, either on their own or in conjunction with other sources. All water sources in Tanahun district were measured in 2004 and located with GPS. Phase II repeated the exercise to see changes in the district: 3,320 water sources were identified in 2004, and altogether 4,223 sources, including new ones, were identified and measured in 2014. Between 2004 and 2014, the average yield of point sources fell by 50%. Similarly, reductions of 21.6% and 34% in average yield were found for spring and stream water sources, respectively. Rainfall records from 2002 to 2013 show erratic rainfall in the district. The monsoon peak month is not consistent, and the trend shows a decrease in annual rainfall of 16.7 mm/year. Further, the temperature trend between 2002 and 2013 shows warming of +0.041 °C/year.
Keywords: climate change, rainfall, source discharge, water sources
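The headline figures lend themselves to a quick numeric check; the sketch below, with hypothetical yield values and a synthetic rainfall series, shows how the percentage reduction and the linear rainfall trend would be computed.

```python
import numpy as np

# Illustrative point-source yields matching the reported 50% reduction.
point_yield_2004, point_yield_2014 = 0.20, 0.10  # l/s, hypothetical values
reduction = (point_yield_2004 - point_yield_2014) / point_yield_2004 * 100
print(f"point-source yield reduction: {reduction:.0f}%")

# Linear trend of annual rainfall for 2002-2013 (synthetic series built
# around the reported -16.7 mm/year slope, plus noise).
years = np.arange(2002, 2014)
rainfall = 2200 - 16.7 * (years - 2002) + np.random.default_rng(0).normal(0, 60, years.size)
slope, intercept = np.polyfit(years, rainfall, 1)
print(f"rainfall trend: {slope:.1f} mm/year")
```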
Procedia PDF Downloads 282
373 Long Term Examination of the Profitability Estimation Focused on Benefits
Authors: Stephan Printz, Kristina Lahl, René Vossen, Sabina Jeschke
Abstract:
Strategic investment decisions are characterized by high innovation potential and long-term effects on the competitiveness of enterprises. Due to the uncertainty and risks involved in this complex decision-making process, the need arises for well-structured support activities. A method that considers both cost and long-term added value is cost-benefit effectiveness estimation. One such method is the “profitability estimation focused on benefits” (PEFB) method developed at the Institute of Management Cybernetics at RWTH Aachen University. The method copes with the challenges associated with strategic investment decisions by integrating long-term non-monetary aspects whilst also mapping the chronological sequence of an investment within the organization’s target system. Thus, the method is characterized as a holistic approach to evaluating the costs and benefits of an investment. This participation-oriented method was applied to business environments in many workshops. The results of the workshops are a library of more than 96 cost aspects, as well as 122 benefit aspects. These aspects are preprocessed and comparatively analyzed with regard to their alignment to a series of risk levels. For the first time, an accumulation and a distribution of cost and benefit aspects regarding their impact and probability of occurrence are given. The results give evidence that the PEFB method combines precise measures of financial accounting with the incorporation of benefits. Finally, the results constitute the basis for using information technology and data science for decision support when applying the PEFB method.
Keywords: cost-benefit analysis, multi-criteria decision, profitability estimation focused on benefits, risk and uncertainty analysis
Procedia PDF Downloads 445
372 Determination of Optimum Parameters for Thermal Stress Distribution in Composite Plate Containing a Triangular Cutout by Optimization Method
Authors: Mohammad Hossein Bayati Chaleshtari, Hadi Khoramishad
Abstract:
Minimizing the stress concentration around a triangular cutout in an infinite perforated plate subjected to a uniform heat flux, which induces thermal stresses, is an important consideration in engineering design. Furthermore, understanding the parameters that affect stress concentration and selecting them properly enables the designer to achieve a reliable design. In thermal stress analysis, the parameters affecting the stress distribution around a cutout in orthotropic materials include the fiber angle, flux angle, bluntness, and rotation angle of the cutout. This paper examines the effect of these parameters on the thermal stress analysis of infinite perforated plates with a central triangular cutout. The least thermal stress around the triangular cutout was sought using a novel swarm intelligence optimization technique called the dragonfly optimizer, inspired by the living habits and hunting behavior of dragonflies in nature. In this study, using two-dimensional thermoelastic theory and based on Lekhnitskii's complex variable technique, the stress analysis of an orthotropic infinite plate with a circular cutout under a uniform heat flux was extended to a plate containing a quasi-triangular cutout in the thermal steady-state condition. To achieve this goal, a conformal mapping function was used to map the infinite plate containing a quasi-triangular cutout onto the outside of a unit circle. The plate is under uniform heat flux at infinity, and Neumann boundary conditions and a thermally insulated condition at the edge of the cutout were considered.
Keywords: infinite perforated plate, complex variable method, thermal stress, optimization method
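The abstract does not spell out the mapping function; a commonly used form for a quasi-triangular cutout, given here as a plausible sketch rather than the authors' exact expression, maps the exterior of the unit circle onto the exterior of the hole:

```latex
z = \omega(\zeta) = R\, e^{i\beta} \left( \zeta + \frac{c}{\zeta^{2}} \right), \qquad |\zeta| \ge 1,
```

where $R$ sets the cutout size, $\beta$ is the rotation angle, and $c$ (with $0 \le c < 1/2$) controls the bluntness: $c = 0$ recovers the circular cutout, while larger $c$ sharpens the three corners.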
Procedia PDF Downloads 147
371 Evaluation of Railway Network and Service Performance Based on Transportation Sustainability in DKI Jakarta
Authors: Nur Bella Octoria Bella, Ayomi Dita Rarasati
Abstract:
DKI Jakarta is Indonesia's capital city, with the 10th highest congestion rate in the world based on the 2019 traffic index. In addition, the 2019 World Air Quality Report put DKI Jakarta's air pollutant concentration at 49.4 µg/m³, the 5th highest in the world. In today's urban areas the mobility rate is high, and efficiency in the sustainability assessment of transport infrastructure development is needed; this efficiency is the key to sustainable infrastructure development. DKI Jakarta is currently constructing railway infrastructure to support its transportation system. The question that arises is whether the railway infrastructure networks and services in DKI Jakarta have been planned on the basis of sustainability factors. Therefore, the aim of this research is to evaluate the performance of railway infrastructure networks and services in DKI Jakarta with regard to the key factors of railway sustainability. This evaluation will then be used to build a railway sustainability assessment framework and to offer alternative solutions for improving railway transportation sustainability in DKI Jakarta. First, a detailed review was conducted of papers focusing on railway sustainability factors and improvements to railway sustainability, published in scientific journals between 2011 and 2021. The sustainability factors drawn from the literature review are then used to assess the current condition of railway infrastructure in DKI Jakarta. The evaluation uses a Likert-scale questionnaire directed at railway transportation experts and passengers. Furthermore, the mapping and evaluation ratings based on the sustainability factors are compared and weighted using the Analytic Hierarchy Process (AHP). This research reports the impact of network performance and service ratings on the sustainability aspect and on passengers' willingness to use rail public transportation in DKI Jakarta.
Keywords: transportation sustainability, railway transportation, sustainability, DKI Jakarta
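As a sketch of the AHP step, the snippet below derives priority weights from a small pairwise comparison matrix via the principal eigenvector and checks consistency; the matrix values and the three example factors are hypothetical, not taken from the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three sustainability
# factors (e.g., safety, accessibility, environmental impact).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# The principal eigenvector gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio (random index RI = 0.58 for n = 3, Saaty's table).
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 0.58
print("weights:", np.round(w, 3), "CR:", round(CR, 3))
```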
Procedia PDF Downloads 163
370 Using Visualization Techniques to Support Common Clinical Tasks in Clinical Documentation
Authors: Jonah Kenei, Elisha Opiyo
Abstract:
Electronic health records (EHRs), as repositories of patient information, are nowadays the most commonly used technology to record, store, and review patient clinical records and perform other clinical tasks. However, the accurate identification and retrieval of relevant information from clinical records is difficult due to the unstructured nature of clinical documents, characterized in particular by a lack of clear structure. Medical practice therefore faces a challenge from the rapid growth of health information in EHRs, mostly in narrative text form, and it is becoming important to manage the growing amount of data for a single patient effectively. Consequently, there is a need to visualize EHRs in a way that aids physicians in clinical tasks and medical decision-making. Applying text visualization techniques to unstructured clinical narrative texts is a new area of research that aims to provide better information extraction and retrieval to support clinical decision-making in scenarios where the volume of generated data continues to grow. Clinical datasets in EHRs offer great potential for training accurate statistical models to classify facets of information, which can then be used to improve patient care and outcomes. However, in many clinical note datasets, the unstructured nature of clinical texts is a common problem. This paper examines the issue of taking raw clinical texts and mapping them into meaningful structures that can support healthcare professionals working with narrative texts. Our work is the result of a collaborative design process aided by empirical data collected through formal usability testing.
Keywords: classification, electronic health records, narrative texts, visualization
Procedia PDF Downloads 118
369 The Design Method of Artificial Intelligence Learning Picture: A Case Study of DCAI's New Teaching
Authors: Weichen Chang
Abstract:
To create a guided teaching method for AI generative drawing design, this paper develops a set of teaching models for AI generative drawing (DCAI), combining learning modes such as problem-solving, thematic inquiry, phenomenon-based learning, task-oriented learning, and DFC. Through guided learning programs and content based on an information-security AI picture book, participatory action research (PAR) and interviews were applied to explore how the dual knowledge of Context and ChatGPT (DCAI) can guide the development of students' AI learning skills. In the interviews, the students highlighted five main learning outcomes (self-study, critical thinking, knowledge generation, cognitive development, and presentation of work) as well as the challenges of implementing the model. Through the use of DCAI, students enhance their shared awareness of generative mapping analysis and group cooperation, and they gain knowledge that can enhance AI capabilities in DCAI inquiry and in their future lives. The conclusions are: (1) good use of DCAI can assist students in exploring the value of their knowledge through the power of stories and in finding the meaning of knowledge communication; (2) the integrity and coherence of a story can be analyzed through its context so as to achieve the tension of ‘starting and ending’; and (3) ChatGPT can be used to extract inspiration, arrange story compositions, and craft prompts that communicate with people and convey emotions. Therefore, new methods of knowledge construction will be among the effective approaches to AI learning in the face of artificial intelligence, providing new thinking and new expression for interdisciplinary design and design-education practice.
Keywords: artificial intelligence, task-oriented, contextualization, design education
Procedia PDF Downloads 29
368 EcoMush: Mapping Sustainable Mushroom Production in Bangladesh
Authors: A. A. Sadia, A. Emdad, E. Hossain
Abstract:
The increasing importance of mushrooms as a source of nutrition and health benefits, and even as a potential cancer treatment, has raised awareness of the impact of climate-sensitive variables on their cultivation. Factors like temperature, relative humidity, air quality, and substrate composition play pivotal roles in shaping mushroom growth, especially in Bangladesh. Oyster mushrooms, a commonly cultivated variety in this region, are particularly vulnerable to climate fluctuations. This research explores the climatic dynamics affecting oyster mushroom cultivation, presents an approach to address these challenges, and provides tangible solutions to fortify the agro-economy, ensure food security, and promote the sustainability of this crucial food source. Using climate and production data, the study evaluates the performance of three clustering algorithms (K-Means, OPTICS, and BIRCH) based on various quality metrics. While each algorithm demonstrates specific strengths, the findings provide insights into their effectiveness for this specific dataset. The results yield essential information, pinpointing an optimal temperature range of 13°C-22°C, an unfavorable temperature threshold of 28°C and above, and an ideal relative humidity range of 75-85%, along with the suitable production regions in three different seasons: Kharif-1, Kharif-2, and Robi. Additionally, a user-friendly web application is developed to support mushroom farmers in making well-informed decisions about their cultivation practices. This platform offers valuable insights into the most advantageous periods for oyster mushroom farming, with the overarching goal of enhancing the efficiency and profitability of mushroom farming.
Keywords: climate variability, mushroom cultivation, clustering techniques, food security, sustainability, web application
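A minimal sketch of the clustering comparison, assuming synthetic two-feature climate data (temperature and relative humidity) and the silhouette score as the quality metric; the study's actual features and metrics may differ.

```python
import numpy as np
from sklearn.cluster import KMeans, OPTICS, Birch
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Hypothetical climate samples: [temperature (C), relative humidity (%)].
rng = np.random.default_rng(42)
X = np.vstack([rng.normal([18, 80], [2, 3], (50, 2)),   # favourable zone
               rng.normal([29, 65], [2, 4], (50, 2))])  # unfavourable zone
X_scaled = StandardScaler().fit_transform(X)

for model in (KMeans(n_clusters=2, n_init=10, random_state=0),
              OPTICS(min_samples=10),
              Birch(n_clusters=2)):
    labels = model.fit_predict(X_scaled)
    mask = labels != -1  # OPTICS marks noise points with -1
    n_found = len(set(labels[mask]))
    if n_found > 1:
        score = silhouette_score(X_scaled[mask], labels[mask])
        print(f"{type(model).__name__}: {n_found} clusters, silhouette = {score:.2f}")
    else:
        print(f"{type(model).__name__}: too few clusters for a silhouette score")
```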
Procedia PDF Downloads 68
367 Roof and Road Network Detection through Object Oriented SVM Approach Using Low Density LiDAR and Optical Imagery in Misamis Oriental, Philippines
Authors: Jigg L. Pelayo, Ricardo G. Villar, Einstine M. Opiso
Abstract:
Advances in aerial laser scanning in the Philippines have opened up entire fields of research in remote sensing and machine vision that aspire to provide accurate and timely information for the government and the public. Rapid mapping of polygonal roads and roof boundaries is one such application, serving disaster risk reduction, mitigation, and development. The study uses low-density LiDAR data and high-resolution aerial imagery in an object-oriented approach, applying machine learning to the data analysis in order to minimize the constraints of feature extraction. Since separating one class from another occurs in distinct regions of a multi-dimensional feature space, non-trivial computation for fitting the distribution was implemented to formulate the learned ideal hyperplane. Customized hybrid features were generated and then used to improve the classifier findings. Supplemental algorithms for filtering and reshaping object features were developed in the rule set to enhance the final product. The methodology offers several advantages in terms of simplicity, applicability, and process transferability. The algorithm was tested in random locations across Misamis Oriental province in the Philippines, demonstrating robust performance with an overall accuracy greater than 89% and potential for semi-automation. The extracted results will become a vital input for decision makers, urban planners, and even the commercial sector in various assessment processes.
Keywords: feature extraction, machine learning, OBIA, remote sensing
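As an illustration of the classification core, the sketch below trains an RBF-kernel SVM on hypothetical per-object features (height statistics, NDVI, rectangularity) with road/roof/other labels; the real feature set, labels, and rule-set post-processing are specific to the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-object features derived from LiDAR + imagery:
# [mean height (m), height std, NDVI, rectangularity].
rng = np.random.default_rng(7)
roads = np.column_stack([rng.normal(0.2, 0.1, 60), rng.normal(0.05, 0.02, 60),
                         rng.normal(0.1, 0.05, 60), rng.normal(0.7, 0.1, 60)])
roofs = np.column_stack([rng.normal(5.0, 1.5, 60), rng.normal(0.3, 0.1, 60),
                         rng.normal(0.1, 0.05, 60), rng.normal(0.9, 0.05, 60)])
other = np.column_stack([rng.normal(2.0, 2.0, 60), rng.normal(1.0, 0.4, 60),
                         rng.normal(0.6, 0.15, 60), rng.normal(0.4, 0.15, 60)])
X = np.vstack([roads, roofs, other])
y = np.repeat([0, 1, 2], 60)  # 0 = road, 1 = roof, 2 = other

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"overall accuracy: {clf.score(X_te, y_te):.2%}")
```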
Procedia PDF Downloads 362
366 Employing Visual Culture to Enhance Initial Adult Maltese Language Acquisition
Authors: Jacqueline Żammit
Abstract:
Recent research indicates that the utilization of right-brain strategies holds significant implications for the acquisition of language skills. Nevertheless, the utilization of visual culture as a means to stimulate these strategies and amplify language retention among adults engaged in second language (L2) learning remains a relatively unexplored area. This investigation delves into the impact of visual culture on activating right-brain processes during the initial stages of language acquisition, particularly in the context of teaching Maltese as a second language (ML2) to adult learners. Employing a qualitative research approach, this study convened a focus group comprising twenty-seven educators to examine a range of visual culture techniques integrated into language instruction. The collected data were subjected to thematic analysis using NVivo software. The findings underscore a variety of impactful visual culture techniques, encompassing activities such as drawing, sketching, interactive matching games, orthographic mapping, memory palace strategies, wordless picture books, picture-centered learning methodologies, infographics, the Face Memory Game, Spot the Difference, word search puzzles, the Hidden Object Game, educational videos, the Shadow Matching technique, Find the Differences exercises, and color-coded methodologies. These identified techniques hold potential for application within ML2 classes for adult learners. Consequently, this study not only provides insights into optimizing language learning through specific visual culture strategies but also furnishes practical recommendations for enhancing language competencies and skills.
Keywords: visual culture, right-brain strategies, second language acquisition, Maltese as a second language, visual aids, language-based activities
Procedia PDF Downloads 61
365 Recession Rate of Gangotri and Its Tributary Glacier, Garhwal Himalaya, India through Kinematic GPS Survey and Satellite Data
Authors: Harish Bisht, Bahadur Singh Kotlia, Kireet Kumar
Abstract:
In order to reconstruct past retreat rates, the total area loss, volume change, and shift in snout position were measured through multi-temporal satellite data from 1989 to 2016 and a kinematic GPS survey from 2015 to 2016. The results obtained from the satellite data indicate that in the last 27 years, the Chaturangi glacier snout has retreated 1172.57 ± 38.3 m (an average of 45.07 ± 4.31 m/year), with a total area loss of 0.626 ± 0.001 sq. km and a volume loss of 0.139 km³. The field measurements through the differential global positioning system survey revealed an annual retreat rate of 22.84 ± 0.05 m/year. The large variation between the results derived from the two methods is probably due to the difference in their accuracy. Snout monitoring of the Gangotri glacier during the ablation seasons (May to September) of 2005 and 2015 reveals that the retreat rate has declined compared with that reported in earlier studies. The GPS dataset shows an average recession rate of 10.26 ± 0.05 m/year. In order to determine the possible causes of the decreased retreat rate, a relationship between debris thickness and melt rate was also established using ablation stakes. The present study concludes that the remote sensing method is suitable for large-area and long-term studies, while kinematic GPS is more appropriate for annual monitoring of the retreat rate of a glacier snout. The study also emphasizes mapping all the tributary glaciers in order to assess the overall changes in the main glacier system and its health.
Keywords: Chaturangi glacier, Gangotri glacier, glacier snout, kinematic global positioning system, retreat rate
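The average rate can be sanity-checked from the reported totals; the snippet below does the arithmetic and notes which averaging convention appears to reproduce the published figure (an inference on our part, not something stated in the abstract).

```python
# Back-of-the-envelope check of the satellite-derived retreat figures.
total_retreat_m = 1172.57               # snout retreat, 1989-2016
print(total_retreat_m / (2016 - 1989))  # ~43.4 m/year over 27 calendar years
print(total_retreat_m / 26)             # ~45.1 m/year, close to the reported
                                        # 45.07 +/- 4.31 m/year, suggesting
                                        # averaging over 26 yearly intervals
```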
Procedia PDF Downloads 145
364 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0
Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini
Abstract:
Nowadays, companies face many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which creates challenges around data management and governance. Companies are also challenged to integrate data from multiple systems and technologies. Despite these pains, companies still pursue digitalization because, by embracing advanced technologies, they can improve efficiency, quality, decision-making, and customer experience while also creating new business models and revenue streams. This paper focuses on the issue that data are stored in silos with different schemas and structures. Conventional approaches to this issue rely on data warehousing, data integration tools, data standardization, and business intelligence tools. However, these approaches primarily address the grammar and structure of the data and neglect the importance of semantic modeling and semantic standardization, which are essential for achieving data interoperability. Here, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models, an efficient standard for communication in Industry 4.0. The paper highlights how our approach can facilitate the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates Asset Administration Shell technology to model and map the company’s data and utilizes a knowledge graph for data storage and exploration.
Keywords: data interoperability in industry 4.0, digital integration, industrial dictionary, semantic modeling
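To make the idea of semantic lifting concrete, here is a minimal sketch that links an asset property to a dictionary concept in a small RDF knowledge graph; the namespaces, the AAS-style predicate names, and the ECLASS identifier are all illustrative placeholders, not the paper's actual model.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespaces; real AAS/ECLASS IRIs would come from the
# respective standards and dictionaries.
AAS = Namespace("https://example.org/aas/")
ECLASS = Namespace("https://example.org/eclass/")

g = Graph()
motor = AAS["asset/motor-001"]

# Model the asset and semantically lift one of its properties by
# linking it to a dictionary concept (a semanticId, in AAS terms).
g.add((motor, RDF.type, AAS.Asset))
prop = AAS["asset/motor-001/ratedPower"]
g.add((motor, AAS.hasProperty, prop))
g.add((prop, AAS.semanticId, ECLASS["0173-1#02-AAM737#002"]))  # illustrative IRDI
g.add((prop, AAS.value, Literal(15.0)))
g.add((prop, AAS.unit, Literal("kW")))

print(g.serialize(format="turtle"))
```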
Procedia PDF Downloads 94
363 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record
Authors: Raghavi C. Janaswamy
Abstract:
In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). The data, in the form of clear text and images, are stored or processed in a relational format in most systems. However, the intrinsic structural restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions were predicted as a node classification task using graph-based open-source EHR data from the Synthea database, stored in Tigergraph. The Synthea dataset was chosen because it closely represents real-world data and is voluminous. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to retrieve nodes and edges from the Tigergraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN), adopting self-supervised learning techniques with autoencoders to generate node embeddings and eventually perform node classification using those embeddings. The model predicts patient conditions ranging from common to rare. The outcome opens up opportunities for data querying toward better predictions and accuracy.
Keywords: electronic health record, graph neural network, heterogeneous data, prediction
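As a compact sketch of the node classification step, the code below trains a two-layer GCN on a toy graph with PyTorch Geometric; the graph, features, and labels are synthetic stand-ins for the Tigergraph-derived EHR data, and the study's self-supervised autoencoder stage is omitted.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy graph: 6 nodes (e.g., patients) with 4 features each; edges are
# illustrative links. Real data would come from pyTigerGraph exports.
x = torch.randn(6, 4)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5, 0, 2],
                           [1, 0, 3, 2, 5, 4, 2, 0]], dtype=torch.long)
y = torch.tensor([0, 1, 0, 1, 0, 1])  # hypothetical condition labels
data = Data(x=x, edge_index=edge_index, y=y)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(4, 16)
        self.conv2 = GCNConv(16, 2)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = GCN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(model(data), data.y)
    loss.backward()
    opt.step()
print("predicted labels:", model(data).argmax(dim=1).tolist())
```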
Procedia PDF Downloads 86
362 MRCP as a Pre-Operative Tool for Predicting Variant Biliary Anatomy in Living Related Liver Donors
Authors: Awais Ahmed, Atif Rana, Haseeb Zia, Maham Jahangir, Rashed Nazir, Faisal Dar
Abstract:
Purpose: Biliary complications represent the most common cause of morbidity in living related liver donor transplantation, and detailed preoperative evaluation of biliary anatomic variants is crucial for safe patient selection and improved surgical outcomes. The purpose of this study is to determine the accuracy of preoperative MRCP in predicting biliary variations when compared with intraoperative cholangiography (IOC) in living related liver donors. Materials and Methods: From 44 potential donors, 40 consecutive living related liver donors (13 females and 28 males) underwent donor hepatectomy at our centre from April 2012 to August 2013. The MRCP and IOC of all patients were retrospectively reviewed separately by two radiologists and a transplant surgeon. MRCP was performed on 1.5 Tesla MR magnets using a breath-hold, heavily T2-weighted radial slab technique. One patient was excluded due to suboptimal MRCP. The accuracy of MRCP for variant biliary anatomy was calculated. Results: MRCP accurately predicted the biliary anatomy in 38 of 39 cases (97%). Standard biliary anatomy was predicted by MRCP in 25 (64%) donors (100% sensitivity). Variant biliary anatomy was noted in 14 (36%) IOCs, of which MRCP predicted the precise anatomy of 13 variants (93% sensitivity). The two most common variations were drainage of the RPSD into the LHD (50%) and the triple confluence of the RASD, RPSD, and LHD (21%). Conclusion: MRCP is a sensitive imaging tool for precise preoperative mapping of biliary variations, which is critical to surgical decision-making in living related liver transplantation.
Keywords: intraoperative cholangiogram, liver transplantation, living related donors, magnetic resonance cholangiopancreatography (MRCP)
Procedia PDF Downloads 397
361 Electrochemical APEX for Genotyping MYH7 Gene: A Low Cost Strategy for Minisequencing of Disease Causing Mutations
Authors: Ahmed M. Debela, Mayreli Ortiz , Ciara K. O´Sullivan
Abstract:
The completion of the Human Genome Project (HGP) has paved the way for mapping the diversity in the overall genome sequence, which helps in understanding the genetic causes of inherited diseases and susceptibility to drugs or environmental toxins. Arrayed primer extension (APEX) is a microarray-based minisequencing strategy for screening disease-causing mutations. It is derived from Sanger DNA sequencing and uses fluorescently labelled dideoxynucleotides (ddNTPs) to terminate a DNA strand growing from a primer whose 3´ end is designed immediately upstream of a site where a single nucleotide polymorphism (SNP) occurs. The use of DNA polymerase gives APEX very high accuracy and specificity, which in turn makes it a method of choice for multiplex SNP detection. Coupling the high specificity of this method with the high sensitivity, low cost, and compatibility with miniaturization of electrochemical techniques would offer an excellent platform for the detection of mutations as well as the sequencing of DNA templates. We are developing an electrochemical APEX for the analysis of SNPs found in the MYH7 gene in a group of cardiomyopathy patients. ddNTPs were labelled with four different redox-active compounds with four distinct potentials. Thiolated oligonucleotide probes were immobilised on gold and glassy carbon substrates, followed by hybridisation with complementary target DNA just adjacent to the base to be extended by the polymerase. Electrochemical interrogation was performed after the incorporation of the redox-labelled dideoxynucleotide. The work involved the synthesis and characterisation of the redox-labelled ddNTPs, the optimisation and characterisation of surface functionalisation strategies, and the nucleotide incorporation assays.
Keywords: arrayed primer extension, labelled ddNTPs, electrochemical, mutations
Procedia PDF Downloads 246
360 Impact of Map Generalization in Spatial Analysis
Authors: Lin Li, P. G. R. N. I. Pussella
Abstract:
When representing spatial data and their attributes on different types of maps, scale plays a key role in the process of map generalization. The process consists of two main operators, selection and omission. Once data are selected, they undergo several geometric transformation processes such as elimination, simplification, smoothing, exaggeration, displacement, aggregation, and size reduction. As a result of these operations at different levels of data, the geometry of spatial features, such as length, sinuosity, orientation, perimeter, and area, is altered. This is worst in the preparation of small-scale maps, since the cartographer does not have enough space to represent all the features on the map. Yet when GIS users want to analyze a set of spatial data, they often retrieve a data set and perform the analysis without considering very important characteristics such as the scale, the purpose of the map, and the degree of generalization. Further, GIS users use and compare different maps with different degrees of generalization. Sometimes, GIS users go beyond the scale of the source map using the zoom-in facility and violate the basic cartographic rule that it is not appropriate to create a larger-scale map from a smaller-scale map. The main objective of this study is to discuss the effect of map generalization on GIS analysis. Three digital maps at different scales, 1:10000, 1:50000, and 1:250000, prepared by the Survey Department of Sri Lanka, the national mapping agency of Sri Lanka, were used. Common features appearing on all three maps were used, and an overlay analysis was carried out by repeating the data in different combinations. Road, river, and land use data sets were used for the study. A simple model, finding the best place for a wildlife park, was used to identify the effects. The results show remarkable effects of the different degrees of generalization: different locations with different geometries were obtained as outputs from the analysis. The study suggests that there should be reasonable methods to overcome this effect. As a solution, it would be very reasonable to bring all the data sets to a common scale before doing the analysis.
Keywords: generalization, GIS, scales, spatial analysis
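The geometric effect of generalization can be illustrated with a small line-simplification sketch: as the tolerance (standing in for progressively smaller map scales) grows, the measured length and vertex count of the same feature change. The feature coordinates and tolerances below are hypothetical.

```python
from shapely.geometry import LineString

# Hypothetical river centreline; the simplification tolerance stands in
# for the generalization applied when moving to smaller map scales.
river = LineString([(0, 0), (1, 2), (2, 1), (3, 3), (4, 2), (5, 4), (6, 3)])

for tolerance in (0.0, 0.5, 1.5):  # roughly: 1:10k, 1:50k, 1:250k
    g = river.simplify(tolerance, preserve_topology=True)
    print(f"tolerance {tolerance}: length = {g.length:.2f}, "
          f"vertices = {len(g.coords)}")
```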
Procedia PDF Downloads 328
359 A Neuroscience-Based Learning Technique: Framework and Application to STEM
Authors: Dante J. Dorantes-González, Aldrin Balsa-Yepes
Abstract:
Existing learning techniques such as problem-based learning, project-based learning, and case-study learning focus mainly on technical details but give no specific guidelines on the learner’s experience and emotional learning aspects, such as arousal salience and valence, although emotional states are important factors affecting engagement and retention. Some approaches involving emotion in educational settings, such as social and emotional learning, lack neuroscientific rigor and the use of specific neurobiological mechanisms; neurobiological approaches, on the other hand, lack educational applicability, and educational approaches mainly focus on cognitive aspects and disregard conditioning in learning. The authors first explain the reasons why it is hard to learn thoughtfully, then use the method of neurobiological mapping to track the main limbic system functions, such as the reward circuit, and their relations with perception, memories, motivations, sympathetic and parasympathetic reactions, and sensations, as well as the brain cortex. The authors conclude by explaining the major finding: the mechanisms of nonconscious learning and the triggers that guarantee long-term memory potentiation. Afterwards, the educational framework for practical application and the instructors’ guidelines are established. An implementation example in engineering education is given, namely the study of tuned mass dampers for attenuating earthquake oscillations in skyscrapers. This work represents an original learning technique based on nonconscious learning mechanisms to enhance long-term memories, complementing existing cognitive learning methods.
Keywords: emotion, emotion-enhanced memory, learning technique, STEM
Procedia PDF Downloads 91
358 Imaging 255nm Tungsten Thin Film Adhesion with Picosecond Ultrasonics
Authors: A. Abbas, X. Tridon, J. Michelon
Abstract:
In the electronics and photovoltaic industries, components are made from wafers, which are stacks of thin-film layers from a few nanometers to several micrometers in thickness. Early evaluation of the bonding quality between the different layers of a wafer is one of the challenges these industries face in avoiding malfunction of their final products. Traditional pump-probe experiments, developed in the 1970s, give a partial solution to this problem, but with a non-negligible drawback. On one hand, these setups can generate and detect ultra-high ultrasound frequencies, which can be used to evaluate the adhesion quality of wafer layers; on the other hand, because of the rather long acquisition time needed to perform one measurement, they remain confined to point measurements of global sample quality. This can lead to misinterpretation of sample quality parameters, especially in the case of inhomogeneous samples. Asynchronous Optical Sampling (ASOPS) systems can perform sample characterization with picosecond acoustics up to 10⁶ times faster than traditional pump-probe setups. This allows picosecond ultrasonics to unlock acoustic imaging at the nanometric scale and to detect inhomogeneities in the mechanical properties of a sample. This is illustrated by presenting an image of the measured acoustic reflection coefficients obtained by mapping, with an ASOPS setup, a 255 nm thin-film tungsten layer deposited on a silicon substrate. The interpretation of the reflection coefficient in terms of bonding quality is also presented, and the origin of zones exhibiting good and bad bonding is discussed.
Keywords: adhesion, picosecond ultrasonics, pump-probe, thin film
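For orientation, the reflection coefficient being mapped can be related to acoustic impedances; the snippet below computes the ideal tungsten/silicon value from approximate textbook material constants, so measured map values deviating from it would hint at imperfect bonding. This relation is standard acoustics, not a detail taken from the abstract.

```python
# Acoustic impedances (Z = rho * v) and the ideal-interface reflection
# coefficient r = (Z2 - Z1) / (Z2 + Z1) for a tungsten film on silicon;
# material constants are approximate textbook values.
rho_W, v_W = 19300.0, 5200.0    # kg/m^3, m/s (longitudinal)
rho_Si, v_Si = 2330.0, 8430.0

Z_W, Z_Si = rho_W * v_W, rho_Si * v_Si
r = (Z_Si - Z_W) / (Z_Si + Z_W)  # wave travelling in W, hitting Si
print(f"ideal W/Si reflection coefficient: {r:.2f}")
```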
Procedia PDF Downloads 159
357 Land Use Planning Tool to Achieve Land Degradation Neutrality: Tunisia Case Study
Authors: Rafla Attia, Claudio Zucca, Bao Quang Le, Sana Dridi, Thouraya Sahli, Taoufik Hermassi
Abstract:
In Tunisia, landscape change and land degradation are critical issues for landscape conservation, management, and planning. Landscapes are undergoing crucial environmental problems, made evident by soil degradation and desertification. Improper human use of land resources (e.g., unsuitable land uses, unsustainable crop intensification, and poor rangeland management) and climate change are the main factors driving the landscape transformation and desertification affecting a high proportion of Tunisian lands. Land use planning (LUP) to achieve Land Degradation Neutrality (LDN) must be supported by methodologies and technologies that help identify the best solutions and practices and design context-specific sustainable land management (SLM) strategies. Such strategies must include restoration or rehabilitation efforts in areas with high land degradation, as well as the prevention of degradation that could be caused by improper land use (LU) and land management (LM). The geoinformatics Land Use Planning for LDN (LUP4LDN) tool has been designed for this purpose. Its aim is to support national and sub-national planners in i) mapping geographic patterns of current land degradation; ii) anticipating further future land degradation in areas that are unsustainably managed; and iii) providing an interactive procedure for developing participatory LU-LM transition scenarios over selected regions of interest and timeframes, visualizing the expected levels of impact on ecosystem services via maps and graphs. The tool has been co-developed and piloted with national stakeholders in Tunisia. The pilot implementation assessed how the LUP4LDN tool fits with existing LUP processes and the benefits achieved by using the tool to support land use planning for LDN.
Keywords: land use system, land cover, sustainable land management, land use planning for land degradation neutrality
Procedia PDF Downloads 77
356 Use of Concept Maps as a Tool for Evaluating Students' Understanding of Science
Authors: Aregamalage Sujeewa Vijayanthi Polgampala, Fang Huang
Abstract:
This study explores the genesis and development of concept mapping as a useful tool for science education, its effectiveness as a technique for teaching, learning, and evaluation in secondary school science, and the role played by National College of Education science teachers. Concept maps, when carefully employed and executed, serve as an integral part of a teaching method, a measure of the effectiveness of teaching, and a tool for evaluation. Research has shown that science concept maps can have a positive influence on student learning and motivation. The success of concept maps in an instructional class depends on the type of theme selected, the development of learning outcomes, and the flexibility of instruction in providing a library unit equipped with multimedia where learners can interact. The study was restricted to 15 pre-service science teachers (6 male and 9 female) in their third-year internship in Gampaha district, Sri Lanka. Data were collected through a 15-item questionnaire given to the learners, along with in-depth interviews and observations of 18 science classes. The two hypotheses generated for the study were rejected, while the results revealed that significant differences exist between the factors influencing teachers' choice of concept maps, their usefulness, and the problems hindering the effectiveness of concept maps for the teaching and learning of secondary science in schools. It was found that concept maps can be used as an effective measure to evaluate students' understanding of concepts and misconceptions. Even the teacher trainees could not identify that the key concept belongs at the top while subordinate concepts fall below it. It is recommended that pre-service science teacher trainees be given thorough training in using concept maps as an evaluation instrument.
Keywords: concept maps, evaluation, learning science, misconceptions
Procedia PDF Downloads 274
355 Study of Structural Behavior and Proton Conductivity of Inorganic Gel Paste Electrolyte at Various Phosphorous to Silicon Ratio by Multiscale Modelling
Authors: P. Haldar, P. Ghosh, S. Ghoshdastidar, K. Kargupta
Abstract:
In polymer electrolyte membrane fuel cells (PEMFC), the membrane electrode assembly (MEA) consists of two platinum-coated carbon electrodes sandwiching a proton-conducting, phosphoric acid-doped polymeric membrane. Due to low mechanical stability, flooding, and fuel crossover, the application of phosphoric acid in a polymeric membrane is very critical. Phosphorus- and silica-based 3D inorganic gels have gained attention in the fields of supercapacitors, fuel cells, and metal hydride batteries due to their thermally stable, highly proton-conductive behavior. Moreover, because a large number of water molecules and much phosphoric acid can be trapped in the cavities of the Si-O-Si network, leaching out is prevented. In this study, we performed molecular dynamics (MD) simulations and first-principles calculations to understand the structural, electronic, electrochemical, and morphological behavior of this inorganic gel at various P-to-Si ratios. Dipole-dipole interactions, hydrogen bonding, and van der Waals forces were used to describe the main interactions between the molecules. A 'structure-property-performance' mapping was initiated to determine the optimum P-to-Si ratio for the best proton conductivity. We performed the MD simulations at various temperatures to understand the temperature dependence of proton conductivity. The observed results support a model that fits well with experimental data and other literature values. We also studied the mechanism behind proton conductivity, and finally we propose a structure for the gel paste with the optimum P-to-Si ratio.
Keywords: first principle calculation, molecular dynamics simulation, phosphorous and silica based 3D inorganic gel, polymer electrolyte membrane fuel cells, proton conductivity
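The abstract does not give the working equations; a standard route from MD trajectories to proton conductivity, stated here as the conventional approach rather than the authors' exact procedure, goes through the mean-squared displacement and the Nernst-Einstein relation:

```latex
D = \lim_{t \to \infty} \frac{\langle | \mathbf{r}(t) - \mathbf{r}(0) |^{2} \rangle}{6t},
\qquad
\sigma = \frac{n q^{2} D}{k_{B} T},
```

where $D$ is the proton diffusion coefficient, $n$ the carrier concentration, and $q$ the proton charge; the temperature dependence is then often summarized by an Arrhenius fit $\sigma(T) = \sigma_{0} \exp(-E_{a}/k_{B}T)$.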
Procedia PDF Downloads 129
354 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System
Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu
Abstract:
In long-haul, high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, with remarkable effect. However, the different impairment compensation algorithms have increased the transmission delay. With the widespread application of deep neural networks (DNN) in communication, multi-impairment compensation based on a DNN is a promising scheme. In this paper, we propose and apply a DNN to compensate for multiple impairments of a 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system. The models optimize the constellation mapping of signals at the transmitter and compensate for multiple impairments of the decoded OFDM signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for a 16-QAM coherent optical OFDM signal and demonstrate and analyze the transmission performance in different scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN. This shows that a DNN with a suitable loss function and network structure can optimize the transmitted signal, learn the channel characteristics, and compensate effectively for multiple impairments in fiber transmission.
Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission
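As a small worked example of the PAPR metric the models aim to reduce, the snippet below generates one 16-QAM OFDM symbol and computes its PAPR; the subcarrier count and scaling are illustrative choices, not the paper's system parameters.

```python
import numpy as np

# PAPR of one 16-QAM OFDM symbol (illustrative: 256 subcarriers).
rng = np.random.default_rng(1)
n_sc = 256
qam16 = (rng.choice([-3, -1, 1, 3], n_sc)
         + 1j * rng.choice([-3, -1, 1, 3], n_sc)) / np.sqrt(10)

ofdm_time = np.fft.ifft(qam16) * np.sqrt(n_sc)  # unitary scaling
power = np.abs(ofdm_time) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR = {papr_db:.2f} dB")
```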
Procedia PDF Downloads 143
353 Blueprinting of a Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems
Authors: Bassam Istanbouli
Abstract:
With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good-quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise, changes will be extremely expensive, slow, and accompanied by many combinatorial effects. Those combinatorial effects impact the whole organizational structure from management, financial, documentation, and logistics perspectives, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept/theory to segments of the supply chain, we believe the effects will be minimal, especially at the time of launching an organization-wide global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement existing ERP software for its business needs, and if its business processes are normalized and modular, then this will most probably yield a normalized and modular software system that can be easily modified when the business evolves. Another important goal of this paper is to increase awareness of the design of business processes in a software implementation project. If the blueprints created are normalized, then the software developers and configurators will use those modular blueprints to map them into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring, and/or implementing a software system for an organization using two methods: the Software Development Lifecycle (SDLC) method and the Accelerated SAP (ASAP) implementation method. Both methods start with the customer requirements, followed by blueprinting of the business processes, and finally mapping those processes into a software system. Since those requirements and processes are the starting point of the implementation process, normalizing those processes will result in normalized software.
Keywords: blueprint, ERP, modular, normalized
Procedia PDF Downloads 139
352 Tectonics of Out-of-Sequence Thrusting in Higher Himalaya- Example from Jhakri-Chaura-Sarahan Region, Himachal Pradesh
Authors: Rajkumar Ghosh
Abstract:
Out-of-sequence thrusts (OOSTs) are a common phenomenon in collisional tectonic settings like the Himalayas. These OOSTs are activated in different locations at different time frames and are linked with the multiple Himalayan thrusts. Apart from minimal documentation in geological mapping, there is a lack of field data to establish OOSTs in the field. This work considers three thrusts from the NW Himalaya in Himachal Pradesh together with published data from other sources, allowing a re-examination and correlation of the OOSTs. For the Sutlej section, the approach has been fieldwork and microstructural studies. Information related to the cross-cutting signatures of S- and C-fabrics and their relative timing could help predict the nature of an OOST. The activation timing, along with the basis for identifying OOSTs in the Higher Himalaya, has been documented in various literature. A compilation of the grain boundary migration (GBM)-associated temperature range (400-750 °C) was documented from microstructural studies along the Jhakri-Chaura section; no significant temperature variation across the thrusts was observed. Strain variation paths using S Ʌ C angle measurements were determined along the Jeori-Wangtu transect to distinguish overprinting structures related to the OOSTs. Near the Chaura Thrust (CT), angular variation of S Ʌ C was documented, varying within a range of 15° to 28°. Along NH22 (National Highway 22), all tectonic units of the orogen are exposed in the NW Himalaya, India. However, there are inherent difficulties in finding field evidence of OOSTs, largely due to the lack of adequate surface morphology, including topography and drainage patterns.
Keywords: out-of-sequence thrust (OOST), main central thrust (MCT), south Tibetan detachment system (STDS), Jhakri thrust (JT), Sarahan thrust (ST), Chaura thrust (CT), higher Himalaya (HH), greater Himalayan crystalline (GHC)
Procedia PDF Downloads 84
351 Cross-Sectional Study of Critical Parameters on RSET and Decision-Making of At-Risk Groups in Fire Evacuation
Authors: Naser Kazemi Eilaki, Ilona Heldal, Carolyn Ahmer, Bjarne Christian Hagen
Abstract:
Elderly people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to a safe place. A disability can negatively influence a person's escape time, and this becomes even more important when members of this target group live alone. While earlier studies have frequently addressed quantitative measurements of at-risk groups' physical characteristics (e.g., their speed of travel), this paper considers the influence of at-risk groups' characteristics on their decision-making and on determining better escape routes. Most evacuation models are based on mapping people's movement and behaviour onto summed activity times along a timeline. Typically, timeline models estimate the required safe egress time (RSET) as the sum of four timespans: detection, alarm, pre-movement, and movement time, and compare this with the available safe egress time (ASET) to determine what influences the margin of safety. This paper presents a cross-sectional study to identify the most critical items affecting RSET and people's decision-making, with the possibility of including safety knowledge regarding people with physical or cognitive functional impairments. The results will contribute to increased knowledge on accounting for at-risk groups and disabilities when designing and developing safe escape routes. The expected results can be an asset in predicting the probabilistic behavioural patterns of at-risk groups and the components needed for a framework for understanding how stakeholders can consider various disabilities when determining the margin of safety of an escape route.
Keywords: fire safety, evacuation, decision-making, at-risk groups
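The timeline model invoked here has a compact standard form (this is the general fire-engineering formulation, not something specific to this study):

```latex
t_{\mathrm{RSET}} = t_{\mathrm{det}} + t_{\mathrm{alarm}} + t_{\mathrm{pre}} + t_{\mathrm{move}},
\qquad
\text{margin of safety} = t_{\mathrm{ASET}} - t_{\mathrm{RSET}} > 0,
```

where disabilities typically enter the model through longer pre-movement and movement times.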
Procedia PDF Downloads 105
350 Exploring SL Writing and SL Sensitivity during Writing Tasks: Poor and Advanced Writing in a Context of Second Language other than English
Authors: Sandra Figueiredo, Margarida Alves Martins, Carlos Silva, Cristina Simões
Abstract:
This study is part of a larger empirical research project that examines second language (SL) learners' profiles and valid procedures for performing complete diagnostic assessment in schools. 102 learners of Portuguese as a SL, aged 7 to 17 years and speakers of distinct home languages, were assessed on several linguistic tasks. In this article, we focus on writing performance in the specific task of narrative essay composition. The written outputs were scored on six components adapted from an English SL assessment context (Alberta Education): linguistic vocabulary, grammar, syntax, strategy, socio-linguistics, and discourse. The writing processes and strategies in Portuguese used by different immigrant students were analysed to determine the features and diversity of deficits in authentic texts produced by SL writers. Differentiated performance was examined across the following variables: grades, previous schooling, home language, instruction in the first language, and exposure to Portuguese as a second language. Speakers of Indo-Aryan languages showed low writing scores compared to their peers, and the type of language and its cognitive mapping (as with Mandarin and Arabic) was the predictor, not linguistic distance. Home language instruction should also be considered prominently in further research to understand the specifics of the cognitive academic profile in a Romance language learning context. Additionally, this study examined teachers' representations, which are addressed here to understand the educational implications of second language teaching for the psychological distress of different minorities in the schools of specific host countries.
Keywords: home language, immigrant students, Portuguese language, second language, writing assessment
Procedia PDF Downloads 462
349 Close-Range Remote Sensing Techniques for Analyzing Rock Discontinuity Properties
Authors: Sina Fatolahzadeh, Sergio A. Sepúlveda
Abstract:
This paper presents advanced developments in close-range, terrestrial remote sensing techniques to enhance the characterization of rock masses. The study integrates two state-of-the-art laser-scanning technologies, the HandySCAN and GeoSLAM laser scanners, to extract high-resolution geospatial data for rock mass analysis. These instruments offer high accuracy and precision, low acquisition time, and high efficiency in capturing intricate geological features in small- to medium-sized outcrops and slope cuts. Using the HandySCAN and GeoSLAM laser scanners facilitates real-time, three-dimensional mapping of rock surfaces, enabling comprehensive assessment of rock mass characteristics. The collected data provide valuable insights into structural complexities, surface roughness, and discontinuity patterns, which are essential for geological and geotechnical analyses. The synergy of these advanced remote sensing technologies contributes to a more precise and straightforward understanding of rock mass behavior. In this case, the main parameters of RQD, joint spacing, persistence, aperture, roughness, infill, weathering, water condition, and joint orientation in a slope cut along the Sea-to-Sky Highway, BC, were analyzed remotely to calculate and evaluate the Rock Mass Rating (RMR) and Geological Strength Index (GSI) classification systems. Automatic and manual analyses of the acquired data were then compared with field measurements. The results show the usefulness of the proposed remote sensing methods and their good conformity with the actual field data.
Keywords: remote sensing, rock mechanics, rock engineering, slope stability, discontinuity properties
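Since RMR is a simple sum of parameter ratings, the calculation is easy to sketch; the ratings below are hypothetical values for one scan-derived discontinuity set, and the RMR-to-GSI shortcut is the common empirical relation GSI ≈ RMR89 - 5 (for RMR89 > 23), not a result from this paper.

```python
# RMR (Bieniawski, 1989) is the sum of six parameter ratings; the
# ratings below are hypothetical values for a single rock-mass domain.
ratings = {
    "UCS (intact rock strength)": 7,
    "RQD": 17,
    "discontinuity spacing": 10,
    "discontinuity condition": 20,  # roughness, aperture, infill, weathering
    "groundwater": 10,
    "orientation adjustment": -5,
}
rmr = sum(ratings.values())
print(f"RMR = {rmr}")          # 59 -> class III, 'fair rock' in Bieniawski's scheme
print(f"GSI = {rmr - 5}")      # common empirical link: GSI = RMR89 - 5
```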
Procedia PDF Downloads 66
348 Police Violence, Activism, and the Changing Rural United States: A Digital History and Mapping Narrative
Authors: Joel Zapata
Abstract:
Chicana/o Activism in the Southern Plains Through Time and Space, a digital history project available at PlainsMovement.com, helps reveal an understudied portion of the Chicana/o Civil Rights Movement: the way it unfolded on the Southern Plains. The project centers on an approachable interactive map and timeline along with a curated collection of materials. It therefore provides a digital museum experience that has not emerged within the region's museums; that is, this digital history project takes scholarly research to the wider public, making it also a publicly facing history project. In this way, the project adds to both scholarly and socially significant conversations, showing that the region was home to a burgeoning wing of the Chicana/o Movement and that instances of police brutality largely spurred this wing of the social justice movement. Moreover, the curated collection of materials demonstrates that police brutality united the plains' Mexican population across political ideologies, a largely overlooked aspect of the study of Mexican American civil rights movements. Such a finding can be of use today, since contemporary Latina/o social justice organizations generally ignore policing issues even amid a rise in national awareness of police abuse. By making history accessible to Mexican-origin and Latina/o communities, these same communities may in turn use the knowledge gained from historical research toward the betterment of their social positions, the foundational goal of Chicana/o history and the related field of Chicana/o Studies. Ultimately, this digital history project is intended to draw visitors to further explore the Chicana/o Civil Rights Movement within and beyond the plains.
Keywords: Chicana/o Movement, digital history, police brutality, newspapers, protests, student activism
Procedia PDF Downloads 122