Search results for: missing data imputation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24790

24670 Language Errors Used in “The Space between Us” Movie and Their Effects on Translation Quality: Translation Study toward Discourse Analysis Approach

Authors: Mochamad Nuruz Zaman, Mangatur Rudolf Nababan, M. A. Djatmika

Abstract:

Both society and education emphasize good communication as the basis of interpersonal skills. Everyone has the capacity to understand something new, whether well or poorly, and poor understanding produces language errors when people interact for the first time across a distance. “The Space between Us” tells a love-adventure story between a boy from Mars and a girl from Earth, whose conversations frequently break down because of their different climates and environments. Moviegoers must rely on the subtitles in order to enjoy the movie, yet the Indonesian subtitles and the English dialogue still overlap imperfectly in translation. Translation here involves a source language (SL, the English dialogue) and a target language (TL, the Indonesian subtitles). This research gap is formulated in the research questions of how language errors occur in the movie and how they affect translation quality, analyzed through a translation study with a discourse analysis approach. The research goal is to describe the language errors and their translation quality in order to support a better viewing experience in movie media. The study uses an embedded, qualitative design. The research setting comprises setting, participants, and events as the bounded focus. The data sources are “The Space between Us” movie and informants (translation quality raters). Sampling is criterion-based (purposive) sampling. Data were collected through content analysis and questionnaires, validated by data source and method triangulation, and analyzed through domain, taxonomy, componential, and cultural theme analysis. The language errors found in the movie are referential, register, society, textual, receptive, expressive, individual, group, analogical, transfer, local, and global errors. Their effects on translation quality are discussed through the translation techniques identified in the findings: amplification, borrowing, description, discursive creation, established equivalent, generalization, literal translation, modulation, particularization, reduction, substitution, and transposition.

Keywords: discourse analysis, language errors, The Space between Us movie, translation techniques, translation quality instruments

Procedia PDF Downloads 208
24669 The Use of Software and Internet Search Engines to Develop the Encoding and Decoding Skills of a Dyslexic Learner: A Case Study

Authors: Rabih Joseph Nabhan

Abstract:

This case study explores the impact of two major computer software programs, Learn to Speak English and Learn English Spelling and Pronunciation, and some Internet search engines such as Google, on mending the decoding and spelling deficiency of Simon X, a dyslexic student. The improvement in decoding and spelling may result in better reading comprehension and composition writing. Some computer programs and Internet materials can help regain the missing awareness and consequently restore his self-confidence and self-esteem. In addition, this study provides a systematic plan comprising a set of activities (four computer programs and Internet materials) which address the problem from the lowest to the highest levels of phoneme and phonological awareness. Four methods of data collection (accounts, observations, published tests, and interviews) create the triangulation needed to collect data validly and reliably before the plan, during the plan, and after the plan. The data collected are analyzed quantitatively and qualitatively; sometimes the analysis is either quantitative or qualitative, and at other times a combination of both. Tables and figures are utilized to provide a clear and uncomplicated illustration of some data. The improvement in the decoding, spelling, reading comprehension, and composition writing skills is demonstrated through authentic materials produced by the student under study. Such materials are a comparison between two sample passages written by the learner before and after the plan, a genuine computer chat conversation, and the scores of the academic year that followed the execution of the plan. Based on these results, the researcher recommends further studies on other Lebanese dyslexic learners using the computer to mend their language problems, in order to design a more reliable software program that can address this disability more efficiently and successfully.

Keywords: analysis, awareness, dyslexic, software

Procedia PDF Downloads 210
24668 Exploring the Link between Hoarding Disorder and Trauma: A Scoping Review

Authors: Murray Anderson, Galina Freed, Karli Jahn

Abstract:

Trauma is increasingly recognized as an important construct that has health implications for those who struggle with various mental health issues. Many individuals who meet the criteria for a diagnosis of hoarding disorder (HD) have experienced some form of trauma. Further, some of the therapeutic interventions for those with HD can further perpetuate or magnify the experience of trauma. Therefore, the aim of this scoping review is to identify and document the nature and extent of research evidence related to trauma as it connects with HD. This review was guided by the questions, ‘How can our understanding of the trauma cycle help us to better appreciate the experiences of individuals who hoard, and how will a trauma-informed lens inform the interventions for hoarding disorder?’ A comprehensive literature search was performed to identify original studies that contained the words “hoarding” and “trauma.” PsycINFO, EBSCOhost, CINAHL, and PubMed were searched between January 2005 and April 2021. Articles were screened by three reviewers. Data extracted included publication date, demographics, study design, type of analysis, and noted connections between hoarding and trauma. Of the 329 articles, all duplicates, articles on hoarding of animals, articles not in English, and those without full-text availability were removed. Five categories were found in the remaining 45 articles, including (a) traumatic and stressful life events; (b) the link between posttraumatic stress disorder, trauma, and hoarding; (c) the relationships between different comorbidities, trauma, and hoarding; (d) the lack of early emotional expression and other forms of parental deprivation; and (e) the role of attachment. Lastly, the literature explains how the links between hoarding and trauma are difficult to study due to the highly stigmatized identities associated with this population. The review provided strong support for the connections between the experience of trauma and HD. What is missing from the literature is the use of a trauma-informed lens to better account for the ways in which hoarding disorder is understood. Other missing pieces in the literature are the potential uses of a trauma-informed lens to enhance the therapeutic process, to understand and reduce treatment attrition, and to improve treatment outcomes. The application of a trauma-informed lens could improve our understanding of effective interactions among clients, families, and communities and improve education around hoarding-related matters. Exploring the connections between trauma and HD can improve therapeutic delivery and destigmatize the experience of dealing with clutter and hoarding concerns. This awareness can also provide health care professionals with both the language and skills to liberate them from a reductionist view of HD.

Keywords: hoarding, attachment, parental deprivation, trauma

Procedia PDF Downloads 114
24667 Modified Clusterwise Regression for Pavement Management

Authors: Mukesh Khadka, Alexander Paz, Hanns de la Fuente-Mella

Abstract:

Typically, pavement performance models are developed in two steps: (i) pavement segments with similar characteristics are grouped together to form a cluster, and (ii) the corresponding performance models are developed using statistical techniques. A challenge is to select the characteristics that define clusters and the segments associated with them. If inappropriate characteristics are used, clusters may include homogeneous segments with different performance behavior or heterogeneous segments with similar performance behavior. Prediction accuracy of performance models can be improved by grouping the pavement segments into more uniform clusters by including both characteristics and a performance measure. This grouping is not always possible due to limited information. It is impractical to include all the potential significant factors because some of them are potentially unobserved or difficult to measure. Historical performance of pavement segments could be used as a proxy to incorporate the effect of the missing potential significant factors in the clustering process. The current state-of-the-art proposes Clusterwise Linear Regression (CLR) to determine the pavement clusters and the associated performance models simultaneously. CLR incorporates the effect of significant factors as well as a performance measure. In this study, a mathematical program was formulated for CLR models including multiple explanatory variables. Pavement data collected recently over the entire state of Nevada were used. International Roughness Index (IRI) was used as a pavement performance measure because it serves as a unified standard that is widely accepted for evaluating pavement performance, especially in terms of riding quality. Results illustrate the advantage of using CLR. Previous studies have used CLR along with experimental data. This study uses actual field data collected across a variety of environmental, traffic, design, and construction and maintenance conditions.
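
The alternating idea behind clusterwise linear regression — assign each segment to the cluster whose regression predicts it best, then refit each cluster's model — can be sketched as follows. This is an illustrative toy in Python with synthetic data and hypothetical variable names, not the mathematical program formulated in the paper.

```python
# Minimal clusterwise linear regression sketch (illustrative only, not the
# paper's formulation). Segments are alternately assigned to the cluster whose
# regression fits them best, then each cluster's model is refit.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, k = 300, 2                                  # pavement segments, number of clusters
X = rng.uniform(0, 10, size=(n, 3))            # hypothetical explanatory variables (age, traffic, thickness)
true_group = rng.integers(0, k, n)
coefs = np.array([[0.3, 0.1, -0.2], [0.05, 0.4, 0.1]])
y = np.einsum("ij,ij->i", X, coefs[true_group]) + rng.normal(0, 0.1, n)   # IRI-like response

labels = rng.integers(0, k, n)                 # random initial assignment
for _ in range(20):                            # alternate: fit per-cluster models, reassign segments
    models = []
    for c in range(k):
        idx = labels == c
        if idx.sum() < 2:                      # guard against an empty cluster
            idx = rng.choice(n, size=5, replace=False)
        models.append(LinearRegression().fit(X[idx], y[idx]))
    errors = np.column_stack([(y - m.predict(X)) ** 2 for m in models])
    new_labels = errors.argmin(axis=1)
    if np.array_equal(new_labels, labels):
        break
    labels = new_labels

for c in range(k):
    print(f"cluster {c}: {np.sum(labels == c)} segments, coefficients {models[c].coef_.round(2)}")
```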

Keywords: clusterwise regression, pavement management system, performance model, optimization

Procedia PDF Downloads 238
24666 Development of a Multi-User Country Specific Food Composition Table for Malawi

Authors: Averalda van Graan, Joelaine Chetty, Malory Links, Agness Mwangwela, Sitilitha Masangwi, Dalitso Chimwala, Shiban Ghosh, Elizabeth Marino-Costello

Abstract:

Food composition data is becoming increasingly important as dealing with food insecurity and malnutrition, in its persistent form of under-nutrition, is now coupled with increasing over-nutrition and its related ailments in the developing world, of which Malawi is not spared. In the absence of a food composition database (FCDB) inherent to our dietary patterns, efforts were made to develop a country-specific FCDB for nutrition practice, research, and programming. The main objective was to develop a multi-user, country-specific food composition database and table from existing published and unpublished scientific literature. A multi-phased approach guided by the project framework was employed. Phase 1 comprised a scoping mission to assess the nutrition landscape for compilation activities. Phase 2 involved training of a compiler and data collection from various sources, primarily institutional libraries, online databases, and food industry nutrient data. Phase 3 subsumed evaluation and compilation of data using FAO and INFOODS standards and guidelines. Phase 4 concluded the process with quality assurance. A total of 316 Malawian food items, categorized into eight food groups for 42 components, were captured. The majority were from the baby food group (27%), followed by the staple (22%) and animal (22%) food groups. Fats and oils comprised the fewest food items (2%), followed by fruits (6%). Proximate values are well represented; however, the proportion of missing data is large for some components, including Se (68%), I (75%), vitamin A (42%), and the lipid profile: saturated fat (53%), monounsaturated fat (59%), polyunsaturated fat (59%), and cholesterol (56%). A multi-phased approach following the project framework led to the development of the first Malawian FCDB and table. The table reflects inherent Malawian dietary patterns and nutritional concerns. The FCDB can be used by various professionals in nutrition and health. Rising over-nutrition, non-communicable diseases, and changing diets create a demand for nutrient profiles of processed foods and complete lipid profiles.

Keywords: analytical data, dietary pattern, food composition data, multi-phased approach

Procedia PDF Downloads 77
24665 Divergence of Innovation Capabilities within the EU

Authors: Vishal Jaunky, Jonas Grafström

Abstract:

The development of the European Union’s (EU) single economic market and rapid technological change have resulted in major structural changes in the economies of EU member states. The general liberalization process that the countries have undergone together has convinced the governments of the member states of the need to upgrade their economic and training systems in order to be able to face economic globalization. Several signs of economic convergence have been found, but less is known about knowledge production. This paper addresses the convergence pattern of technological innovation in 13 European Union (EU) states over the time period 1990-2011 by means of parametric and non-parametric techniques. Parametric approaches revolve around the neoclassical convergence theories. This paper reveals divergence of both the β and σ types. Further, we find evidence of stochastic divergence, and a non-parametric convergence approach such as distribution dynamics shows a tendency towards divergence. This result is supported by the occurrence of γ-divergence. The policies of the EU to reduce the technological gap among its member states seem to be missing their target, something that can have negative long-run consequences for the market.
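
β-convergence is typically tested by regressing growth in the innovation indicator on its initial level, with a negative coefficient suggesting convergence and a positive one divergence. A minimal sketch of such a test is given below; the patent counts are invented for illustration and are not the study's data.

```python
# Illustrative beta-convergence test: regress average growth of patent counts
# on the log initial level; beta > 0 suggests divergence, beta < 0 convergence.
# The patent figures below are synthetic, not the study's data.
import numpy as np
import statsmodels.api as sm

patents_1990 = np.array([120.0, 45.0, 300.0, 15.0, 80.0, 500.0])
patents_2011 = np.array([400.0, 90.0, 1500.0, 20.0, 200.0, 3000.0])
years = 2011 - 1990

growth = (np.log(patents_2011) - np.log(patents_1990)) / years   # average annual log growth
X = sm.add_constant(np.log(patents_1990))                        # regressor: log initial level
model = sm.OLS(growth, X).fit()

beta = model.params[1]
print(model.summary())
print("divergence" if beta > 0 else "convergence", f"(beta = {beta:.3f})")
```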

Keywords: convergence, patents, panel data, European union

Procedia PDF Downloads 275
24664 Study of Evapotranspiration for Pune District

Authors: Ranjeet Sable, Mahotsavi Patil, Aadesh Nimbalkar, Prajakta Palaskar, Ritu Sagar

Abstract:

Knowing the exact amount of water used by various crops in different climatic conditions is a necessary step for the design, planning, and management of irrigation schemes and water resources, and for the scheduling of irrigation systems. Evaporation and transpiration are jointly referred to as evapotranspiration: water lost from plants during photosynthesis is called transpiration, while the conversion of water into its gaseous state is called evaporation. To calculate evapotranspiration correctly, we have to choose a method that is suitable, requires minimal climatic data, and is applicable over a wide range of climatic conditions. In hydrology, multiple correlation and regression are generally used to develop relationships between three or more hydrological variables when the dependence between them is known. This research work includes the study of various methods for calculating evapotranspiration and selects a reasonable and suitable one for the Pune region (Maharashtra state), since field methods are very costly, time-consuming, and do not give appropriate results if suitable conditions are not maintained. Observations recorded at Pune meteorological stations are used to calculate evapotranspiration with the help of the Radiation Method (RAD), Modified Penman Method (MPM), Thornthwaite Method (THW), Blaney-Criddle (BCL), Christiansen Equation (CNM), and Hargreaves Method (HGM), of which Hargreaves and Thornthwaite are temperature-based methods. The performance of all these methods is compared with the Modified Penman method, and the method showing the least variation from the standard Modified Penman method (MPM) is selected as the suitable one. Evapotranspiration values are estimated on a monthly basis. The comparative analysis in this research supports the selection of methods according to the raw data available in cases of missing data.
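
As an illustration of how a temperature-based estimate can be screened against the Modified Penman benchmark, the sketch below applies the Hargreaves equation, ET0 = 0.0023 · Ra · (Tmean + 17.8) · sqrt(Tmax − Tmin), and reports the RMSE against a benchmark series. All temperature, radiation, and benchmark values are assumed for illustration, not Pune station records.

```python
# Hargreaves reference evapotranspiration (mm/day) compared against a benchmark
# series standing in for Modified Penman. All inputs are illustrative values,
# not the Pune station records used in the study.
import numpy as np

def hargreaves_et0(t_mean, t_max, t_min, ra):
    """ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin), Ra in mm/day."""
    return 0.0023 * ra * (t_mean + 17.8) * np.sqrt(t_max - t_min)

t_max = np.array([30.1, 32.5, 35.8, 38.2, 37.0, 31.5])    # deg C, assumed monthly values
t_min = np.array([12.3, 14.0, 17.5, 21.0, 22.5, 22.0])
t_mean = (t_max + t_min) / 2.0
ra = np.array([11.9, 13.6, 15.1, 15.9, 16.0, 15.8])        # extraterrestrial radiation, mm/day, assumed

et0_hargreaves = hargreaves_et0(t_mean, t_max, t_min, ra)
et0_benchmark = np.array([3.4, 4.2, 5.3, 6.1, 5.8, 4.6])   # hypothetical Modified Penman values

rmse = np.sqrt(np.mean((et0_hargreaves - et0_benchmark) ** 2))
print("Hargreaves ET0 (mm/day):", et0_hargreaves.round(2))
print(f"RMSE vs benchmark: {rmse:.2f} mm/day")
```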

Keywords: Blaney-Criddle, Christiansen equation evapotranspiration, Hargreaves method, precipitations, Penman method, water use efficiency

Procedia PDF Downloads 259
24663 A Crowdsourced Homeless Data Collection System and Its Econometric Analysis

Authors: Praniil Nagaraj

Abstract:

This paper proposes a method to collect homeless data using crowdsourcing and presents an approach to analyze the data, demonstrating its potential to strengthen existing and future policies aimed at promoting socio-economic equilibrium. The 2022 Annual Homeless Assessment Report (AHAR) to Congress highlighted alarming statistics, emphasizing the need for effective decision-making and budget allocation within local planning bodies known as Continuums of Care (CoC). This paper's contributions can be categorized into three main areas. Firstly, a unique method for collecting homeless data is introduced, utilizing a user-friendly smartphone app (currently available for Android). The app enables the general public to quickly record information about homeless individuals, including the number of people and details about their living conditions. The collected data, including date, time, and location, is anonymized and securely transmitted to the cloud. It is anticipated that an increasing number of users motivated to contribute to society will adopt the app, thus expanding the data collection efforts. Duplicate data is addressed through simple classification methods, and historical data is utilized to fill in missing information. The second contribution of this paper is the description of data analysis techniques applied to the collected data. By combining this new data with existing information, statistical regression analysis is employed to gain insights into various aspects, such as distinguishing between unsheltered and sheltered homeless populations, as well as examining their correlation with factors like unemployment rates, housing affordability, and labor demand. Initial data is collected in San Francisco, while pre-existing information is drawn from three cities: San Francisco, New York City, and Washington D.C., facilitating the conduction of simulations. The third contribution focuses on demonstrating the practical implications of the data processing results. The challenges faced by key stakeholders, including charitable organizations and local city governments, are taken into consideration. Two case studies are presented as examples. The first case study explores improving the efficiency of food and necessities distribution, as well as medical assistance, driven by charitable organizations. The second case study examines the correlation between micro-geographic budget expenditure by local city governments and homeless information to justify budget allocation and expenditures. The ultimate objective of this endeavor is to enable the continuous enhancement of the quality of life for the underprivileged. It is hoped that through increased crowdsourcing of data from the public, the Generosity Curve and the Need Curve will intersect, leading to a better world for all.
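
A minimal sketch of the kind of regression described above, relating an unsheltered count to unemployment and housing affordability, is shown below; all figures are synthetic placeholders rather than the crowdsourced or AHAR data.

```python
# Illustrative OLS regression relating an unsheltered homeless count to the
# unemployment rate and a housing-affordability index. All numbers are
# synthetic placeholders, not the crowdsourced or AHAR data.
import numpy as np
import statsmodels.api as sm

unemployment_rate = np.array([3.2, 4.1, 5.5, 6.0, 4.8, 7.1, 5.2, 3.9])          # percent
affordability_idx = np.array([0.42, 0.38, 0.30, 0.28, 0.35, 0.25, 0.33, 0.40])  # higher = more affordable
unsheltered_count = np.array([1200, 1500, 2300, 2600, 1900, 3100, 2100, 1350])

X = sm.add_constant(np.column_stack([unemployment_rate, affordability_idx]))
model = sm.OLS(unsheltered_count, X).fit()

print(model.params)      # intercept, effect of unemployment, effect of affordability
print(model.rsquared)
```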

Keywords: crowdsourcing, homelessness, socio-economic policies, statistical analysis

Procedia PDF Downloads 47
24662 Applications of Big Data in Education

Authors: Faisal Kalota

Abstract:

Big Data and analytics have gained huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA), which may allow academic institutions to better understand learners’ needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in education. Additionally, it discusses some of the concerns related to Big Data and current research trends. While Big Data can provide big benefits, it is important that institutions understand their own needs, infrastructure, resources, and limitations before jumping on the Big Data bandwagon.

Keywords: big data, learning analytics, analytics, big data in education, Hadoop

Procedia PDF Downloads 397
24661 The Identification of Environmentally Friendly People: A Case of South Sumatera Province, Indonesia

Authors: Marpaleni

Abstract:

The Intergovernmental Panel on Climate Change (IPCC) declared in 2007 that global warming and climate change are not just a series of events caused by nature, but rather are caused by human behaviour. Thus, to reduce the impact of human activities on climate change, information is required about how people respond to environmental issues and what constraints they face. However, information on these and other phenomena remains largely missing, or is not fully integrated within existing data systems. The proposed study is aimed at filling this gap in knowledge by focusing on the Environmentally Friendly Behaviour (EFB) of the people of Indonesia, taking the province of South Sumatera as a case study. EFB is defined as any activity in which people engage to improve the condition of natural resources and/or to diminish the impact of their behaviour on the environment. This activity is measured in terms of consumption in five areas at the household level, namely housing, energy, water usage, recycling, and transportation. By adopting Indonesia’s Environmentally Friendly Behaviour survey conducted by Statistics Indonesia in 2013, this study aims to precisely identify one’s orientation towards EFB based on socio-demographic characteristics such as age, income, occupation, location, education, gender, and family size. The results of this research will be useful for precisely identifying what support people require to strengthen their EFB, helping to identify specific constraints that different actors and groups face, and uncovering a more holistic understanding of EFB in relation to particular demographic and socio-economic contexts. As the empirical data are drawn from the national sample framework, which will continue to be collected, they can be used to forecast and monitor the future of EFB.

Keywords: environmentally friendly behavior, demographic, South Sumatera, Indonesia

Procedia PDF Downloads 274
24660 A Convolutional Neural Network Based Vehicle Theft Detection, Location, and Reporting System

Authors: Michael Moeti, Khuliso Sigama, Thapelo Samuel Matlala

Abstract:

One of the principal challenges that the world is confronted with is insecurity. The crime rate is increasing exponentially, and protecting our physical assets, especially in the motorist industry, is becoming impossible through our own strength alone. The need to develop technological solutions that detect and report theft without any human interference is therefore inevitable. This is critical, especially for vehicle owners, to ensure theft detection and speedy identification towards recovery efforts in cases where a vehicle is missing or attempted theft is taking place. The vehicle theft detection system uses a Convolutional Neural Network (CNN) to recognize the driver's face captured using an installed mobile phone device. The location identification function uses a Global Positioning System (GPS) to determine the real-time location of the vehicle. Upon identification of the location, Global System for Mobile Communications (GSM) technology is used to report or notify the vehicle owner about the whereabouts of the vehicle. The mobile app was implemented using Python, as it is well suited to machine learning and allows easy access to machine learning algorithms through its widely developed library ecosystem. The graphical user interface was developed using Java, as it is better suited for mobile development. Google's online database (Firebase) was used as the means of storage for the application. The system integration test was performed using a simple percentage analysis. Sixty (60) vehicle owners participated in this study as a sample, and questionnaires were used in order to establish the acceptability of the developed system. The results indicate the efficiency of the proposed system; consequently, the paper proposes that the system can effectively monitor the vehicle at any given place, even if it is driven outside its normal jurisdiction. Moreover, the system can be used as a database to detect, locate, and report missing vehicles to different security agencies.
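
A minimal sketch of the face-recognition component is given below, assuming driver face crops are already available as 64 x 64 grayscale arrays; the layer sizes and the random training data are placeholders, not the authors' trained network.

```python
# Minimal CNN classifier sketch for authorised-driver face recognition.
# Input shape, layer sizes and the random training data are placeholders;
# a deployed system would train on real face crops of registered drivers.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # 1 = authorised driver, 0 = unknown face
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder training data: random images instead of captured face crops.
x_train = np.random.rand(200, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(200, 1))
model.fit(x_train, y_train, epochs=3, batch_size=32, verbose=0)

# At runtime, a low score for the captured face could trigger the GPS/GSM alert.
score = model.predict(np.random.rand(1, 64, 64, 1).astype("float32"))[0, 0]
print("authorised" if score > 0.5 else "possible theft - send GPS location via GSM")
```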

Keywords: CNN, location identification, tracking, GPS, GSM

Procedia PDF Downloads 145
24659 A Crowdsourced Homeless Data Collection System and Its Econometric Analysis: Strengthening Inclusive Public Administration Policies

Authors: Praniil Nagaraj

Abstract:

This paper proposes a method to collect homeless data using crowdsourcing and presents an approach to analyze the data, demonstrating its potential to strengthen existing and future policies aimed at promoting socio-economic equilibrium. The 2022 Annual Homeless Assessment Report (AHAR) to Congress highlighted alarming statistics, emphasizing the need for effective decision-making and budget allocation within local planning bodies known as Continuums of Care (CoC). This paper's contributions can be categorized into three main areas. Firstly, a unique method for collecting homeless data is introduced, utilizing a user-friendly smartphone app (currently available for Android). The app enables the general public to quickly record information about homeless individuals, including the number of people and details about their living conditions. The collected data, including date, time, and location, is anonymized and securely transmitted to the cloud. It is anticipated that an increasing number of users motivated to contribute to society will adopt the app, thus expanding the data collection efforts. Duplicate data is addressed through simple classification methods, and historical data is utilized to fill in missing information. The second contribution of this paper is the description of data analysis techniques applied to the collected data. By combining this new data with existing information, statistical regression analysis is employed to gain insights into various aspects, such as distinguishing between unsheltered and sheltered homeless populations, as well as examining their correlation with factors like unemployment rates, housing affordability, and labor demand. Initial data is collected in San Francisco, while pre-existing information is drawn from three cities: San Francisco, New York City, and Washington D.C., facilitating the conduction of simulations. The third contribution focuses on demonstrating the practical implications of the data processing results. The challenges faced by key stakeholders, including charitable organizations and local city governments, are taken into consideration. Two case studies are presented as examples. The first case study explores improving the efficiency of food and necessities distribution, as well as medical assistance, driven by charitable organizations. The second case study examines the correlation between micro-geographic budget expenditure by local city governments and homeless information to justify budget allocation and expenditures. The ultimate objective of this endeavor is to enable the continuous enhancement of the quality of life for the underprivileged. It is hoped that through increased crowdsourcing of data from the public, the Generosity Curve and the Need Curve will intersect, leading to a better world for all.

Keywords: crowdsourcing, homelessness, socio-economic policies, statistical regression

Procedia PDF Downloads 74
24658 A 3-Year Evaluation Study on Fine Needle Aspiration Cytology and Corresponding Histology

Authors: Amjad Al Shammari, Ashraf Ibrahim, Laila Seada

Abstract:

Background and Objectives: The incidence of thyroid carcinoma has been increasing worldwide. In the present study, we evaluated the diagnostic accuracy of fine needle aspiration (FNA) and its efficiency in the early detection of neoplastic lesions of the thyroid gland over a 3-year period. Methods: Data were retrieved from pathology files in King Khalid Hospital. For each patient, age, gender, FNA result, site and size of the nodule, and final histopathologic diagnosis were recorded. Results: The study included 490 cases, of which 419 were female and 71 male. The male to female ratio was 1:6. The mean age was 43 years for males and 38 for females. Cases with confirmed histopathology numbered 131. In 101/131 (77.1%), concordance was found between FNA and histology; in 30/131 (22.9%), there was a discrepancy in diagnosis. Total malignant cases were 43, of which 14 (32.5%) were true positive and 29 (67.44%) were false negative. No false positive cases were found in our series. Conclusion: FNA could diagnose benign nodules in all cases; however, in malignant cases, ultrasound findings have to be taken into consideration to avoid missing a microcarcinoma in the contralateral lobe.
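
For clarity, the headline figures can be reproduced directly from the reported counts (a simple illustrative recalculation, not part of the original analysis):

```python
# Recomputing the reported FNA figures from the raw counts in the abstract.
confirmed = 131
concordant = 101
malignant = 43
true_positive = 14
false_negative = 29

print(f"concordance: {concordant / confirmed:.1%}")                     # 101/131
print(f"sensitivity for malignancy: {true_positive / malignant:.1%}")   # 14/43
print(f"false-negative rate: {false_negative / malignant:.1%}")         # 29/43
```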

Keywords: FNA, hail, histopathology, thyroid

Procedia PDF Downloads 326
24657 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments

Authors: Rahul Paul, Peter Mctaggart, Luke Skinner

Abstract:

Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially-distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine-learning approaches are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters fitted to the captured LiDAR data points using a least squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
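
A simplified illustration of the least-squares catenary fitting step, for a single clustered conductor span already projected onto its vertical plane, is sketched below; the catenary parameters and points are synthetic, not the utility's LiDAR capture.

```python
# Least-squares fit of a catenary z = z0 + c*(cosh((x - x0)/c) - 1) to a single
# clustered conductor span. Points are synthetic with added noise, standing in
# for LiDAR returns projected onto the conductor's vertical plane.
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, x0, z0, c):
    return z0 + c * (np.cosh((x - x0) / c) - 1.0)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 80.0, 120)                    # metres along the span
z_true = catenary(x, 40.0, 12.0, 150.0)            # "true" sag parameters
z = z_true + rng.normal(0.0, 0.05, x.size)         # simulated LiDAR noise

params, _ = curve_fit(catenary, x, z, p0=[x.mean(), z.min(), 100.0])
x0_hat, z0_hat, c_hat = params
print(f"lowest point at x = {x0_hat:.1f} m, z = {z0_hat:.2f} m, catenary constant c = {c_hat:.1f} m")
```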

Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry

Procedia PDF Downloads 86
24656 Analysis of Big Data

Authors: Sandeep Sharma, Sarabjit Singh

Abstract:

With growing user demand and the rapid growth of freely generated data, storage solutions face increasing challenges in protecting, storing, and retrieving data. The day is not far off when storage companies and organizations will start saying 'no' to storing our valuable data, or will start charging a huge amount for its storage and protection. On the other hand, environmental conditions make it challenging to establish and maintain new data warehouses and data centers in the face of global warming threats. The challenge of small data is over; the big challenge now is how to manage the exponential growth of data. In this paper, we analyze the growth trend of big data and its future implications. We also focus on the impact of unstructured data on various concerns and suggest some possible remedies to streamline big data.

Keywords: big data, unstructured data, volume, variety, velocity

Procedia PDF Downloads 534
24655 Contribution of Community-Based House to House (H2H) Active Tuberculosis (TB) Case Finding (ACF) to Increase in TB Notification in Nigeria: Kano State Experience 2012 to 2022

Authors: Ibrahim Umar, S Chindo, A Rajab

Abstract:

Background: TB remains a disease of public health concern in Nigeria, with an estimated incidence rate of 219/100,000. Kano has the second highest TB burden in Nigeria and is the leading state with the highest consistent yearly TB notification. House-to-house (H2H) active case search in the community was found to make a major contribution to the total TB notification in the state. Aims and Objective: To showcase the impact of H2H community active TB case finding (ACF) on yearly TB notification in Kano State, Northern Nigeria, from 2012 to 2022. Methodology: This is a retrospective descriptive study based on the analysis of routine quarterly and yearly TB data collected in the state. Data were analyzed using Power BI with a statistical alpha level of significance of <0.05. Results: Between 2012 and 2013 there was no house-to-house active TB case search in Nigeria, and Kano had zero contribution to TB notification from the community in those years. However, in 2014, with the introduction of H2H active TB case search, Kano notified 6,014 TB cases, of which 113 came from community ACF, translating to a 2% contribution to total TB notification. From 2014 to 2022 there was a progressive increase in the community contribution to TB case notification, from 113 out of 6,014 total TB patients notified (2014) to 11,799 out of 26,371 TB patients notified (2022) in Kano State. This translated to a 45% community contribution to total TB case notification. Discussion: A remarkable increase in the community contribution to total TB case notification in Kano State was achieved in 2022, with 11,799 TB cases notified from community active TB case search out of the total of 26,731 TB cases notified in Kano State, Nigeria. Conclusion: This research has shown that community-based H2H active TB case search through Community TB Workers (CTWs) is an excellent strategy for finding the missing TB cases and working towards ending TB in the world.

Keywords: tuberculosis (TB), active case finding (ACF), house-to-house (H2H), community TB workers (CTWs)

Procedia PDF Downloads 66
24654 Pleomorphic Dermal Sarcoma: A Management Challenge

Authors: Mona Nada, Fahmy Fahmy

Abstract:

Background: Pleomorphic dermal sarcoma is a rare form of skin cancer affecting the cutaneous layer, in some cases associated with recurrence and metastasis, and most commonly seen in elderly patients in the head and neck region. Pleomorphic dermal sarcoma arises in areas exposed to ultraviolet light. The symptoms and severity of this kind of skin cancer vary according to histological factors. The differentiation of pleomorphic dermal sarcoma needs extensive immunohistochemistry, as the diagnosis depends mainly on exclusion to rule out other malignancies such as poorly differentiated squamous cell carcinoma, melanoma, angiosarcoma, and leiomyosarcoma. Objective: To assess the management of pleomorphic dermal sarcoma in our unit and compare it with the updated guidelines. Design: Retrospective study. Patient data were collected from medical records at the Countess of Chester plastic surgery unit over the last five years; all histologically confirmed cases of pleomorphic dermal sarcoma (2017-2023) were included in the study. The data collected comprised the clinical description of the lesions at first presentation, operation time, multidisciplinary team discussion, plan, and referral, as well as any second operation and investigations done, together with a comparison of histological examination, immunohistochemistry staining, excision, and rate of recurrence. Results: The data collected (N = 19, 2017-2023) showed that the disease predominantly affects males, with lesions mainly in the head and neck; the diagnosis needed extensive immunohistochemistry to differentiate it from other malignancies. Recurrence was present in a number of the cases, which were managed after multidisciplinary team discussion either by excision or radiotherapy. Conclusion: Pleomorphic dermal sarcoma is a rare malignancy that needs better understanding and must not be missed, as it is an aggressive form of skin cancer; the chance of metastasis and recurrence makes it very important to understand the process of development of the cancer and to review the management guidelines frequently.

Keywords: pleomorphic dermal sarcoma, recurrence, radiotherapy, surgical

Procedia PDF Downloads 61
24653 Research of Data Cleaning Methods Based on Dependency Rules

Authors: Yang Bao, Shi Wei Deng, WangQun Lin

Abstract:

This paper introduces the concept and principles of data cleaning, analyzes the types and causes of dirty data, and proposes several key steps of a typical cleaning process. It puts forward a scalable and versatile data cleaning framework and, for data with attribute dependency relations, designs several violation-data discovery algorithms expressed as formal formulas, which can identify inconsistent data across all target columns with conditional attribute dependencies, whether the data are structured (SQL) or unstructured (NoSQL). Six data cleaning methods are given based on these algorithms.
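
As a small illustration of violation-data discovery under an attribute dependency — here a hypothetical rule zip_code → city, not one of the paper's formal algorithms — rows breaking the dependency can be located as follows.

```python
# Finding rows that violate a functional dependency zip_code -> city:
# a determinant value mapping to more than one dependent value is inconsistent.
# The table and the dependency rule are illustrative, not the paper's datasets.
import pandas as pd

df = pd.DataFrame({
    "zip_code": ["10001", "10001", "94105", "94105", "94105"],
    "city":     ["New York", "New York", "San Francisco", "San Fransisco", "San Francisco"],
})

counts = df.groupby("zip_code")["city"].nunique()
bad_keys = counts[counts > 1].index            # zip codes with conflicting cities
violations = df[df["zip_code"].isin(bad_keys)]

print(violations)   # candidate dirty rows to repair (e.g. the misspelled city)
```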

Keywords: data cleaning, dependency rules, violation data discovery, data repair

Procedia PDF Downloads 553
24652 Wind Velocity Mitigation for Conceptual Design: A Spatial Decision (Support Framework)

Authors: Mohamed Khallaf, Hossein M Rizeei

Abstract:

Simulating wind pattern behavior over proposed urban features is critical in the early stage of conceptual design in both the architectural and urban disciplines. However, it is typically not possible for designers to explore the impact of wind flow profiles across new urban developments due to a lack of real data and inaccurate estimation of building parameters. Modeling the details of existing and proposed urban features and testing them against wind flows is the missing part of the conceptual design puzzle on which the architectural and urban disciplines can focus. This research aims to develop a spatial decision-support design method utilizing LiDAR, GIS, and performance-based wind simulation technology to mitigate wind-related hazards in a design by simulating alternative design scenarios at the pedestrian level prior to implementation in Sydney, Australia. The result of the experiment demonstrates the capability of the proposed framework to improve pedestrian comfort in relation to the wind profile.

Keywords: spatial decision-support design, performance-based wind simulation, LiDAR, GIS

Procedia PDF Downloads 109
24651 Application of Fair Value Accounting in an Emerging Market Algerian Case

Authors: Haouam Djemaa

Abstract:

This study aimed to identify the possibility of applying fair value accounting in Algerian enterprises listed on the capital market (Algiers stock exchange). To achieve the objectives of this study, we conducted interviews with preparers of accounting information. The results document that enterprises are aware of fair value accounting in financial reporting because of its ability to provide useful accounting information, but its application depends on the availability of favorable circumstances, and this is what is missing in the Algerian environment.

Keywords: fair value, financial reporting, accounting information, valuation method

Procedia PDF Downloads 382
24650 Effect of Facilitation in a Problem-Based Environment on the Metacognition, Motivation and Self-Directed Learning in Nursing: A Quasi-Experimental Study among Nurse Students in Tanzania

Authors: Walter M. Millanzi, Stephen M. Kibusi

Abstract:

Background: Currently, there is a progressive shortage not only in the number but also in the quality of medical practitioners, particularly in nursing. Moreover, some of those in practice exhibit unethical and illegal practices, substandard care, and malpractice. This raises concern about the way they are prepared; something may be missing in nursing curricula or in how they are delivered. There is a need to transform or test new teaching modalities to build a competent health workforce. Objective: To investigate the effect of Facilitation in a Problem-based Environment (FPBE) on metacognition, self-directed learning, and learning motivation among undergraduate nurse students in Tanzanian higher learning institutions. Methods: This was a quasi-experimental study (quantitative research approach). A purposive sampling technique was employed to select institutions, achieving a sample size of 401 participants (intervention = 134 and control = 267). A self-administered semi-structured questionnaire was the main data collection method, and the SPSS software program (v. 20) was used for data entry, data analysis, and presentation. Results: The pre-post test results between groups indicated a noticeably significant change in metacognition in the intervention group (M = 1.52, SD = 0.501) against the control (M = 1.40, SD = 0.490), t(399) = 2.398, p < 0.05, and in SDL in the intervention group (M = 1.52, SD = 0.501) against the control (M = 1.40, SD = 0.490), t(399) = 2.398, p < 0.05. Motivation to learn differed between the intervention group (M = 62.67, SD = 14.14) and the control (n = 267, M = 57.75), t(399) = 2.907, p < 0.01. The FPBE teaching pedagogy was observed to be effective for metacognition (AOR = 1.603, p < 0.05), SDL (OR = 1.729, p < 0.05), and intrinsic motivation in learning (AOR = 1.720, p < 0.05) compared with conventional teaching pedagogy. Nevertheless, it was less likely to enhance extrinsic motivation (AOR = 0.676, p > 0.05) and amotivation (AOR = 0.538, p > 0.05). Conclusion and recommendation: An FPBE teaching pedagogy can improve metacognition, self-directed learning, and intrinsic motivation to learn among nurse students. Nursing curricula developers should incorporate it to produce competent and qualified 21st-century nurses.

Keywords: facilitation, metacognition, motivation, self-directed

Procedia PDF Downloads 177
24649 Examining the Skills of Establishing Number and Space Relations of Science Students with the 'Integrative Perception Test'

Authors: Nisa Yenikalayci, Türkan Aybike Akarca

Abstract:

The ability to establish number and space relations, one of the basic scientific process skills, is used in transforming a two-dimensional object into a three-dimensional image or in expressing the symmetry axes of an object. This research aims to determine the ability of science students to establish number and space relations. The research was carried out with a total of 90 students studying in the first semester of the Science Education program of a state university located in Turkey's Black Sea Region, in the fall semester of the 2017-2018 academic year. An ‘Integrative Perception Test (IPT)’ was designed by the researchers to collect the data. Within the scope of the IPT, courses and workbooks specific to the field of science were scanned, and visual items without a symmetrical structure belonging to the ‘Physics - Chemistry - Biology’ sub-fields were selected and listed. During the application, students were expected to imagine and draw the missing half of the visual items, which were presented incomplete. The data obtained from the test, which contains 30 images or pictures in total (f Physics = 10, f Chemistry = 10, f Biology = 10), were analyzed descriptively based on the drawings created by the students as ‘complete (2 points), incomplete/wrong (1 point), empty (0 points)’. For teaching new concepts to younger age groups, images or pictures showing symmetrical structures and similar applications can also be used.

Keywords: integrative perception, number and space relations, science education, scientific process skills

Procedia PDF Downloads 143
24648 Mining Big Data in Telecommunications Industry: Challenges, Techniques, and Revenue Opportunity

Authors: Hoda A. Abdel Hafez

Abstract:

Mining big data represents a big challenge nowadays. Many studies are concerned with mining massive amounts of data and big data streams. Mining big data faces many challenges, including scalability, speed, heterogeneity, accuracy, provenance, and privacy. In the telecommunication industry, mining big data is like mining for gold; it represents a big opportunity for maximizing the revenue streams in this industry. This paper discusses the characteristics of big data (volume, variety, velocity, and veracity), data mining techniques and tools for handling very large data sets, mining big data in telecommunication, and the benefits and opportunities gained from them.

Keywords: mining big data, big data, machine learning, telecommunication

Procedia PDF Downloads 389
24647 Mob Justice in Ghana: Implication for Peace

Authors: Ishaq Alhassan Meriga

Abstract:

This study examined the phenomenon of mob violence and its implications for peace in Ghana. The study used an archival study of media reports, content analysis of other secondary data, and eyewitness accounts. It examined trends and patterns of vigilante violence within the Ghanaian context. Results showed a considerable increase in the occurrence of mob violence within the last 10 years. Theft and robbery emerged as the most frequently suspected crimes for which victims were attacked, and the LGBT community was not spared. Cases of mob violence were most frequently reported in urban areas. This study has shown that the patterns, scope, nature, and implications of mob justice in Ghana are broadly similar to those found in other parts of Africa and the globe. Mob violence is identified as undermining the rule of law and thereby infringing on the fundamental human rights of the victims. It is confirmed to have a cycle of effects that is an impediment to the peace of the country. The study underscores the implications of mob violence in terms of the disdain for human life and dignity, the need to revisit our justice systems and punishment procedures, and the need to resource and empower law enforcers to fight the menace of vigilantism. The archival study had a limitation regarding missing data: the majority of the cases used for the study lack information, mostly on perpetrators and on the steps taken by public authorities and security agencies after reports of a mob attack were lodged with them. The study recommends that further research be undertaken on the perpetrators and survivors of mob actions in order to get a holistic understanding of the phenomenon. This will give a more comprehensive view of the issue of mob violence in Ghana. From the findings, it can be concluded that mob justice is a social canker in Ghanaian communities, which has a great impact on the peace of the country.

Keywords: LGBT, mob justice, peace, vigilantism

Procedia PDF Downloads 57
24646 JavaScript Object Notation Data against eXtensible Markup Language Data in Software Applications a Software Testing Approach

Authors: Theertha Chandroth

Abstract:

This paper presents a comparative study on how to check JSON (JavaScript Object Notation) data against XML (eXtensible Markup Language) data from a software testing point of view. JSON and XML are widely used data interchange formats, each with its unique syntax and structure. The objective is to explore various techniques and methodologies for validating, comparing, and integrating JSON data with XML data and vice versa. By understanding the process of checking JSON data against XML data, testers, developers, and data practitioners can ensure accurate data representation, seamless data interchange, and effective data validation.
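
A minimal sketch of one such check, parsing equivalent JSON and XML payloads and comparing them field by field, is shown below; the documents and element names are invented for illustration.

```python
# Comparing a JSON payload against an equivalent XML payload field by field.
# The documents and element names are invented to illustrate the check.
import json
import xml.etree.ElementTree as ET

json_doc = '{"id": "42", "name": "Ada", "role": "tester"}'
xml_doc = '<user><id>42</id><name>Ada</name><role>developer</role></user>'

json_data = json.loads(json_doc)
xml_data = {child.tag: child.text for child in ET.fromstring(xml_doc)}

mismatches = {
    key: (json_data.get(key), xml_data.get(key))
    for key in set(json_data) | set(xml_data)
    if json_data.get(key) != xml_data.get(key)
}
print("match" if not mismatches else f"mismatched fields: {mismatches}")
```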

Keywords: XML, JSON, data comparison, integration testing, Python, SQL

Procedia PDF Downloads 123
24645 Open Fields' Dosimetric Verification for a Commercially-Used 3D Treatment Planning System

Authors: Nashaat A. Deiab, Aida Radwan, Mohamed Elnagdy, Mohamed S. Yahiya, Rasha Moustafa

Abstract:

This study evaluates and investigates the dosimetric performance of our institution's 3D treatment planning system, Elekta PrecisePLAN, for open 6 MV fields, including square, rectangular, varied-SSD, centrally blocked, missing-tissue, square MLC, and MLC-shaped fields, guided by the recommended QA tests prescribed in AAPM TG-53, the NCS Report 15 test packages, IAEA TRS 430, and ESTRO Booklet No. 7. The study was performed on an Elekta Precise linear accelerator designed for a clinical range of 4, 6, and 15 MV photon beams, with asymmetric jaws and a fully integrated multileaf collimator that enables high conformance to the target with sharp field edges. Seven different tests were applied to a solid water-equivalent phantom together with a 2D array dose detection system; the doses calculated using the PrecisePLAN 3D treatment planning system were compared with measured doses to make sure that the dose calculations are accurate for open fields, including square, rectangular, varied-SSD, centrally blocked, missing-tissue, square MLC, and MLC-shaped fields. The QA results showed dosimetric accuracy of the TPS for open fields within the specified tolerance limits. However, for the large square (25 cm x 25 cm) and rectangular (20 cm x 5 cm) fields, some points were out of tolerance in the penumbra region (11.38% and 10.9%, respectively). For the SSD variation test, the enlarged field resulting from an SSD of 125 cm for a 10 cm x 10 cm field recorded an error of 0.2% at the central axis and 1.01% in the penumbra. The results yielded differences within the accepted tolerance level as recommended, although large fields showed variations in the penumbra. These differences between dose values predicted by the TPS and the measured values at the same point may result from limitations of the dose calculation, uncertainties in the measurement procedure, or fluctuations in the output of the accelerator.
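
The comparison step itself reduces to point-by-point percentage differences between planned and measured dose, flagged against a tolerance. The sketch below uses made-up dose values and a generic 3% tolerance, not the study's measurements or the exact tolerance criteria of the cited reports.

```python
# Point-by-point comparison of TPS-calculated vs measured dose, flagging points
# outside a chosen tolerance. Dose values and the 3% tolerance are illustrative,
# not the study's measurements or the region-specific criteria of TG-53.
import numpy as np

calculated = np.array([100.0, 98.5, 95.2, 60.4, 20.1])   # cGy, from the TPS
measured   = np.array([99.2, 98.9, 96.0, 58.9, 22.3])    # cGy, from the 2D array

percent_diff = 100.0 * (calculated - measured) / measured
tolerance = 3.0                                           # percent

for c, m, d in zip(calculated, measured, percent_diff):
    status = "OK" if abs(d) <= tolerance else "OUT OF TOLERANCE"
    print(f"calc {c:6.1f}  meas {m:6.1f}  diff {d:+5.2f}%  {status}")
```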

Keywords: quality assurance, dose calculation, 3D treatment planning system, photon beam

Procedia PDF Downloads 500
24644 A Comparative Study of Optimization Techniques and Models to Forecasting Dengue Fever

Authors: Sudha T., Naveen C.

Abstract:

Dengue is a serious public health issue that causes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating for a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of sudden disease outbreak control efforts. The National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention are the two U.S. Federal Government agencies from which this study uses environmental data. Based on environmental data that describe changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, which includes handling outliers and missing values to make sure the data are ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. In the third phase of the research, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, the model's performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The goal is to select an optimization strategy with the fewest errors, lowest cost, greatest productivity, or maximum potential results. Optimization is widely employed in a variety of fields, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method based on harmony search and an integrated genetic algorithm is introduced for input feature selection, and it shows a marked improvement in the model's predictive accuracy. The predictive models built on the Huber Regressor perform the best for both optimization and prediction.
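
A minimal sketch of the model-comparison stage is given below, using synthetic data in place of the NOAA/CDC series and omitting the harmony search and genetic algorithm feature selection; only the candidate regressors and error metrics named in the abstract are retained.

```python
# Comparing candidate regressors with MAE / RMSE on a synthetic stand-in for the
# weekly dengue data; the real study adds feature selection via harmony search
# and a genetic algorithm, which is omitted here.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "HuberRegressor": HuberRegressor(),
    "SVR": SVR(),
    "GradientBoostingRegressor": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_train, y_train).predict(X_test)
    mae = mean_absolute_error(y_test, pred)
    rmse = np.sqrt(mean_squared_error(y_test, pred))
    print(f"{name:>26s}  MAE {mae:7.2f}  RMSE {rmse:7.2f}")
```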

Keywords: deep learning model, dengue fever, prediction, optimization

Procedia PDF Downloads 44
24643 Multi-Source Data Fusion for Urban Comprehensive Management

Authors: Bolin Hua

Abstract:

In city governance, various data are involved, including city component data, demographic data, housing data, and all kinds of business data. These data reflect different aspects of people, events, and activities. Data generated by various systems differ in form, and their sources differ because they may come from different sectors. In order to reflect one or several facets of an event or rule, data from multiple sources need to be fused together. Data from different sources, collected in different ways, raise several issues that need to be resolved. Problems of data fusion include data update and synchronization, data exchange and sharing, file parsing and entry, duplicate data and its comparison, and resource catalogue construction. Governments adopt statistical analysis, time series analysis, extrapolation, monitoring analysis, value mining, and scenario prediction in order to achieve pattern discovery, law verification, root cause analysis, and public opinion monitoring. The result of multi-source data fusion is a uniform central database, which includes people data, location data, object data, institution data, business data, and space data. Metadata need to be available to be referred to and read when an application needs to access, manipulate, and display the data. Uniform metadata management ensures the effectiveness and consistency of data in the processes of data exchange, data modeling, data cleansing, data loading, data storing, data analysis, data search, and data delivery.
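
A tiny sketch of the fusion step itself — merging person records from two hypothetical departmental sources on a shared identifier and removing duplicates before loading into the central database — is shown below; the field names and records are invented.

```python
# Fusing person records from two departmental sources into one table:
# merge on a shared identifier, then drop duplicate rows before loading.
# Source names, fields and records are invented for illustration.
import pandas as pd

housing = pd.DataFrame({
    "person_id": [1, 2, 3],
    "address": ["12 Elm St", "40 Oak Ave", "7 Pine Rd"],
})
business = pd.DataFrame({
    "person_id": [2, 3, 3, 4],
    "employer": ["Acme", "Metro Works", "Metro Works", "Cityline"],
})

fused = (
    housing.merge(business, on="person_id", how="outer")   # combine both facets of each person
           .drop_duplicates()                               # remove repeated rows
           .sort_values("person_id")
)
print(fused)
```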

Keywords: multi-source data fusion, urban comprehensive management, information fusion, government data

Procedia PDF Downloads 376
24642 Reviewing Privacy Preserving Distributed Data Mining

Authors: Sajjad Baghernezhad, Saeideh Baghernezhad

Abstract:

Nowadays, given the growing involvement of people in data creation, methods such as data mining for extracting knowledge are unavoidable. One of the issues in data mining is the inherent distribution of the data: the databases creating or receiving such data usually belong to corporate or non-corporate persons who do not give their information freely to others. Yet there is no guarantee that someone can mine particular data without intruding on the owner's privacy. Sending data and then gathering them, whether partitioned vertically or horizontally across parties, depends on the type of privacy preservation employed and is carried out to improve data privacy. In this study, an attempt was made to compare privacy-preserving data mining methods comprehensively; general methods such as data randomization and encoding are examined, along with the strong and weak points of each.
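
As one example of the 'random data' (value perturbation) family of methods mentioned above — a minimal sketch rather than any specific published protocol — each party can add zero-mean noise before sharing, so aggregate statistics remain usable while individual values are masked.

```python
# Additive-noise perturbation: individual salaries are masked before sharing,
# yet the mean of the perturbed data stays close to the true mean.
# Values and the noise scale are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
true_values = rng.normal(loc=50_000, scale=8_000, size=1_000)   # private records

noise = rng.normal(loc=0.0, scale=10_000, size=true_values.size)
shared_values = true_values + noise        # what a party actually releases for mining

print(f"true mean     : {true_values.mean():10.1f}")
print(f"shared mean   : {shared_values.mean():10.1f}")   # close to the true mean
print(f"example record: true {true_values[0]:.0f} -> shared {shared_values[0]:.0f}")
```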

Keywords: data mining, distributed data mining, privacy protection, privacy preserving

Procedia PDF Downloads 511
24641 The Right to Data Portability and Its Influence on the Development of Digital Services

Authors: Roman Bieda

Abstract:

The General Data Protection Regulation (GDPR) will come into force on 25 May 2018 which will create a new legal framework for the protection of personal data in the European Union. Article 20 of GDPR introduces a right to data portability. This right allows for data subjects to receive the personal data which they have provided to a data controller, in a structured, commonly used and machine-readable format, and to transmit this data to another data controller. The right to data portability, by facilitating transferring personal data between IT environments (e.g.: applications), will also facilitate changing the provider of services (e.g. changing a bank or a cloud computing service provider). Therefore, it will contribute to the development of competition and the digital market. The aim of this paper is to discuss the right to data portability and its influence on the development of new digital services.

Keywords: data portability, digital market, GDPR, personal data

Procedia PDF Downloads 461