Search results for: data bank
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25013

24443 The Effectiveness of Rebranding as a Comparative Study of Ghanaian Business Using the Principles of Corporate Rebranding

Authors: Kennedy Gbenu, Richmond Kweku Frempong

Abstract:

Rebranding has become a very important strategic tool for companies seeking to succeed in an ever more competitive business world. Using the principles of corporate rebranding proposed by Moisescu, two businesses in Ghana (Ghana Commercial Bank and Vodafone Ghana) were examined to ascertain how they applied these principles in their efforts to rebrand themselves and stay relevant. Secondary research, mainly the literature surrounding rebranding together with the official websites of the organizations under study, was used extensively. The basic comparative study undertaken indicates that the two firms (GCB and Vodafone) apply the first three principles provided by Moisescu and reap benefits from them. This goes to show that rebranding should not be done in a vacuum but should be guided by such principles so as to achieve the full potential of any investment made.

Keywords: brands, corporate branding, innovation, case studies

Procedia PDF Downloads 388
24442 Multidimensional Poverty: A Comparative Study for Vulnerability of Women in Lebanon

Authors: Elif N. Coban

Abstract:

With the political instability that has prevailed in Lebanon since October 2019, followed by a global pandemic and a deepening concurrent economic crisis after the Beirut Port explosion on August 4, 2020, Syrian refugees in Lebanon have struggled to survive what the World Bank has described as one of the worst economic crises in decades. This study aims to assess the vulnerability of Syrian refugee women. It presents a comparative analysis of refugee and Lebanese households using data from Lebanon’s Labour Force and Household Conditions Survey (LFHLCS) and from the VASyR surveys, which are comprehensive annual surveys conducted jointly by the United Nations High Commissioner for Refugees (UNHCR), the United Nations Children’s Fund (UNICEF), and the United Nations World Food Programme (WFP). The study adopts an intersectionality-based method, which approaches gender and marginalized communities from many different perspectives, to put forward a gender-oriented analysis. Examining the distribution of socioeconomic status among Syrian and Lebanese households might help to understand the disproportionate burdens borne by women. In this context, multidimensional poverty (MP) helps depict fragile communities’ socioeconomic status and allows a fuller grasp of the multiple aspects of deprivation. Finally, this understanding may pave the way to more inclusive policies for decision-makers and practitioners working on refugee issues.
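Multidimensional poverty of the kind discussed above is often quantified with the Alkire-Foster index (MPI = incidence H x intensity A). The abstract does not specify the exact measure used, so the sketch below is only a hedged illustration with invented households, equal weights, and a hypothetical poverty cutoff:

```python
def mpi(deprivations, weights, k):
    """Alkire-Foster MPI: a household is multidimensionally poor when its
    weighted deprivation score reaches the cutoff k; MPI = H * A, where
    H is the incidence of poverty and A its average intensity."""
    scores = [sum(w for d, w in zip(row, weights) if d) for row in deprivations]
    poor = [s for s in scores if s >= k]
    if not poor:
        return 0.0
    H = len(poor) / len(scores)   # headcount ratio
    A = sum(poor) / len(poor)     # average deprivation share among the poor
    return H * A

# hypothetical data: 4 households scored on 3 equally weighted dimensions
rows = [[1, 1, 0], [0, 0, 0], [1, 1, 1], [0, 1, 0]]
w = [1 / 3, 1 / 3, 1 / 3]
value = mpi(rows, w, k=2 / 3)     # poor = deprived in >= 2 of 3 dimensions
print(round(value, 3))
```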

Keywords: multidimensional poverty, gender studies, intersectionality, Syrian refugees, Lebanon

Procedia PDF Downloads 104
24441 Thermal and Hydraulic Design of Shell and Tube Heat Exchangers

Authors: Ahmed R. Ballil

Abstract:

Heat exchangers are devices used to transfer heat between two fluids. These devices are utilized in many engineering and industrial applications, such as heating, cooling, condensation, and boiling processes. The fluids may be in direct contact (mixed), or they may be separated by a solid wall to avoid mixing. In the present paper, an interactive computer-aided design of shell and tube heat exchangers is developed using Visual Basic computer code as a framework. The design is based on the Bell-Delaware method, one of the best-known methods reported in the literature for the design of shell and tube heat exchangers. Physical properties for either the tube-side or the shell-side fluid are evaluated internally by calling on a large data bank comprising more than a hundred fluid compounds, which contributes to the accuracy of the design. The international system of units is used throughout the developed computer program. The design has the added feature of being capable of performing modifications based upon a preset design criterion, such that an optimum design is obtained while satisfying constraints set either by the user or by the method itself. The code can also give an estimate of the approximate cost of the heat exchanger based on the surface area predicted by the program. Finally, the present thermal and hydraulic design code is tested for accuracy and consistency against existing, approved designs of shell and tube heat exchangers.
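The core of the Bell-Delaware approach is to take an ideal cross-flow heat-transfer coefficient and degrade it with correction factors (for baffle cut, leakage, bypass, and so on), after which the required surface area follows from Q = U·A·ΔT_lm. Below is a minimal sketch of that sizing step; the correction factors, coefficients, and duty are invented for illustration (the paper's program evaluates such quantities from its fluid-property data bank):

```python
import math

def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for a counter-current exchanger."""
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    if abs(dt1 - dt2) < 1e-9:
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

def shell_side_h(h_ideal, jc, jl, jb, js, jr):
    """Bell-Delaware: ideal cross-flow coefficient times correction factors."""
    return h_ideal * jc * jl * jb * js * jr

# hypothetical duty: cool 10 kg/s of water from 80 to 50 C, cold side 20 -> 40 C
q = 10 * 4180 * (80 - 50)            # duty in W
dtm = lmtd(80, 50, 20, 40)           # log-mean temperature difference, K
h_shell = shell_side_h(2000, 1.0, 0.8, 0.9, 0.95, 1.0)  # W/m2K
u = 1 / (1 / h_shell + 1 / 3000)     # overall U, neglecting wall and fouling
area = q / (u * dtm)                 # required surface area, m2
print(round(area, 1))
```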

Keywords: bell-delaware method, heat exchangers, shell and tube, thermal and hydraulic design

Procedia PDF Downloads 140
24440 Data Mining Approach for Commercial Data Classification and Migration in Hybrid Storage Systems

Authors: Mais Haj Qasem, Maen M. Al Assaf, Ali Rodan

Abstract:

Parallel hybrid storage systems consist of a hierarchy of different storage devices that vary in data-reading speed. As we ascend the hierarchy, data reading becomes faster. Thus, migrating the application's important data, i.e., the data that will be accessed in the near future, to the uppermost level reduces the application's I/O waiting time and hence its execution elapsed time. In this research, we implement a trace-driven, two-level parallel hybrid storage system prototype that consists of HDDs and SSDs. The prototype uses data mining techniques to classify the application's data in order to determine its near-future data accesses in parallel with its on-demand requests. The important data are continuously migrated to the uppermost level of the hierarchy. Our simulation results show that our data migration approach, integrated with data mining techniques, reduces the application execution elapsed time by at least 22% across a variety of traces.
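The classify-then-migrate idea can be sketched very simply. The paper trains recurrent neural network and support vector machine classifiers on access traces; the toy below substitutes a plain frequency ranking (the trace, capacity, and latencies are invented) just to show how promoting hot blocks to the SSD tier cuts total I/O wait:

```python
from collections import Counter

def select_hot_blocks(trace, ssd_capacity):
    """Rank blocks by access frequency in the recent trace window and keep
    the hottest ones that fit on the SSD tier (a stand-in for the paper's
    data-mining classifiers)."""
    freq = Counter(trace)
    ranked = [blk for blk, _ in freq.most_common()]
    return set(ranked[:ssd_capacity])

def replay(trace, hot, ssd_latency=0.1, hdd_latency=5.0):
    """Total I/O wait time (arbitrary units) when hot blocks live on SSD."""
    return sum(ssd_latency if blk in hot else hdd_latency for blk in trace)

trace = [1, 2, 1, 3, 1, 2, 4, 1, 2, 5]
hot = select_hot_blocks(trace, ssd_capacity=2)
baseline = replay(trace, set())    # everything on HDD
migrated = replay(trace, hot)      # hot blocks promoted to SSD
print(hot, baseline, migrated)
```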

Keywords: hybrid storage system, data mining, recurrent neural network, support vector machine

Procedia PDF Downloads 299
24439 Discussion on Big Data and One of Its Early Training Applications

Authors: Fulya Gokalp Yavuz, Mark Daniel Ward

Abstract:

This study focuses on a contemporary and inevitable topic of Data Science and an exemplary application of it for early career building: Big Data and the Living Learning Community (LLC). Academia and industry share a common sense of the importance of Big Data; however, both are at risk of missing out on training in this interdisciplinary area. Some traditional teaching doctrines are far from effective for Data Science: practitioners need intuition and real-life examples of how to apply new methods to data on the scale of terabytes. We explain the scope of Data Science training and exemplify its early-stage application with the LLC, a National Science Foundation (NSF) funded project under the supervision of Prof. Ward since 2014. Essentially, we aim to give professors, researchers, and practitioners some intuition for combining data science tools into comprehensive real-life examples, guided by mentees' feedback. By discussing mentoring methods and the computational challenges of Big Data, we intend to underline its potential.

Keywords: Big Data, computation, mentoring, training

Procedia PDF Downloads 350
24438 Towards a Secure Storage in Cloud Computing

Authors: Mohamed Elkholy, Ahmed Elfatatry

Abstract:

Cloud computing has emerged as a flexible computing paradigm that has reshaped the Information Technology map. However, cloud computing has brought about a number of security challenges as a result of the physical distribution of computational resources and the limited control that users have over the physical storage. This situation raises many security challenges for data integrity and confidentiality as well as authentication and access control. This work proposes a security mechanism for data integrity that allows a data owner to be aware of any modification that takes place to their data. The data integrity mechanism is integrated with an extended Kerberos authentication that ensures authorized access control. The proposed mechanism protects data confidentiality even if data are stored on untrusted storage. The mechanism has been evaluated against different types of attacks and proved efficient in protecting cloud data storage from malicious attacks.
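The abstract does not spell out the integrity construction, so the following is only a generic keyed-hash sketch in the same spirit: the owner keeps a secret key, tags data before uploading it to untrusted storage, and later re-verifies the tag to detect any modification by the provider (the key and payload below are invented):

```python
import hashlib
import hmac

def make_tag(key: bytes, data: bytes) -> bytes:
    """Owner computes an HMAC-SHA256 tag before uploading the data."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, data: bytes, tag: bytes) -> bool:
    """Owner re-checks the tag to detect modification on untrusted storage."""
    return hmac.compare_digest(make_tag(key, data), tag)

key = b"owner-secret-key"
blob = b"payroll records v1"
tag = make_tag(key, blob)
print(verify(key, blob, tag))                    # untouched data
print(verify(key, b"payroll records v2", tag))   # tampered data
```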

Keywords: access control, data integrity, data confidentiality, Kerberos authentication, cloud security

Procedia PDF Downloads 327
24437 Burden of Dengue in Northern India

Authors: Ashutosh Biswas, Poonam Coushic, Kalpana Baruah, Paras Singla, A. C. Dhariwal, Pawana Murthy

Abstract:

Aim: This study was conducted to estimate the burden of dengue in the capital region of India. Methodology: Seropositivity of dengue for IgM Ab, NS1 Ag, and IgG Ab was determined among samples from blood-bank donors who came to donate blood for patients admitted to the hospital. Blood samples were collected throughout the year to estimate the seroprevalence of dengue within and outside the outbreak season. All subjects were asymptomatic at the time of blood donation. Results: A total of 1558 donors were screened for the study. On the basis of the inclusion/exclusion criteria, 1531 subjects were enrolled. Twenty-seven donors were excluded: 6 were HIV-positive, 11 were positive for HBsAg, and 10 for HCV. Mean age was 30.51 ± 7.75 years. Of the 1531 subjects, 18 (1.18%) had a past history of typhoid fever, 28 (1.83%) of chikungunya fever, 9 (0.59%) of malaria, and 43 (2.81%) of symptomatic dengue infection. About 2.22% (34) of subjects were seropositive for NS1 Ag, with a peak point prevalence of 7.14% in October, and about 5.49% (84) were seropositive for IgM Ab, with a peak point prevalence of 14.29% in October. Seroprevalence of IgG was 64.21% (983). Conclusion: Acute asymptomatic dengue (NS1 Ag-positive) was observed in up to 7.14% of donors, none of whom had symptoms at the time of sampling. This group poses a potential public health threat of transmitting dengue through blood transfusion (TTI) in the community, as the presence of NS1 Ag indicates active viral infection.
A policy of testing blood-bank samples for NS1 Ag may therefore be implemented to detect active dengue infection and prevent transfusion-transmitted dengue. Acute or subacute dengue infection (IgM Ab-positive) ranged from 5.49% overall to a peak point prevalence of 14.29% in October. About 64.21% of the population had been immunized by natural dengue infection (IgG Ab-positive) in this northern province of India, which might be helpful when considering the introduction of a dengue vaccine in the region.
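The reported percentages follow directly from the enrolled denominator of 1531 subjects; a quick check in code reproduces the abstract's figures:

```python
def prevalence(positives, n):
    """Point seroprevalence as a percentage, rounded to two decimals."""
    return round(100 * positives / n, 2)

n = 1531                   # enrolled subjects
print(prevalence(34, n))   # NS1 Ag seropositivity
print(prevalence(84, n))   # IgM Ab seropositivity
print(prevalence(983, n))  # IgG Ab seroprevalence
```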

Keywords: dengue burden, seroprevalence, asymptomatic dengue, dengue transmission through blood transfusion

Procedia PDF Downloads 135
24436 Ontological Modeling Approach for Statistical Databases Publication in Linked Open Data

Authors: Bourama Mane, Ibrahima Fall, Mamadou Samba Camara, Alassane Bah

Abstract:

National Statistical Institutes hold large volumes of data, generally in formats that condition how the information they contain is published. Each household or business data collection project includes its own dissemination platform. The dissemination methods previously used therefore do not promote rapid access to information and, in particular, do not offer the option of linking data for in-depth processing. In this paper, we present an approach to modeling these data in order to publish them in a format intended for the Semantic Web. Our objective is to publish all these data on a single platform and to offer the option of linking them with other external data sources. The approach will be applied to data from major national surveys, such as those on employment, poverty, and child labor, and the general census of the population of Senegal.
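Statistical observations of this kind are typically published as RDF using the W3C RDF Data Cube vocabulary. The sketch below hand-rolls a few N-Triples lines for one hypothetical survey observation; the namespace, property names, and figure are invented, and a real pipeline would use an RDF library rather than string formatting:

```python
def ntriple(s, p, o, literal=False):
    """Serialize one RDF statement as an N-Triples line."""
    obj = f'"{o}"' if literal else f"<{o}>"
    return f"<{s}> <{p}> {obj} ."

BASE = "http://example.org/stats/"            # hypothetical namespace
QB = "http://purl.org/linked-data/cube#"      # W3C RDF Data Cube vocabulary
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

# one observation from a hypothetical employment survey
obs = BASE + "obs/2013-dakar"
triples = [
    ntriple(obs, RDF_TYPE, QB + "Observation"),
    ntriple(obs, BASE + "region", BASE + "region/dakar"),
    ntriple(obs, BASE + "unemploymentRate", "10.2", literal=True),
]
print("\n".join(triples))
```

Once in this form, the observations can be loaded into any triple store and joined with external linked-data sources by URI.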

Keywords: Semantic Web, linked open data, database, statistics

Procedia PDF Downloads 170
24435 The Role of Data Protection Officer in Managing Individual Data: Issues and Challenges

Authors: Nazura Abdul Manap, Siti Nur Farah Atiqah Salleh

Abstract:

For decades, the misuse of personal data has been a critical issue. Malaysia has accepted responsibility by implementing the Malaysian Personal Data Protection Act 2010 (PDPA 2010) to secure personal data. After more than a decade, this legislation is set to be revised by the current PDPA 2023 Amendment Bill to align with the world's key personal data protection regulations, such as the European Union General Data Protection Regulation (GDPR). Among the suggested adjustments is the data user's appointment of a Data Protection Officer (DPO) to ensure the commercial entity's compliance with the PDPA 2010 criteria. The change is expected to be enacted in parliament fairly soon; nevertheless, based on the experience of the Personal Data Protection Department (PDPD) in implementing the Act, it is projected that there will be a slew of additional concerns associated with the DPO mandate. Consequently, the goal of this article is to highlight the issues that DPOs will encounter and how the Personal Data Protection Department should respond. The study results were produced using a qualitative technique based on an examination of the current literature. This research reveals likely obstacles faced by DPOs and argues that there should be a definite, clear guideline in place to aid DPOs in executing their tasks. It is argued that appointing a DPO is a wise measure for ensuring that legal data security requirements are met.

Keywords: guideline, law, data protection officer, personal data

Procedia PDF Downloads 69
24434 An Active Solar Energy System to Supply Heating Demands of the Teaching Staff Dormitory of Islamic Azad University Ramhormoz Branch

Authors: M. Talebzadegan, S. Bina, I. Riazi

Abstract:

The purpose of this paper is to present an active solar energy system to supply the heating demands of the teaching staff dormitory of the Islamic Azad University of Ramhormoz. The design takes into account the solar radiation and climate data of the town of Ramhormoz and is based on the daily warm-water consumption for the health demands of the dormitory's 450 residents, equal to 27,000 liters of 50 °C water, and on the heating requirements of a 3,500 m² building well protected by heatproof materials. First, the heating demands of the building were calculated; then a hybrid system made up of solar and fossil energies was developed; and finally, the design was evaluated economically. Since there is roof space for only 110 flat solar water heaters, the calculations were made to hybridize the solar water-heating system with a heat-pump system in which solar energy contributes 67% of the heat generated. According to the calculations, the net present value (NPV) of the revenue stream exceeds the NPV of the cash paid out within three years, which makes the project economically quite promising. The payback period of the project is four years, and its internal rate of return (IRR) is 25%, which exceeds the bank interest rate in Iran and underlines the desirability of the project.
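The economic screening described above rests on standard discounted-cash-flow formulas. A hedged sketch with invented cash flows (the paper's actual investment and savings figures are not given in the abstract):

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    """First year in which the cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None

# hypothetical figures: 100,000 invested, 30,000 saved per year for 8 years
flows = [-100_000] + [30_000] * 8
print(round(npv(0.15, flows)))   # NPV at a 15% discount rate
print(payback_period(flows))     # simple (undiscounted) payback, years
```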

Keywords: solar energy, heat demand, renewable energy, pollution

Procedia PDF Downloads 240
24433 Impact of Harmonic Resonance and V-THD in Sohar Industrial Port–C Substation

Authors: R. S. Al Abri, M. H. Albadi, M. H. Al Abri, U. K. Al Rasbi, M. H. Al Hasni, S. M. Al Shidi

Abstract:

This paper presents an analysis of the impacts of changes in capacitor banks, the loss of a transformer, and the installation of distributed generation on voltage total harmonic distortion and harmonic resonance. The study is applied to a real system in Oman, the Sohar Industrial Port–C Substation network. The frequency scan method and Fourier series analysis are used with the help of EDSA software. Moreover, the results are compared with the limits specified by the Oman national distribution code.
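A first-pass version of such a study is estimating where a capacitor bank resonates with the system inductance; a common rule of thumb puts the parallel-resonance harmonic order at sqrt(Ssc/Qc). The bus figures below are invented for illustration (the paper's frequency scan is a full impedance-versus-frequency sweep in EDSA, not this shortcut):

```python
import math

def resonance_order(short_circuit_mva, capacitor_mvar):
    """Approximate harmonic order of parallel resonance between the system
    inductance and a shunt capacitor bank: h_r = sqrt(Ssc / Qc)."""
    return math.sqrt(short_circuit_mva / capacitor_mvar)

# hypothetical 50 Hz bus: 500 MVA fault level, 20 Mvar capacitor bank
h = resonance_order(500, 20)
print(round(h, 2), round(h * 50, 1))  # harmonic order and frequency in Hz
```

A resonance near the 5th harmonic, as here, is a warning sign on systems with large 6-pulse converter loads.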

Keywords: power quality, capacitor bank, voltage total harmonics distortion, harmonic resonance, frequency scan

Procedia PDF Downloads 608
24432 Data Collection Based on a Questionnaire Survey in Hospital Emergencies

Authors: Nouha Mhimdi, Wahiba Ben Abdessalem Karaa, Henda Ben Ghezala

Abstract:

The methods used for data collection are diverse: electronic media, focus group interviews, and short-answer questionnaires [1]. The collection of poor-quality data, resulting, for example, from poorly designed questionnaires, the absence of good translators or interpreters, or the incorrect recording of data, leads to conclusions that are not supported by the data or that focus only on the average effect of the program or policy. There are several ways to avoid or minimize the most frequent errors, including obtaining expert advice on the design or adaptation of data collection instruments and using technologies that allow better anonymity in the responses [2]. In this context, we opted to collect good-quality data through a sizeable questionnaire-based survey on hospital emergencies in order to improve emergency services and alleviate the problems encountered. In this paper, we present our study and detail the steps followed to collect relevant, consistent, and practical data.

Keywords: data collection, survey, questionnaire, database, data analysis, hospital emergencies

Procedia PDF Downloads 99
24431 Position of the Constitutional Court of the Russian Federation on the Matter of Restricting Constitutional Rights of Citizens Concerning Banking Secrecy

Authors: A. V. Shashkova

Abstract:

The aim of the present article is to analyze the position of the Constitutional Court of the Russian Federation on restricting the constitutional rights of citizens to inviolability of professional and banking secrecy when controlling activities are carried out, to find the root of the problem, and to give recommendations for its solution. The methodological ground of the article is the dialectic scientific method applied to socio-political, legal, and organizational processes, with the principles of development, integrity, and consistency. The consistency analysis method is used in researching the object of the analysis, together with public-private research methods such as the formal-logical and comparative legal methods, which are used to compare understandings of the concept of 'secrecy'. The result of the research is the author's conclusion on the necessity of political will to improve Russian legislation so that it complies with the provisions of the Constitution. It is also necessary to establish a clear balance between the constitutional rights of the individual and the limits on those rights when public authorities carry out various control activities. Attempts by banks to 'overdo' compliance with anti-money-laundering law under threat of severe sanctions by the regulators have actually led to failures in normal economic activity: individuals face major problems with clearing-based payments as well as with cash withdrawals. The Bank of Russia sets excessively high requirements for banks in executing Federal Law No. 115-FZ, and political will is needed here as well. At the same time, recent changes in Russian legislation, e.g., allowing banks to unilaterally refuse to open accounts, have simplified banking activities in the country. The article focuses on different theoretical approaches to the concept of 'secrecy'.
The author gives an overview of the practices of Spain, Switzerland, and the United States of America on restricting the constitutional rights of citizens to inviolability of professional and banking secrecy in controlling activities. The Constitutional Court of the Russian Federation, basing itself on the Constitution of the Russian Federation, has its own understanding of the issue, which should be supported by further legislative development in the Russian Federation.

Keywords: constitutional court, restriction of constitutional rights, bank secrecy, control measures, money laundering, financial control, banking information

Procedia PDF Downloads 175
24430 Federated Learning in Healthcare

Authors: Ananya Gangavarapu

Abstract:

Convolutional Neural Network (CNN) based models are providing diagnostic capabilities on par with medical specialists in many specialty areas. However, collecting medical data for training purposes is very challenging because of the increased regulations around data collection and privacy concerns around personal health data. Gathering the data becomes even more difficult if the capture devices are edge-based mobile devices (like smartphones) with feeble wireless connectivity in rural or remote areas. In this paper, I highlight the Federated Learning approach as a way to mitigate these data privacy and security issues.
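In federated learning, each device trains on its own data and only model updates travel to the server, which averages them (the FedAvg scheme). A toy sketch with an invented one-parameter linear model and two synthetic clients, just to show the local-train / server-average loop; real deployments train CNNs and add secure aggregation on top:

```python
def local_update(w, data, lr=0.1):
    """One gradient step of local training on a 1-D linear model y = w*x.
    The raw (x, y) records never leave the client."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(global_w, client_datasets):
    """One FedAvg round: clients train locally; the server averages the
    returned weights, never seeing the underlying health records."""
    local_ws = [local_update(global_w, d) for d in client_datasets]
    return sum(local_ws) / len(local_ws)

# two synthetic clients whose data both roughly follow y = 2x
clients = [[(1.0, 2.0), (2.0, 4.0)], [(1.0, 2.2), (3.0, 6.0)]]
w = 0.0
for _ in range(30):
    w = fed_avg(w, clients)
print(round(w, 2))   # converges near the shared slope of 2
```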

Keywords: deep learning in healthcare, data privacy, federated learning, training in distributed environment

Procedia PDF Downloads 129
24429 The Utilization of Big Data in Knowledge Management Creation

Authors: Daniel Brian Thompson, Subarmaniam Kannan

Abstract:

The body of knowledge in the world and within the repositories of organizations has already reached immense proportions and is constantly increasing. To work within these constraints, Big Data implementations and algorithms are utilized to obtain new or enhanced knowledge for decision-making. The transition from data to knowledge provides transformational changes that deliver tangible benefits to those implementing these practices. Today, various organizations derive knowledge from observations and intuitions, and this information or data is translated into best practices for knowledge acquisition, generation, and sharing. Through the widespread use of Big Data, the main intention is to provide information that has been cleaned and analyzed in order to nurture tangible insights that an organization can apply to its knowledge-creation practices based on facts and figures. The translation of data into knowledge generates value for an organization, enabling decisive decisions as it transitions to best practices. Without a strong foundation of knowledge and Big Data, businesses cannot grow and advance within the competitive environment.

Keywords: big data, knowledge management, data driven, knowledge creation

Procedia PDF Downloads 102
24428 Survey on Data Security Issues in Cloud Computing Amongst SMEs in Nairobi County, Kenya

Authors: Masese Chuma Benard, Martin Onsiro Ronald

Abstract:

Businesses have been using cloud computing more frequently recently because they wish to take advantage of its benefits. However, employing cloud computing also introduces new security concerns, particularly with regard to data security, the potential risks and weaknesses that could be exploited by attackers, and the various tactics and strategies that could be used to lessen these risks. This study examines data security issues in cloud computing among SMEs in Nairobi County, Kenya, using a sample size of 48 and a mixed-methods research approach. The findings show that the data owner has no control over the cloud merchant's data management procedures and no way to ensure that data is handled legally; this implies a loss of control over data stored in the cloud. Data and information stored in the cloud may also face a range of availability issues due to internet outages, which can represent a significant risk to data kept in shared clouds. Integrity, availability, and secrecy issues are all identified.

Keywords: data security, cloud computing, information, information security, small and medium-sized firms (SMEs)

Procedia PDF Downloads 75
24427 Cloud Design for Storing Large Amount of Data

Authors: M. Strémy, P. Závacký, P. Cuninka, M. Juhás

Abstract:

The main goal of this paper is to introduce our design of a private cloud for storing large amounts of data, especially pictures, and to provide a solid technological backend for data analysis based on parallel processing and business intelligence. We have tested hypervisors, cloud management tools, storage for all the data, and Hadoop for analysis of unstructured data. We also needed to provide high availability, virtual network management, logical separation of projects, and rapid deployment of physical servers into our environment.

Keywords: cloud, glusterfs, hadoop, juju, kvm, maas, openstack, virtualization

Procedia PDF Downloads 347
24426 Tertiary Education Trust Fund Intervention Projects and Resource Utilization in Universities in South Western States, Nigeria

Authors: Oluwlola Felicia Kikelomo

Abstract:

This study examined the influence of Tertiary Education Trust Fund (TETF) intervention projects on resource utilization in universities in the South Western States of Nigeria. The study was a descriptive design of the correlation type. A purposive sampling technique was used to select six of the 14 beneficiary universities in the states. The instruments used to collect data were the TETF Intervention Projects Checklist (TETFIPC), Educational Facilities Checklist (EFC), and Resource Utilization Checklist (RUC). The research questions raised were answered using percentages and utilization rates, while the Pearson product-moment correlation statistic was used to test the hypotheses formulated to guide the study at the 0.05 level of significance. Findings of the study indicated that building construction had the highest TETF allocation (64.5%), while staff development opportunities had the least (1.1%) in the sampled universities. Significant positive relationships existed between time and space utilization rates and student academic performance in the universities (r(1,800) = 0.63 and r(1,800) = 0.59, p ≤ 0.05, respectively). Based on these findings, it was recommended that there be periodic evaluation of completed TETF projects and their utilization to ensure that TETF funds are properly used for the approved projects, and that TETF improve the provision of educational facilities to universities for staff and student use by increasing the education tax from 2% to 4%, in collaboration with the World Bank and other funding agencies, as is practiced in other countries such as Norway, Spain, and the United Kingdom.

Keywords: tertiary education trust fund, intervention, education, human development

Procedia PDF Downloads 369
24425 Estimation of Missing Values in Aggregate Level Spatial Data

Authors: Amitha Puranik, V. S. Binu, Seena Biju

Abstract:

Missing data is a common problem in spatial analysis, especially at the aggregate level. Missingness can occur in a covariate, in the response variable, or in both at a given location. Many missing-data techniques are available to estimate missing values, but not all of these methods can be applied to spatial data, since the data are autocorrelated. Hence there is a need to develop a method that estimates the missing values of both the response variable and the covariates in spatial data by taking account of the spatial autocorrelation. The present study aims to develop a model to estimate missing data points at the aggregate level in spatial data by accounting for (a) spatial autocorrelation of the response variable, (b) spatial autocorrelation of the covariates, and (c) the correlation between the covariates and the response variable. Estimating the missing values of spatial data requires a model that explicitly accounts for the spatial autocorrelation. The proposed model not only accounts for spatial autocorrelation but also utilizes the correlations that exist between covariates, within covariates, and between the response variable and the covariates. Precise estimation of the missing data points in spatial data will increase the precision of the estimated effects of independent variables on the response variable in spatial regression analysis.
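The simplest device that exploits spatial autocorrelation is the spatial lag: fill a missing areal value with the weighted mean of its observed neighbours from a contiguity matrix W. The proposed model is richer (it also uses the covariates and their correlations), but a hedged sketch of the lag idea with an invented four-district example:

```python
def spatial_lag_impute(y, W):
    """Replace each missing y[i] (None) with the weighted mean of its
    observed neighbours, taken from row i of the spatial weights matrix W."""
    out = list(y)
    for i, v in enumerate(y):
        if v is None:
            num = sum(W[i][j] * yj for j, yj in enumerate(y) if yj is not None)
            den = sum(W[i][j] for j, yj in enumerate(y) if yj is not None)
            out[i] = num / den
    return out

# four districts along a line; district 2 is unobserved
y = [10.0, 12.0, None, 16.0]
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]   # rook contiguity: neighbours share a border
print(spatial_lag_impute(y, W))
```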

Keywords: spatial regression, missing data estimation, spatial autocorrelation, simulation analysis

Procedia PDF Downloads 369
24424 Association Rules Mining and NOSQL Oriented Document in Big Data

Authors: Sarra Senhadji, Imene Benzeguimi, Zohra Yagoub

Abstract:

Big Data refers to the recent technology for manipulating voluminous, unstructured data sets drawn from multiple sources, and NoSQL databases have emerged to handle such unstructured data. Association rules mining is one of the popular data mining techniques for extracting hidden relationships from transactional databases, and the algorithm for finding association dependencies maps well onto MapReduce. The goal of our work is to reduce the time needed to generate frequent itemsets by using MapReduce together with a document-oriented NoSQL database. A comparative study is given to evaluate the performance of our algorithm against the classical Apriori algorithm.
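For reference, the classical in-memory Apriori baseline works level by level: count candidate itemsets, keep those meeting minimum support, and join the survivors to form the next level. A compact sketch with a four-transaction toy dataset (distributing these steps over MapReduce and MongoDB is the paper's contribution and is not shown here):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classical Apriori: generate frequent itemsets level by level,
    pruning candidates whose support falls below min_support."""
    items = sorted({i for t in transactions for i in t})
    freq, k = {}, 1
    current = [frozenset([i]) for i in items]
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        freq.update(level)
        k += 1
        # join step: size-k candidates from surviving (k-1)-itemsets
        survivors = list(level)
        current = {a | b for a, b in combinations(survivors, 2) if len(a | b) == k}
    return freq

tx = [frozenset("AB"), frozenset("AC"), frozenset("ABC"), frozenset("BC")]
freq = apriori(tx, min_support=2)
print(sorted("".join(sorted(s)) for s in freq))
```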

Keywords: Apriori, association rules mining, big data, data mining, Hadoop, MapReduce, MongoDB, NoSQL

Procedia PDF Downloads 152
24423 Immunization Data Quality in Public Health Facilities in the Pastoralist Communities: A Comparative Study with Evidence from Afar and Somali Regional States, Ethiopia

Authors: Melaku Tsehay

Abstract:

The Consortium of Christian Relief and Development Associations (CCRDA) and the CORE Group Polio Partners (CGPP) Secretariat have been working with the Global Alliance for Vaccines and Immunization (GAVI) to improve immunization data quality in the Afar and Somali Regional States. The main aim of this study was to compare the quality of immunization data before and after these interventions in health facilities in pastoralist communities in Ethiopia. To this end, a comparative cross-sectional study was conducted in 51 health facilities. The baseline data were collected in May 2019 and the endline data in August 2021, using the WHO data quality self-assessment (DQS) tool. A significant improvement was seen in the accuracy of pentavalent vaccine (PT)1 data (p = 0.012) at the health posts (HP), and of PT3 (p = 0.010) and measles (p = 0.020) data at the health centers (HC). Besides, a highly significant improvement was observed in the accuracy of tetanus toxoid (TT)2 data at the HPs (p < 0.001). The level of over- or under-reporting was found to be < 8% at the HPs and < 10% at the HCs for PT3. Data completeness also increased from 72.09% to 88.89% at the HCs. Nearly 74% of the health facilities reported their immunization data on time, much better than at baseline (7.1%) (p < 0.001). These findings may provide some hints for policies and programs targeting the improvement of immunization data quality in pastoralist communities.
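Accuracy in the WHO DQS is expressed as a verification factor: doses recounted from source documents divided by the figure reported upward. A small hedged sketch with invented tallies (the study's actual facility counts are not given in the abstract):

```python
def verification_factor(recounted, reported):
    """WHO DQS accuracy ratio: recounted doses from source documents over
    the figure reported upward, x100. 100 means an exact match; values
    below 100 indicate over-reporting, above 100 under-reporting."""
    return round(100 * recounted / reported, 1)

# hypothetical PT3 tallies at one health centre
print(verification_factor(recounted=450, reported=500))  # over-reported
print(verification_factor(recounted=500, reported=450))  # under-reported
```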

Keywords: data quality, immunization, verification factor, pastoralist region

Procedia PDF Downloads 94
24422 Recent Findings of Late Bronze Age Mining and Archaeometallurgy Activities in the Mountain Region of Colchis (Southern Lechkhumi, Georgia)

Authors: Rusudan Chagelishvili, Nino Sulava, Tamar Beridze, Nana Rezesidze, Nikoloz Tatuashvili

Abstract:

The South Caucasus is one of the most important centers of prehistoric metallurgy, known for its Colchian bronze culture. Modern Lechkhumi, part of historical Mountainous Colchis, where the existence of prehistoric metallurgy is confirmed by the discovery of many artifacts, belongs to this area. Studies focused on prehistoric smelting sites, related artefacts, and ore deposits have been conducted in Lechkhumi during the last ten years. More than 20 prehistoric smelting sites and artefacts associated with metallurgical activities (ore roasting furnaces, slags, crucible and tuyère fragments) have been identified so far. Integrated studies established that these sites were operating in the 13th to 9th centuries B.C. and were used for copper smelting. Palynological studies of slags revealed that chestnut (Castanea sativa) and hornbeam (Carpinus sp.) wood were used as smelting fuel. Geological exploration and analytical studies revealed that copper ore mining, processing, and smelting sites were distributed close to each other. Despite this recent complex data, signs of prehistoric mines (trenches) had not been found in this part of the study area. Since 2018, archaeological-geological exploration has focused on the southern part of Lechkhumi, covering the areas of the villages Okureshi and Opitara. Several copper smelting sites (Okureshi 1 and 2, Opitara 1), as well as a Colchian Bronze culture settlement, have been identified here. Three mine workings have been found in the narrow gorge of the river Rtkhmelebisgele in the vicinity of the village Opitara. In order to establish a link between the Opitara-Okureshi archaeometallurgical sites, Late Bronze Age settlements, and mines, various analytical methods have been applied: petrography of mineralized rocks and slags, and atomic absorption spectrophotometry (AAS) analysis.
Careful examination of the Opitara mine workings revealed a striking difference between mine #1 on the right bank of the river and mines #2 and #3 on the left bank. The first has all the characteristic features of a Soviet-period mine working (e.g., a high portal with angular ribs and a roof showing signs of blasting). In contrast, mines #2 and #3, which are located very close to each other, have round-shaped portals/entrances, low roofs, and fairly smooth ribs, and are filled with thick layers of river sediments and collapsed weathered rock mass. A thorough review of publications on prehistoric mine workings revealed striking similarities between mines #2 and #3 and their worldwide analogues. Apparently, ore extraction from these mines was conducted by fire-setting, using primitive tools. It was also established that the mines are cut into Jurassic mineralized volcanic rocks. Ore minerals (chalcopyrite, pyrite, galena) are related to calcite and quartz veins. The results obtained through petrochemical and petrographic studies of mineralized rock samples from the Opitara mines and of prehistoric slags correlate fully with each other, establishing a direct link between copper mining and smelting within the study area. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation of Georgia (grant # FR-19-13022).

Keywords: archaeometallurgy, Mountainous Colchis, mining, ore minerals

Procedia PDF Downloads 169
24421 Hybrid GNN-Based Machine Learning Forecasting Model for Industrial IoT Applications

Authors: Atish Bagchi, Siva Chandrasekaran

Abstract:

Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification and prediction of machine behaviour are needed to minimise financial losses. Although a vast literature exists on time-series data processing using machine learning, the challenges faced by industry that lead to unplanned downtimes are: current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN) based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and the consequential financial losses.
Method: The data stored within a process control system, e.g., Industrial IoT or a Data Historian, is generally sampled during data acquisition from the sensor (source) and when persisting in the Data Historian, to optimise storage and query performance. The sampling may inadvertently discard values that contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model which combines the expressive and extrapolation capability of a GNN, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal contexts, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (by 20%) than that of the SCADA-Historian system and matched the validated results declared by the plant auditors. Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes.
The model can interface with a plant's process control system in real time to perform forecasting and classification tasks, helping asset management engineers operate their machines more efficiently and reduce unplanned downtimes. A series of trials of this model is planned in other manufacturing industries.
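The abstract does not specify the entropy or spectral estimators used alongside the GNN; purely as an illustration of the idea, the following stdlib-only sketch shows how window entropy and a DFT-based spectral-change score could flag the kind of contextual shift described (values still within the normal range, but behaving differently). All names and signals here are invented:

```python
import cmath
import math
from collections import Counter

def shannon_entropy(window, bins=8):
    """Entropy (in bits) of the value distribution within one sliding window."""
    lo, hi = min(window), max(window)
    width = (hi - lo) / bins or 1.0          # flat window degenerates to one bin
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in window)
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def dft_magnitudes(window):
    """Magnitudes of the non-DC DFT coefficients of a window."""
    n = len(window)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * t / n)
                    for t, x in enumerate(window))) / n
            for k in range(1, n // 2 + 1)]

def spectral_change(prev, curr):
    """L1 distance between the spectra of two consecutive windows."""
    return sum(abs(a - b) for a, b in zip(dft_magnitudes(prev), dft_magnitudes(curr)))

# A flow signal whose oscillation amplitude doubles between windows:
# every value stays in the normal range, yet the behaviour has changed.
steady = [10 + math.sin(t) for t in range(32)]
shifted = [10 + 2 * math.sin(t) for t in range(32)]
print(round(shannon_entropy(steady), 2), round(spectral_change(steady, shifted), 2))
```

A point-outlier detector would miss this shift entirely, which is the motivation the abstract gives for adding such auxiliary features to the forecaster.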

Keywords: GNN, entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, machine learning

Procedia PDF Downloads 136
24420 Identifying Critical Success Factors for Data Quality Management through a Delphi Study

Authors: Maria Paula Santos, Ana Lucas

Abstract:

Organizations support their operations and decision making with the data they have at their disposal, so the quality of these data is remarkably important. Data Quality (DQ) is currently a relevant issue, and the literature is unanimous in pointing out that poor DQ can result in large costs for organizations. The literature review identified and described 24 Critical Success Factors (CSF) for Data Quality Management (DQM), which were presented to a panel of experts, who ordered them according to their degree of importance using the Delphi method with the Q-sort technique, based on an online questionnaire. The study shows that the five most important CSFs for DQM are: definition of appropriate policies and standards, control of inputs, definition of a strategic plan for DQ, an organizational culture focused on data quality, and obtaining top management commitment and support.
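The abstract does not state how agreement across the expert panel was quantified; Kendall's coefficient of concordance (W) is a common consensus measure for Delphi rounds with ranked items, so a minimal sketch is given here only for illustration (the expert rankings below are invented, not the study's data):

```python
def kendalls_w(rankings):
    """Kendall's W: rankings is one list of ranks (1 = most important)
    per expert, all in the same item order. 1 = perfect agreement."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(col) for col in zip(*rankings)]       # rank sum per item
    mean = sum(totals) / n
    s = sum((r - mean) ** 2 for r in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical experts ranking five CSFs after a Q-sort round
panel = [[1, 2, 3, 4, 5],
         [2, 1, 3, 5, 4],
         [1, 3, 2, 4, 5]]
print(round(kendalls_w(panel), 3))
```

In a Delphi study, a W close to 1 would justify stopping the rounds; a low W would prompt another questionnaire iteration.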

Keywords: critical success factors, data quality, data quality management, Delphi, Q-Sort

Procedia PDF Downloads 207
24419 The Effect of Artificial Intelligence on Banking Development and Progress

Authors: Mina Malak Hanna Saad

Abstract:

Advanced information technology is becoming a vital factor in the development of the financial services industry, especially banking, and has brought new strategies for delivering banking services to the customer, including online banking. Banks have begun to regard electronic banking (e-banking) as a way to replace some conventional branch functions, using the internet as a new distribution channel. Some customers hold accounts at multiple banks and access those accounts through online banking. To assess their current net worth, such customers must log into each of their accounts, retrieve the relevant information, and consolidate it themselves. This is not only time-consuming but also a repetitive activity with a certain frequency. To solve this problem, the concept of account aggregation was introduced. Account aggregation in e-banking, as a form of digital banking, appears to build a stronger relationship with customers. An account aggregation service generally refers to a service that allows customers to manage bank accounts held at different institutions through a common online banking platform that places a high priority on security and data protection. This paper provides an overview of the account aggregation approach as a distinct service in the e-banking field.
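As a toy illustration of what such a consolidation service computes on the customer's behalf (the institutions and balances below are invented), the aggregated position is just a sum over accounts held at different banks, grouped per institution:

```python
from dataclasses import dataclass

@dataclass
class Account:
    institution: str
    name: str
    balance: float  # positive for assets, negative for liabilities

def net_worth(accounts):
    """Consolidated position across institutions: the figure the customer
    would otherwise compute by logging into each bank separately."""
    return sum(a.balance for a in accounts)

def by_institution(accounts):
    """Sub-totals per bank, as an aggregation dashboard would show them."""
    summary = {}
    for a in accounts:
        summary[a.institution] = summary.get(a.institution, 0.0) + a.balance
    return summary

accounts = [Account("Bank A", "current", 1200.0),
            Account("Bank B", "savings", 5300.0),
            Account("Bank B", "credit card", -450.0)]
print(net_worth(accounts), by_institution(accounts))
```

The engineering challenge the abstract emphasises is not this arithmetic but fetching each balance securely from a different institution onto one platform.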

Keywords: compatibility, complexity, mobile banking, observation, risk, banking technology, internet banks, modernization of banks, banks, account aggregation, security, e-banking, enterprise development

Procedia PDF Downloads 0
24418 The Determinants of Financing to Deposit Ratio of Islamic Bank in Malaysia

Authors: Achsania Hendratmi, Puji Sucia Sukmaningrum, Fatin Fadhilah Hasib, Nisful Laila

Abstract:

The research aimed to determine the influence of the Capital Adequacy Ratio (CAR), Return on Assets (ROA) and size on the Financing to Deposit Ratio (FDR) of Islamic banks, using eleven Islamic banks in Indonesia and fifteen Islamic banks in Malaysia in the period 2012 to 2016 as samples. The research used a quantitative approach, and the analysis technique was multiple linear regression. Based on the results of the t-test (partial), CAR, ROA and size each significantly affect FDR, while the results of the F-test (simultaneous) showed that CAR, ROA and size jointly have a significant effect on FDR.
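The abstract reports only significance results, not coefficients. Purely to illustrate the multiple linear regression technique it names, here is a stdlib-only ordinary least squares fit of FDR on CAR, ROA and size via the normal equations; the bank-year observations are invented, not the study's data:

```python
def ols(X, y):
    """Multiple linear regression: solve the normal equations (X'X)b = X'y
    by Gaussian elimination with partial pivoting. Returns [intercept, b1, ...]."""
    rows = [[1.0] + list(r) for r in X]                 # prepend intercept column
    k = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for c in range(k):                                  # forward elimination
        p = max(range(c, k), key=lambda r: abs(xtx[r][c]))
        xtx[c], xtx[p] = xtx[p], xtx[c]
        xty[c], xty[p] = xty[p], xty[c]
        for r in range(c + 1, k):
            f = xtx[r][c] / xtx[c][c]
            xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[c])]
            xty[r] -= f * xty[c]
    beta = [0.0] * k                                    # back substitution
    for c in reversed(range(k)):
        beta[c] = (xty[c] - sum(xtx[c][j] * beta[j]
                                for j in range(c + 1, k))) / xtx[c][c]
    return beta

# Hypothetical bank-year observations: (CAR %, ROA %, log assets) -> FDR %
X = [(14.0, 1.2, 9.1), (16.5, 0.8, 9.6), (12.3, 1.5, 8.8),
     (18.0, 1.1, 10.2), (15.2, 0.9, 9.4), (13.7, 1.4, 9.0)]
y = [88.0, 84.5, 91.2, 80.3, 85.7, 89.9]
intercept, b_car, b_roa, b_size = ols(X, y)
print(round(b_car, 3), round(b_roa, 3), round(b_size, 3))
```

The t-test and F-test reported in the abstract would then be computed from these coefficient estimates and their standard errors.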

Keywords: capital adequacy ratio, financing to deposit ratio, return on assets, size

Procedia PDF Downloads 328
24417 Innovating Translation Pedagogy: Maximizing Teaching Effectiveness by Focusing on Cognitive Study

Authors: Dawn Tsang

Abstract:

This paper aims at synthesizing the difficulties in cognitive processes faced by translation majors in mainland China. The purpose is to develop possible solutions and innovations in terms of translation pedagogy, curriculum reform, and syllabus design. The research bases its analysis on students' instant feedback and interviews after training in translation and interpreting courses, and on the translation faculty's teaching experiences. It takes our translation majors as the starting point; they are one of the focus groups. At present, our Applied Translation Studies Programme offers translation courses in the following areas: practical translation and interpreting, translation theories, culture and translation, and internship. It is a four-year translation programme, and our students start their introductory courses in Semester 1 of Year 1. The medium of instruction of our College is solely English, and in general our students' competency in English is strong. Yet in translation and especially interpreting classes, whether it is their first attempt or they have already taken university English courses, students find class practices very challenging, if not mission impossible. Their biggest learning problem seems to be weak cognitive processing: lack of intercultural competence, incomprehension of the English language and foreign cultures, inadequate aptitude and slow reaction, and inability to utilize one's vocabulary bank. This being so, the research questions include: (1) What specific and common cognitive difficulties do students face while learning translation and interpreting? (2) How should such difficulties be dealt with, and what implications can be drawn for curriculum reform and syllabus design in translation? (3) How much weight should cognitive study carry in the translation curriculum, i.e., what proportion of cognitive study belongs in translation/interpreting courses and in the translation major curriculum?
And (4) What can we as translation educators do to maximize teaching and learning effectiveness by incorporating the latest developments in cognitive study? We have collected translation students' instant feedback and conducted interviews with both students and teaching staff, in order to draw parallels with, as well as distinctions from, our own current teaching practices at United International College (UIC). We have collected 500 questionnaires to date. The main learning difficulties include: a poor vocabulary bank, lack of listening and reading comprehension skills in terms of not fully understanding the subtext, and aptitude in translation and interpreting. This being so, we propose to reform and revitalize the translation curriculum and syllabi to address these difficulties. The aim is to maximize teaching effectiveness in translation by addressing the above-mentioned questions, with a special focus on the cognitive difficulties faced by translation majors.

Keywords: cognitive difficulties, teaching and learning effectiveness, translation curriculum reform, translation pedagogy

Procedia PDF Downloads 312
24416 Data Mining in the Medicine Domain Using Decision Trees and Support Vector Machines

Authors: Djamila Benhaddouche, Abdelkader Benyettou

Abstract:

In this paper, we used data mining to extract biomedical knowledge. In general, complex biomedical data collected in population studies are treated by statistical methods; although these are robust, they are not sufficient in themselves to harness the potential wealth of the data. For that purpose, we used two learning algorithms: Decision Trees and Support Vector Machines (SVM). These supervised classification methods are used to make the diagnosis of thyroid disease. In this context, we propose to promote the study and use of symbolic data mining techniques.
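Full decision-tree and SVM runs require a toolkit, and the abstract gives no dataset details. Only to illustrate the supervised-classification idea on thyroid-style features, here is a stdlib decision stump (a one-level decision tree, the building block of the full algorithm) that searches every feature/threshold split; the (TSH, T4) readings and labels are invented:

```python
def best_stump(samples):
    """One-level decision tree: pick the (feature, threshold, orientation)
    split minimising misclassifications on (features, label) pairs."""
    best = None
    n_features = len(samples[0][0])
    for f in range(n_features):
        for thresh in sorted({x[f] for x, _ in samples}):
            for flip in (False, True):
                # predict class 1 when x[f] > thresh (or the reverse, if flipped)
                errors = sum(((x[f] > thresh) != flip) != bool(lbl)
                             for x, lbl in samples)
                if best is None or errors < best[0]:
                    best = (errors, f, thresh, flip)
    return best  # (errors, feature index, threshold, flipped?)

# Hypothetical (TSH, T4) readings labelled 1 = thyroid disorder, 0 = healthy
data = [((0.4, 8.0), 0), ((0.5, 7.5), 0), ((6.0, 3.0), 1),
        ((7.2, 2.5), 1), ((0.3, 9.1), 0), ((5.5, 3.5), 1)]
errors, feature, threshold, flipped = best_stump(data)
print(errors, feature, threshold)
```

A full decision tree recurses this split search on each resulting subset; an SVM instead fits a maximum-margin separating boundary over all features at once.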

Keywords: biomedical data, learning, classifier, algorithms decision tree, knowledge extraction

Procedia PDF Downloads 546
24415 Analysis of Different Classification Techniques Using WEKA for Diabetic Disease

Authors: Usama Ahmed

Abstract:

Data mining is the process of analyzing data to extract useful information for prediction. It is a field of research that addresses various types of problems. In data mining, classification is an important technique for classifying different kinds of data. Diabetes is one of the most common diseases. This paper implements different classification techniques using the Waikato Environment for Knowledge Analysis (WEKA) on a diabetes dataset to find which algorithm is most suitable. The best classification algorithm on the diabetic data is Naïve Bayes, with an accuracy of 76.31% and a model build time of 0.06 seconds.
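The 76.31% figure comes from WEKA's Naïve Bayes implementation. To show the mechanics behind that classifier (per-class Gaussian likelihoods combined under a feature-independence assumption), here is a stdlib sketch on invented (glucose, BMI) readings, not the actual dataset:

```python
import math
from collections import defaultdict

def fit_gnb(samples):
    """Gaussian Naive Bayes: class prior plus per-feature mean/variance."""
    groups = defaultdict(list)
    for x, label in samples:
        groups[label].append(x)
    model, n = {}, len(samples)
    for label, rows in groups.items():
        stats = []
        for col in zip(*rows):
            mean = sum(col) / len(col)
            var = sum((v - mean) ** 2 for v in col) / len(col) + 1e-9  # variance floor
            stats.append((mean, var))
        model[label] = (len(rows) / n, stats)
    return model

def predict(model, x):
    def log_post(prior, stats):
        # log prior + sum of per-feature Gaussian log-likelihoods
        return math.log(prior) + sum(
            -0.5 * math.log(2 * math.pi * var) - (v - mean) ** 2 / (2 * var)
            for v, (mean, var) in zip(x, stats))
    return max(model, key=lambda lbl: log_post(*model[lbl]))

# Hypothetical (glucose, BMI) readings labelled 1 = diabetic, 0 = not
train = [((85, 22.0), 0), ((90, 24.5), 0), ((100, 26.0), 0),
         ((150, 33.0), 1), ((165, 35.5), 1), ((140, 31.0), 1)]
model = fit_gnb(train)
print(predict(model, (95, 23.0)), predict(model, (155, 34.0)))
```

Accuracy as reported in the paper would be the fraction of held-out samples for which `predict` returns the true label.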

Keywords: data mining, classification, diabetes, WEKA

Procedia PDF Downloads 136
24414 Effect of Ease of Doing Business on Economic Growth among Selected Countries in Asia

Authors: Teodorica G. Ani

Abstract:

Economic activity requires an encouraging regulatory environment and effective rules that are transparent and accessible to all. The World Bank has published the annual Doing Business reports since 2004 to investigate the scope and manner of regulations that enhance business activity and those that constrain it. A streamlined business environment supporting the development of competitive small and medium enterprises (SMEs) may expand employment opportunities and improve the living conditions of low-income households. Asia has emerged as one of the most attractive markets in the world, and economies in East Asia and the Pacific were among the most active in making it easier for local firms to do business. The study aimed to describe the ease of doing business and its effect on economic growth among selected economies in Asia for the year 2014. The study covered 29 economies in East Asia, Southeast Asia, South Asia and Central Asia. Ease of doing business is measured by the Doing Business indicators (DBI) of the World Bank. The indicators cover ten aspects of the ease of doing business: starting a business, dealing with construction permits, getting electricity, registering property, getting credit, protecting investors, paying taxes, trading across borders, enforcing contracts and resolving insolvency. Gross Domestic Product (GDP) was used as the proxy variable for economic growth. The study used a descriptive research design, with graphical analysis to describe income and doing business among the selected economies, and multiple regression to determine the effect of doing business on economic growth. The graphs show that China has the highest income while the Maldives has the lowest, an observation supported by the literature. The study also presents the status of the ten indicators of doing business among the selected economies.
The graphs show varying trends in how easy it is to start a business, deal with construction permits and register property. Starting a business is easiest in Singapore, followed by Hong Kong. The study found that variation in economic growth is explained by three indicators: starting a business, dealing with construction permits and registering property. Moreover, the regression results imply that, in general, each additional day required to complete a procedure decreases GDP. The research proposes policy inputs that may raise the awareness of local government units in different economies of the need to simplify the procedures underlying the components used in measuring doing business.
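To make the "each additional day decreases GDP" reading concrete, a minimal stdlib sketch of the slope interpretation, with invented cross-country figures (the study's actual 2014 data are not reproduced here):

```python
def simple_ols(x, y):
    """Slope and intercept of y regressed on x: slope = cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Hypothetical economies: average days to complete a start-up procedure vs GDP (bn USD)
days = [3, 6, 10, 15, 22, 30]
gdp = [410, 380, 320, 260, 190, 120]
slope, intercept = simple_ols(days, gdp)
print(round(slope, 1))  # negative: each extra day is associated with lower GDP
```

The study's actual model regresses GDP on several DBI components at once, so each coefficient is read holding the other indicators constant.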

Keywords: doing business, economic growth, gross domestic product, Asia

Procedia PDF Downloads 369