Search results for: data driven decision making
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 30561

25611 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools, however, have targeted Protein-coding regions alone, which creates challenges for gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both Protein-coding and Non-coding regions. Alignment-free techniques can overcome this limitation. This study was therefore designed to develop an efficient sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was evaluated in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms.
The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over the first three iterations. The cost (the difference between the predicted and the actual outcome) likewise decreased from 1.446 to 0.842 and then to 0.718 over the first, second, and third iterations. Training terminated at the 390th epoch with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI achieved an area under the ROC curve of 0.97, indicating strong predictive ability, and identified both Protein-coding and Non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. On 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying Protein-coding and Non-coding regions in transcriptomes. The developed model efficiently identified the Protein-coding and Non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
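
The scheme described in this abstract, a sigmoid logistic model fitted by gradient ascent on the log-likelihood (MLE), followed by a dynamic threshold, can be sketched minimally as follows; the toy data, learning rate, and mean-based threshold rule are illustrative assumptions, not the actual PNRI configuration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=390):
    """Gradient ascent on the log-likelihood (MLE) of a logistic model.
    Each row of X is a feature vector (six features in the paper)."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
            for j, xj in enumerate(xi):
                grad[j] += (yi - p) * xj   # d(log-likelihood)/dw_j
        w = [wj + lr * gj / len(X) for wj, gj in zip(w, grad)]
    return w

def dynamic_threshold(scores):
    """Hypothetical dynamic rule: classify relative to the mean score."""
    return sum(scores) / len(scores)
```

With a separable toy set such as X = [[1, 0], [1, 1], [1, 2], [1, 3]] (intercept plus one feature) and y = [0, 0, 1, 1], the fitted model scores the low-feature samples below 0.5 and the high-feature samples above it.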

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 74
25610 Communication Design in Newspapers: A Comparative Study of Graphic Resources in Portuguese and Spanish Publications

Authors: Fátima Gonçalves, Joaquim Brigas, Jorge Gonçalves

Abstract:

As a way of managing the increasing volume and complexity of information in circulation today, graphical representations are increasingly used to add meaning to the information presented in communication media through efficient communication design. Visual culture itself, driven by technological evolution, has been redefining the forms of communication, and contemporary visual communication has a major impact on society. This article presents the results, and a comparative analysis, of four publications in the Iberian press, focusing on the formal aspects of newspapers and the space they dedicate to the various communication elements. Two Portuguese newspapers and two Spanish newspapers were selected for this purpose. The findings indicate that the newspapers are similar in their use of graphic solutions, which corroborates a visual trend in communication design. The results also reveal that the Spanish newspapers are more meticulous about graphic consistency. This study is intended to contribute to improving knowledge of the Iberian generalist press.

Keywords: communication design, graphic resources, Iberian press, visual journalism

Procedia PDF Downloads 274
25609 Research and Implementation of Cross-domain Data Sharing System in Net-centric Environment

Authors: Xiaoqing Wang, Jianjian Zong, Li Li, Yanxing Zheng, Jinrong Tong, Mao Zhan

Abstract:

With the rapid development of network and communication technology, a great deal of data has been generated in different domains of a network, and these data show a trend towards increasing scale and more complex structure. An effective and flexible cross-domain data-sharing system is therefore needed. The Cross-domain Data Sharing System (CDSS) in a net-centric environment is composed of three sub-systems. The data distribution sub-system provides a data exchange service through publish-subscribe technology that supports asynchronous and many-to-many communication, adapting to the needs of a dynamic, large-scale distributed computing environment. The access control sub-system adopts Attribute-Based Access Control (ABAC) technology to uniformly model attributes such as subject, object, permission, and environment; it effectively monitors the activities of users accessing resources and ensures that legitimate users obtain the appropriate access rights within a legitimate time window. The cross-domain access security negotiation sub-system automatically determines access rights between different security domains through the interactive disclosure of digital certificates and access control policies, using trust policy management and negotiation algorithms; this provides an effective means of establishing cross-domain trust relationships and access control in a distributed environment. The CDSS's asynchronous, many-to-many, and loosely coupled communication can adapt well to data exchange and sharing in dynamic, distributed, large-scale network environments. Future work will extend the CDSS to support mobile computing environments.
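
The ABAC evaluation described above can be sketched as a policy of attribute conditions over subject, resource, action, and environment; the rule contents below (roles, resource types, business hours) are hypothetical examples, not the CDSS's actual policy model:

```python
from dataclasses import dataclass

@dataclass
class Request:
    subject: dict       # e.g. {"role": "analyst"}
    resource: dict      # e.g. {"type": "sensor-data"}
    action: str         # e.g. "read"
    environment: dict   # e.g. {"hour": 10}

def evaluate(policy_rules, request):
    """Grant access only if every attribute condition of some rule holds."""
    for rule in policy_rules:
        if all(cond(request) for cond in rule):
            return True
    return False

# Illustrative rule: analysts may read sensor data during business hours.
rules = [
    [
        lambda r: r.subject.get("role") == "analyst",
        lambda r: r.resource.get("type") == "sensor-data",
        lambda r: r.action == "read",
        lambda r: 9 <= r.environment.get("hour", -1) < 17,
    ]
]
```

A request matching every condition is granted; changing any single attribute (for instance, the hour of access) causes the same request to be denied.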

Keywords: data sharing, cross-domain, data exchange, publish-subscribe

Procedia PDF Downloads 128
25608 Georgia Case: Tourism Expenses of International Visitors on the Basis of Growing Attractiveness

Authors: Nino Abesadze, Marine Mindorashvili, Nino Paresashvili

Abstract:

At present, actual tourism indicators cannot be calculated in Georgia, making quantitative analysis of them impossible; the study is therefore important from both a theoretical and a practical standpoint. The main purpose of the article is to carry out a complex statistical analysis of the tourist expenses of foreign visitors and to calculate statistical attractiveness indices for Georgia's tourism potential. Random and proportional sampling was applied, and SPSS was used for the statistical analysis. The methodology of tourism statistics was implemented according to international standards. Information was collected and grouped from major Georgian airports, and a representative population of foreign visitors and a rule for selecting respondents were determined. The results show a growing trend in tourist numbers, and the share of tourists from post-Soviet countries is constantly increasing. The level of satisfaction with tourist facilities and quality of service has improved, but a disparity between service quality and prices remains. The structure of foreign visitors' tourist expenses is diverse, and the competitiveness of Georgian tourist companies' products is higher. The attractiveness of Georgia's popular cities has increased by 43%.

Keywords: tourist, expenses, indexes, statistics, analysis

Procedia PDF Downloads 339
25607 Routing Protocol in Ship Dynamic Positioning Based on WSN Clustering Data Fusion System

Authors: Zhou Mo, Dennis Chow

Abstract:

In the dynamic positioning system (DPS) for vessels, reliable information transmission between nodes relies largely on wireless protocols. From the perspective of cluster-based routing protocols for wireless sensor networks, a data fusion technique based on a sleep scheduling mechanism and residual energy at the network layer is proposed. It applies the sleep scheduling mechanism to the routing protocol and considers each node's residual energy and location information when selecting cluster heads. The problem of uneven distribution of nodes across clusters is solved by an equilibrium mechanism. At the same time, a classified forwarding mechanism and a redelivery policy are adopted to avoid congestion when transmitting large amounts of data, reduce delivery delay, and enhance real-time response. Simulation tests of the improved routing protocol show that it reduces the energy consumption of nodes and increases the efficiency of data delivery.
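
The energy-aware cluster-head selection step can be sketched as follows; this is a deterministic illustration of the residual-energy criterion only, since the paper's protocol also weighs location information and sleep schedules:

```python
def select_cluster_heads(nodes, head_fraction=0.1):
    """Energy-aware cluster-head selection: prefer the nodes with the most
    residual energy. `nodes` maps node id -> residual energy (joules).
    Roughly head_fraction of the nodes (at least one) become heads."""
    k = max(1, round(head_fraction * len(nodes)))
    ranked = sorted(nodes, key=nodes.get, reverse=True)
    return ranked[:k]
```

Re-running the selection each round, after updating residual energies, rotates the cluster-head role away from depleted nodes and evens out energy consumption across the network.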

Keywords: DPS for vessel, wireless sensor network, data fusion, routing protocols

Procedia PDF Downloads 530
25606 Using Stable Isotopes and Hydrochemical Characteristics to Assess Stream Water Sources and Flow Paths: A Case Study of the Jonkershoek Catchment, South Africa

Authors: Retang A. Mokua, Julia Glenday, Jacobus M. Nel

Abstract:

Understanding hydrological processes in mountain headwater catchments such as the Jonkershoek Valley is crucial for improving the predictive capability of hydrologic modelling in the Cape Fold Mountain region of South Africa, incorporating the influence of the Table Mountain Group fractured-rock aquifers. Determining the contributions of the various possible surface and subsurface flow pathways in such catchments has been a challenge because of the complex fractured-rock geology, low ionic concentrations, high rainfall, and streamflow variability. The study aimed to describe the mechanisms of streamflow generation during two seasons (dry and wet). Stable isotopes of water (18O and 2H), the hydrochemical tracer electrical conductivity (EC), and hydrometric data were used to assess the spatial and temporal variation in flow pathways and in the geographic sources of stream water. Stream water, groundwater, two shallow piezometers, and springs were routinely sampled at two adjacent headwater sub-catchments and analyzed for isotopic ratios during baseflow conditions between January 2018 and January 2019. No significant seasonal variation in isotopic ratios was observed (p > 0.05); stream isotope signatures were consistent throughout the study period. However, significant seasonal and spatial variations in EC were evident (p < 0.05). The findings suggest that dry-season baseflow in these catchments is generated primarily by groundwater and by interflow discharging from perennial springs. Wet-season flows were attributed to interflow and to perennial and ephemeral springs. Furthermore, the observed seasonal variations in EC were indicative of a greater proportion of subsurface water inputs. From these results, a conceptual model of streamflow generation processes for the two seasons was constructed.
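
Source attribution with a conservative tracer such as EC or delta-18O is commonly done with a two-component end-member mixing calculation, which can be sketched as follows; the tracer values in the example are illustrative, not Jonkershoek measurements:

```python
def mixing_fraction(c_stream, c_groundwater, c_event):
    """Two-component end-member mixing: the fraction of streamflow derived
    from groundwater, given the tracer concentration of the stream and of
    the two end members (groundwater and event/rain water)."""
    if c_groundwater == c_event:
        raise ValueError("end-member tracer values must differ")
    return (c_stream - c_event) / (c_groundwater - c_event)
```

For instance, a stream EC of 60 uS/cm between end members of 100 uS/cm (groundwater) and 20 uS/cm (event water) implies that half the streamflow is groundwater-derived.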

Keywords: electrical conductivity, Jonkershoek Valley, stable isotopes, Table Mountain Group

Procedia PDF Downloads 113
25605 Unequal Traveling: How School District System and School District Housing Characteristics Shape the Duration of Families Commuting

Authors: Geyang Xia

Abstract:

In many countries, governments have responded to the growing demand for educational resources through school district systems, and there is substantial evidence that these systems have been effective in promoting inter-district and inter-school equity in educational resources. However, the scarcity of quality educational resources produces varying levels of education across districts, making it a common choice for many parents to buy a house in the district of a quality school, even at huge commuting cost. This is evidenced by the fact that parents in school districts with quality educational resources have longer average commute times and distances than parents in average school districts. Such "unequal traveling" under the influence of the school district system is most common in districts at the primary level of education. It further reinforces the differential hierarchy of educational resources and raises issues of inequitable public education services, education-led residential segregation, and gentrification of school district housing. Against this background, this paper takes Nanjing, a famous educational city in China, as a case study and selects the school districts of the top 10 public elementary schools. The study first builds a spatio-temporal behavioral trajectory dataset of households in these high-quality school districts using spatial vector data, decrypted cell phone signaling data, and census data. Then, by constructing a "house-school-work" (HSW) commuting pattern for the population of the districts where the high-quality educational resources are located, and by classifying these HSW commuting patterns, school districts with long commuting durations were identified. 
Ultimately, the mechanisms and patterns inherent in this unequal commuting are analyzed in terms of six aspects, including the centrality of school district location, functional diversity, and accessibility. The results reveal that the "unequal commuting" of Nanjing's high-quality school districts under the influence of the school district system occurs mainly in the peripheral areas of the city, and that the schools matched with these high-quality districts are mostly branches of prestigious schools in the city's core built-up areas. The centrality of school district location and functional diversity are the most important factors influencing unequal commuting in high-quality school districts. Based on these results, the paper proposes strategies for optimizing the spatial layout of high-quality educational resources, together with corresponding transportation policy measures.

Keywords: school district system, high-quality school district, commuting pattern, unequal traveling

Procedia PDF Downloads 106
25604 The Persistence of Abnormal Return on Assets: An Exploratory Analysis of the Differences between Industries and Differences between Firms by Country and Sector

Authors: José Luis Gallizo, Pilar Gargallo, Ramon Saladrigues, Manuel Salvador

Abstract:

This study offers an exploratory statistical analysis of the persistence of annual profits across a sample of firms from different European Union (EU) countries. To this end, a hierarchical Bayesian dynamic model is used that breaks the annual behaviour of those profits down into a permanent structural component and a transitory component, while also distinguishing between general effects affecting the industry as a whole and specific effects affecting each firm in particular. This breakdown enables the relative importance of those fundamental components to be evaluated more accurately by country and sector. Furthermore, the Bayesian approach allows different hypotheses to be tested about the homogeneity of the behaviour of these components with respect to the sector and country in which the firm operates. The data analysed come from a sample of 23,293 firms in EU countries selected from the AMADEUS database. The period analysed ran from 1999 to 2007, and 21 sectors were analysed, chosen so that each country-sector combination contained a sufficiently large number of firms for the industry effects to be estimated accurately enough for meaningful comparisons by sector and country. The analysis was conducted by sector and by country from a Bayesian perspective, making the study more flexible and realistic since the estimates obtained do not depend on asymptotic results. In general terms, the study finds that, although the industry effects are significant, the firm-specific effects are more important, and their importance varies depending on the sector or country in which the firm carries out its activity. Firm effects account for around 81% of total variation and display a significantly lower degree of persistence, with adjustment speeds oscillating around 34%. However, this pattern is not homogeneous but depends on the sector and country analysed.
Industry effects, which also depend on the sector and country analysed, are of more marginal importance but are significantly more persistent, with adjustment speeds oscillating around 7-8%; this degree of persistence is very similar across most of the sectors and countries analysed.
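
The permanent/transitory decomposition with an adjustment speed can be illustrated with a small simulation: abnormal return on assets is a permanent level plus an AR(1) transitory shock whose persistence parameter phi implies an adjustment speed of 1 - phi. This is an illustrative sketch of the decomposition only, not the paper's hierarchical Bayesian estimator:

```python
import random

def simulate_roa(years, permanent, phi, sigma, shock=0.0, seed=0):
    """Simulate abnormal ROA = permanent level + AR(1) transitory component.
    phi is the persistence parameter (adjustment speed = 1 - phi);
    sigma is the std. dev. of the transitory innovations;
    shock is an initial transitory deviation that decays over time."""
    rng = random.Random(seed)
    transitory, path = shock, []
    for _ in range(years):
        path.append(permanent + transitory)
        transitory = phi * transitory + rng.gauss(0.0, sigma)
    return path
```

With phi = 0.66 (a 34% adjustment speed, as estimated for firm effects) and no noise, an initial shock decays geometrically back towards the permanent level; with phi near 0.92-0.93 (a 7-8% adjustment speed, as for industry effects) the same shock lingers far longer.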

Keywords: dynamic models, Bayesian inference, MCMC, abnormal returns, persistence of profits, return on assets

Procedia PDF Downloads 404
25603 Analysis of NFC and Biometrics in the Retail Industry

Authors: Ziwei Xu

Abstract:

The increasing emphasis on mobility has driven the application of innovative communication technologies across various industries. In the retail sector, Near Field Communication (NFC) has emerged as a significant and transformative technology, particularly in the payment and retail supermarket sectors. NFC enables new payment methods, such as electronic wallets, and enhances information management in supermarkets, contributing to the growth of the trade. This report presents a comprehensive analysis of NFC technology, focusing on five key aspects. Firstly, it provides an overview of NFC, including its application methods and development history. Additionally, it incorporates Arthur's work on combinatorial evolution to elucidate the emergence and impact of NFC technology, while acknowledging the limitations of the model in analyzing NFC. The report then summarizes the positive influence of NFC on the retail industry along with its associated constraints. Furthermore, it explores the adoption of NFC from both organizational and individual perspectives, employing the Best Predictors of organizational IT adoption and UTAUT2 models, respectively. Finally, the report discusses the potential future replacement of NFC with biometrics technology, highlighting its advantages over NFC and leveraging Arthur's model to investigate its future development prospects.

Keywords: innovation, NFC, industry, biometrics

Procedia PDF Downloads 79
25602 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors

Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin

Abstract:

IoT devices are the basic building blocks of an IoT network; they generate an enormous volume of real-time, high-speed data that helps organizations and companies make intelligent decisions. Integrating this enormous volume of data from multiple sources and transferring it to the appropriate clients is fundamental to IoT development, and handling such a huge quantity of devices and data is very challenging. IoT devices are battery-powered and resource-constrained; to communicate energy-efficiently, they sleep and wake periodically or aperiodically depending on traffic load, reducing energy consumption. Sometimes these devices are disconnected by battery depletion. If a node is not available in the network, the IoT network delivers incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile; if the distance of a device from the sink node becomes greater than allowed, the connection is lost, and other devices join the network to replace the broken-down and departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces poor-quality data, and because of this dynamic nature the actual cause of abnormal data is unknown. If data are of poor quality, decisions are likely to be unsound. It is therefore highly important to process data and estimate data quality before using it in IoT applications. In the past, many researchers have tried to estimate data quality, providing Machine Learning (ML), stochastic, and statistical methods for analysing stored data in the data processing layer, without addressing the challenges and issues that arise from the dynamic nature of IoT devices and how it impacts data quality.
This research comprehensively reviews the impact of the dynamic nature of IoT devices on data quality and presents a data quality model that can deal with this challenge and produce good-quality data. The model targets sensors monitoring water quality and is built using DBSCAN clustering together with weather sensor data. An extensive study was conducted on the relationship between data from weather sensors and from sensors monitoring the water quality of lakes and beaches, and a detailed theoretical analysis is presented of the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: outlier detection and removal, completeness, patterns of missing values, accuracy (checked with the help of cluster position), and consistency. Finally, a statistical analysis is performed on the clusters formed by DBSCAN, and consistency is evaluated through the Coefficient of Variation (CoV).
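
The two computational steps named above, DBSCAN-based outlier detection and consistency via the Coefficient of Variation, can be sketched in miniature as follows; this is a self-contained toy implementation for small 2-D datasets, not the paper's model:

```python
import math
from statistics import mean, stdev

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN; the label -1 marks noise points (possible faulty
    sensor readings). Suitable only for small datasets: neighbour search
    is brute force."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1              # noise / outlier
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster     # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            js = neighbors(j)
            if len(js) >= min_pts:      # core point: expand the cluster
                queue.extend(js)
    return labels

def cov(values):
    """Coefficient of Variation: dispersion relative to the mean."""
    return stdev(values) / mean(values)
```

Readings that land in no dense cluster come back labelled -1 and can be flagged for the outlier-removal dimension, while a low CoV within a cluster indicates consistent readings.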

Keywords: clustering, data quality, DBSCAN, and Internet of things (IoT)

Procedia PDF Downloads 144
25601 Passing the Charity Walking Tours as a Poverty Reduction Establishment in Denpasar City, Bali

Authors: I. Wayan Wiwin

Abstract:

Poverty is one of the big problems faced by large cities around the world. Urbanization is one cause: many rural people move to the city to earn a living, hoping to improve their economic situation, but arrive without adequate skills and so become part of the city's demographic problems. Denpasar, the capital of the province of Bali, is one such city: its urban area contains many slum dwellings inhabited by the poor, even though Bali is known as one of the best tourist destinations in the world. This condition stands in stark contrast to the progress of tourism in Bali. It is therefore necessary to attempt to overcome poverty in Denpasar, one option being the development of city tours in the form of charity walking tours, in which tourists are invited to walk through the city, see the conditions of the poor at first hand, and provide assistance in the form of housing support, educational scholarships, health assistance, and skills and business-capital support. This research is exploratory and qualitative: it explores the potential of charity walking tours to reduce poverty in Denpasar City and reports the findings qualitatively. Based on the data and information on this potential, an analysis leads to a decision on whether development is possible. The study therefore only requires respondents or informants able to provide qualitative answers or information about the potential development of charity walking tours. Thus, the informants in this study are tourism stakeholders, such as municipal government officials, businesspeople, community leaders, and tourism actors, who are considered able to provide information relating to the development of urban tourism.

Keywords: tourism, city tours, charity walking tours, poverty

Procedia PDF Downloads 163
25600 Potential Contribution of Blue Oceans for Growth of Universities: Case of Faculties of Agriculture in Public Universities in Zimbabwe

Authors: Wonder Ngezimana, Benjamin Alex Madzivire

Abstract:

As new public universities are promulgated in Zimbabwe, there is a need for a comprehensive plan to ensure sustainable competitive advantage in their mandated niche areas. Unhealthy competition between university faculties for enrolment hinders the growth of newly established faculties, especially in the agricultural sciences. The blue ocean metaphor is based on the creation of competitor-free markets, unlike 'red oceans', which are well explored and crowded with competitors. This study explores the potential contribution of blue ocean strategy (BOS) to the growth of universities, with a focus on faculties of agriculture in public universities in Zimbabwe. Case studies in agricultural sciences-related disciplines were selected across three universities for interviewing. Data were collected through 10 open-ended questions put to academics in different management positions within university faculties of agriculture. Summative analysis was then used for coding and interpreting the data. The findings show that several elements are important in making offerings more comprehensible and in fostering faculty growth and performance, particularly student enrolment. The results point towards BOS-style value innovation, with various elements to consider in faculty offerings. To create value innovation beyond the red oceans, the cases in this study have to be modelled to foster changes in enrolment, modes of delivery, and certification, and to be research-oriented, with excellence in teaching, ethics, service to the community, and entrepreneurship. There is therefore a need to rethink strategy towards reshaping inclusive enrolment, industry relevance, affiliations, lifelong learning, sustainable student welfare, ubuntu, exchange programmes, research excellence, alumni support, and entrepreneurship. 
Innovative strategic collaborations and partnerships, anchored on technology, boost these strategic offerings. Areas for further study include the extent of blue oceans in university faculty offerings and strategies for implementing BOS.

Keywords: blue oceans strategy, collaborations, faculty offerings, value innovations

Procedia PDF Downloads 149
25599 Robust Attitude Control for Agile Satellites with Vibration Compensation

Authors: Jair Servín-Aguilar, Yu Tang

Abstract:

We address the problem of robust attitude tracking for agile satellites under unknown bounded torque disturbances using a double-gimbal variable-speed control-moment gyro (DGVSCMG) driven by a cluster of three permanent magnet synchronous motors (PMSMs). Uniform practical asymptotic stability is achieved at the torque-control level first. The desired gimbal speeds and spin-wheel acceleration needed to produce the required torque are then calculated by a velocity-based steering law and tracked at the PMSM speed-control level by a speed-tracking controller that compensates for the vibration caused by eccentricity and imbalance due to mechanical imperfections in the DGVSCMG. Uniform practical asymptotic stability of the overall system is ensured by relying on the analysis of the resulting cascaded system. Numerical simulations are included to show the performance improvement of the proposed controller.
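
The benefit of compensating a periodic, eccentricity-induced disturbance at the speed-control level can be illustrated with a toy simulation: a first-order motor speed loop under PI control, with and without feedforward cancellation of a sinusoidal disturbance of (assumed) known frequency and amplitude. The plant, gains, and disturbance are illustrative stand-ins, not the paper's PMSM/DGVSCMG model:

```python
import math

def simulate_speed_loop(compensate, amp=0.5, freq=2.0, steps=4000, dt=0.001):
    """First-order speed loop w' = -w + u + d under PI control, where
    d(t) = amp*sin(2*pi*freq*t) stands in for eccentricity-induced vibration.
    With `compensate`, a feedforward term cancels the known disturbance,
    as an internal-model controller would at that frequency.
    Returns the worst steady-state tracking error (t > 3 s)."""
    w, w_ref = 0.0, 1.0
    kp, ki, integ = 20.0, 50.0, 0.0
    worst = 0.0
    for k in range(steps):
        t = k * dt
        e = w_ref - w
        integ += e * dt
        u = kp * e + ki * integ
        d = amp * math.sin(2 * math.pi * freq * t)
        if compensate:
            u -= d                       # feedforward cancellation
        w += dt * (-w + u + d)           # forward-Euler plant update
        if t > 3.0:
            worst = max(worst, abs(e))
    return worst
```

Without compensation the disturbance leaves a persistent speed ripple; with the feedforward term the steady-state error collapses to the PI loop's residual transient.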

Keywords: agile satellites, vibration compensation, internal model, stability

Procedia PDF Downloads 117
25598 Homogeneous Anti-Corrosion Coating of Spontaneously Dissolved Defect-Free Graphene

Authors: M. K. Bin Subhan, P. Cullen, C. Howard

Abstract:

A recent study by the World Corrosion Organization estimated that corrosion-related damage costs $2.5tr every year. A low-cost, easily scalable, and economically viable solution to the corrosion problem is therefore required. Graphene is an ideal anti-corrosion barrier-layer material because of its excellent barrier properties and chemical stability, which make it impermeable to all molecules. However, attempts to employ graphene as a barrier layer have been hampered by the fact that defect sites in graphene accelerate corrosion: graphene's inert nature promotes galvanic corrosion at the expense of the metal. The recent discovery of the spontaneous dissolution of charged graphite intercalation compounds in aprotic solvents enables defect-free graphene platelets to be employed for anti-corrosion applications. These 'inks' of defect-free charged graphene platelets in solution can be coated onto metallic surfaces via electroplating to form a homogeneous barrier layer. In this paper, initial data will be presented showing homogeneous coatings of graphene barrier layers electroplated onto steel coupons. This easily scalable technique also provides a controllable method for applying different barrier thicknesses, from ultra-thin layers to thick opaque coatings, making it useful for a wide range of applications.

Keywords: anti-corrosion, defect-free, electroplating, graphene

Procedia PDF Downloads 134
25597 New Security Approach of Confidential Resources in Hybrid Clouds

Authors: Haythem Yahyaoui, Samir Moalla, Mounir Bouden, Skander ghorbel

Abstract:

Nowadays, cloud environments have become a necessity for companies. This technology offers access to data anywhere and at any time, optimized and secured access to resources, and stronger protection for the data stored on the platform. However, some companies do not trust cloud providers: in their view, providers can access and modify confidential data such as bank account details. Much work has been done in this context, concluding that encryption performed by the provider ensures confidentiality, while overlooking the fact that the provider can also decrypt those confidential resources. A better solution is to apply transformations to the data before sending it to the cloud, so that it is unreadable to the provider. This work aims to enhance providers' quality of service and improve customers' trust.
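
The idea of making data unreadable before upload is client-side encryption: the customer keeps the key, so the provider stores only opaque bytes. The sketch below uses a toy XOR stream cipher built from SHA-256 purely for illustration; a real deployment should use a vetted authenticated scheme such as AES-GCM, never this construction:

```python
import hashlib

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key||nonce||counter (toy construction)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """Encrypt client-side before upload, so the provider stores only
    unreadable bytes. Toy XOR stream cipher for illustration only."""
    ks = _keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

unseal = seal   # XOR is its own inverse: decrypting reuses the same operation
```

The provider never sees the key or nonce-derivation secret, so even a provider that decrypts its own server-side encryption layer recovers only the client-side ciphertext.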

Keywords: cloud, confidentiality, cryptography, security issues, trust issues

Procedia PDF Downloads 382
25596 Improving Knowledge Management Practices in the South African Healthcare System

Authors: Kgabo H. Badimo, Sheryl Buckley

Abstract:

In the knowledge era, knowledge is increasingly recognised by public sector organisations as a strategic resource, in view of public sector reform initiatives. People and knowledge play a vital role in attaining improved organisational performance and high service quality. Many government departments in the public sector have started to realise the importance of knowledge management in streamlining their operations and processes. This study focused on knowledge management in public healthcare service organisations, where the concept of service-provider competitiveness pales into insignificance beside the huge challenges emanating from healthcare and public sector reforms. Many government departments face the challenges of improving organisational performance and service delivery, improving accountability, making informed decisions, capturing the knowledge of an aging workforce, and enhancing partnerships with stakeholders. The purpose of this paper is to examine the knowledge management practices of the Gauteng Department of Health in South Africa, in order to understand how those practices influence improvement in organisational performance and healthcare service delivery. The issue is explored through a review of the literature on dominant views of knowledge management and healthcare service delivery, together with the results of interviews with, and questionnaire responses from, the general staff of the Gauteng Department of Health. Web-based questionnaires, face-to-face interviews, and organisational documents were used to collect data, which were analysed using both quantitative and qualitative methods. The central question investigated was: to what extent can the conditions required for successful knowledge management be observed, in order to improve organisational performance and healthcare service delivery in the Gauteng Department of Health?
The findings showed that the elements of knowledge management capabilities investigated in this study, namely knowledge creation, knowledge sharing and knowledge application, have a positive, significant relationship with all measures of organisational performance and healthcare service delivery. These findings thus indicate that by employing knowledge management principles, the Gauteng Department of Health could improve its ability to achieve its operational goals and objectives, and solve organisational and healthcare challenges, thereby improving organisational performance and healthcare service delivery.

Keywords: knowledge management, healthcare service delivery, public healthcare, public sector

Procedia PDF Downloads 275
25595 Disclosing a Patriarchal Society: A Socio-Legal Study on the Indigenous Women's Involvement in Natural Resources Management in Kasepuhan Cirompang

Authors: Irena Lucy Ishimora, Eva Maria Putri Salsabila

Abstract:

The varied constellation of the Indonesian legal system shows that structural injustice, a result of patriarchy, exists at every level, from the largest unit, the country, to the smallest, the family. Women carry out excessive responsibilities in the community, yet the unequal positions of men and women in society restrain women from fulfilling their constructed role, thereby increasing the chance that women become victims of structural injustice. The lack of authority given to women and its effects can be seen through a case study of the Cirompang indigenous women's involvement in natural resources management. The decision to designate Mount Halimun-Salak as a National Park, and its subsequent expansion, neither involved nor considered the existence of the indigenous people (Kasepuhan Cirompang), especially the women's experience regarding natural resources management, and has significantly impacted the fulfilment of the indigenous women's rights. Moreover, the adat law, which still reflects patriarchy, made matters worse because women are restricted from expressing their opinions. The writers explored the experience of the Cirompang indigenous women through in-depth interviews and analysed it with several theories, such as ecofeminism, women's access to land, and legal pluralism. This paper is important to show how the decision and the expansion of the National Park reduced the rights of access to land and natural resources, of expressing an opinion, and of participating in development. Reflecting on the Cirompang indigenous women's conditions in natural resources management, this paper presents the implications of regulations that do not acknowledge indigenous women's experience, along with proposed solutions. First, the law regarding indigenous people and traditional rights should be integrated into a single regulation to align the understanding of indigenous people and their rights.
Secondly, Indonesia, as a country rich in diversity, should ratify ILO Convention No. 169 to reaffirm the protection of indigenous people's rights. Last, considering the position of indigenous women who still experience injustice in the community, the government and NGOs must collaborate to provide adequate assistance for them.

Keywords: Cirompang indigenous women, indigenous women’s rights, structural injustice, women access to land

Procedia PDF Downloads 220
25594 Estimation of Chronic Kidney Disease Using Artificial Neural Network

Authors: Ilker Ali Ozkan

Abstract:

In this study, an artificial neural network model has been developed to estimate chronic kidney failure, a common disease. The estimation process uses 24 inputs, comprising the patients' age, their blood and biochemical values, and the presence of various chronic diseases. The input data were subjected to preprocessing because they contain both missing and nominal values. The 147 patient records obtained from preprocessing were divided into 70% training and 30% testing data. As a result of the study, the artificial neural network model with 25 neurons in the hidden layer was found to be the model with the lowest error value. Using this model, chronic kidney failure was estimated accurately at a rate of 99.3%. The developed artificial neural network was thus found successful for estimating chronic kidney failure from clinical data.
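The abstract describes the preprocessing and split but not their implementation. A minimal sketch of the described pipeline (mean imputation for missing numeric values, one-hot encoding of nominal features, then a 70/30 split) might look as follows; the record keys used here are purely illustrative, not the study's actual 24 features:

```python
import random

def preprocess(records, numeric_keys, nominal_keys):
    """Mean-impute missing numeric values and one-hot encode nominal ones."""
    means = {}
    for k in numeric_keys:
        observed = [r[k] for r in records if r[k] is not None]
        means[k] = sum(observed) / len(observed)
    # Collect the category set of each nominal feature for one-hot encoding
    cats = {k: sorted({r[k] for r in records}) for k in nominal_keys}
    rows = []
    for r in records:
        row = [r[k] if r[k] is not None else means[k] for k in numeric_keys]
        for k in nominal_keys:
            row += [1.0 if r[k] == c else 0.0 for c in cats[k]]
        rows.append(row)
    return rows

def split_70_30(rows, seed=0):
    """Shuffle and split the preprocessed rows into 70% train / 30% test."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    cut = int(0.7 * len(rows))
    return [rows[i] for i in idx[:cut]], [rows[i] for i in idx[cut:]]
```

The resulting purely numeric rows are what a feedforward network such as the one described (a single hidden layer of 25 neurons) would be trained on.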

Keywords: estimation, artificial neural network, chronic kidney failure disease, disease diagnosis

Procedia PDF Downloads 449
25593 Transferable Knowledge: Expressing Lessons Learnt from Failure to Outsiders

Authors: Stijn Horck

Abstract:

Background: The value of lessons learned from failure increases when these insights can be put to use by those who did not experience the failure. While learning from others has mostly been researched between individuals or teams within the same environment, transferring knowledge from the person who experienced the failure to an outsider comes with extra challenges. As sense-making of failure is an individual process leading to different learning experiences, the potential of lessons learned from failure is highly variable depending on who is transferring the lessons learned. Using an integrated framework of linguistic aspects related to attributional egotism, this study aims to offer a complete explanation of the challenges in transferring lessons learned from failures that are experienced by others. Method: A case study of a failed foundation established to address the information needs for GPs in times of COVID-19 has been used. An overview of failure causes and lessons learned were made through a preliminary analysis of data collected in two phases with metaphoric examples of failure types. This was followed up by individual narrative interviews with the board members who have all experienced the same events to analyse the individual variance of lessons learned through discourse analysis. This research design uses the researcher-as-instrument approach since the recipient of these lessons learned is the author himself. Results: Thirteen causes were given why the foundation has failed, and nine lessons were formulated. Based on the individually emphasized events, the explanation of the failure events mentioned by all or three respondents consisted of more linguistic aspects related to attributional egotism than failure events mentioned by only one or two. Moreover, the learning events mentioned by all or three respondents involved lessons learned that are based on changed insight, while the lessons expressed by only one or two are more based on direct value. 
Retrospectively, the lessons expressed as a group in the first data collection phase seem to have captured some but not all of the direct value lessons. Conclusion: Individual variance in expressing lessons learned to outsiders can be reduced using metaphoric or analogical explanations from a third party. In line with the attributional egotism theory, individuals separated from a group that has experienced the same failure are more likely to refer to failure causes of which the chances to be contradicted are the smallest. Lastly, this study contributes to the academic literature by demonstrating that the use of linguistic analysis is suitable for investigating the knowledge transfer from lessons learned after failure.

Keywords: failure, discourse analysis, knowledge transfer, attributional egotism

Procedia PDF Downloads 119
25592 Design and Implementation of a Hardened Cryptographic Coprocessor with 128-bit RISC-V Core

Authors: Yashas Bedre Raghavendra, Pim Vullers

Abstract:

This study presents the design and implementation of an abstract cryptographic coprocessor, leveraging AMBA (Advanced Microcontroller Bus Architecture) protocols, APB (Advanced Peripheral Bus) and AHB (Advanced High-performance Bus), to enable seamless integration with the main CPU (Central Processing Unit) and enhance the coprocessor's algorithm flexibility. The primary objective is to create a versatile coprocessor that can execute various cryptographic algorithms, including ECC (Elliptic-Curve Cryptography), RSA (Rivest-Shamir-Adleman), and AES (Advanced Encryption Standard), while providing a robust and secure solution for modern secure embedded systems. To achieve this goal, the coprocessor is equipped with a tightly coupled memory (TCM) for rapid data access during cryptographic operations. The TCM is placed within the coprocessor, ensuring quick retrieval of critical data and optimizing overall performance. Additionally, the program memory is positioned outside the coprocessor, allowing for easy updates and reconfiguration, which enhances adaptability to future algorithm implementations. Direct links are employed instead of DMA (Direct Memory Access) for data transfer, ensuring faster communication and reducing complexity. The AMBA-based communication architecture facilitates seamless interaction between the coprocessor and the main CPU, streamlining data flow and ensuring efficient utilization of system resources. The abstract nature of the coprocessor allows for easy integration of new cryptographic algorithms in the future. As the security landscape continues to evolve, the coprocessor can adapt and incorporate emerging algorithms, making it a future-proof solution for cryptographic processing. Furthermore, this study explores the addition of custom instructions to the RISC-V ISE (Instruction Set Extension) to enhance cryptographic operations.
By incorporating custom instructions specifically tailored for cryptographic algorithms, the coprocessor achieves higher efficiency and reduced cycles per instruction (CPI) compared to traditional instruction sets. The adoption of RISC-V 128-bit architecture significantly reduces the total number of instructions required for complex cryptographic tasks, leading to faster execution times and improved overall performance. Comparisons are made with 32-bit and 64-bit architectures, highlighting the advantages of the 128-bit architecture in terms of reduced instruction count and CPI. In conclusion, the abstract cryptographic coprocessor presented in this study offers significant advantages in terms of algorithm flexibility, security, and integration with the main CPU. By leveraging AMBA protocols and employing direct links for data transfer, the coprocessor achieves high-performance cryptographic operations without compromising system efficiency. With its TCM and external program memory, the coprocessor is capable of securely executing a wide range of cryptographic algorithms. This versatility and adaptability, coupled with the benefits of custom instructions and the 128-bit architecture, make it an invaluable asset for secure embedded systems, meeting the demands of modern cryptographic applications.

Keywords: abstract cryptographic coprocessor, AMBA protocols, ECC, RSA, AES, tightly coupled memory, secure embedded systems, RISC-V ISE, custom instructions, instruction count, cycles per instruction

Procedia PDF Downloads 74
25591 Impact of Map Generalization in Spatial Analysis

Authors: Lin Li, P. G. R. N. I. Pussella

Abstract:

When representing spatial data and their attributes on different types of maps, the scale plays a key role in the process of map generalization. The process consists of two main operators: selection and omission. Once data are selected, they undergo several geometric modification processes such as elimination, simplification, smoothing, exaggeration, displacement, aggregation and size reduction. As a result of these operations at different levels of data, the geometry of spatial features, such as length, sinuosity, orientation, perimeter and area, is altered. This is worst in the case of small-scale maps, since the cartographer does not have enough space to represent all the features on the map. When GIS users want to analyze a set of spatial data, they often retrieve a data set and perform the analysis without considering very important characteristics such as the scale, the purpose of the map and the degree of generalization. Further, GIS users use and compare different maps with different degrees of generalization. Sometimes GIS users go beyond the scale of the source map using the zoom-in facility and violate the basic cartographic rule that it is not suitable to create a larger-scale map from a smaller-scale map. The main objective of this study is to discuss the effect of map generalization on GIS analysis. Three digital maps at different scales, 1:10000, 1:50000 and 1:250000, prepared by the Survey Department of Sri Lanka, the National Mapping Agency of Sri Lanka, were used. Features common to all three maps were used, and an overlay analysis was done by repeating the analysis with different combinations of data. Road, river and land use data sets were used for the study. A simple model, to find the best place for a wildlife park, was used to identify the effects.
The results show remarkable effects of the different degrees of generalization: different locations with different geometries were obtained as outputs from the analysis. The study suggests that there should be sound methods to overcome this effect; as a solution, it is recommended to bring all the data sets to a common scale before performing the analysis.
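Simplification, one of the generalization operators listed above, directly alters geometry such as vertex count and line length. A small, self-contained Douglas-Peucker sketch (a standard simplification algorithm, not the Survey Department's actual procedure) illustrates the effect the study measures:

```python
def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return ((x - x1) ** 2 + (y - y1) ** 2) ** 0.5
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, tol):
    """Simplify a polyline: drop every vertex closer than tol to the chord."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the chord joining the endpoints
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right

def length(points):
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(points, points[1:]))
```

Applied to a sinuous river line with a large tolerance (i.e. a smaller target scale), both the vertex count and the total length shrink, which is exactly the kind of geometric change that propagates into overlay results.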

Keywords: generalization, GIS, scales, spatial analysis

Procedia PDF Downloads 331
25590 Identity Verification Based on Multimodal Machine Learning on Red Green Blue (RGB), Red Green Blue-Depth (RGB-D) and Voice Data

Authors: LuoJiaoyang, Yu Hongyang

Abstract:

In this paper, we experimented with a new approach to multimodal identification using RGB, RGB-D and voice data. The multimodal combination of RGB and voice data has been applied in tasks such as emotion recognition, where it has shown good results and stability, and the same holds for identity recognition tasks. We believe that data from different modalities can enhance the performance of a model through mutual reinforcement. We extend the dual-modality setup to three modalities and try to improve the effectiveness of the network by increasing the number of modalities. We also implemented single-modal identification systems separately, tested the data of these different modalities under clean and noisy conditions, and compared their performance with the multimodal model. In designing the multimodal model, we tried a variety of fusion strategies and finally chose the one with the best performance. The experimental results show that the performance of the multimodal system is better than that of any single modality, especially in dealing with noise, and that the multimodal system achieves an average improvement of 5%.
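The abstract does not name the winning fusion strategy. One common baseline is score-level fusion, where per-modality match scores are combined as a weighted average before the accept/reject decision; the sketch below shows that idea only (identity names, weights and thresholds are invented for illustration):

```python
def fuse_scores(modality_scores, weights=None):
    """Score-level fusion: weighted average of per-modality match scores.

    modality_scores maps a modality name ("rgb", "rgbd", "voice") to a
    dict of {identity: match score in [0, 1]}.
    """
    names = list(modality_scores)
    if weights is None:
        weights = {m: 1.0 / len(names) for m in names}  # equal weights
    total_w = sum(weights[m] for m in names)
    fused = {}
    for identity in modality_scores[names[0]]:
        fused[identity] = sum(weights[m] * modality_scores[m][identity]
                              for m in names) / total_w
    return fused

def verify(modality_scores, claimed_id, threshold=0.5, weights=None):
    """Accept the claimed identity if the fused score clears the threshold."""
    return fuse_scores(modality_scores, weights)[claimed_id] >= threshold
```

A useful property of this scheme, consistent with the paper's noise finding, is that a modality degraded by noise (here, voice) can be outvoted by the remaining clean modalities.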

Keywords: multimodal, three modalities, RGB-D, identity verification

Procedia PDF Downloads 75
25589 Struggles of Non-Binary People in an Organizational Setting in Iceland

Authors: Kevin Henry

Abstract:

Introduction: This research identifies the main struggles of non-binary people in an organizational setting using the ZMET method of in-depth interviews. The research was done in Iceland, a country repeatedly listed among the top countries for gender equality, and identified three main categories of non-binary struggles in organizations. These categories can be used to improve the inclusion of non-binary people in organizations. Aim: The main questions this paper will answer are: Which unique obstacles are non-binary people facing in their daily organizational life? Which organizational and individual measures help with more inclusion of non-binary people? How can organizational gender equality measures be made more inclusive of non-binary issues? Background: Even though gender equality is a much-researched topic, the struggles of non-binary people are often overlooked in gender equality research. Additionally, non-binary and transgender people are frequently researched together, even though their struggles can be very different. Research focused on non-binary people is, in many cases, done on a more structural or organizational level with quantitative data such as salary or position within an organization. This research focuses on the individual and their struggles, with qualitative data, to derive measures for non-binary inclusion and equality. Method: An adapted approach of the ZMET method (Zaltman Metaphor Elicitation Technique) is used, during which in-depth interviews are held with individuals, utilizing pictures as a metaphorical starting point to discuss their main thoughts and feelings on being non-binary in an organizational setting. Interviewees prepared five pictures, each representing one key thought or feeling about their organizational life. The interviewer then lets the interviewee describe each picture and asks probing questions to get a deeper understanding of each individual topic.
This method helps with a mostly unbiased data collection process by only asking probing questions during the interview and not leading the interviewee in any certain direction. Results: This research has identified three main categories of struggles non-binary people are facing in an organizational setting: internal (personal) struggles, external struggles and structural struggles. Internal struggles refer to struggles that originate from the person themselves (e.g., struggles with their own identity). External struggles refer to struggles from the outside (e.g., harassment from coworkers, exclusion). Structural struggles refer to struggles that are built into the organizational policy or facilities (e.g., restrooms, gendered language). Conclusion: This study shows that there are many struggles for non-binary people in organizations and that even in countries that pride themselves on being progressive and having a high level of gender equality, there is still much to be done for non-binary inclusion. Implications for Organizations: Organizations that strive to improve the inclusion of all genders should pay attention to how their structures are built, how their training is conducted, and how their policies affect people of various genders. Simple changes like making restrooms gender-neutral and using neutral language in company communications are good examples of small structural steps for more inclusion.

Keywords: gender equality, non-binary, organizations, ZMET

Procedia PDF Downloads 48
25588 Effectiveness of Teacher Training in Bangladeshi Context

Authors: Sabina Mohsin

Abstract:

The need for grounding teachers, and the trend of using innovative ways to deal with students of various abilities in schools, colleges and universities, have always been essential in any part of the world. Teacher education programs and qualification standards all too often lack sufficient rigor, breadth and depth, resulting in high levels of unskilled teachers and poor student performance. Accordingly, the solution, from this viewpoint, lies in making the entry and training requirements for teaching deeper and more exacting. Teachers' continuous professional development is necessary to reach all kinds of learners in class. The training provided is a direct opportunity for new teachers to interact better with students and to motivate them in a two-way discussion class. The intention of the study was to scrutinize whether teacher training plays an important role in how lectures and classroom activities are constructed, and whether classroom practice reflects the objectives of the training provided in various schools and universities. It also aims to examine the current practices used in various teacher training programs and whether any other method can be incorporated to further enhance the effectiveness of these programs. This research uses qualitative data collected from interviews, peer discussions, classroom observations, reviews, and feedback from students and teachers to study teacher training and teaching methods used in schools and universities in Bangladesh. The study finds teacher training to be effective, though it has some limitations, and it includes some suggestions to make teacher training more effective.

Keywords: current practices in teacher training, enhancing effectiveness, limitation, student motivation, teacher training

Procedia PDF Downloads 443
25587 Spatial Development of Muslim Cemetery in Kuala Lumpur Metropolitan: A Focus on Sustainable Design Practice

Authors: Mohamad Reza Mohamed Afla, Putri Haryati Ibrahim, Azila Ahmad Sarkawi

Abstract:

This study examines the standard procedures involved in the planning and management of selected Muslim cemeteries within the Kuala Lumpur Metropolitan Area. It focuses on sustainable design practice in the provision of burial infrastructure at public cemeteries, which emphasizes the inclusion of society, economy, and environment. The escalating issues of overcrowding, lack of space, and land shortage for full-body burial in the urbanized area of Kuala Lumpur have raised concern about this alarming situation. There is a necessity to address these problems through the incorporation of sustainable development in the making of urban cemeteries to ensure a holistic approach. Recorded site observations of the cemeteries' areas were employed as a means of data collection and interpreted through spatial analysis. The spatial analysis entails the assessment of form and function in accordance with sustainable design principles. The findings show that the dimensional layouts of the Muslim cemeteries were problematic due to the tension that exists between ritual practices and the space organization set up by the local authorities. The article concludes by providing conceptual guidelines for future Muslim cemetery development.

Keywords: cemetery, metropolitan, spatial analysis, sustainable design practice

Procedia PDF Downloads 123
25586 Asymmetric Relation between Earnings and Returns

Authors: Seungmin Chee

Abstract:

This paper investigates which of two arguments, conservatism or the liquidation option, is the true underlying driver of the asymmetric slope coefficient result regarding the association between earnings and returns. The analysis of the relation between earnings and returns in four mutually exclusive settings, segmented by 'profits vs. losses' and 'positive returns vs. negative returns', suggests that the liquidation option, rather than conservatism, is likely to cause the asymmetric slope coefficient result. Furthermore, this paper documents the temporal changes between the Basu period (1963-1990) and the post-Basu period (1990-2005). Although no significant change in the degree of conservatism or the value relevance of losses is reported, a stronger negative relation between losses and positive returns is observed in the post-Basu period. Separate regression analysis of each quintile, based on rankings of the price-to-sales ratio and the book-to-market ratio, suggests that the strong negative relation is driven by growth firms.
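The asymmetric slope coefficient at the heart of this literature can be illustrated by estimating the earnings-returns slope separately on positive-return and negative-return subsamples, in the spirit of Basu-type piecewise regressions. A minimal sketch with invented data (not the paper's sample):

```python
def slope(xs, ys):
    """OLS slope: cov(x, y) / var(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def asymmetric_slopes(returns, earnings):
    """Earnings-on-returns slope, estimated separately by return sign."""
    pos = [(r, e) for r, e in zip(returns, earnings) if r >= 0]
    neg = [(r, e) for r, e in zip(returns, earnings) if r < 0]
    b_pos = slope([r for r, _ in pos], [e for _, e in pos])
    b_neg = slope([r for r, _ in neg], [e for _, e in neg])
    return b_pos, b_neg
```

If bad news (negative returns) is incorporated into earnings faster than good news, the negative-return slope exceeds the positive-return slope, which is the asymmetry whose driver (conservatism vs. the liquidation option) the paper sets out to identify.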

Keywords: conservatism, earnings, liquidation option, returns

Procedia PDF Downloads 378
25585 Non-Linear Causality Inference Using BAMLSS and Bi-CAM in Finance

Authors: Flora Babongo, Valerie Chavez

Abstract:

Inferring causality from observational data is one of the fundamental subjects, especially in quantitative finance. So far, most papers analyze additive noise models with either linearity, nonlinearity or Gaussian noise. We fill the gap by providing a nonlinear, non-Gaussian causal multiplicative noise model that aims to distinguish the cause from the effect using a two-step method based on Bayesian additive models for location, scale and shape (BAMLSS) and on causal additive models (CAM). We tested our method on simulated and real data and reached an accuracy of 0.86 on average. As real data, we considered the causality between financial indices, such as the S&P 500, Nasdaq, CAC 40 and Nikkei, and companies' log-returns. Our results can be useful for inferring causality when the data is heteroskedastic or non-injective.
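The full BAMLSS/CAM pipeline is beyond a short sketch, but the principle underlying such cause-effect methods, that regression residuals should look independent of the predictor only in the causal direction, can be shown on a much simpler additive-noise toy. Everything below (the linear fit, the squared-value dependence probe, the simulated data) is illustrative and deliberately cruder than the authors' multiplicative-noise model:

```python
import statistics

def ols_fit(x, y):
    """Least-squares line y ~ a + b*x; returns (a, b)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = sum((u - mx) * (v - my) for u, v in zip(x, y)) / \
        sum((u - mx) ** 2 for u in x)
    return my - b * mx, b

def corr(u, v):
    """Pearson correlation of two equal-length sequences."""
    mu, mv = statistics.fmean(u), statistics.fmean(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u) *
           sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

def residual_dependence(pred, target):
    a, b = ols_fit(pred, target)
    resid = [t - (a + b * p) for p, t in zip(pred, target)]
    # OLS residuals are uncorrelated with the predictor by construction,
    # so probe higher-order dependence through the squared values
    return abs(corr([r * r for r in resid], [p * p for p in pred]))

def infer_direction(x, y):
    # Prefer the direction whose regression leaves residuals
    # closer to independent of the predictor
    return "x->y" if residual_dependence(x, y) < residual_dependence(y, x) \
        else "y->x"
```

With non-Gaussian (here uniform) noise, regressing the effect on the cause leaves residuals independent of the predictor, while the reverse regression does not, so the asymmetry reveals the direction.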

Keywords: causal inference, DAGs, BAMLSS, financial index

Procedia PDF Downloads 155
25584 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics

Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere

Abstract:

Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, thereby exhibiting strong fluctuations at all time scales and requiring a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique, part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals and consists of decomposing a signal, in an auto-adaptive way, into a sum of oscillating components named IMFs (Intrinsic Mode Functions), thereby acting as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, the main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled the 'Empirical Wavelet Transform' (EWT), which consists of building a bank of filters from a segmentation of the original signal's Fourier spectrum. The method is based on the idea used in the construction of both Littlewood-Paley and Meyer's wavelets. The heart of the method lies in the segmentation of the Fourier spectrum, based on the detection of local maxima, in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore allows the mode-mixing problem to be overcome.
On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, EWT does not allow the detected frequencies to be associated with a specific mode of variability, as in the EMD technique. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on the coupling of the EMD and EWT techniques, which uses the spectral density content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. Results obtained respectively by the EMD, EWT and EAWD techniques on time series of total ozone columns recorded at Reunion Island over the period 1978-2019 are compared and discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
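The spectrum-segmentation step at the heart of EWT can be sketched in a few lines: find the local maxima of the magnitude spectrum, keep the largest ones, and place band boundaries between consecutive retained maxima. The boundary rule below (lowest point between maxima) is one common variant; Gilles' original formulation also uses midpoints, and the spectrum here is a toy array rather than an actual DFT:

```python
def segment_spectrum(mag, n_segments):
    """Split a magnitude spectrum into non-overlapping bands.

    mag is a list of spectral magnitudes over [0, pi]; the returned list
    of indices gives the band boundaries, including both endpoints.
    """
    # Detect local maxima of the magnitude spectrum
    peaks = [i for i in range(1, len(mag) - 1)
             if mag[i] > mag[i - 1] and mag[i] >= mag[i + 1]]
    # Keep only the n_segments largest maxima, restored to frequency order
    peaks = sorted(sorted(peaks, key=lambda i: mag[i], reverse=True)[:n_segments])
    # Place each boundary at the lowest point between consecutive maxima
    bounds = []
    for a, b in zip(peaks, peaks[1:]):
        valley = min(range(a, b + 1), key=lambda i: mag[i])
        bounds.append(valley)
    return [0] + bounds + [len(mag) - 1]
```

Each resulting band then defines one bandpass filter of the empirical wavelet filter bank; EAWD, as described above, would instead steer these boundaries using the spectral content of the IMFs.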

Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet

Procedia PDF Downloads 140
25583 Managing Incomplete PSA Observations in Prostate Cancer Data: Key Strategies and Best Practices for Handling Loss to Follow-Up and Missing Data

Authors: Madiha Liaqat, Rehan Ahmed Khan, Shahid Kamal

Abstract:

Multiple imputation with delta adjustment is a versatile and transparent technique for addressing univariate missing data in the presence of various missing mechanisms. This approach allows for the exploration of sensitivity to the missing-at-random (MAR) assumption. In this review, we outline the delta-adjustment procedure and illustrate its application for assessing the sensitivity to deviations from the MAR assumption. By examining diverse missingness scenarios and conducting sensitivity analyses, we gain valuable insights into the implications of missing data on our analyses, enhancing the reliability of our study's conclusions. In our study, we focused on assessing logPSA, a continuous biomarker in incomplete prostate cancer data, to examine the robustness of conclusions against plausible departures from the MAR assumption. We introduced several approaches for conducting sensitivity analyses, illustrating their application within the pattern mixture model (PMM) under the delta adjustment framework. This proposed approach effectively handles missing data, particularly loss to follow-up.
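As a concrete illustration of the delta-adjustment idea (a simplified sketch, not the authors' pattern mixture model implementation), missing values can be imputed from the observed distribution and then shifted by a fixed delta; re-running the analysis over a range of deltas traces out sensitivity to departures from MAR. The data and parameter names below are invented:

```python
import random
import statistics

def delta_adjusted_imputations(values, delta, n_imputations=5, seed=1):
    """Impute each missing value (None) by a draw from the observed
    distribution, then shift the draw by delta to model an MNAR
    departure from MAR (delta = 0 recovers a plain MAR imputation)."""
    rng = random.Random(seed)
    observed = [v for v in values if v is not None]
    mu, sd = statistics.fmean(observed), statistics.stdev(observed)
    return [[v if v is not None else rng.gauss(mu, sd) + delta
             for v in values]
            for _ in range(n_imputations)]

def pooled_mean(completed_datasets):
    # Rubin's rules: the pooled point estimate is the average of the
    # per-imputation estimates
    return statistics.fmean(statistics.fmean(d) for d in completed_datasets)
```

Comparing the pooled estimate at delta = 0 with estimates at plausible non-zero deltas (e.g. lower logPSA among patients lost to follow-up) shows how strongly the study's conclusions lean on the MAR assumption.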

Keywords: loss to follow-up, incomplete response, multiple imputation, sensitivity analysis, prostate cancer

Procedia PDF Downloads 94
25582 General Architecture for Automation of Machine Learning Practices

Authors: U. Borasi, Amit Kr. Jain, Rakesh, Piyush Jain

Abstract:

Data collection, data preparation, model training, model evaluation, and deployment are all processes in a typical machine learning workflow. Training data needs to be gathered and organised; this often entails collecting a sizeable dataset and cleaning it to remove or correct any inaccurate or missing information. Preparing the data for use in the machine learning model requires pre-processing it after it has been acquired. This often entails actions like scaling or normalising the data, handling outliers, selecting appropriate features, and reducing dimensionality. This pre-processed data is then used to train a model with some machine learning algorithm. After the model has been trained, it needs to be assessed by determining metrics like accuracy, precision, and recall on a test dataset. Every time a new model is built, both data pre-processing and model training, two crucial processes in the machine learning (ML) workflow, must be carried out. There are many machine learning algorithms that can be paired with every approach to data pre-processing, generating a large set of combinations to choose from. For example, for every method of handling missing values (dropping records, replacing with the mean, etc.), for every scaling technique, and for every combination of features selected, a different algorithm can be used. As a result, in order to obtain the optimum outcomes, these tasks are frequently repeated in different combinations. This paper suggests a simple architecture for organizing this large 'combination set of pre-processing steps and algorithms' into an automated workflow, which simplifies the task of carrying out all possibilities.
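The 'combination set' described above is essentially a Cartesian product of operator pools, which a scheduler can enumerate mechanically. A minimal sketch (the pool contents are invented for illustration and do not reflect the paper's actual operator pool):

```python
from itertools import product

# Illustrative operator pools; a real system would map each name to an
# executable pre-processing step or learning algorithm
MISSING_VALUES = ["drop_records", "impute_mean", "impute_median"]
SCALING = ["none", "min_max", "z_score"]
FEATURE_SELECTION = ["all", "top_k_variance"]
ALGORITHMS = ["logistic_regression", "decision_tree", "svm"]

def pipeline_configurations():
    """Enumerate every combination of pre-processing steps and algorithm,
    i.e. the combination set a scheduler would iterate over."""
    for mv, sc, fs, algo in product(MISSING_VALUES, SCALING,
                                    FEATURE_SELECTION, ALGORITHMS):
        yield {"missing_values": mv, "scaling": sc,
               "feature_selection": fs, "algorithm": algo}
```

Even these small pools yield 3 x 3 x 2 x 3 = 54 candidate pipelines, which is why automating the enumeration, execution and comparison of configurations is worthwhile.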

Keywords: machine learning, automation, AUTOML, architecture, operator pool, configuration, scheduler

Procedia PDF Downloads 61