Search results for: linked data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26357

25997 Identification of Nutrient Sensitive Signaling Pathways via Analysis of O-GlcNAcylation

Authors: Michael P. Mannino, Gerald W. Hart

Abstract:

The majority of glucose metabolism proceeds through pathways such as glycolysis or the pentose phosphate pathway; however, about 5% is shunted through the hexosamine biosynthetic pathway, producing uridine diphosphate N-acetylglucosamine (UDP-GlcNAc). This precursor can then be incorporated into complex oligosaccharides decorating the cell surface or remain as an intracellular post-translational modification (PTM) of serine/threonine residues (O-GlcNAcylation, OGN), which has been identified on over 4,000 cytosolic or nuclear proteins. Intracellular OGN has major implications for cellular processes, typically by modulating protein localization, protein-protein interactions, protein degradation, and gene expression. Additionally, OGN is known to have extensive cross-talk with phosphorylation, be it in a competitive or cooperative manner. Unlike other PTMs, there are only two cycling enzymes capable of adding or removing the GlcNAc moiety: O-linked N-acetylglucosamine transferase (OGT) and O-linked N-acetylglucosaminidase (OGA), respectively. The activity of OGT has been shown to be sensitive to cellular UDP-GlcNAc levels, even changing substrate affinity. Because of this, and because the concentration of UDP-GlcNAc is related to the metabolism of glucose, amino acids, fatty acids, and nucleotides, O-GlcNAc is often referred to as a nutrient-sensing rheostat. Indeed, OGN is known to regulate several signaling pathways in response to nutrient levels, such as insulin signaling. Dysregulation of OGN is associated with several disease states such as cancer, diabetes, and neurodegeneration. Improvements in glycomics over the past 10-15 years have significantly increased the known OGT substrate pool, suggesting O-GlcNAc's involvement in a wide variety of signaling pathways. However, O-GlcNAc's role at the receptor level has only been identified on a case-by-case basis in known pathways.
Examining the OGN of the plasma membrane (PM) may better focus our understanding of O-GlcNAc-affected signaling pathways. In the current study, PM fractions were isolated from several cell lines via ultracentrifugation, followed by purification and MS/MS analysis. This process was repeated with or without OGT/OGA inhibitors, or with increased/decreased glucose levels in the media, to ascertain the importance of OGN. Various pathways are followed up in more detailed studies employing methods to localize OGN at the PM specifically.

Keywords: GlcNAc, nutrient sensitive, post-translational-modification, receptor

Procedia PDF Downloads 113
25996 Improved K-Means Clustering Algorithm Using RHadoop with Combiner

Authors: Ji Eun Shin, Dong Hoon Lim

Abstract:

Data clustering is a common technique used in data analysis, with applications in artificial intelligence, pattern recognition, economics, ecology, psychiatry, and marketing. K-means clustering is a well-known clustering algorithm that aims to cluster a set of data points into a predefined number of clusters. In this paper, we implement the K-means algorithm on the MapReduce framework with RHadoop to make the clustering method applicable to large-scale data. RHadoop is a collection of R packages that allow users to manage and analyze data with Hadoop. The main idea is to introduce a combiner as a function on the map output to decrease the amount of data that must be processed by the reducers. The experimental results demonstrate that the K-means algorithm using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also show that our K-means algorithm using RHadoop with a combiner is faster than the regular algorithm without a combiner as the size of the data set increases.
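
The combiner idea in this abstract can be sketched outside Hadoop. The paper itself uses R with RHadoop; the Python sketch below is an illustrative translation, with all names invented here. Each map shard pre-aggregates a per-cluster (coordinate sums, count) record, so only one record per cluster is shuffled to the reducers, which merge the partials and recompute the centroids:

```python
def assign(point, centroids):
    """Map step: index of the nearest centroid (squared Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda k: sum((p - c) ** 2 for p, c in zip(point, centroids[k])))

def map_with_combiner(points, centroids):
    """Combiner on one map shard: pre-aggregate per-cluster coordinate sums
    and counts, so only one record per cluster reaches the shuffle phase."""
    dim = len(centroids[0])
    partial = {}
    for p in points:
        k = assign(p, centroids)
        sums, n = partial.get(k, ([0.0] * dim, 0))
        partial[k] = ([s + x for s, x in zip(sums, p)], n + 1)
    return partial

def reduce_centroids(partials, centroids):
    """Reduce step: merge the combiners' partial records, then recompute
    each centroid as (sum of member points) / (member count)."""
    dim = len(centroids[0])
    total = {}
    for part in partials:
        for k, (sums, n) in part.items():
            acc_sums, acc_n = total.get(k, ([0.0] * dim, 0))
            total[k] = ([a + s for a, s in zip(acc_sums, sums)], acc_n + n)
    new_centroids = []
    for k in range(len(centroids)):
        if k in total and total[k][1] > 0:
            sums, n = total[k]
            new_centroids.append(tuple(s / n for s in sums))
        else:
            new_centroids.append(centroids[k])  # empty cluster keeps its centroid
    return new_centroids
```

Without the combiner, every point would be shuffled individually; with it, shuffle traffic per shard is bounded by the number of clusters, which is the speedup the abstract reports.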

Keywords: big data, combiner, K-means clustering, RHadoop

Procedia PDF Downloads 443
25995 Framework for Integrating Big Data and Thick Data: Understanding Customers Better

Authors: Nikita Valluri, Vatcharaporn Esichaikul

Abstract:

With the popularity of data-driven decision making on the rise, this study provides an alternative outlook on the decision-making process. Combining quantitative and qualitative methods rooted in the social sciences, an integrated framework is presented with a focus on delivering a more robust and efficient approach to data-driven decision-making with respect to not only Big data but also 'Thick data', a form of qualitative data. In support of this, an example from the retail sector illustrates the framework in action, yielding insights and leveraging business intelligence. An interpretive approach is used to glean insights from both the quantitative and the qualitative findings. Using traditional point-of-sale data as well as an understanding of customer psychographics and preferences, data mining techniques are applied alongside qualitative methods (such as grounded theory, ethnomethodology, etc.). The final goal of this study is to establish the framework as a basis for a holistic solution encompassing both the Big and Thick aspects of any business need. The proposed framework is an enhancement of the traditional data-driven decision-making approach, which depends mainly on quantitative data.

Keywords: big data, customer behavior, customer experience, data mining, qualitative methods, quantitative methods, thick data

Procedia PDF Downloads 164
25994 Incremental Learning of Independent Topic Analysis

Authors: Takahiro Nishigaki, Katsumi Nitta, Takashi Onoda

Abstract:

In this paper, we present a method for applying Independent Topic Analysis (ITA) to a growing collection of documents. The amount of document data has been increasing since the spread of the Internet, and ITA was proposed as one method to analyze such data. ITA extracts independent topics from document data by using Independent Component Analysis (ICA), a technique from signal processing. However, it is difficult to apply ITA to a growing document collection, because ITA must process all of the document data at once, making its temporal and spatial costs very high. We therefore present Incremental ITA, which extracts independent topics from a growing collection of documents by updating the topics whenever new documents are added, starting from the topics extracted from the previous data. We show results of applying Incremental ITA to benchmark datasets.
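
ITA rests on ICA, so its core step can be illustrated with a minimal batch FastICA in NumPy. This is a generic sketch, not the authors' incremental algorithm: in an ITA setting the rows of X would be dimensions of a document representation, and the incremental variant would warm-start these iterations from previously extracted topics when new documents arrive. Here it is demonstrated on a classic signal-unmixing toy problem:

```python
import numpy as np

def whiten(X):
    """Center and whiten: decorrelate the mixtures to unit variance."""
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    return (E @ np.diag(d ** -0.5) @ E.T) @ X

def fastica(X, n_iter=200, seed=0):
    """One-unit FastICA with deflation (tanh contrast) on whitened data.
    Returns the estimated independent components, one per row."""
    Z = whiten(X)
    rng = np.random.default_rng(seed)
    n = Z.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        w = rng.standard_normal(n)
        for _ in range(n_iter):
            wz = w @ Z
            # fixed-point update: E[z g(w'z)] - E[g'(w'z)] w, with g = tanh
            w_new = (Z * np.tanh(wz)).mean(axis=1) - (1 - np.tanh(wz) ** 2).mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)  # deflation: stay orthogonal
            w = w_new / np.linalg.norm(w_new)
        W[i] = w
    return W @ Z
```

Incremental ITA, as the abstract describes it, would avoid re-running this loop over all documents by updating W from its previous value when data is appended; the batch core above is what such an update would amortize.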

Keywords: text mining, topic extraction, independent, incremental, independent component analysis

Procedia PDF Downloads 312
25993 Open Data for e-Governance: Case Study of Bangladesh

Authors: Sami Kabir, Sadek Hossain Khoka

Abstract:

Open Government Data (OGD) refers to all data produced by government which is accessible in a reusable way by common people with Internet access, free of cost. In line with the "Digital Bangladesh" vision of the Bangladesh government, the concept of open data has been gaining momentum in the country. Opening all government data in a digital and customizable format from a single platform can enhance e-governance and make government more transparent to the people. This paper presents a work-in-progress case study of an OGD portal by the Bangladesh Government intended to link decentralized data. The initiative aims to facilitate e-services for citizens through this one-stop web portal. The paper further discusses ways of collecting data in digital format from relevant agencies with a view to making it publicly available through this single point of access. Finally, a possible layout of this web portal is presented.

Keywords: e-governance, one-stop web portal, open government data, reusable data, web of data

Procedia PDF Downloads 357
25992 Resource Framework Descriptors for Interestingness in Data

Authors: C. B. Abhilash, Kavi Mahesh

Abstract:

Human beings are the most advanced species on earth, largely because of the ability to communicate and share information via human language. In today's world, a huge amount of data is available on the web in text format. This has also resulted in the generation of big data in structured and unstructured formats. In general, this data is textual and highly unstructured. To get insights and actionable content from it, we need to incorporate the concepts of text mining and natural language processing. In our study, we mainly focus on interesting data, from which interesting facts are generated for the knowledge base. The approach is to derive analytics from the text via natural language processing. Using the semantic web's Resource Description Framework (RDF), we generate triples from the given data and derive interesting patterns. The methodology also illustrates data integration using RDF to obtain reliable, interesting patterns.
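
The triple-generation and pattern-derivation steps can be made concrete with a toy in-memory triple store. The facts, names, and "interestingness" heuristic below are illustrative, not from the study; a real system would use an RDF library and SPARQL queries:

```python
# Every fact is an RDF-style (subject, predicate, object) triple.
triples = {
    ("Ganga", "type", "River"),
    ("Ganga", "flowsThrough", "Varanasi"),
    ("Varanasi", "type", "City"),
    ("Ganga", "lengthKm", "2525"),
}

def match(pattern, store):
    """Minimal pattern matcher: None plays the role of a SPARQL variable."""
    return sorted(t for t in store
                  if all(q is None or q == v for q, v in zip(pattern, t)))

def interesting_predicates(store, min_subjects=2):
    """A crude interestingness cue: predicates shared by several subjects,
    i.e. properties that integrate facts across entities."""
    subjects = {}
    for s, p, _ in store:
        subjects.setdefault(p, set()).add(s)
    return sorted(p for p, ss in subjects.items() if len(ss) >= min_subjects)
```

Because every source is reduced to the same (subject, predicate, object) shape, triples extracted from different texts merge into one store, which is the data-integration property the abstract relies on.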

Keywords: RDF, interestingness, knowledge base, semantic data

Procedia PDF Downloads 166
25991 Interannual Variations in Snowfall and Continuous Snow Cover Duration in Pelso, Central Finland, Linked to Teleconnection Patterns, 1944-2010

Authors: M. Irannezhad, E. H. N. Gashti, S. Mohammadighavam, M. Zarrini, B. Kløve

Abstract:

Climate warming would increase rainfall by shifting the form of falling precipitation from snow to rain, and would accelerate the disappearance of snow cover by increasing snowmelt. Using temperature and precipitation data in a temperature-index snowmelt model, we evaluated the variability of snowfall and continuous snow cover duration (CSCD) during 1944-2010 at Pelso, central Finland. The Mann-Kendall non-parametric test determined that annual precipitation increased by 2.69 mm/year (p<0.05) during the study period, but found no clear trend in annual temperature. Annual rainfall and snowfall both increased, by 1.67 and 0.78 mm/year (p<0.05), respectively. CSCD was generally about 205 days, from 14 October to 6 May, and showed no clear trend over Pelso. Spearman's rank correlation showed the most significant relationships of annual snowfall with the East Atlantic (EA) pattern, and of CSCD with the East Atlantic/West Russia (EA/WR) pattern. Increased precipitation without warming temperatures caused rainfall and snowfall to increase, while having no effect on CSCD.
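
The temperature-index (degree-day) snowmelt model the study uses can be sketched as follows. The rain/snow threshold and degree-day factor are illustrative defaults, not the calibrated Pelso parameters:

```python
def temperature_index_snow(temps_c, precip_mm, t_thresh=0.0, ddf=3.0):
    """Degree-day snow model: precipitation falls as snow at or below
    t_thresh (deg C); the pack melts at ddf mm of water equivalent per
    positive degree-day. Returns (daily snowfall, daily snowpack SWE)."""
    pack, snowfall, swe = 0.0, [], []
    for t, p in zip(temps_c, precip_mm):
        snow = p if t <= t_thresh else 0.0  # partition rain vs snow
        pack += snow
        pack -= min(pack, ddf * max(t - t_thresh, 0.0))  # degree-day melt
        snowfall.append(snow)
        swe.append(pack)
    return snowfall, swe

def continuous_snow_cover_duration(swe):
    """CSCD: longest unbroken run of days with snow on the ground."""
    best = run = 0
    for s in swe:
        run = run + 1 if s > 0 else 0
        best = max(best, run)
    return best
```

Running this over daily temperature and precipitation series yields exactly the two derived variables the abstract analyzes for trends: annual snowfall totals and CSCD.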

Keywords: variations, snowfall, snow cover duration, temperature-index snowmelt model, teleconnection patterns

Procedia PDF Downloads 225
25990 Data Mining Practices: Practical Studies on the Telecommunication Companies in Jordan

Authors: Dina Ahmad Alkhodary

Abstract:

This study aimed to investigate the Data Mining practices of telecommunication companies in Jordan, from the viewpoint of the respondents. In order to achieve the goal of the study and test the validity of the hypotheses, the researcher designed a questionnaire to collect data from managers and staff members of the main departments in the researched companies. The results show the developmental stages of the telecommunication companies toward Data Mining.

Keywords: data, mining, development, business

Procedia PDF Downloads 500
25989 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest

Authors: Peter Baji

Abstract:

In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although there are different public-sector responses for decreasing traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic asset, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with different methods has a long history in the transport sciences, but until recently, there was not sufficient data for evaluating road traffic flow patterns at the scale of the entire road system of a larger urban area. European cities (e.g., London, Stockholm, Milan) in which congestion charges have already been introduced designated a particular downtown zone for paying, but this protects only the users and inhabitants of the CBD (Central Business District) area. Through the use of Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research contains three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown that is affected by the congestion charge plans of the city. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research was collected with the help of Google's Distance Matrix API (Application Programming Interface), which provides estimated future traffic data via travel times between freely chosen coordinate pairs.
From the difference between free-flow and congested travel times, daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; moreover, there are frequently congested areas lying outside the downtown whose inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time, place-based congestion charge system that encourages car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting a very limited downtown area.
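
The hot-spot detection described above reduces to comparing free-flow and in-traffic travel times per road segment. In the sketch below, the segment names and the threshold are made up; the two input values per segment mirror the Distance Matrix API's duration and duration_in_traffic response fields (the API call itself is omitted):

```python
def congestion_ratio(duration_s, duration_in_traffic_s):
    """Hot-spot indicator: in-traffic travel time relative to free flow.
    1.0 means free flow; 1.5 means 50% longer under congestion."""
    return duration_in_traffic_s / duration_s

def hot_spots(segments, threshold=1.4):
    """segments: {name: (free_flow_seconds, in_traffic_seconds)}.
    Returns the segments whose congestion ratio meets the threshold."""
    return sorted(name for name, (d, dt) in segments.items()
                  if congestion_ratio(d, dt) >= threshold)
```

Sampling these ratios across the day per segment gives the place-based, time-varying congestion map on which the proposed charging scheme would be built.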

Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study

Procedia PDF Downloads 201
25988 Employment Discrimination on Civil Servant Recruitment

Authors: Li Lei, Jia Jidong

Abstract:

The right to employment is linked to people's livelihoods in our society. As a most important and representative part of the labor market, the employment of public servants always attracts much attention. But discrimination in the employment of public servants has always existed and has become a controversy in our society. The paper discusses this problem in four parts, as follows: First, the employment of public servants has a representative status in our labor market. The second part addresses discrimination in the employment of public servants. The third part concerns the right of equality and its significance. The last part analyzes the legal predicament surrounding discrimination in the employment of public servants in China.

Keywords: discrimination, employment of public servants, right of labor, law

Procedia PDF Downloads 408
25987 The Impact of System and Data Quality on Organizational Success in the Kingdom of Bahrain

Authors: Amal M. Alrayes

Abstract:

Data and system quality play a central role in organizational success, and the quality of any existing information system has a major influence on the effectiveness of overall system performance. Given the importance of system and data quality to an organization, it is relevant to highlight their importance for organizational performance in the Kingdom of Bahrain. This research aims to discover whether system quality and data quality are related, and to study the impact of system and data quality on organizational success. A theoretical model based on previous research is used to show the relationship between data quality, system quality, and organizational impact. We hypothesize, first, that system quality is positively associated with organizational impact; secondly, that system quality is positively associated with data quality; and finally, that data quality is positively associated with organizational impact. A questionnaire was conducted among public and private organizations in the Kingdom of Bahrain. The results show that there is a strong association between data and system quality, which affects organizational success.

Keywords: data quality, performance, system quality, Kingdom of Bahrain

Procedia PDF Downloads 499
25986 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods

Authors: Sohyoung Won, Heebal Kim, Dajeong Lim

Abstract:

Genomic prediction is an effective way to measure the abilities of livestock for breeding based on genomic estimated breeding values, statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability that a quantitative trait locus is in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes in genomic prediction efficiently, optimal ways of defining haplotypes must be found. In this study, 770K SNP chip data was collected from a Hanwoo (Korean cattle) population consisting of 2506 cattle. Haplotypes were first defined in three different ways using the 770K SNP chip data: based on 1) the length of haplotypes (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, the haplotypes defined by each method were set to comparable sizes; in each method, haplotypes defined to have an average of 5, 10, 20, or 50 SNPs were tested respectively. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the phenotype. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP, with few differences in reliability between the defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes including around 20 SNPs can be optimal markers for genomic prediction.
When the numbers of alleles generated by the haplotype defining methods were compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype defining methods showed similar performance. This suggests that defining haplotypes based on LD can reduce computational costs and allows efficient prediction. Finding optimal ways to define haplotypes and using haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
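
The second haplotype-defining method (a fixed number of SNPs per block) and the derivation of haplotype alleles can be sketched as follows. The 0/1 encoding and the toy data are illustrative, not the authors' pipeline:

```python
def alleles_per_block(haplotypes, snps_per_block):
    """haplotypes: phased 0/1 strings, one per chromosome copy, with SNPs
    in map order. Splits the SNPs into consecutive blocks of a fixed count
    (the last block may be shorter) and returns the set of distinct
    haplotype alleles observed in each block. These alleles, rather than
    single SNPs, become the predictor variables in the modified GBLUP."""
    n = len(haplotypes[0])
    blocks = [range(i, min(i + snps_per_block, n))
              for i in range(0, n, snps_per_block)]
    return [{tuple(h[j] for j in block) for h in haplotypes} for block in blocks]
```

Counting the distinct alleles each method produces, as done above per block, is exactly the comparison by which the abstract finds LD-based clustering to yield the fewest predictors.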

Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium

Procedia PDF Downloads 142
25985 Cloud Computing in Data Mining: A Technical Survey

Authors: Ghaemi Reza, Abdollahi Hamid, Dashti Elham

Abstract:

Cloud computing poses a diversity of challenges for data mining, arising from the dynamic structure of data distribution as opposed to the typical database scenarios of conventional architectures. Due to the immense number of users seeking data on a daily basis, there are serious security concerns for cloud providers as well as for the data providers who put their data in the cloud computing environment. Big data analytics uses compute-intensive data mining algorithms (hidden Markov models, MapReduce parallel programming, the Mahout project, the Hadoop distributed file system, K-Means and K-Medoids, Apriori) that require efficient high-performance processors to produce timely results, iterating to solve for or optimize model parameters. The challenges such operations encounter are establishing successful transactions with the existing virtual machine environment and keeping the databases under control. Several factors have driven the move from normal, centralized mining to distributed data mining. One approach is Software as a Service (SaaS), which uses multi-agent systems for implementing the different tasks of the system. There remain open problems in data mining based on cloud computing, including the design and selection of data mining algorithms.

Keywords: cloud computing, data mining, computing models, cloud services

Procedia PDF Downloads 482
25984 Cross-border Data Transfers to and from South Africa

Authors: Amy Gooden, Meshandren Naidoo

Abstract:

Genetic research and transfers of big data are not confined to a particular jurisdiction, but there is a lack of clarity regarding the legal requirements for importing and exporting such data. Using direct-to-consumer genetic testing (DTC-GT) as an example, this research assesses the status of data sharing into and out of South Africa (SA). While SA laws cover the sending of genetic data out of SA, prohibiting such transfer unless a legal ground exists, the position where genetic data comes into the country depends on the laws of the country from where it is sent – making the legal position less clear.

Keywords: cross-border, data, genetic testing, law, regulation, research, sharing, South Africa

Procedia PDF Downloads 128
25983 Relationship between Conformity to Masculine Role Norms and Depression in Vietnamese Male Students in College

Authors: To Que Nga

Abstract:

College-bound males may experience considerable maladjustment during the crucial developmental transition between high school and college. By participating in stereotypically male actions, men may feel under pressure to "prove" their masculinity, which may be harmful to their general well-being. Although adherence to multidimensional male standards has been linked to worse mental health, no research has considered the impact of these norms on college men's potential depressive symptoms. Longitudinally examining college men's adherence to multidimensional masculine standards can provide a viable theoretical framework to explain within-group variation in depression symptomatology. This article gives an overview of a recent study on the connection between masculine norms and depression among Vietnamese men in college. The study included 208 males from different Hanoi colleges. Male norms were evaluated at the start of their first semester, and depressive symptomatology was evaluated six months after the initial round of data collection. Men who endorsed the male norms of Self-Reliance, Playboy, and Power Over Women showed a positive relationship between masculine norms and depression scores. This study is the first to examine the impact of multidimensional masculine norms on college men's depressive symptomatology. The findings imply that professionals who work with males should consider assessing whether their clients conform to particular masculine standards and investigating how these could be affecting their present mental health.

Keywords: masculinity, conformity to masculinity, depression, psycho-social issues, men, college

Procedia PDF Downloads 71
25982 The Study of Security Techniques on Information System for Decision Making

Authors: Tejinder Singh

Abstract:

An information system (IS) moves data across different levels and in different directions for decision making and data operations. Data can be compromised in various ways, such as by manual or technical errors, data tampering, or loss of integrity. The security systems of an IS, such as firewalls, are affected by these kinds of violations. The flow of data among the various levels of an information system is handled by the networking system, where data travels in the form of packets or frames. To protect these packets from unauthorized access and virus attacks, and to maintain integrity, network security is an important factor. Various security techniques are used to protect the data from piracy. This paper presents the various security techniques and describes different harmful attacks with the help of detailed data analysis. It will be beneficial for organizations seeking to make their systems more secure and effective for future decision making.

Keywords: information systems, data integrity, TCP/IP network, vulnerability, decision, data

Procedia PDF Downloads 310
25981 Data Integration with Geographic Information System Tools for Rural Environmental Monitoring

Authors: Tamas Jancso, Andrea Podor, Eva Nagyne Hajnal, Peter Udvardy, Gabor Nagy, Attila Varga, Meng Qingyan

Abstract:

The paper deals with the conditions and circumstances of integrating remotely sensed data for rural environmental monitoring purposes. The main task is to make decisions during the integration process when the data sources have different resolutions, locations, spectral channels, and dimensions. In order to know exactly what integration and data fusion are possible, it is necessary to know the properties (metadata) that characterize the data. The paper explains the joining of these data sources through their attribute data using a sample project. The resulting product will be used for rural environmental analysis.

Keywords: remote sensing, GIS, metadata, integration, environmental analysis

Procedia PDF Downloads 123
25980 Assessment of Reservoir Quality and Heterogeneity in Middle Buntsandstein Sandstones of Southern Netherlands for Deep Geothermal Exploration

Authors: Husnain Yousaf, Rudy Swennen, Hannes Claes, Muhammad Amjad

Abstract:

In recent years, the Lower Triassic Main Buntsandstein sandstones in the southern Netherlands basins have become a point of interest for their deep geothermal potential. To identify the most suitable reservoir for geothermal exploration, the diagenesis and the factors affecting reservoir quality, such as porosity and permeability, are assessed by combining point-counted petrographic data with conventional core analysis. The depositional environments play a significant role in determining the distribution of lithofacies, cement, clays, and grain sizes. The position in the basin and the proximity to the source areas determine the lateral variability of depositional environments, while their stratigraphic distribution is linked to both local topography and climate: high humidity leads to fluvial deposition and highly arid periods lead to aeolian deposition. The Middle Buntsandstein sandstones in the southern part of the Netherlands show high porosity and permeability in most sandstone intervals. There are various controls on reservoir quality in the examined sandstone samples. Grain size and total quartz content are the primary factors affecting reservoir quality. Conversely, carbonate and anhydrite cement, clay clasts, and intergranular clay represent local controls and cannot be applied on a regional scale. Similarly, enhanced secondary porosity due to feldspar dissolution is locally restricted and minor. The analysis of textural, mineralogical, and petrophysical data indicates that the aeolian and fluvial sandstones represent a heterogeneous reservoir system. The ephemeral fluvial deposits have an average porosity and permeability of <10% and <1 mD, respectively, while the aeolian sandstones exhibit values of >18% and >100 mD.

Keywords: reservoir quality, diagenesis, porosity, permeability, depositional environments, Buntsandstein, Netherlands

Procedia PDF Downloads 66
25979 Analysis of Genomics Big Data in Cloud Computing Using Fuzzy Logic

Authors: Mohammad Vahed, Ana Sadeghitohidi, Majid Vahed, Hiroki Takahashi

Abstract:

In the genomics field, huge amounts of data are produced by next-generation sequencers (NGS). Data volumes are growing very rapidly; it is postulated that more than one billion bases will be produced per year in 2020, a growth rate much faster than Moore's law in computer technology. This makes it more difficult to deal with genomics data: storing the data, searching it, and finding the hidden information in it. An analysis platform for genomics big data is required. Newly developed cloud computing enables us to deal with big data more efficiently. Hadoop is one such distributed computing framework and forms the core of Big Data as a Service (BDaaS). Although many services (e.g., Amazon) have adopted this technology, there are few applications in the biology field. Here, we propose a new algorithm to deal more efficiently with genomics big data, e.g., sequencing data. Our algorithm consists of two parts: first, BDaaS is applied for handling the data more efficiently; second, a hybrid method of MapReduce and fuzzy logic is applied for data processing. This step can be parallelized in implementation. Our algorithm has great potential in the computational analysis of genomics big data, e.g., de novo genome assembly and sequence similarity search. We discuss our algorithm and its feasibility.
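
The MapReduce half of the proposed hybrid can be illustrated with k-mer counting, a common primitive behind sequence similarity search and de novo assembly, paired with a toy fuzzy membership function. This is a sketch of the pattern, not the authors' algorithm; the k value and the membership thresholds are illustrative:

```python
from collections import Counter
from itertools import chain

def map_kmers(read, k=3):
    """Map: emit every length-k substring (k-mer) of one sequencing read.
    In a real deployment each mapper would process one shard of reads."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def reduce_counts(mapped):
    """Reduce: aggregate the k-mer occurrences shuffled from all mappers."""
    return Counter(chain.from_iterable(mapped))

def fuzzy_abundance(count, low=2, high=10):
    """Toy fuzzy membership in the set 'abundant k-mer': 0 below `low`,
    1 above `high`, and linear in between, instead of a hard cutoff."""
    return min(1.0, max(0.0, (count - low) / (high - low)))
```

The fuzzy step replaces a crisp frequency threshold with a graded membership, which is the kind of soft decision the abstract proposes to combine with MapReduce's parallel aggregation.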

Keywords: big data, fuzzy logic, MapReduce, Hadoop, cloud computing

Procedia PDF Downloads 300
25978 Forthcoming Big Data on Smart Buildings and Cities: An Experimental Study on Correlations among Urban Data

Authors: Yu-Mi Song, Sung-Ah Kim, Dongyoun Shin

Abstract:

Cities are complex systems of diverse and intertangled activities. These activities and their complex interrelationships create diverse urban phenomena, which have considerable influence on the lives of citizens. This research aimed to develop a method to reveal the causes and effects among diverse urban elements in order to enable a better understanding of urban activities and, from there, better urban planning strategies. Specifically, this study was conducted to solve a data-recommendation problem found on a Korean public data homepage. First, a correlation analysis was conducted to find the correlations among random urban datasets. Then, based on the results of that correlation analysis, a weighted data network of each urban dataset was provided to users. It is expected that the weights of urban data thereby obtained will provide insights into cities and show how diverse urban activities influence each other and induce feedback.
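
The correlation-to-weighted-network step can be sketched as follows; the dataset names, sample values, and cutoff are illustrative, and correlation is of course association rather than the causality the abstract ultimately seeks:

```python
import numpy as np

def correlation_network(names, series, min_abs_corr=0.5):
    """Build weighted edges between urban datasets from pairwise Pearson
    correlations. Each row of `series` is one dataset observed over the
    same time periods; edges with |r| below the cutoff are dropped."""
    r = np.corrcoef(series)
    edges = {}
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(r[i, j]) >= min_abs_corr:
                edges[(names[i], names[j])] = round(float(r[i, j]), 3)
    return edges
```

Given a queried dataset, its incident edges ranked by weight yield the recommendation list described in the abstract.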

Keywords: big data, machine learning, ontology model, urban data model

Procedia PDF Downloads 422
25977 Data-driven Decision-Making in Digital Entrepreneurship

Authors: Abeba Nigussie Turi, Xiangming Samuel Li

Abstract:

Data-driven business models are more typical of established businesses than of early-stage startups striving to penetrate a market. This paper provides an extensive discussion of the principles of data analytics for early-stage digital entrepreneurial businesses. Here, we develop a data-driven decision-making (DDDM) framework that applies to startups prone to multifaceted barriers such as poor data access and technical and financial constraints, to name a few. The startup DDDM framework proposed in this paper is novel in encompassing startup data analytics enablers and metrics that align with startups' business models, ranging from customer-centric product development to servitization, the future of modern digital entrepreneurship.

Keywords: startup data analytics, data-driven decision-making, data acquisition, data generation, digital entrepreneurship

Procedia PDF Downloads 330
25976 Discrepant Views of Social Competence and Links with Social Phobia

Authors: Pamela-Zoe Topalli, Niina Junttila, Päivi M. Niemi, Klaus Ranta

Abstract:

Adolescents’ biased perceptions of their social competence (SC), whether negative or positive, influence their socioemotional adjustment, including early symptoms of social phobia (now referred to as Social Anxiety Disorder, SAD). Despite the importance of biased self-perceptions in adolescents’ psychosocial adjustment, the extent to which discrepancies between self- and others’ evaluations of one’s SC are linked to social phobic symptoms remains unclear in the literature. This study examined the perceptual discrepancy profiles between self- and peers’ as well as between self- and teachers’ evaluations of adolescents’ SC, and the interrelations of these profiles with self-reported social phobic symptoms. The participants were 390 3rd graders (15 years old) of Finnish lower secondary school (50.8% boys, 49.2% girls). In contrast to the variable-centered approaches mainly used in previous studies of this subject, this study used latent profile analysis (LPA), a person-centered approach that can identify risk profiles by capturing the heterogeneity within a population and classifying individuals into groups. LPA revealed five classes of discrepancy profiles: i) extremely negatively biased perceptions of SC, ii) negatively biased perceptions of SC, iii) quite realistic perceptions of SC, iv) positively biased perceptions of SC, and v) extremely positively biased perceptions of SC. Adolescents with extremely negatively biased and negatively biased perceptions of their own SC reported the highest numbers of social phobic symptoms, whereas adolescents with quite realistic, positively biased, and extremely positively biased perceptions reported the lowest. The results point to negatively and extremely negatively biased perceptions as possible contributors to social phobic symptoms. Moreover, the association of quite realistic perceptions with a low number of social phobic symptoms indicates their potential protective power against social phobia. Finally, positively and extremely positively biased perceptions of SC were negatively associated with social phobic symptoms in this study; however, the profile of extremely positively biased perceptions might also be linked to externalizing problems such as antisocial behavior (e.g., disruptive impulsivity). The current findings highlight the importance of considering discrepancies between self- and others’ perceptions of one’s SC in clinical and research efforts. Interventions designed to prevent or moderate social phobic symptoms need to take individual needs into account rather than aiming for uniform treatment. Implications and future directions are discussed.
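The self-other discrepancy idea can be illustrated with a small standard-library Python sketch. Note that this is not LPA itself (LPA fits a latent mixture model); the sketch merely standardizes self-minus-peer rating differences and buckets them into bands named after the five profiles. All ratings and cut-off values are hypothetical.

```python
import statistics

# Hypothetical self- and peer-ratings of social competence (1-5 scale).
self_ratings = [4.5, 2.0, 3.0, 4.0, 1.5, 3.5]
peer_ratings = [3.0, 3.5, 3.0, 2.5, 3.5, 3.4]

# Raw discrepancy: positive = self-view more favorable than peers' view.
raw = [s - p for s, p in zip(self_ratings, peer_ratings)]

# Standardize so the bands are comparable across samples.
mu, sd = statistics.mean(raw), statistics.stdev(raw)
z = [(d - mu) / sd for d in raw]

def band(zscore):
    # Crude z-score cut-offs standing in for the LPA-derived profiles.
    if zscore <= -1.5: return "extremely negatively biased"
    if zscore <= -0.5: return "negatively biased"
    if zscore < 0.5:   return "quite realistic"
    if zscore < 1.5:   return "positively biased"
    return "extremely positively biased"

profiles = [band(v) for v in z]
```

A real LPA would estimate the number and shape of the profiles from the data rather than impose fixed cut-offs; the sketch only conveys what a "discrepancy profile" measures.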

Keywords: adolescence, latent profile analysis, perceptual discrepancies, social competence, social phobia

Procedia PDF Downloads 248
25975 Impact of Organizational Citizenship Behavior on Employee Performance: Mediating Role of Counterproductive Work Behavior in Hotel Industry of Pakistan

Authors: Kashif Mahmood, Tehreem Fatima, Adeel Hassan

Abstract:

Firms are always concerned with their performance, which is directly linked to their employees’ performance. In pursuit of this goal, a number of studies have been conducted, among them work on Organizational Citizenship Behavior (OCB) and Counterproductive Work Behavior (CPWB). This study investigates the role of OCB, in terms of altruism and conscientiousness, in an employee’s job performance, with the mediating role of CPWB, in terms of sabotage and withdrawal, among employees of the hotel industry in Pakistan. A quantitative method was used, following a deductive approach within a positivist paradigm: a survey was conducted through self-administered questionnaires, and data were collected from employees working in the hotel industry of Pakistan. The top 10 hotels of the Lahore region, Punjab, were selected as the population, and 500 questionnaires were distributed among their employees using a stratified random sampling technique. A positive impact of OCB on an employee’s job performance was found, and CPWB was found to fully mediate the relationship between OCB and job performance. The study is important for practitioners because the hotel industry is growing at an enormous rate and employee behavior is always a concern, specifically in emerging markets, due to the exploitation of employees at the workplace; the findings can therefore be helpful for practitioners and policy makers.
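The mediation logic (OCB → CPWB → performance) can be sketched with the classic product-of-coefficients approach: path a (predictor to mediator), paths b and c' (mediator and predictor to outcome, jointly), and the indirect effect a·b. The scores below are fabricated for illustration, and this plain least-squares sketch stands in for whatever survey-analysis software the authors actually used.

```python
import statistics

# Fabricated illustrative scores: OCB (X), CPWB reverse-scored (M), performance (Y).
X = [1, 2, 3, 4, 5, 6, 7, 8]
M = [2, 1, 4, 3, 6, 5, 8, 7]
Y = [2, 2, 3, 4, 5, 5, 7, 8]

def center(v):
    mu = statistics.mean(v)
    return [x - mu for x in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, m, y = center(X), center(M), center(Y)

# Path a: slope of the mediator M regressed on X.
a_path = dot(x, m) / dot(x, x)

# Paths b (M -> Y) and c' (direct X -> Y): regress Y on X and M jointly,
# solving the 2x2 normal equations by Cramer's rule.
sxx, smm, sxm = dot(x, x), dot(m, m), dot(x, m)
sxy, smy = dot(x, y), dot(m, y)
det = sxx * smm - sxm * sxm
c_prime = (sxy * smm - smy * sxm) / det   # direct effect
b_path = (sxx * smy - sxm * sxy) / det    # mediator's effect

indirect = a_path * b_path                # mediated (indirect) effect
total = dot(x, y) / dot(x, x)             # total effect = c_prime + indirect
```

Full mediation, as reported in the study, corresponds to c' shrinking toward zero once the mediator is in the model; the identity total = c' + a·b holds exactly in this simple-OLS setting.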

Keywords: organizational citizenship behavior, counterproductive work behavior, employee performance, altruism, conscientiousness, sabotage, withdrawal, hotel industry

Procedia PDF Downloads 234
25974 Woody Carbon Stock Potentials and Factors Affecting Their Storage in Munessa Forest, Southern Ethiopia

Authors: Mojo Mengistu Gelasso

Abstract:

Tropical forests are considered the most important forest ecosystems for mitigating climate change, as they sequester large amounts of carbon. The potential carbon stock of a forest can be influenced by many factors, so studying these factors is crucial for understanding the determinants of woody carbon storage. This study evaluated the potential woody carbon stock of the Munessa dry Afromontane forest and how it varies across plant community types as well as along altitudinal, slope, and aspect gradients. Vegetation data were collected using systematic sampling: five line transects were established along the altitudinal gradient, with 100 m between two consecutive transect lines, and on each transect, 10 quadrats (20 x 20 m), separated by 200 m, were established. Woody carbon was estimated using an appropriate allometric equation formulated for tropical forests, and the data were analyzed using one-way ANOVA in R. The results showed that the total woody carbon stock of the Munessa forest was 210.43 t/ha. The analysis of variance revealed that woody carbon density varied significantly with environmental factors, while community type had no significant effect. The highest mean carbon stock was found at middle altitudes (2367-2533 m a.s.l.), on lower slopes (0-13%), and on west-facing aspects. The Podocarpus falcatus-Croton macrostachyus community type also contributed a higher woody carbon stock, as it was dominated by larger tree size classes and older trees. Overall, the woody carbon sequestration potential in this study was strongly associated with environmental variables. Additionally, the uneven distribution of species with larger diameter at breast height (DBH) in the study area might be linked to anthropogenic factors, as the current forest growth indicates characteristics of a secondary forest. Therefore, our study suggests that developing and implementing a sustainable forest management plan is necessary to increase the carbon sequestration potential of this forest and mitigate climate change.
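The study ran its one-way ANOVA in R; for readers who want to see what that test computes, here is a standard-library Python sketch of the same F-statistic on hypothetical carbon-density plots grouped by altitude class (group names and values are invented, not the study's data).

```python
import statistics

# Hypothetical woody carbon densities (t/ha) per plot, by altitude class.
groups = {
    "lower":  [150.2, 161.8, 148.5, 157.0],
    "middle": [238.4, 251.9, 244.7, 260.1],
    "upper":  [180.3, 175.6, 192.2, 184.9],
}

def one_way_anova(groups):
    all_vals = [v for g in groups.values() for v in g]
    grand = statistics.mean(all_vals)
    # Between-group sum of squares: group means vs. the grand mean.
    ssb = sum(len(g) * (statistics.mean(g) - grand) ** 2
              for g in groups.values())
    # Within-group sum of squares: plots vs. their own group mean.
    ssw = sum(sum((v - statistics.mean(g)) ** 2 for v in g)
              for g in groups.values())
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    f = (ssb / df_between) / (ssw / df_within)
    return f, df_between, df_within

f_stat, df_b, df_w = one_way_anova(groups)
```

A large F (relative to the F distribution with the returned degrees of freedom) indicates, as in the study, that carbon density differs significantly across the classes.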

Keywords: munessa forest, woody carbon stock, environmental factors, climate mitigation

Procedia PDF Downloads 83
25972 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to terrorist networks elsewhere in the world. Consequently, agencies dedicated to protecting wildlife habitats face a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user, to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in the hope of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks; the model is both easy to train and performs well when compared to other models. We demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and produce significant predictions across a varied data set, and we apply these methods to improve the accuracy of the adopted prediction models (logistic regression, support vector machines, etc.). Finally, we assess the performance of the models and the accuracy of our data imputation methods by training on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research groups at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, that applies game theory and machine learning algorithms to develop more efficient ways of reducing poaching: this research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered from Ugandan rain forest park rangers. Second, we consider the effect of data imputation both on the performance of the various algorithms and on the general accuracy of the method itself when applied to a dependent variable with a large number of missing observations. Third, we provide an alternate approach to predicting the probability of observing poaching both by season and by month. The results of this research are very promising. We conclude that by using stochastic gradient boosting to predict observations of non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire season, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous season-level prediction schedules.

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 294
25971 Bringing Together Student Collaboration and Research Opportunities to Promote Scientific Understanding and Outreach Through a Seismological Community

Authors: Michael Ray Brunt

Abstract:

China has been the site of some of the most significant earthquakes in history; however, earthquake monitoring has long been the province of universities and research institutions. The China Digital Seismographic Network (CDSN) was initiated in 1983 and improved significantly during 1992-1993. Data from the CDSN are widely used by government and research institutions but are generally not readily accessible to middle and high school students. An educational seismic network in China is therefore needed to provide collaboration and research opportunities for students, engage students around the country in the scientific understanding of earthquake hazards and risks, and promote community awareness. In 2022, the Tsinghua International School (THIS) Seismology Team, made up of enthusiastic students and facilitated by two experienced teachers, was established. The team’s objective is to install seismographs in schools throughout China, thus creating an educational seismic network, the THIS Educational Seismic Network (THIS-ESN), that shares data and facilitates collaboration. The THIS-ESN initiative will enhance education and outreach in China about earthquake risks and hazards, introduce seismology to a wider audience, stimulate interest in research among students, and develop students’ programming, data collection, and analysis skills. It will also encourage and inspire young minds to pursue science, technology, engineering, the arts, and math (STEAM) career fields. The THIS-ESN uses small, low-cost RaspberryShake seismographs as a powerful tool linked into a global network, giving schools and the public access to real-time seismic data from across China, increasing earthquake monitoring capabilities in the respective areas, and adding to the available data sets regionally and worldwide, helping create a denser seismic network. The RaspberryShake seismograph is compatible with free seismic-data viewing platforms such as SWARM, and RaspberryShake web programs and mobile apps are designed specifically for teaching seismology and seismic data interpretation, providing opportunities to enhance understanding. The RaspberryShake is powered by an operating system embedded in the Raspberry Pi, which makes it an easy platform for teaching students basic computer communication concepts by using processing tools to investigate, plot, and manipulate data. The THIS Seismology Team believes strongly in creating opportunities for committed students to become part of the seismological community by engaging in the analysis of real-time scientific data with tangible outcomes. Students will feel proud of the important work they are doing to understand the world around them and will become advocates, spreading their knowledge back into their homes and communities and helping to improve overall community resilience. We trust that, in studying the results their seismograph stations yield, students will grasp how subjects like physics and computer science apply in real life, and that by spreading this information, students across the country will come to appreciate how and why earthquakes bear on their lives, develop practical skills in STEAM, and engage in the global seismic monitoring effort. By providing such an opportunity to schools across the country, we are confident that we will be an agent of change for society.
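As one example of the kind of analysis students in such a network might perform, here is a standard-library Python sketch of a classic STA/LTA (short-term average over long-term average) event detector run on a synthetic trace. STA/LTA is our illustrative choice of a common introductory detection method, not something specified by THIS-ESN, and the synthetic signal stands in for real RaspberryShake samples.

```python
import math

# Synthetic "seismogram": low-amplitude background with a burst in the
# middle, standing in for samples students would pull from a station.
signal = [0.1 * math.sin(0.7 * i) for i in range(200)]
for i in range(100, 120):          # simulated event
    signal[i] += 2.0

def sta_lta(data, short=5, long=50):
    """Ratio of short-term to long-term average absolute amplitude;
    a sudden jump in the ratio flags a candidate event."""
    ratios = []
    for i in range(long, len(data)):
        sta = sum(abs(v) for v in data[i - short:i]) / short
        lta = sum(abs(v) for v in data[i - long:i]) / long
        ratios.append(sta / lta if lta else 0.0)
    return ratios

ratios = sta_lta(signal)
# First sample index where the ratio crosses a trigger threshold of 4.0.
trigger_on = next(i for i, r in enumerate(ratios) if r > 4.0) + 50
```

Production tools (e.g., ObsPy's trigger routines) implement recursive variants of the same idea; the exercise shows students how a detection threshold turns raw amplitudes into an event catalog.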

Keywords: collaboration, outreach, education, seismology, earthquakes, public awareness, research opportunities

Procedia PDF Downloads 73
25970 Crossing Borders: In Research and Business Communication

Authors: Edith Podhovnik

Abstract:

Cultures play a role both in business communication and in research. Taking language in international business as an example, this paper addresses the question of how the research cultures of management research and linguistics, as well as cultures as such, can be linked. After reviewing existing research on language in international business, the paper approaches communication in international business from a linguistic angle and attempts to explain communication issues in businesses on the basis of linguistic research. The paper thus takes a step into cross-disciplinary research, combining management research with linguistics.

Keywords: language in international business, sociolinguistics, ethnopragmatics, cultural scripts

Procedia PDF Downloads 635
25969 Cryptographic Protocol for Secure Cloud Storage

Authors: Luvisa Kusuma, Panji Yudha Prakasa

Abstract:

Cloud storage, a subservice of infrastructure as a service (IaaS) in cloud computing, is a model of networked storage in which data are stored on remote servers. In this paper, we propose a secure cloud storage system consisting of two main components: a client, the user of the cloud storage service, and a server, which provides the service. For this system, we propose protocol schemes that guard against security attacks on data transmission: a login protocol, an upload-data protocol, a download protocol, and a push-data protocol. These implement a hybrid cryptographic mechanism in which data are encrypted before being sent to the cloud, so the cloud storage provider neither knows nor can analyze users' data, because there is no correspondence between the data and the user.
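The encrypt-before-upload idea can be illustrated with a standard-library Python toy. This sketch is emphatically NOT secure cryptography (a hash-counter keystream with an HMAC tag, and no key wrapping); a real deployment would use AES-GCM with an RSA- or ECIES-wrapped session key via a vetted library. It only demonstrates the property the abstract describes: the server stores an opaque blob it cannot read, and tampering is detectable.

```python
import hashlib
import hmac
import os

def keystream(key, nonce, length):
    """Derive a pseudo-random keystream by counter-mode hashing (toy only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in
               zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity tag
    return nonce + ct + tag                                   # stored blob

def decrypt(key, blob):
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("ciphertext tampered with")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)                         # kept client-side only
blob = encrypt(key, b"user file contents")   # what the server stores
assert decrypt(key, blob) == b"user file contents"
```

Because only the client holds the key, the provider sees just the nonce, ciphertext, and tag, matching the paper's goal that the provider cannot analyze user data.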

Keywords: cloud storage, security, cryptographic protocol, artificial intelligence

Procedia PDF Downloads 360
25968 Decentralized Data Marketplace Framework Using Blockchain-Based Smart Contract

Authors: Meshari Aljohani, Stephan Olariu, Ravi Mukkamala

Abstract:

Data is essential for enhancing the quality of life, and its value creates opportunities for users to profit from data sales and purchases. Users in data marketplaces, however, must share and trade data in a secure and trusted environment while maintaining their privacy. The first main contribution of this paper is to identify the enabling technologies and the challenges facing the development of decentralized data marketplaces. The second main contribution is to propose a decentralized data marketplace framework based on blockchain technology. The proposed framework enables sellers and buyers to transact with greater confidence. Using a security deposit, the system implements a unique approach to enforcing honesty in data exchange among anonymous individuals. The system imposes a time window before a transaction is considered complete; within this window, users can submit disputes to arbitrators, who review them and respond with a decision. Use cases are presented to demonstrate how these technologies help data marketplaces handle such issues and challenges.
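The deposit/dispute-window mechanics can be sketched as plain Python mirroring what a marketplace smart contract would encode on-chain. All names, amounts, and payout rules here are illustrative assumptions, not the paper's specification.

```python
import time

class EscrowTrade:
    """Minimal sketch of deposit + dispute-window escrow logic."""

    def __init__(self, seller, buyer, price, deposit, window_s=3600):
        self.seller, self.buyer = seller, buyer
        self.price, self.deposit = price, deposit
        self.deadline = time.time() + window_s   # end of dispute window
        self.disputed = False
        self.settled = False

    def raise_dispute(self, party):
        """Either trade party may dispute, but only before the deadline."""
        if time.time() > self.deadline:
            raise RuntimeError("dispute window closed")
        if party not in (self.seller, self.buyer):
            raise PermissionError("only trade parties may dispute")
        self.disputed = True

    def arbitrate(self, favor_buyer):
        """Arbitrator decision: refund the buyer (seller forfeits deposit)
        or release payment plus deposit to the seller."""
        assert self.disputed, "no open dispute"
        self.settled = True
        return ({self.buyer: self.price + self.deposit} if favor_buyer
                else {self.seller: self.price + self.deposit})

    def finalize(self):
        """After the window passes without dispute, pay the seller."""
        if self.disputed or time.time() <= self.deadline:
            raise RuntimeError("cannot finalize yet")
        self.settled = True
        return {self.seller: self.price + self.deposit}
```

For example, a buyer who disputes within the window and wins arbitration recovers the price plus the forfeited deposit, while an undisputed trade finalizes automatically in the seller's favor; an on-chain version would replace the wall-clock deadline with block timestamps.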

Keywords: blockchain, data, data marketplace, smart contract, reputation system

Procedia PDF Downloads 160