Search results for: market data
25690 Red Herring Innovation: Twelve Paradoxes of Innovation Ecosystem in a Closed Society
Authors: Mohammad Hossein Badamchi
Abstract:
In Iran, as in other developing countries, building innovation and entrepreneurship ecosystems around universities, led by the government, has become the newly imported fashion of modernization and development in the 21st century. Over the past decade, statesmen, policy makers, university administrators, economists, and development theorists have spoken excitedly about the new "start-ups" that are expected to solve the country's economic problems and backwardness. However, a critical study of modernization practices in Iran suggests that this new trend suffers from the conventional deficiencies of 20th-century modernization planning. This article depicts the misunderstandings of a situation we may call "pseudo-innovation in a closed society" by presenting twelve paradoxes of this new system as it actually unfolds in Iran: (1) Innovation without freedom? The fiction of innovation in a patriarchal state. (2) Entrepreneurship without a free market? The fiction of entrepreneurship in a rentier state. (3) Ecosystem or state glasshouse? Can innovation be made and planned? (4) Innovation: epistemic or practical? How can academic innovation happen abstractly, out of context? (5) Risk and lucre: innovation to protect power and property? (6) The Silicon Valley mirage: what do the American and Iranian polities have in common? (7) Information or communication? ICT start-ups to restrict the internet. (8) The elite paradox: a new proletariat of the private sector, new governmental clerks, or a new path of brain drain? (9) Innovation or commercialization? Revisiting Schumpeterian creative destruction. (10) The friendship of jungle and fire: the paradox of public science and the market. (11) Innovation and revolution: the top-down versus bottom-up paradox in the Iranian experience. (12) Technology instead of civil society: the ultimate result of innovation in a closed society. By explaining these paradoxes we can gradually penetrate the real rationality of the pseudo-innovation ecosystem in a closed society, which can be understood as a neo-patriarchal reconstruction of traditional patriarchal politics, economy, and culture in Iran.
Keywords: innovation, critical sociology, modernisation, Iran, closed society
Procedia PDF Downloads 75
25689 Authorization of Commercial Communication Satellite Grounds for Promoting Turkish Data Relay System
Authors: Celal Dudak, Aslı Utku, Burak Yağlioğlu
Abstract:
Uninterrupted, continuous satellite communication throughout the whole orbit is becoming more indispensable every day. Data relay systems are developed and built for various high/low data rate information exchanges, such as TDRSS of the USA and EDRSS of Europe. In these missions, a couple of task-dedicated communication satellites exist. In this regard, a data relay system is proposed for Turkey that exchanges low data rate information (i.e., TTC) with Earth-observing LEO satellites by appointing commercial GEO communication satellites all over the world. First, the justification of this attempt is given, demonstrating duration enhancements in the link. The preference for RF communication over laser communication is also discussed. Then, the preferred GEO communication satellites - including TURKSAT4A, which already belongs to Turkey - are given, together with the coverage enhancements obtained through STK simulations and the corresponding link budget. A block diagram of the communication system on the LEO satellite is also given.
Keywords: communication, GEO satellite, data relay system, coverage
Procedia PDF Downloads 442
25688 The Development of Encrypted Near Field Communication Data Exchange Format Transmission in an NFC Passive Tag for Checking the Genuine Product
Authors: Tanawat Hongthai, Dusit Thanapatay
Abstract:
This paper presents the development of encrypted near field communication (NFC) Data Exchange Format transmission in an NFC passive tag to assess the feasibility of implementing genuine-product authentication. We organize the research on encryption and genuine-product checking into four major categories: concept, infrastructure, development, and applications. The results show that a passive NFC Forum Type 2 tag can be configured to be compatible with the NFC Data Exchange Format (NDEF) and can have part of its data updated automatically whenever an NFC field is present.
Keywords: near field communication, NFC data exchange format, checking the genuine product, encrypted NFC
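The abstract does not specify the cipher or record layout used on the tag; as a rough illustration of encrypting an authentication payload before it is written into an NDEF record, the following Python sketch uses authenticated AES encryption via pycryptodome. The key handling, payload format, and field contents are assumptions, not the authors' design.

```python
# Hypothetical sketch: encrypt a product-authentication payload before writing it
# into an NDEF record on an NFC Forum Type 2 tag. Cipher choice and record layout
# are assumptions; the paper does not specify them.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

secret_key = get_random_bytes(16)                 # shared between issuer and verifier app
plaintext = b"GENUINE|SN=00012345|BATCH=2016-07"  # illustrative product record

cipher = AES.new(secret_key, AES.MODE_EAX)        # authenticated encryption
ciphertext, tag = cipher.encrypt_and_digest(plaintext)

# The bytes actually stored in the NDEF payload: nonce + auth tag + ciphertext.
ndef_payload = cipher.nonce + tag + ciphertext

# Verifier side: decrypt and check the authentication tag.
nonce, tag, body = ndef_payload[:16], ndef_payload[16:32], ndef_payload[32:]
verifier = AES.new(secret_key, AES.MODE_EAX, nonce=nonce)
recovered = verifier.decrypt_and_verify(body, tag)
assert recovered == plaintext
```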
Procedia PDF Downloads 280
25687 Application of Pyridine-based Water-soluble Corrosion Inhibitor in Offshore Sweet Oil Pipeline
Authors: M. S. Yalfani, J. Kohzadi, P. Ghadimi, S. Sobhani, M. Ghadimi
Abstract:
The use of oil- and water-soluble corrosion inhibitors has long been established in Iranian oil and gas production systems. Imidazoline and its derivatives, known as conventional corrosion inhibitors, are used extensively. This type of product has shown significant performance and few side effects, allowing it to monopolize the inhibitor market in this region. However, the price growth of imidazolines, as well as the development of new lower-cost components with performance similar to or even higher than imidazoline, has eroded the exclusive market position of imidazoline-based products. In recent years, pyridine and its derivatives have challenged imidazoline owing to their remarkable anticorrosive properties and lower prices. Recently, we presented a formulated water-soluble inhibitor based on pyridine - an alkyl pyridine quaternary salt (APQS) - which successfully passed all laboratory tests and was eventually applied in an offshore sweet oil pipeline. The product achieved high corrosion protection (> 90%) in LPR measurements at low dosages of 15-25 ppm under severe corrosion conditions. Moreover, the laboratory results showed that the APQS molecule forms a strong and persistent bond with the metal surface. The product was then nominated for a field trial in an offshore sweet oil pipeline where the H2S partial pressure is < 0.05 psi and CO2 is 6.4 mol%. The three-month trial - extended to six months - resulted in remarkable internal protection obtained by continuous injection of 10 ppm of inhibitor: corrosion rates as low as 1 mpy, measured by both weight-loss corrosion coupons and online ER probes. In addition, no side effects, such as tight emulsions or stable foaming, were observed. The residual corrosion inhibitor was measured at the end of the pipeline to ensure full coverage of the inhibitor throughout the pipeline. Eventually, these promising results convinced the end user to consider pyridine-based inhibitors as a reliable alternative to imidazoline.
Keywords: corrosion inhibitor, pyridine, sweet oil, pipeline, offshore
Procedia PDF Downloads 12
25686 Data Hiding by Vector Quantization in Color Image
Authors: Yung Gi Wu
Abstract:
With the growth of computers and networks, digital data can be spread anywhere in the world quickly. In addition, digital data can easily be copied or tampered with, so security has become an important topic in the protection of digital data. A digital watermark is a method to protect the ownership of digital data, although embedding the watermark inevitably affects quality. In this paper, Vector Quantization (VQ) is used to embed a watermark into an image to achieve data hiding. This kind of watermarking is invisible, meaning that users will not be conscious of the existence of the embedded watermark even though the embedded image differs only slightly from the original. Meanwhile, VQ carries a heavy computational burden, so we adopt a fast VQ encoding scheme based on partial distortion search (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks hidden in the image can be gray-level, bi-level, or color images; text can also be embedded as a watermark. To test the robustness of the system, we use Photoshop to apply sharpening, cropping, and alteration, and check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist these three kinds of tampering in general cases.
Keywords: data hiding, vector quantization, watermark, color image
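The speed-up described rests on partial distortion search: while accumulating the squared error against a codeword dimension by dimension, the comparison is abandoned as soon as the running sum exceeds the best distortion found so far. Below is a minimal Python sketch of that search, plus a parity-based way of hiding one bit per index; the embedding rule is an assumption for illustration, not the paper's exact scheme.

```python
import numpy as np

def pds_nearest(codebook, block):
    """Partial distortion search: abandon a codeword once its running
    squared error already exceeds the best distortion seen so far."""
    best, best_idx = np.inf, -1
    for idx, codeword in enumerate(codebook):
        dist = 0.0
        for x, c in zip(block, codeword):
            dist += (x - c) ** 2
            if dist >= best:          # early abandon
                break
        else:                          # loop finished: this codeword is the new best
            best, best_idx = dist, idx
    return best_idx

def embed_bit(codebook, block, bit):
    """Illustrative (assumed) embedding rule: force the chosen index parity
    to match the watermark bit by searching only the matching half."""
    half = codebook[bit::2]            # codewords whose index parity equals `bit`
    local = pds_nearest(half, block)
    return local * 2 + bit             # map back to the full-codebook index

rng = np.random.default_rng(0)
codebook = rng.integers(0, 256, size=(256, 16)).astype(float)  # 4x4 pixel blocks
block = rng.integers(0, 256, size=16).astype(float)
index = embed_bit(codebook, block, bit=1)
print("index carrying watermark bit 1:", index)
```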
Procedia PDF Downloads 364
25685 Statistical Discrimination of Blue Ballpoint Pen Inks by Diamond Attenuated Total Reflectance (ATR) FTIR
Authors: Mohamed Izzharif Abdul Halim, Niamh Nic Daeid
Abstract:
Determining the source of pen inks used on a variety of documents is important for forensic document examiners. Ink examination is often performed to differentiate between inks in order to evaluate the authenticity of a document. A ballpoint pen ink consists of synthetic dyes (acidic and/or basic), pigments (organic and/or inorganic), and a range of additives. Inks of similar color may differ in composition and are frequently the subject of forensic examinations. This study focuses on blue ballpoint pen inks available on the market, as approximately 80% of questioned-document analyses reportedly involve ballpoint pen ink. Analytical techniques such as thin layer chromatography, high-performance liquid chromatography, UV-vis spectroscopy, luminescence spectroscopy, and infrared spectroscopy have been used in the analysis of ink samples. In this study, Diamond Attenuated Total Reflectance (ATR) FTIR is applied; it is straightforward and preferable in forensic science because it requires no sample preparation and minimal analysis time. The data obtained were further analyzed using multivariate chemometric methods, which extract more information based on the similarities and differences among samples in a dataset. The results indicate that some pens from the same manufacturer can be similar in composition, whereas distinct types can differ significantly.
Keywords: ATR FTIR, ballpoint, multivariate chemometric, PCA
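A typical chemometric workflow of the kind described, multivariate analysis of ATR-FTIR spectra, can be sketched as a principal component analysis of the absorbance matrix. The data shape, scaling step, and number of components below are illustrative assumptions only.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Placeholder spectra: rows = ink samples, columns = absorbance at each wavenumber.
rng = np.random.default_rng(0)
spectra = rng.random((40, 1800))

scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(spectra))

# Inks of similar formulation are expected to cluster together in score space,
# while chemically distinct formulations separate along the leading components.
print(scores[:5])
```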
Procedia PDF Downloads 457
25684 Serological Evidence of Enzootic Bovine Leukosis in Dairy Cattle Herds in the United Arab Emirates
Authors: Nabeeha Hassan Abdel Jalil, Lulwa Saeed Al Badi, Mouza Ghafan Alkhyeli, Khaja Mohteshamuddin, Ahmad Al Aiyan, Mohamed Elfatih Hamad, Robert Barigye
Abstract:
The present study was done to elucidate the prevalence of enzootic bovine leukosis (EBL) in the UAE; the seroprevalence of EBL was assessed in dairy herds from the Al Ain area of Abu Dhabi (AD) and in indigenous cattle at the Al Ain livestock market (AALM). Of the 949 sera tested by ELISA, 657 were from adult Holstein-Friesians from three farms and 292 from indigenous cattle at the AALM. Differences between the proportions of seropositive cattle were analyzed by the Marascuilo procedure, and questionnaire data on husbandry and biosecurity practices were evaluated. Overall, the aggregated farm and AALM data demonstrated a seroprevalence of 25.9%, compared with 37.0% for the study farms and 1.0% for the indigenous cattle. Additionally, the seroprevalence rates at farms #1, #2 and #3 were 54.7%, 0.0%, and 26.3%, respectively. Except for farm #2 versus the AALM, statistically significant differences were noted between the proportions of seropositive cattle for farms #1 and #2 (critical range, CR=0.0803), farms #1 and #3 (p=0.1069), farms #2 and #3 (CR=0.0707), farm #1 and the AALM (CR=0.0819), and farm #3 and the AALM (CR=0.0726). Also, the proportions of seropositive animals on farm #1 were 9.8%, 59.8%, 29.3%, and 1.2% in the 12-36, 37-72, 73-108, and 109-144-month-old age groups, respectively, compared with 21.5%, 60.8%, 15.2%, and 2.5% in the respective age groups for farm #3. On both farms and at the AALM, the 37-72-month-old age group showed the highest EBL seroprevalence, while all 57 cattle on farm #2 were seronegative. Additionally, farms #1 and #3 had 3,130 and 2,828 intensively managed Holstein-Friesian cattle, respectively, and all animals were routinely immunized against several diseases except EBL. On both farms #1 and #3, artificial breeding was practiced using semen sourced from the USA (farm #1) and from the USA and Canada (farm #3); all farms routinely quarantined new stock; farm #1 had previously imported dairy cattle from an unspecified country, and farm #3 from the Netherlands, Australia, and South Africa. While farm #1 provided no information on animal nutrition, farm #3 cited using hay, concentrates, and ad lib water. To the authors' best knowledge, this is the first serological evidence of EBL in the UAE and, as previously reported elsewhere, seroprevalence is comparatively higher in intensively managed dairy herds than in indigenous cattle. As two of the study farms previously sourced cattle and semen from overseas, biosecurity protocols need to be revisited to avoid inadvertent EBL incursion, and the possibility of regional transboundary disease spread also needs to be assessed. After the proposed molecular studies have provided additional data, the relevant UAE animal health authorities may need to develop evidence-based EBL control policies and programs.
Keywords: cattle, enzootic bovine leukosis, seroprevalence, UAE
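The Marascuilo procedure cited above compares every pair of seropositive proportions against a pair-specific critical range derived from the chi-square distribution. A short Python sketch follows; the group counts are placeholders for illustration, not the study's raw data.

```python
from itertools import combinations
from math import sqrt
from scipy.stats import chi2

# (seropositive, total) per group -- placeholder counts for illustration only.
groups = {"farm1": (82, 150), "farm2": (0, 57), "farm3": (118, 450), "AALM": (3, 292)}

k = len(groups)
crit = sqrt(chi2.ppf(0.95, df=k - 1))          # sqrt of the chi-square critical value

for (a, (xa, na)), (b, (xb, nb)) in combinations(groups.items(), 2):
    pa, pb = xa / na, xb / nb
    cr = crit * sqrt(pa * (1 - pa) / na + pb * (1 - pb) / nb)   # critical range
    verdict = "significant" if abs(pa - pb) > cr else "not significant"
    print(f"{a} vs {b}: |diff|={abs(pa - pb):.3f}, CR={cr:.3f} -> {verdict}")
```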
Procedia PDF Downloads 144
25683 Perception of Customers towards Service Quality: A Comparative Analysis of Organized and Unorganised Retail Stores (with Special Reference to Bhopal City)
Authors: Abdul Rashid, Varsha Rokade
Abstract:
Service quality within retail units is pivotal for satisfying customers and retaining them. This study of customer perception of service quality variables in retail aims to identify the dimensions and their impact on customers. An analytical study of the different retail service quality variables was carried out to understand the relationships between them. The study explores the factors that attract customers to organised and unorganised retail stores in the capital city of Madhya Pradesh, India. As organised retailers are seen as offering similar products in their outlets, improving service quality is seen as critical to ensuring competitive advantage over unorganised retailers. Data were collected through a structured questionnaire on a five-point Likert scale from existing walk-in customers of selected organised and unorganised retail stores in Bhopal City, Madhya Pradesh, India. The data were then analysed in the Statistical Package for the Social Sciences (SPSS) using factor analysis, percentage analysis, ANOVA, and chi-square tests. This study seeks the interrelationships between the various retail service quality dimensions, which will help retailers identify the steps needed to improve the overall quality of service. The findings thus prove helpful in understanding the service quality variables that should be considered by organised and unorganised retail stores in the capital city of Madhya Pradesh, India. The findings of this empirical research also reiterate the view that the dimensions of service quality in retail play an important role in enhancing customer satisfaction in a sector with high growth potential and tremendous opportunities in rapidly growing economies like India's. With the introduction of FDI in multi-brand retailing, a large number of international retail players are expected to enter the Indian market, which in turn will bring more competition to the retail sector. To benchmark themselves against global standards, Indian retailers will have to improve their service quality.
Keywords: organized retail, unorganised retail, retail service quality, service quality dimension
Procedia PDF Downloads 230
25682 Gender Gap in Returns to Social Entrepreneurship
Authors: Saul Estrin, Ute Stephan, Suncica Vujic
Abstract:
Background and research question: Gender differences in pay are present at all organisational levels, including at the very top. One possible way for women to circumvent organisational norms and discrimination is to engage in entrepreneurship because, as CEOs of their own organisations, entrepreneurs largely determine their own pay. While commercial entrepreneurship plays an important role in job creation and economic growth, social entrepreneurship has come to prominence because of its promise of addressing societal challenges such as poverty, social exclusion, or environmental degradation through market-based rather than state-sponsored activities. This raises the research question of whether social entrepreneurship might be a form of entrepreneurship in which the pay of men and women is the same, or at least more similar; that is to say, there is little or no gender pay gap. If the gender gap in pay persists at the top of social enterprises as well, which factors might explain these differences? Methodology: The Oaxaca-Blinder decomposition (OBD) is the standard approach to decomposing the gender pay gap based on the linear regression model. The OBD divides the gender pay gap into the 'explained' part due to differences in labour market characteristics (education, work experience, tenure, etc.) and the 'unexplained' part due to differences in the returns to those characteristics. The latter part is often interpreted as 'discrimination'. There are two issues with this approach. (i) In many countries there is a notable convergence in labour market characteristics across genders; hence the OBD method is no longer revealing, since the largest portion of the gap remains 'unexplained'. (ii) Adding covariates to a base model sequentially, either to test a particular coefficient's 'robustness' or to account for the 'effects' on this coefficient of adding covariates, can be problematic due to sequence-sensitivity when the added covariates are correlated. Gelbach's decomposition (GD) addresses the latter by using the omitted variables bias formula, which constructs a conditional decomposition and thus accounts for sequence-sensitivity when added covariates are correlated. We use GD to decompose the differences in pay (annual and hourly salary), size of the organisation (revenues), effort (weekly hours of work), and sources of finance (fees and sales, grants and donations, microfinance and loans, and investors' capital) between men and women leading social enterprises. Database: Our empirical work is made possible by a unique dataset collected using respondent-driven sampling (RDS) methods to address the problem that there is as yet no information on the underlying population of social entrepreneurs. The countries we focus on are the United Kingdom, Spain, Romania, and Hungary. Findings and recommendations: We confirm the existence of a gender pay gap between men and women leading social enterprises. This gap can be explained by differences in the accumulation of human capital, psychological and social factors, as well as cross-country differences. The results of this study contribute to a more rounded perspective, highlighting that although social entrepreneurship may be a highly satisfying occupation, it also perpetuates gender pay inequalities.
Keywords: Gelbach's decomposition, gender gap, returns to social entrepreneurship, values and preferences
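For readers unfamiliar with the baseline method that Gelbach's decomposition refines, a minimal two-fold Oaxaca-Blinder decomposition of a mean pay gap can be sketched as below. The synthetic data, variable names, and the choice of the male coefficient vector as the reference group are assumptions for illustration, not the paper's specification.

```python
import numpy as np
import statsmodels.api as sm

def oaxaca_blinder(y_m, X_m, y_f, X_f):
    """Two-fold decomposition of mean(y_m) - mean(y_f) using the male
    coefficients as the reference (a common, but not the only, choice)."""
    Xm, Xf = sm.add_constant(X_m), sm.add_constant(X_f)
    beta_m = sm.OLS(y_m, Xm).fit().params
    beta_f = sm.OLS(y_f, Xf).fit().params
    explained = (Xm.mean(axis=0) - Xf.mean(axis=0)) @ beta_m   # endowments part
    unexplained = Xf.mean(axis=0) @ (beta_m - beta_f)           # coefficients part
    return explained, unexplained

# Illustrative synthetic data: log pay on two characteristics (e.g. experience, education).
rng = np.random.default_rng(1)
X_m = rng.normal(size=(300, 2)) + [0.3, 0.1]
X_f = rng.normal(size=(300, 2))
y_m = 1.0 + X_m @ np.array([0.5, 0.3]) + rng.normal(scale=0.2, size=300)
y_f = 0.8 + X_f @ np.array([0.4, 0.3]) + rng.normal(scale=0.2, size=300)

explained, unexplained = oaxaca_blinder(y_m, X_m, y_f, X_f)
print(f"raw gap={y_m.mean() - y_f.mean():.3f}, explained={explained:.3f}, unexplained={unexplained:.3f}")
```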
Procedia PDF Downloads 244
25681 Impact of Financial Factors on Total Factor Productivity: Evidence from Indian Manufacturing Sector
Authors: Lopamudra D. Satpathy, Bani Chatterjee, Jitendra Mahakud
Abstract:
Rapid economic growth in terms of output and investment requires substantial growth in firms' total factor productivity (TFP), an indicator of an economy's technological change. The strong empirical relationship between financial sector development and economic growth clearly indicates that firms' financing decisions affect their levels of output via their investment decisions. This establishes a linkage between financial factors and the productivity growth of firms. To achieve smooth and continuous economic growth over time, it is imperative to understand the financial channel, which serves as one of the vital channels. The theoretical argument behind this linkage is that when internal financial capital is insufficient for investment, firms rely on external sources of finance. But due to frictions and information asymmetry, it is always costlier for firms to raise external capital from the market, which in turn affects their investment sentiment and productivity. This kind of financial position puts heavy pressure on firms' productive activities. Against this theoretical background, the present study analyzes the role of both external and internal financial factors (leverage, cash flow, and liquidity) in determining the total factor productivity of firms in the manufacturing industry and its sub-industries, with a set of firm-specific variables as controls (size, age, and disembodied technological intensity). Total factor productivity of the Indian manufacturing industry and its sub-industries is estimated using a semi-parametric approach, the Levinsohn-Petrin method. The relationship between financial factors and productivity growth is then estimated for 652 firms using a dynamic panel GMM method covering the period from 1997-98 to 2012-13. The econometric analysis shows that internal cash flow has a positive and significant impact on the productivity of the overall manufacturing sector. Other financial factors, such as leverage and liquidity, also play a significant role in determining the total factor productivity of the Indian manufacturing sector. The significant role of internal cash flow in determining firm-level productivity suggests that external finance is not easily accessible to Indian companies. Further, the negative impact of leverage on productivity could be due to the less developed bond market in India. These findings imply that policy makers should undertake reforms to develop the external bond market so that financially constrained companies can raise capital cost-effectively and channel their investment into highly productive activities, which would help accelerate economic growth.
Keywords: dynamic panel, financial factors, manufacturing sector, total factor productivity
Procedia PDF Downloads 332
25680 Identification and Classification of Medicinal Plants of Indian Himalayan Region Using Hyperspectral Remote Sensing and Machine Learning Techniques
Authors: Kishor Chandra Kandpal, Amit Kumar
Abstract:
The Indian Himalaya region harbours approximately 1,748 plants of medicinal importance, and, as per the International Union for Conservation of Nature (IUCN), 112 of these species are threatened or endangered. To ease the pressure on these plants, the Government of India is encouraging their in-situ cultivation. Saussurea costus, Valeriana jatamansi, and Picrorhiza kurroa have also been prioritized for large-scale cultivation owing to their market demand, conservation value, and medicinal properties. These species are found at elevations from 1,000 m to 4,000 m in the Indian Himalaya. Identifying these plants in the field requires taxonomic skills, which is one of the major bottlenecks in their conservation and management. In recent years, hyperspectral remote sensing techniques have been used to precisely discriminate plant species with the help of their unique spectral signatures. Against this background, a spectral library of the above three medicinal plants was prepared by collecting spectral data with a handheld spectroradiometer (325 to 1075 nm) from farmers' fields in the Himachal Pradesh and Uttarakhand states of the Indian Himalaya. A random forest (RF) model was applied to the spectral data for the classification of the medicinal plants. The standard 80:20 split ratio was followed for training and validation of the RF model, which resulted in a training accuracy of 84.39% (kappa coefficient = 0.72) and a testing accuracy of 85.29% (kappa coefficient = 0.77). The RF classifier identified the green (555 to 598 nm), red (605 nm), and near-infrared (725 to 840 nm) wavelength regions as suitable for discriminating these species. The findings of this study provide a technique for rapid, on-site identification of these medicinal plants in the field. They will also be a key input for the classification of hyperspectral remote sensing images for mapping these species in farmers' fields on a regional scale. This is a pioneering study for medicinal plants in the Indian Himalaya region, in which the applicability of hyperspectral remote sensing has been explored.
Keywords: Himalaya, hyperspectral remote sensing, machine learning, medicinal plants, random forests
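The classification step described, an 80:20 split, a random forest, and kappa-based accuracy reporting, follows a standard scikit-learn pattern, sketched below with placeholder spectra and assumed label names rather than the authors' field data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Placeholder spectra: rows = field samples, columns = bands spanning 325-1075 nm.
rng = np.random.default_rng(0)
X = rng.random((150, 751))
y = rng.choice(["S. costus", "V. jatamansi", "P. kurroa"], size=150)   # assumed labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42)

rf = RandomForestClassifier(n_estimators=500, random_state=42)
rf.fit(X_train, y_train)

pred = rf.predict(X_test)
print("test accuracy:", accuracy_score(y_test, pred))
print("kappa:", cohen_kappa_score(y_test, pred))

# Feature importances point to the wavelength regions driving the discrimination,
# analogous to the green/red/NIR bands highlighted in the abstract.
top_bands = rf.feature_importances_.argsort()[::-1][:10]
print("most informative band indices:", top_bands)
```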
Procedia PDF Downloads 204
25679 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model
Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin
Abstract:
Early detection of anomalies in data centers is important to reduce downtime and the cost of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses a single autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
Keywords: anomaly detection, autoencoder, data centers, deep learning
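A minimal sketch of one per-sensor reconstruction autoencoder of the kind described (LSTM encoder, repeated latent vector, LSTM decoder), trained on normal windows only, is shown below. Layer sizes, window length, and the residual-feature step are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

TIMESTEPS, N_FEATURES = 60, 1   # e.g. 60 samples of one temperature sensor (assumed)

def build_autoencoder():
    inp = layers.Input(shape=(TIMESTEPS, N_FEATURES))
    code = layers.LSTM(32)(inp)                          # encoder
    x = layers.RepeatVector(TIMESTEPS)(code)
    x = layers.LSTM(32, return_sequences=True)(x)        # decoder
    out = layers.TimeDistributed(layers.Dense(N_FEATURES))(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# Train on normal windows only, so anomalous windows reconstruct poorly.
normal_windows = np.random.default_rng(0).normal(size=(1000, TIMESTEPS, N_FEATURES))
ae = build_autoencoder()
ae.fit(normal_windows, normal_windows, epochs=5, batch_size=64, verbose=0)

# Residual signal used downstream (feature extraction + random forest in the paper).
test_windows = np.random.default_rng(1).normal(size=(10, TIMESTEPS, N_FEATURES))
residual = np.abs(test_windows - ae.predict(test_windows, verbose=0))
scores = residual.mean(axis=(1, 2))      # one simple feature per window
print(scores)
```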
Procedia PDF Downloads 194
25678 Integration Process and Analytic Interface of different Environmental Open Data Sets with Java/Oracle and R
Authors: Pavel H. Llamocca, Victoria Lopez
Abstract:
The main objective of our work is the comparative analysis of environmental data from Open Data bases belonging to different governments, which means integrating data from various sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on Open Data is therefore increasing. However, each government has its own procedures for publishing its data, which produces a variety of data set formats because there are no international standards specifying the formats of data sets in Open Data bases. Due to this variety of formats, we must build a data integration process that is able to bring all kinds of formats together. Some software tools have been developed to support the integration process, e.g., Data Tamer and Data Wrangler. The problem with these tools is that they require a data scientist to take part in the integration process as a final step. In our case we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the past two years, the government of Madrid has been publishing its Open Data bases of environmental indicators in real time. Likewise, other governments have published Open Data sets relating to the environment (such as Andalucia or Bilbao). All of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to perform and visualize analyses over the real-time data. Once the integration task is done, all the data from any government have the same format and the analysis process can start in a computationally better way. The tool presented in this work therefore has two goals: 1. the integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle and the graphic and analytic interface with Java (JSP). However, in order to open up our software tool, as a second approach we also developed an implementation in the R language as a mature open source technology. R is a powerful open source programming language that allows us to process and analyze a huge amount of data with high performance. There are also R libraries for building a graphic interface, such as Shiny. A performance comparison between both implementations was made and no significant differences were found. In addition, our work provides an official real-time integrated data set of environmental data in Spain to any developer, so that they can build their own applications.
Keywords: open data, R language, data integration, environmental data
Procedia PDF Downloads 315
25677 Transforming Data into Knowledge: Mathematical and Statistical Innovations in Data Analytics
Authors: Zahid Ullah, Atlas Khan
Abstract:
The rapid growth of data in various domains has created a pressing need for effective methods to transform this data into meaningful knowledge. In this era of big data, mathematical and statistical innovations play a crucial role in unlocking insights and facilitating informed decision-making in data analytics. This abstract aims to explore the transformative potential of these innovations and their impact on converting raw data into actionable knowledge. Drawing upon a comprehensive review of existing literature, this research investigates the cutting-edge mathematical and statistical techniques that enable the conversion of data into knowledge. By evaluating their underlying principles, strengths, and limitations, we aim to identify the most promising innovations in data analytics. To demonstrate the practical applications of these innovations, real-world datasets will be utilized through case studies or simulations. This empirical approach will showcase how mathematical and statistical innovations can extract patterns, trends, and insights from complex data, enabling evidence-based decision-making across diverse domains. Furthermore, a comparative analysis will be conducted to assess the performance, scalability, interpretability, and adaptability of different innovations. By benchmarking against established techniques, we aim to validate the effectiveness and superiority of the proposed mathematical and statistical innovations in data analytics. Ethical considerations surrounding data analytics, such as privacy, security, bias, and fairness, will be addressed throughout the research. Guidelines and best practices will be developed to ensure the responsible and ethical use of mathematical and statistical innovations in data analytics. The expected contributions of this research include advancements in mathematical and statistical sciences, improved data analysis techniques, enhanced decision-making processes, and practical implications for industries and policymakers. The outcomes will guide the adoption and implementation of mathematical and statistical innovations, empowering stakeholders to transform data into actionable knowledge and drive meaningful outcomes.
Keywords: data analytics, mathematical innovations, knowledge extraction, decision-making
Procedia PDF Downloads 75
25676 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule
Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu
Abstract:
Instance selection (IS) techniques are used to reduce data size and thereby improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets built on the MapReduce framework. Besides ensuring prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload on the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
Keywords: instance selection, data reduction, MapReduce, kNN
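As context for the FCNN rule the method builds on, the sketch below implements Hart's classic condensed nearest neighbor selection, which keeps only the instances needed for a 1-NN classifier to label the rest of the training set correctly. It is the baseline idea that FCNN accelerates, not the authors' FCNN-MR algorithm itself.

```python
import numpy as np

def condensed_nearest_neighbor(X, y, rng=np.random.default_rng(0)):
    """Hart's CNN rule: grow a subset S until every training instance is
    classified correctly by its 1-NN in S. (Baseline that FCNN speeds up.)"""
    order = rng.permutation(len(X))
    keep = [order[0]]                          # seed with one arbitrary instance
    changed = True
    while changed:
        changed = False
        for i in order:
            S = np.array(keep)
            d = np.linalg.norm(X[S] - X[i], axis=1)
            nearest = S[np.argmin(d)]
            if y[nearest] != y[i]:             # misclassified by current subset
                keep.append(i)                 # absorb it into the subset
                changed = True
    return np.array(keep)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
selected = condensed_nearest_neighbor(X, y)
print(f"kept {len(selected)} of {len(X)} instances")
```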
Procedia PDF Downloads 253
25675 The Impact of Corporate Governance Mechanisms on Dividend Policy
Authors: Tahar Tayachi, Ahlam Alrehaili
Abstract:
Purpose: The purpose of this paper is to investigate the relationship between the corporate board characteristics and the dividend policy among firms on the Saudi Stock Exchange. Design/Methodology/Approach: This paper uses a sample of 103 nonfinancial firms over a time period of 4 years from 2015 to 2018. To investigate how corporate governance mechanisms such as board independence, the board size, frequency of meetings, and free cash flow impact dividends, the study uses Logit and Tobit models. Findings: This paper finds that board size, board independence, and frequency of board meetings have no influence on a firm’s decision to pay dividends, while board size has a significantly positive impact on the levels of cash dividends paid to investors. This study also finds that the level of free cash flows has a positively significant influence on both the decision to pay dividends and the magnitude of dividend payouts. Research Limitations/Implications: This paper attempts to study the effectiveness of dividend policy among some firms on the Saudi Stock Exchange. Practical Implications: The findings reveal that board characteristics, which represent one of the crucial mechanisms of corporate governance, were found to be complementary to corporate laws and regulations imposed on the Saudi market in 2015. The findings also imply that capital market authorities should revise their corporate regulations and ensure that protection laws are adequate and strong enough to protect the interests of all shareholders. Originality/Value: This paper is among the few studies focusing on dividend policy in Saudi Arabia. Finally, these findings suggest that the improvements in corporate laws in Saudi Arabia led to such an outcome, and it has become prevalent in dividend policy decisions and behaviors of Saudi firms.
Keywords: agency theory, Tobit, corporate governance, dividend payout, Logit
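The Logit part of the modelling can be sketched with statsmodels as below. The column names are hypothetical stand-ins for the governance variables, the data are synthetic, and the firm-year panel handling and the Tobit model for payout levels are omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical firm-year sample with the governance variables named in the paper.
rng = np.random.default_rng(0)
n = 412
df = pd.DataFrame({
    "board_size":         rng.integers(5, 13, n),
    "board_independence": rng.uniform(0.2, 0.8, n),
    "meetings":           rng.integers(3, 10, n),
    "free_cash_flow":     rng.normal(0.05, 0.03, n),
})
# Synthetic outcome: probability of paying a dividend rises with free cash flow.
df["pays_dividend"] = (rng.uniform(size=n) < 0.4 + 2.0 * df["free_cash_flow"]).astype(int)

X = sm.add_constant(df[["board_size", "board_independence", "meetings", "free_cash_flow"]])
logit = sm.Logit(df["pays_dividend"], X).fit(disp=0)
print(logit.summary())
```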
Procedia PDF Downloads 204
25674 The Analysis of the Challenge China’s Energy Transition Faces and Proposed Solutions
Authors: Yuhang Wang
Abstract:
As energy is vital to industrial productivity and human existence, ensuring energy security becomes a critical government responsibility. The Chinese government has implemented the energy transition to safeguard China’s energy security. Throughout this progression, the Chinese government has faced numerous obstacles. This article seeks to describe the causes of China’s energy transition barriers and the steps taken by the Chinese government to overcome them.
Keywords: energy transition, energy market, fragmentation, path dependency
Procedia PDF Downloads 102
25673 Experimental Evaluation of Succinct Ternary Tree
Authors: Dmitriy Kuptsov
Abstract:
Tree data structures, such as binary or, more generally, k-ary trees, are essential in computer science. The applications of these data structures range from data search and retrieval to sorting and ranking algorithms. Naive implementations can consume prohibitively large volumes of random access memory, limiting their applicability in certain solutions. In these cases, more advanced representations of these data structures are essential. In this paper we present the design of a compact version of the ternary tree data structure and report the results of an experimental evaluation on the static dictionary problem. We compare these results with those for binary and regular ternary trees. The evaluation shows that our design, in the best case, consumes up to 12 times less memory (for the dictionary used in our experimental evaluation) than a regular ternary tree, and in certain configurations shows performance comparable to regular ternary trees. We evaluated the performance of the algorithms on both 32-bit and 64-bit operating systems.
Keywords: algorithms, data structures, succinct ternary tree, performance evaluation
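For readers unfamiliar with the underlying structure, a plain pointer-based ternary search tree for a static dictionary looks like the Python sketch below. The succinct variant evaluated in the paper replaces these per-node pointers with compact bit-level encodings, which is where the memory savings come from; this sketch only illustrates the baseline structure.

```python
class TSTNode:
    __slots__ = ("ch", "lo", "eq", "hi", "is_word")
    def __init__(self, ch):
        self.ch, self.lo, self.eq, self.hi, self.is_word = ch, None, None, None, False

def insert(node, word, i=0):
    ch = word[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.lo = insert(node.lo, word, i)          # smaller character: go left
    elif ch > node.ch:
        node.hi = insert(node.hi, word, i)          # larger character: go right
    elif i + 1 < len(word):
        node.eq = insert(node.eq, word, i + 1)      # match: advance to next character
    else:
        node.is_word = True
    return node

def contains(node, word, i=0):
    while node is not None:
        ch = word[i]
        if ch < node.ch:
            node = node.lo
        elif ch > node.ch:
            node = node.hi
        elif i + 1 < len(word):
            node, i = node.eq, i + 1
        else:
            return node.is_word
    return False

root = None
for w in ["cat", "cap", "car", "dog", "do"]:
    root = insert(root, w)
print(contains(root, "car"), contains(root, "cart"))   # True False
```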
Procedia PDF Downloads 160
25672 Social Entrepreneurship and Inclusive Growth
Authors: Sudheer Gupta
Abstract:
Approximately 4 billion citizens of the world live on the equivalent of less than $8 a day. This segment constitutes a $5 trillion global market that remains under-served. Multinational corporations have historically tended to focus their innovation efforts on the upper segments of the economic pyramid. The academic literature has also been dominated by theories and frameworks of innovation that are valid when applied to the developed markets and consumer segments, but fail to adequately account for the challenges and realities of new product and service creation for the poor. Theories of entrepreneurship developed in the context of developed markets similarly ignore the challenges and realities of operating in developing economies that can be characterized by missing institutions, missing markets, information and infrastructural challenges, and resource constraints. Social entrepreneurs working in such contexts develop solutions differently. In this talk, we summarize lessons learnt from a long-term research project that involves data collection from a broad range of social entrepreneurs in developing countries working towards solutions to alleviate poverty, and grounded theory-building efforts. We aim to develop a better understanding of consumers, producers, and other stakeholder involvement, thus laying the foundation to build a robust theory of innovation and entrepreneurship for the poor.
Keywords: poverty alleviation, social enterprise, social innovation, development
Procedia PDF Downloads 399
25671 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement
Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti
Abstract:
Data centers have been growing in size and demand continuously over the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning. Data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted, with little thought, is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models that cover a wide spectrum of machine learning categories. We then establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing
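The quantile-regression idea named in the title, predicting a high percentile of future resource usage so that provisioning covers the SLA with far less headroom than worst-case over-provisioning, can be sketched with scikit-learn's quantile-loss gradient boosting. The synthetic trace, feature set, and the 95th-percentile target are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a historical utilisation trace (hour of day + lagged load).
rng = np.random.default_rng(0)
hours = rng.integers(0, 24, 5000)
load = 40 + 25 * np.sin(hours / 24 * 2 * np.pi) + rng.normal(0, 5, 5000)
X = np.column_stack([hours, np.roll(load, 1)])
y = load

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Predict the 95th percentile of demand: provision to this level instead of the maximum.
q95 = GradientBoostingRegressor(loss="quantile", alpha=0.95, n_estimators=300)
q95.fit(X_tr, y_tr)
pred = q95.predict(X_te)

coverage = np.mean(y_te <= pred)     # fraction of demand met (SLA proxy)
headroom = np.mean(pred - y_te)      # provisioning / energy-cost proxy
print(f"coverage={coverage:.3f}, mean headroom={headroom:.2f}")
```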
Procedia PDF Downloads 108
25670 Investigate the Current Performance of Burger King Ho Chi Minh City in Terms of the Controllable Variables of the Overall Retail Strategy
Authors: Nhi Ngoc Thien
Abstract:
Franchising is a popular trend in the Vietnamese retail industry, especially in fast food. Several famous foreign fast food brands, such as KFC, Lotteria, Jollibee, and Pizza Hut, have invested in this potential market since the 1990s. Following this trend, in 2011 Burger King - the second-largest fast food hamburger chain in the world - entered Vietnam with its first store located in Tan Son Nhat International Airport, expecting to become the leading brand in the country. However, Burger King's business performance did not go well in the first few years, calling its strategy into question. The working assumption was that its business performance was negatively affected by its store location selection strategy. This research aims to investigate the current performance of Burger King Vietnam in terms of controllable variables such as store location, and to explore the key factors influencing customers' decisions to choose Burger King. A case study research method was therefore used to explore in depth the opinions and evaluations of 10 Burger King customers, Burger King staff, and other fast food experts on Burger King's performance, through in-depth interviews, direct observation, and documentary analysis. The findings show that eight determinants affect the decision-making of Burger King's customers: store location, quality of food, service quality, store atmosphere, price, promotion, menu, and brand reputation. Moreover, Burger King staff and fast food experts also identified store location and food quality as Burger King's main problems. Accordingly, recommendations are made for Burger King Vietnam to improve its performance in the market and attract more Vietnamese target customers by offering suitable promotional activities and differentiating itself from other fast food brands.
Keywords: overall retail strategy, controllable variables, store location, quality of food
Procedia PDF Downloads 345
25669 The Use of Psychological Tests in Polish Organizations - Empirical Evidence
Authors: Milena Gojny-Zbierowska
Abstract:
In recent decades, psychological tests have been gaining popularity as a method for evaluating personnel, and they bring consulting companies solid profits, rising by up to 10% each year. The market offers a growing range of tools for the assessment of personality. In organizations, tests are used mainly in the recruitment and selection of staff. This paper is an attempt at an initial diagnosis of the state of the use of psychological tests in Polish companies on the basis of empirical research.
Keywords: psychological tests, personality, content analysis, NEO FFI, big five personality model
Procedia PDF Downloads 365
25668 Prosperous Digital Image Watermarking Approach by Using DCT-DWT
Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar
Abstract:
Every day, tons of data are embedded in digital media or distributed over the internet. The data are distributed so widely that they can easily be replicated without error, putting the rights of their owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information known as a watermark into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges have motivated researchers to carry out intense research in the field of watermarking. A watermark is a form, image, or text impressed onto paper that provides evidence of its authenticity; digital watermarking is an extension of the same concept. There are two types of watermarks: visible and invisible. In this project, we concentrate on implementing watermarking in images. The main consideration for any watermarking scheme is its robustness to various attacks.
Keywords: watermarking, digital, DCT-DWT, security
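A simplified additive DCT-DWT embedding step of the general kind named in the title can be sketched as follows: take one DWT level, apply a DCT to the approximation sub-band, and add a scaled watermark into a mid-frequency region. The wavelet, band, offsets, and strength below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
host = rng.integers(0, 256, size=(256, 256)).astype(float)   # placeholder host image
wm = rng.choice([-1.0, 1.0], size=(32, 32))                  # bipolar watermark
alpha = 5.0                                                  # embedding strength (assumed)

# 1-level DWT, then DCT of the approximation sub-band.
cA, (cH, cV, cD) = pywt.dwt2(host, "haar")
C = dctn(cA, norm="ortho")

# Additive embedding in a mid-frequency block of the DCT plane.
r, c = 16, 16
C[r:r + 32, c:c + 32] += alpha * wm

cA_marked = idctn(C, norm="ortho")
watermarked = pywt.idwt2((cA_marked, (cH, cV, cD)), "haar")

# Non-blind extraction sketch: compare against the original coefficients.
C2 = dctn(pywt.dwt2(watermarked, "haar")[0], norm="ortho")
recovered = np.sign(C2[r:r + 32, c:c + 32] - dctn(cA, norm="ortho")[r:r + 32, c:c + 32])
print("bit agreement:", np.mean(recovered == wm))
```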
Procedia PDF Downloads 423
25667 Machine Learning Data Architecture
Authors: Neerav Kumar, Naumaan Nayyar, Sharath Kashyap
Abstract:
Most companies see an increase in the adoption of machine learning (ML) applications across internal and external-facing use cases. ML applications vend output in either batch or real-time patterns. A complete batch ML pipeline architecture comprises data sourcing, feature engineering, model training, model deployment, and model output vending into a data store for downstream applications. Due to unclear role expectations, we have observed that scientists specializing in building and optimizing models invest significant effort in building the other components of the architecture, which we do not believe is the best use of scientists' bandwidth. We propose a system architecture, created using AWS services, that brings industry best practices to managing the workflow and simplifies the process of model deployment and end-to-end data integration for an ML application. This narrows the scope of scientists' work to model building and refinement, while specialized data engineers take over deployment, pipeline orchestration, data quality, the data permission system, etc. The pipeline infrastructure is built and deployed as code (using Terraform, CDK, CloudFormation, etc.), which makes it easy to replicate and/or extend the architecture to other models used in an organization.
Keywords: data pipeline, machine learning, AWS, architecture, batch machine learning
Procedia PDF Downloads 64
25666 A Comparison of Image Data Representations for Local Stereo Matching
Authors: André Smith, Amr Abdel-Dayem
Abstract:
The stereo matching problem, while having been present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advances in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While, at its core, the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how effective these representations are at reducing the cost of the correct correspondence relative to other possible matches.
Keywords: colour data, local stereo matching, stereo correspondence, disparity map
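The cost function in question can be illustrated with a basic absolute-difference cost volume over a disparity range; converting the inputs to different representations (RGB, grayscale, and so on) before calling it is how such a comparison would be set up. The shapes and the winner-take-all step below are a minimal sketch under those assumptions, not the paper's evaluation protocol.

```python
import numpy as np

def ad_cost_volume(left, right, max_disp):
    """Per-pixel absolute-difference matching cost, summed over channels.
    left/right: (H, W, C) arrays in whatever representation is being tested."""
    h, w, _ = left.shape
    vol = np.full((max_disp, h, w), np.inf)
    L, R = left.astype(float), right.astype(float)
    for d in range(max_disp):
        vol[d, :, d:] = np.abs(L[:, d:, :] - R[:, :w - d, :]).sum(axis=2)
    return vol

rng = np.random.default_rng(0)
right = rng.random((64, 96, 3))
left = np.roll(right, 7, axis=1)          # synthetic pair with a 7-pixel shift

disparity = np.argmin(ad_cost_volume(left, right, max_disp=16), axis=0)
print("median disparity:", np.median(disparity[:, 16:]))   # ~7, away from the border
```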
Procedia PDF Downloads 370
25665 Statistical Analysis with Prediction Models of User Satisfaction in Software Project Factors
Authors: Katawut Kaewbanjong
Abstract:
We analyzed a large volume of data and identified software project factors significantly associated with user satisfaction. A statistical significance analysis (logistic regression) and a collinearity analysis determined the significant factors from a group of 71 pre-defined factors across 191 software projects in ISBSG Release 12. The eight prediction models used to test the predictive potential of these factors were neural network, k-NN, naïve Bayes, random forest, decision tree, gradient boosted tree, linear regression, and logistic regression. Fifteen pre-defined factors were truly significant in predicting user satisfaction, and they provided 82.71% prediction accuracy when used with a neural network prediction model. These factors were client-server, personnel changes, total defects delivered, project inactive time, industry sector, application type, development type, how methodology was acquired, development techniques, decision making process, intended market, size estimate approach, size estimate method, cost recording method, and effort estimate method. These findings may benefit software development managers considerably.
Keywords: prediction model, statistical analysis, software project, user satisfaction factor
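The collinearity screening plus logistic-regression step can be sketched as below: variance inflation factors prune redundant factors before the significance model is fitted. The factor names, synthetic data, and the VIF threshold are assumptions for illustration, not the ISBSG field names or the study's values.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Placeholder stand-in for project factors (the real analysis used ISBSG Release 12).
rng = np.random.default_rng(0)
n = 191
df = pd.DataFrame({
    "defects_delivered": rng.poisson(20, n),
    "inactive_time":     rng.exponential(2.0, n),
    "team_size":         rng.integers(2, 30, n),
    "effort_hours":      rng.normal(2000, 500, n),
})
df["satisfied"] = (rng.uniform(size=n) <
                   1 / (1 + np.exp(0.05 * (df["defects_delivered"] - 20)))).astype(int)

X = sm.add_constant(df.drop(columns="satisfied"))
vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
                index=X.columns)
vif_no_const = vif.drop("const")
keep = vif_no_const[vif_no_const < 5].index      # prune strongly collinear factors

model = sm.Logit(df["satisfied"], sm.add_constant(df[list(keep)])).fit(disp=0)
print(model.summary())
```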
Procedia PDF Downloads 124
25664 Do Interventions for Increasing Minorities' Access to Higher Education Work? The Case of Ethiopians in Israel
Authors: F. Nasser-Abu Alhija
Abstract:
In many countries, considerable effort and resources are devoted to empowering and integrating minorities within the mainstream population. Major ventures along this route are crafted in higher education institutions, where different outreach programs and methods, such as lenient entry requirements, monetary incentives, learning skills workshops, tutoring, and mentoring, are utilized. Although there is some information about these programs, their effectiveness still needs to be thoroughly examined. The Ethiopian community in Israel is one of the minority groups that has been targeted by sponsoring foundations and higher education institutions with the aim of easing the access, persistence, and success of its young people in higher education and, later, in the job market. The evaluation study we propose to present focuses on the implementation of a program designed for this purpose. The program offers relevant candidates for study at a prestigious university a variety of generous incentives that include tuition, a living allowance, tutoring, mentoring, skills and empowerment workshops, and cultural meetings. Ten students were selected for the program, and they started their studies in different subject areas three and a half years ago. A longitudinal evaluation has been conducted since the implementation of the program. Data were collected from different sources: participating students, the program coordinator, mentors, tutors, program documents, and university records. Questionnaires and interviews were used to collect data on the different components of the program and on participants' perceptions of their effectiveness. Participants indicate that the lenient entry requirements and the monetary incentives are critical for starting their studies. During the first year, the skills and empowerment workshops, tutoring, and mentoring were evaluated as very important for persistence and success in studies. Tutoring was also perceived as very important in the second year, but less importance was attributed to mentoring. Mixed results emerged regarding integration into Israeli culture. The results are discussed with reference to findings from different settings around the world.
Keywords: access to higher education, minority groups, monetary incentives, tutoring, mentoring
Procedia PDF Downloads 373
25663 Exploring Well-Being: Lived Experiences and Assertions From a Marginalized Perspective
Authors: Ritwik Saha, Anindita Chaudhuri
Abstract:
The psychological dimension of contemporary work-based mobility, set within an ever-changing socio-economic process, raises growing interest in addressing the consequential issues of quality of life and well-being among the migrant section of society. Negotiating the fluidity of the job market and the changing psychosocial dimensions within and between relationships may reveal both the resilience and the coping mechanisms of migrant (marginal) life. Work-based mobility and its associated phenomena have strongly affected migrants' quality of life, especially for the marginalized (socioeconomically weak), as well as for the family members who stay behind. The subjective experience of the migrant journey and the reconstruction of psychosocial being, in terms of existence and well-being at the host place, are among the least addressed issues in the migration literature. This gap motivates the present study, which explores the lived experiences, resilience, and sense-making of well-being among marginalized migrants working in unorganized spaces. A qualitative research method was followed, and semi-structured interviews were used for data collection from four selected migrant groups (Fuchkawala, Bhunjawala, Bhari - drinking water supplier, and construction worker) who migrated to Kolkata and its metropolitan area from different states of India. Five participants from each group (20 participants in total), aged between 20 and 45, were interviewed in person, and observational notes were taken to capture their lived experiences; audio recordings were transcribed and analyzed systematically following Charmaz's three-layer coding of grounded theory. Four themes emerged from the analysis of the data: being truthful to daily industry, a strong desire to build their children's future, mastering the mechanism of dual existence, and the use of traditional social networks. Incorporating fate as their usual way of life and making sense of well-being through their own assertions is another evolving aspect of migrant life.
Keywords: lived experiences, marginal living, resilience, sense-making process, well-being
Procedia PDF Downloads 61
25662 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System
Authors: Karima Qayumi, Alex Norta
Abstract:
The rapid generation of high volumes and a broad variety of data from the application of new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods in order to develop their business. The recently decentralized data management environment therefore relies on a distributed computing paradigm. While data are stored in highly distributed systems, the implementation of distributed data-mining techniques is a challenge. The aim of such techniques is to gather knowledge from every domain and all the datasets stemming from distributed resources. As agent technologies offer significant contributions to managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.
Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)
Procedia PDF Downloads 432
25661 Timing and Noise Data Mining Algorithm and Software Tool in Very Large Scale Integration (VLSI) Design
Authors: Qing K. Zhu
Abstract:
Very Large Scale Integration (VLSI) design has become very complex due to the continuous integration of millions of gates on one chip, following Moore's law. Designers encounter numerous report files during design iterations with timing and noise analysis tools. This paper presents our work using data mining techniques combined with HTML tables to extract and represent critical timing/noise data. When this data-mining tool is applied in real designs, running speed is important. The software employs table look-up techniques to achieve reasonable running speed, based on performance testing results. We added several advanced features for application in an industrial chip design.
Keywords: VLSI design, data mining, big data, HTML forms, web, VLSI, EDA, timing, noise
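One concrete way to mine HTML-table report files of the kind described is pandas' built-in HTML table reader. The table and column names below are assumptions, since the actual report format is EDA-tool specific; a real run would read the exported report file instead of the inline string.

```python
import io
import pandas as pd

# Tiny stand-in for a timing report exported as HTML (column names are assumed).
html = """
<table>
  <tr><th>startpoint</th><th>endpoint</th><th>slack_ns</th></tr>
  <tr><td>u1/q</td><td>u9/d</td><td>-0.12</td></tr>
  <tr><td>u2/q</td><td>u7/d</td><td>0.45</td></tr>
</table>"""

tables = pd.read_html(io.StringIO(html))     # parses every <table> element (needs lxml or bs4)
paths = pd.concat(tables, ignore_index=True)

# Keep only violating paths, worst slack first.
critical = paths[paths["slack_ns"] < 0].sort_values("slack_ns")
print(critical.to_string(index=False))
```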
Procedia PDF Downloads 254