Search results for: business data processing
27898 A Study of the Effects of Zimbabwean Youth Migration on Musina Area, South Africa
Authors: R. Chinyakata, N. R. Raselekoane
Abstract:
Migration has always been part of human history. Migration is spurred by globalisation, which connects nations by encouraging the flow of goods, services, ideas and people across borders. Migration does not only involve the movement of adults from one country to another; it also affects and involves the youth, as they are the most mobile group. The Musina area, like many other border areas, experiences a variety of challenges as a result of the influx of people from neighbouring Zimbabwe and other African countries. Of great concern about this migration is the fact that the host country or area may become unsafe and unstable as a result of a huge influx of migrants. There may also be tensions between local people and migrants over resources. The study sought to investigate the effects of Zimbabwean youth migration on the Musina area. The study was undertaken in the Musina area, which is situated 18 km from the Beit-Bridge border post. A qualitative research approach was used. Semi-structured interviews were used to collect data. A non-probability quota sampling technique was used to select the respondents. The study sample consisted of sixteen female and male respondents. Thematic coding was used to analyse the data. Ethical considerations such as informed consent, confidentiality, anonymity and voluntary participation were taken into account to protect the participants. The study found that the effects of Zimbabwean youth migration on the Musina area include, among others, tensions between locals and the Zimbabwean youth migrants over resources, job and business opportunities, overcrowding and crime. Multi-pronged strategies which involve different stakeholders should be applied to address tensions over job and business opportunities, overcrowding and crime in the Musina area.
Keywords: host country, effects, migrant, migration, Musina, youth, Zimbabwe
Procedia PDF Downloads 244
27897 Consumer Protection Law for Users of Mobile Commerce as a Global Effort to Improve Business in Indonesia
Authors: Rina Arum Prastyanti
Abstract:
Information technology has changed the way transactions are conducted and has enabled new opportunities in business. Consumers of m-commerce face several problems, among others: difficulty accessing full information about the products on offer and the forms of transactions, given the small screen and limited storage capacity; the need to protect children from various forms of excessive supply and usage; errors in accessing and disseminating personal data; and more complex problems concerning agreements, dispute resolution mechanisms that can protect consumers, and assurance of the security of personal data. No less important are payment risks; the security of personal payment information is an issue that still awaits a workable solution. The purpose of this study is 1) to describe the phenomenon of the use of mobile commerce in Indonesia, 2) to determine the form of legal protection available to consumers who use mobile commerce, and 3) to identify the type of law that can provide legal protection for users of mobile commerce. This is descriptive qualitative, normative legal research using primary and secondary data sources; the data were gathered through library research, and the analysis technique used is deductive analysis. Growing mobile technology, more affordable prices and competition among providers have increased the number of mobile users: Indonesia ranks fourth in the world in mobile phone users, with the number of mobile phones estimated at around 250.1 million against a population of 237,556,363. The Indonesian form of legal protection for the use of mobile commerce is still only a part of Law No. 11 of 2008 on Information and Electronic Transactions, and to date there is no rule of law that specifically regulates mobile commerce. A legal protection model that can be applied to protect consumers of mobile commerce should ensure that consumers receive information about the potential security and privacy challenges they may face in m-commerce and the measures that can be used to limit the risk; encourage the development of security measures and built-in security features; encourage mobile operators to implement data security policies and measures to prevent unauthorized transactions; and provide methods of redress that are appropriate in both timeliness and effectiveness when consumers suffer financial loss.
Keywords: mobile commerce, legal protection, consumer, effectiveness
Procedia PDF Downloads 364
27896 Mutual Information Based Image Registration of Satellite Images Using PSO-GA Hybrid Algorithm
Authors: Dipti Patra, Guguloth Uma, Smita Pradhan
Abstract:
Registration is a fundamental task in image processing. It is used to transform different sets of data into one coordinate system, where the data are acquired at different times, from different viewing angles, and/or by different sensors. Registration geometrically aligns two images (the reference and target images). Registration techniques are applied to satellite images, where they are important for comparing or integrating the data obtained from these different measurements. In this work, mutual information is considered as the similarity metric for registration of satellite images. The transformation is assumed to be a rigid transformation. An attempt has been made here to optimize the transformation function. The proposed image registration technique, hybrid PSO-GA, incorporates the notions of Particle Swarm Optimization and the Genetic Algorithm and is used to find the best optimum values of the transformation parameters. Performance comparisons in experiments on satellite images show that the proposed hybrid PSO-GA algorithm outperforms the other algorithms in terms of mutual information and registration accuracy.
Keywords: image registration, genetic algorithm, particle swarm optimization, hybrid PSO-GA algorithm, mutual information
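As an illustration of the similarity metric the abstract relies on, here is a minimal Python sketch that estimates mutual information from the joint intensity histogram of two images; the bin count and the rigid-transform comment are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mutual_information(ref, tgt, bins=64):
    """Estimate mutual information between two equally sized
    grayscale images from their joint intensity histogram."""
    hist_2d, _, _ = np.histogram2d(ref.ravel(), tgt.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()          # joint probability
    px = pxy.sum(axis=1)                   # marginal of reference
    py = pxy.sum(axis=0)                   # marginal of target
    nz = pxy > 0                           # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

# A rigid transform candidate (tx, ty, theta) would be scored by
# resampling the target image and evaluating mutual_information;
# the hybrid PSO-GA then searches over (tx, ty, theta).
```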
Procedia PDF Downloads 408
27895 A Web Service Based Sensor Data Management System
Authors: Rose A. Yemson, Ping Jiang, Oyedeji L. Inumoh
Abstract:
The deployment of wireless sensor networks has increased rapidly; however, the increased capacity and diversity of sensors, with applications ranging from the biological and environmental to the military, generate tremendous volumes of data, and more attention is placed on distributed sensing than on how to manage, analyze, retrieve and understand the data generated. This makes it quite difficult to process live sensor data and run concurrent control and updates, because sensor data are heavyweight, complex and slow to process. This work focuses on developing a web service platform for automatic detection of sensors, acquisition of sensor data, storage of sensor data in a database, and processing of sensor data using reconfigurable software components. This work also creates a web service based sensor data management system to monitor the physical movement of an individual wearing a wireless network sensor device (SunSPOT). The sensor detects the movement of the individual by sensing the acceleration in the X, Y and Z directions and then sends the sensed readings to a database interfaced with an internet platform. The collected data determine the posture of the person, such as standing, sitting or lying down. The system is designed using the Unified Modeling Language (UML) and implemented using Java, JavaScript, HTML and MySQL. The system allows real-time monitoring of an individual and obtains details of their physical activity without requiring physical presence for in-situ measurement, which enables remote work instead of the time-consuming checking of an individual in person. These details can help in evaluating an individual's physical activity and generating feedback on medication. They can also help in keeping track of any mandatory physical activities the individual is required to perform. These evaluations and this feedback can help maintain a better health status for the individual and provide improved health care.
Keywords: HTML, java, javascript, MySQL, sunspot, UML, web-based, wireless network sensor
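The posture inference the abstract describes can be sketched as a simple rule over the three acceleration axes. The thresholds and axis conventions below are illustrative assumptions, not the values of the actual SunSPOT deployment.

```python
import math

GRAVITY = 9.81  # m/s^2

def classify_posture(ax, ay, az):
    """Toy posture rule: infer orientation from which axis carries
    gravity while the wearer is static. Axis conventions and
    thresholds here are illustrative assumptions."""
    magnitude = math.sqrt(ax**2 + ay**2 + az**2)
    if abs(magnitude - GRAVITY) > 2.0:      # large deviation => moving
        return "moving"
    # tilt of the z axis away from vertical
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, az / magnitude))))
    if tilt < 30:
        return "standing"                   # gravity mostly on z axis
    elif tilt < 60:
        return "sitting"
    return "lying down"

print(classify_posture(0.2, 0.1, 9.7))      # -> standing
```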
Procedia PDF Downloads 212
27894 Estimation of the Upper Tail Dependence Coefficient for Insurance Loss Data Using an Empirical Copula-Based Approach
Authors: Adrian O'Hagan, Robert McLoughlin
Abstract:
Considerable focus in the world of insurance risk quantification is placed on modeling loss values from lines of business (LOBs) that possess upper tail dependence. Copulas such as the Joe, Gumbel and Student-t copulas may be used for this purpose. The copula structure imparts a desired level of tail dependence on the joint distribution of claims from the different LOBs. Alternatively, practitioners may possess historical or simulated data that already exhibit upper tail dependence through the impact of catastrophe events such as hurricanes or earthquakes. In these circumstances, it is not desirable to induce additional upper tail dependence when modeling the joint distribution of the loss values from the individual LOBs. Instead, it is of interest to accurately assess the degree of tail dependence already present in the data. The empirical copula and its associated upper tail dependence coefficient are presented in this paper as robust, efficient means of achieving this goal.
Keywords: empirical copula, extreme events, insurance loss reserving, upper tail dependence coefficient
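A hedged sketch of the estimator in question: using rank-based pseudo-observations (the empirical copula), the upper tail dependence coefficient can be approximated as P(V > q | U > q) for q near 1. The threshold choice and the simulated catastrophe-shock example below are illustrative, not the paper's data.

```python
import numpy as np
from scipy.stats import rankdata

def upper_tail_dependence(x, y, q=0.95):
    """Empirical-copula estimate of the upper tail dependence
    coefficient P(V > q | U > q) from rank pseudo-observations.
    q = 0.95 is an illustrative choice; in practice the estimate
    is examined as q -> 1."""
    n = len(x)
    u = rankdata(x) / (n + 1)   # pseudo-observations on (0, 1)
    v = rankdata(y) / (n + 1)
    joint = np.mean((u > q) & (v > q))
    return joint / (1 - q)

# Example: two LOBs sharing a common heavy-tailed catastrophe shock
rng = np.random.default_rng(0)
shock = rng.pareto(3, 10_000)
lob1 = shock + rng.exponential(1, 10_000)
lob2 = shock + rng.exponential(1, 10_000)
print(upper_tail_dependence(lob1, lob2))
```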
Procedia PDF Downloads 284
27893 Implementation of a Method of Crater Detection Using Principal Component Analysis in FPGA
Authors: Izuru Nomura, Tatsuya Takino, Yuji Kageyama, Shin Nagata, Hiroyuki Kamata
Abstract:
We propose a method for detecting craters in images of the lunar surface captured by a small space probe. We use principal component analysis (PCA) to detect craters. Nevertheless, considering the severe environment of space, it is impossible to use a generic computer in practice. Accordingly, we have to implement the method in an FPGA. This paper compares the FPGA and a generic computer in terms of the processing time of the crater detection method based on principal component analysis.
Keywords: crater, PCA, eigenvector, strength value, FPGA, processing time
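One plausible reading of PCA-based crater detection with a "strength value" (see the keywords) is to score candidate windows by how strongly they project onto a crater eigenspace. The sketch below is an assumption-laden illustration of that idea in Python, not the authors' FPGA design.

```python
import numpy as np

def train_crater_pca(patches, k=8):
    """patches: (n, h*w) flattened training crater images.
    Returns the mean patch and the top-k eigenvectors."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # eigenvectors of the covariance via SVD of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def strength_value(window, mean, components):
    """Score a candidate window by how well the crater subspace
    explains it; higher strength = more crater-like (an assumed
    interpretation of the paper's 'strength value')."""
    w = window.ravel() - mean
    coeffs = components @ w                  # projection onto eigenspace
    residual = w - components.T @ coeffs     # part PCA cannot explain
    return np.linalg.norm(coeffs) / (np.linalg.norm(residual) + 1e-9)
```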
Procedia PDF Downloads 555
27892 Satellite Statistical Data Approach for Upwelling Identification and Prediction in South of East Java and Bali Sea
Authors: Hary Aprianto Wijaya Siahaan, Bayu Edo Pratama
Abstract:
Sea fisheries have the potential to become one of the nation's assets, contributing substantially to Indonesia's economy. This fishery potential is inseparable from the availability of chlorophyll in the territorial waters of Indonesia. The research was conducted using three methods, namely statistical, comparative and analytical. The data used include MODIS sea surface temperature imagery from the Aqua satellite at 4 km resolution for 2002-2015, MODIS chlorophyll-a imagery from the Aqua satellite at 4 km resolution for 2002-2015, and ASCAT imagery from the MetOp and NOAA satellites at 27 km resolution for 2002-2015. The results of processing the data show that upwelling events in the sea south of East Java begin in June, identified by below-normal sea surface temperature anomalies, air masses moving from east to west, and high chlorophyll-a concentrations. In July the upwelling region expands westward, reaching its peak in August. Chlorophyll-a prediction using a multiple linear regression equation shows excellent results for 2002 to 2015, with a correlation of predicted chlorophyll-a concentration of 0.8 and an RMSE of 0.3. Prediction of chlorophyll-a concentration for 2016 also shows good results despite a decline in correlation to 0.6, with an improved RMSE of 0.2.
Keywords: satellite, sea surface temperature, upwelling, wind stress
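A minimal sketch of the prediction step: ordinary least squares multiple linear regression of chlorophyll-a on candidate predictors, scored by correlation and RMSE as in the abstract. The predictors and synthetic data below are illustrative assumptions; the study's actual regressors and coefficients are not reproduced here.

```python
import numpy as np

# Illustrative predictors: SST anomaly and wind stress, monthly 2002-2015
rng = np.random.default_rng(1)
sst_anom = rng.normal(0, 1, 168)
wind_stress = rng.normal(0, 1, 168)
chl_a = 0.3 - 0.5 * sst_anom + 0.4 * wind_stress + rng.normal(0, 0.2, 168)

X = np.column_stack([np.ones_like(sst_anom), sst_anom, wind_stress])
beta, *_ = np.linalg.lstsq(X, chl_a, rcond=None)   # OLS fit
pred = X @ beta

corr = np.corrcoef(chl_a, pred)[0, 1]
rmse = np.sqrt(np.mean((chl_a - pred) ** 2))
print(f"r = {corr:.2f}, RMSE = {rmse:.2f}")
```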
Procedia PDF Downloads 158
27891 Collaborative Technology Implementation Success and Knowledge Capacity: Case of Tunisian Banks with Mixed Capital
Authors: Amira Khelil, Habib Affes
Abstract:
Successful implementation of organizational resource planning matters: enterprise resource planning and collaborative technologies are becoming among the main tools for achieving competitiveness in business. Resource technologies are considered an infrastructure for creating and maintaining business, improving front- and back-office efficiency and effectiveness. This study is significant in that it brings new ideas for determining the key antecedents of technological resource planning implementation from a knowledge capacity perspective and helps in understanding the key success factors in Tunisian banks. The study is based on a survey of 150 front-office agents working in Tunisian banks with mixed capital and using a Groupware system; only 51 respondents gave feedback to this survey. Using WarpPLS 3.0, several tests established a relationship between knowledge capability and Groupware implementation success, with a beta coefficient of 0.37 and a p-value < 0.01. This result highlights that the knowledge capability of bank agents can influence the success of Groupware implementation.
Keywords: groupware implementation, knowledge capacity, partial least squares method, Tunisian banks
Procedia PDF Downloads 489
27890 Using Business Simulations and Game-Based Learning for Enterprise Resource Planning Implementation Training
Authors: Carin Chuang, Kuan-Chou Chen
Abstract:
An Enterprise Resource Planning (ERP) system is an integrated information system that supports the seamless integration of all the business processes of a company. Implementing an ERP system can increase efficiencies and decrease costs while helping improve productivity. Many organizations, including large, medium and small-sized companies, have adopted ERP systems for decades. Although an ERP system can bring competitive advantages to organizations, the lack of a proper training approach in ERP implementation is still a major concern. Organizations understand the importance of ERP training to adequately prepare managers and users. The low return on investment of ERP training, however, makes it difficult for knowledge workers to transfer what is learned in training to their jobs in the workplace. Inadequate and inefficient ERP training limits the value realization and success of an ERP system. Hence the call for profound change and innovation in ERP training, both in the industrial workplace and in Information Systems (IS) education in academia. An innovative ERP training approach can improve users' knowledge of business processes and hands-on skills in mastering an ERP system. It can also serve as educational material for IS students in universities. The purpose of the study is to examine the use of ERP simulation games via the ERPsim system to train IS students in learning ERP implementation. ERPsim is a business simulation game developed by the ERPsim Lab at HEC Montréal that runs on a real-life SAP (Systems Applications and Products) ERP system. The training uses the ERPsim system as the tool for Internet-based simulation games and is designed as online student competitions during class. The competitions involve student teams, with the facilitation of the instructor, and put the students' business skills to the test via intensive simulation games on a real-world SAP ERP system. The teams run the full business cycle of a manufacturing company while interacting with suppliers, vendors and customers through sending and receiving orders, delivering products and completing the entire cash-to-cash cycle. To learn a range of business skills, each student needs to adopt an individual business role and make business decisions around the products and business processes. Based on the training experiences learned from rounds of business simulations, the findings show that learners face reduced risk in making mistakes, which helps them build self-confidence in problem-solving. In addition, learners' reflections on their mistakes help them identify the root causes of problems and further improve the efficiency of the training. ERP instructors teaching with the innovative approach report significant improvements in student evaluation, learner motivation, attendance and engagement, as well as increased learner technology competency. The findings of the study can provide ERP instructors with guidelines to create an effective learning environment and can be transferred to a variety of other educational fields in which trainers are migrating towards a more active learning approach.
Keywords: business simulations, ERP implementation training, ERPsim, game-based learning, instructional strategy, training innovation
Procedia PDF Downloads 139
27889 R Software for Parameter Estimation of Spatio-Temporal Model
Authors: Budi Nurani Ruchjana, Atje Setiawan Abdullah, I. Gede Nyoman Mindra Jaya, Eddy Hermawan
Abstract:
In this paper, we propose an application package to estimate the parameters of a spatiotemporal model based on multivariate time series analysis using the open-source R software. We build packages mainly to estimate the parameters of the Generalized Space Time Autoregressive (GSTAR) model. GSTAR is a combination of time series and spatial models whose parameters vary by location. We use the method of Ordinary Least Squares (OLS) and the Mean Absolute Percentage Error (MAPE) to assess the fit of the model to real spatiotemporal phenomena. As case studies, we use oil production data from the volcanic layer at Jatibarang, Indonesia, and climate data such as rainfall in Indonesia. R is very user-friendly, makes calculation easier, and processes the data accurately and quickly. A limitation is that the R scripts built for estimating the parameters of the spatiotemporal GSTAR model are still restricted to stationary time series models. The R program under Windows can therefore be developed further, both for theoretical studies and for applications.
Keywords: GSTAR Model, MAPE, OLS method, oil production, R software
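The paper's package is written in R; as a language-neutral illustration, the Python sketch below estimates a GSTAR(1;1) specification by per-location OLS under an assumed row-normalized spatial weight matrix, and computes MAPE as the fit measure. It is a sketch of the model class, not the authors' package.

```python
import numpy as np

def fit_gstar11(Z, W):
    """OLS estimation of a GSTAR(1;1) model:
        z_t(i) = phi0_i * z_{t-1}(i) + phi1_i * [W z_{t-1}]_i + e_t(i),
    with location-specific parameters. Z is (T, N) observations over
    T times and N locations; W is an assumed (N, N) row-normalized
    spatial weight matrix."""
    T, N = Z.shape
    S = Z @ W.T                     # spatially weighted neighbour values
    phi = np.zeros((N, 2))
    for i in range(N):
        X = np.column_stack([Z[:-1, i], S[:-1, i]])
        y = Z[1:, i]
        phi[i], *_ = np.linalg.lstsq(X, y, rcond=None)
    return phi

def mape(actual, pred):
    """Mean absolute percentage error used to assess model fit."""
    return 100 * np.mean(np.abs((actual - pred) / actual))
```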
Procedia PDF Downloads 242
27888 Factors of Self-Sustainability in Social Entrepreneurship: Case Studies of ACT Group Čakovec and Friskis and Svettis Stockholm
Authors: Filip Majetić, Dražen Šimleša, Jelena Puđak, Anita Bušljeta Tonković, Svitlana Pinchuk
Abstract:
This paper focuses on the self-sustainability aspect of social entrepreneurship (SE). We define SE as a form of entrepreneurship that is social/ecological mission oriented. This means SE organizations start and run businesses and use them to accomplish their social/ecological missions, i.e. to solve social/ecological problems or fulfill social/ecological needs. Self-sustainability is defined as the capability of an SE organization to operate by relying on the money earned through trading its products in the free market. For various reasons, the achievement of self-sustainability represents a fundamental (business) challenge for many SE organizations. Those that are not able to operate using the money made through commercial activities rely, in order to remain active, on alternative, non-commercial streams of income such as grants, donations, and public subsidies. Starting from this widespread (business) challenge, we are interested in exploring elements that (could) influence self-sustainability in SE organizations. Therefore, the research goal is to empirically investigate some of the self-sustainability factors of two notable SE organizations from different socio-economic contexts. Qualitative research using the multiple case study approach was conducted. ACT Group Čakovec (ACT) from Croatia was selected for the first case because it represents one of the leading and most self-sustainable SE organizations in the region (in 2015, 55% of the organization's budget came from commercial activities); Friskis&Svettis Stockholm (F&S) from Sweden was selected for the second case because it is a rare example of a completely self-sustainable SE organization in Europe (100% of the organization's budget comes from commercial activities). The data collection primarily consists of in-depth interviews. Additionally, the content of some of the organizations' official materials is analyzed (e.g. business reports, marketing materials). The interviewees are selected purposively and include six highly ranked F&S members who represent five different levels in the hierarchy of their organization, and five highly ranked ACT members who represent three different levels in the hierarchy of the organization. All of the interviews cover five themes: a) social values of the organization, b) organization of work, c) non-commercial income sources, d) marketing/collaborations, and e) familiarity with industry characteristics and trends. The gathered data is thematically analyzed through a coding process using the Atlas.ti software for qualitative data analysis. For the purpose of creating thematic categories (codes), open coding is used. The research results intend to provide new theoretical insights on factors of SE self-sustainability and, preferably, encourage practical improvements in the field.
Keywords: Friskis&Svettis, self-sustainability factors, social entrepreneurship, Stockholm
Procedia PDF Downloads 218
27887 Navigating Creditors' Interests in the Context of Business Rescue
Authors: Hermanus J. Moolman
Abstract:
The COVID-19 pandemic had a severe impact on society and companies in South Africa. This raises questions about the position of creditors of companies facing financial distress and the actions that directors should take to cater to the interests of creditors. The extent to which directors owe their duties and consideration to creditors has been the subject of debate. The directors of a solvent company owe their duties to the company in favour of its shareholders. When the company becomes insolvent, creditors are the beneficiaries of the directors' duties. However, the intermittent phase between solvency and insolvency, otherwise referred to as the realm of insolvency, is not accounted for. The purpose of this paper is to determine whether South African company law appropriately addresses the duties that directors owe to creditors and the extent of consideration given to creditors' interests when the company is in the realm of insolvency and has started business rescue proceedings. A comparative study of South Africa, the United States of America, the United Kingdom and international instruments was employed to achieve this purpose. In the United States of America and the United Kingdom, the focus shifts from shareholders to the best interests of creditors when business rescue proceedings commence. Such an approach is not aligned with the purpose of the Companies Act of 2008, which calls for a balance of the interests of all persons affected by a company's financial distress, and will not be suitable for the South African context. Business rescue in South Africa is relatively new when compared to the practices of the United States of America and the United Kingdom, and the entrepreneurial landscape in South Africa is still evolving. The interests of creditors are not the only interests at risk when a company is financially distressed. It is recommended that an enlightened creditor value approach be adopted for South Africa, where the interests of creditors, albeit paramount, are balanced with those of other stakeholders. This approach optimises a gradual shift in the duties of directors from shareholders to creditors, without disregarding the interests of shareholders.
Keywords: business rescue, shareholders, creditors, financial distress, balance of interests, alternative remedies, company law
Procedia PDF Downloads 44
27886 Systematic Literature Review of Therapeutic Use of Autonomous Sensory Meridian Response (ASMR) and Short-Term ASMR Auditory Training Trial
Authors: Christine H. Cubelo
Abstract:
This study consists of two parts: a systematic review of current publications on the therapeutic use of autonomous sensory meridian response (ASMR) and a within-subjects auditory training trial using ASMR videos. The main intent is to explore ASMR as potentially therapeutically beneficial for those with atypical sensory processing. Many hearing-related disorders and mood or anxiety symptoms overlap with symptoms of sensory processing issues. For this reason, the inclusion and exclusion criteria of the systematic review were formulated to produce optimal search outcomes and to avoid overly confined criteria that would limit the yielded results. The criteria for inclusion in the review for Part 1 are (1) adult participants diagnosed with hearing loss or atypical sensory processing, (2) inclusion of measures related to ASMR as a treatment method, and (3) publication between 2000 and 2022. A total of 1,088 publications were found in the preliminary search, and a total of 13 articles met the inclusion criteria. A total of 14 participants completed the trial and the post-trial questionnaire. Of all responses, 64.29% agreed that the duration of the auditory training sessions was reasonable. In addition, 71.43% agreed that the training improved their perception of music. Lastly, 64.29% agreed that the training improved their perception of a primary talker when other talkers or background noises are present.
Keywords: autonomous sensory meridian response, auditory training, atypical sensory processing, hearing loss, hearing aids
Procedia PDF Downloads 55
27885 Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms
Authors: Jeff Clarine, Chang-Shyh Peng, Daisy Sang
Abstract:
A bioassay is the measurement of the potency of a chemical substance by its effect on living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug metabolism screening are mined from, and housed in, multiple databases. Bioassay predictions are calculated accordingly to determine further advancement. This paper proposes a four-step preprocessing of datasets for improving bioassay predictions. The first step is instance selection, in which the dataset is divided into training, testing, and validation sets. The second step is discretization, which partitions the data in consideration of accuracy vs. precision. The third step is normalization, where data are normalized between 0 and 1 for subsequent machine learning processing. The fourth step is feature selection, where key chemical properties and attributes are generated. The streamlined results are then analyzed for prediction effectiveness by various machine learning tools, including Pipeline Pilot, R, Weka, and Excel. Experiments and evaluations reveal the effectiveness of various combinations of preprocessing steps and machine learning algorithms in producing more consistent and accurate predictions.
Keywords: bioassay, machine learning, preprocessing, virtual screen
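The four preprocessing steps can be sketched as a small scikit-learn pipeline. Split ratios, bin counts and the number of selected features below are illustrative tuning choices, not values from the paper.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer, MinMaxScaler
from sklearn.feature_selection import SelectKBest, f_classif

def preprocess(X, y, n_bins=10, k_features=20):
    """Four-step sketch: instance selection, discretization,
    0-1 normalization, feature selection (k_features <= n_features)."""
    # 1. instance selection: training / testing / validation split
    X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
    X_te, X_val, y_te, y_val = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)
    # 2. discretization: bin count trades accuracy against precision
    disc = KBinsDiscretizer(n_bins=n_bins, encode="ordinal").fit(X_tr)
    # 3. normalization of the discretized values to [0, 1]
    scale = MinMaxScaler().fit(disc.transform(X_tr))
    # 4. feature selection: keep the most informative descriptors
    sel = SelectKBest(f_classif, k=k_features).fit(
        scale.transform(disc.transform(X_tr)), y_tr)
    pipe = lambda A: sel.transform(scale.transform(disc.transform(A)))
    return pipe(X_tr), pipe(X_te), pipe(X_val), y_tr, y_te, y_val
```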
Procedia PDF Downloads 274
27884 Cloud Computing Impact on e-Government Adoption
Authors: Ali Elshabrawy
Abstract:
Cloud computing is expected to be important for e-government in the near future. Governments need it to solve some of their e-government, financial, infrastructure, legacy system and integration problems. It reduces information technology (IT) infrastructure needs and support costs, and offers on-demand infrastructure and computational power, as well as improved collaboration capabilities, which are important for e-government project start-up and sustainability. Budget pressures will continue to drive more and more government IT to hybrid and even public clouds, and more cooperation between cloud service providers and governmental agencies is expected, as is the development of governmental private and community clouds. The motivation to convince governments to use cloud computing services will put pressure on cloud service providers to cope with governments' requirements for interoperability, security standards, open data and integration between their cloud systems. There will be significant legal action arising out of governmental uses of cloud computing, and legislation addressing both IT and business needs and consumer fears and protections. Cloud computing is considered a revolution for IT and e-business in general, and for e-commerce and e-government in particular. Governments face increasing challenges regarding the IT infrastructure required for e-government project implementation, as a result of the lack of financial resources allocated to e-government projects in developed and developing countries. Cloud computing can play a major role in solving some e-government project challenges, such as the lack of financial resources, IT infrastructure and human resources trained to manage e-government applications, as well as interoperability and cost-efficiency challenges, provided that the security issues related to cloud computing usage that are considered critical for e-government projects can be resolved. It is then just a matter of time before cloud service providers find solutions that attract governments as major customers for their business.
Keywords: cloud computing, e-government, adoption, supply side barriers, e-government requirements, challenges
Procedia PDF Downloads 346
27883 Robustness of MIMO-OFDM Schemes for Future Digital TV to Carrier Frequency Offset
Authors: D. Sankara Reddy, T. Kranthi Kumar, K. Sreevani
Abstract:
This paper investigates the impact of carrier frequency offset (CFO) on the performance of different MIMO-OFDM schemes with high spectral efficiency for the next generation of terrestrial digital TV. We show that all the studied MIMO-OFDM schemes are sensitive to CFO when it is greater than 1% of the intercarrier spacing. We also show that the Alamouti scheme is the MIMO scheme most sensitive to CFO.
Keywords: modulation and multiplexing (MIMO-OFDM), signal processing for transmission, carrier frequency offset, future digital TV, imaging and signal processing
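The mechanism behind this sensitivity can be illustrated numerically: a fractional CFO rotates the time-domain samples, producing a common phase error plus intercarrier interference after the FFT. The sketch below applies this to a plain single-antenna OFDM symbol and reports the error vector magnitude; it is a simplified illustration, not a reproduction of the paper's MIMO simulations.

```python
import numpy as np

N = 1024                                        # subcarriers
rng = np.random.default_rng(0)
X = np.exp(1j * np.pi / 4 * (2 * rng.integers(0, 4, N) + 1))  # QPSK symbols
x = np.fft.ifft(X)                              # time-domain OFDM symbol

for eps in (0.001, 0.01, 0.05):                 # CFO / intercarrier spacing
    n = np.arange(N)
    y = x * np.exp(2j * np.pi * eps * n / N)    # CFO rotates each sample
    Y = np.fft.fft(y)
    evm = np.sqrt(np.mean(np.abs(Y - X) ** 2) / np.mean(np.abs(X) ** 2))
    print(f"eps = {eps:5.3f}  EVM = {evm:.3f}")  # error grows with eps
```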
Procedia PDF Downloads 487
27882 Iris Cancer Detection System Using Image Processing and Neural Classifier
Authors: Abdulkader Helwan
Abstract:
Iris cancer, also called intraocular melanoma, is a cancer that starts in the iris, the colored part of the eye that surrounds the pupil. There is a need for an accurate and cost-effective iris cancer detection system, since the techniques currently available are still not efficient. The combination of image processing and artificial neural networks is highly effective for the diagnosis and detection of iris cancer. Image processing techniques improve the diagnosis of the cancer by enhancing the quality of the images so that physicians can diagnose properly, while neural networks can help in making the decision as to whether the eye is cancerous or not. This paper aims to develop an intelligent system that simulates human visual detection of intraocular melanoma, or iris cancer. The suggested system combines both image processing techniques and neural networks. The images are first converted to grayscale, filtered, and then segmented using the Prewitt edge detection algorithm to detect the iris and sclera circles and the cancer. Principal component analysis is used to reduce the image size and to extract features. These features are then used as inputs to a neural network, which is capable of deciding whether the eye is cancerous or not through experience acquired over many training iterations on different normal and abnormal eye images during the training phase. Normal images are obtained from a public database available on the internet, “Mile Research”, while the abnormal ones are obtained from another database, “eyecancer”. The experimental results for the proposed system show high accuracy, 100%, for detecting cancer and making the right decision.
Keywords: iris cancer, intraocular melanoma, cancerous, prewitt edge detection algorithm, sclera
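A hedged sketch of the described pipeline (grayscale conversion, filtering, Prewitt edge detection, PCA feature reduction, neural classifier) using scikit-image and scikit-learn; the Gaussian pre-filter, layer size and component count are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from skimage import color, filters
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def edge_features(images):
    """Grayscale conversion, smoothing and Prewitt edge detection,
    mirroring the preprocessing chain the abstract describes.
    Assumes RGB input images of identical size."""
    return np.array([
        filters.prewitt(filters.gaussian(color.rgb2gray(im), sigma=1)).ravel()
        for im in images
    ])

def train_detector(train_images, labels, n_components=20):
    """PCA compresses the edge maps; an MLP decides cancerous or not.
    Requires at least n_components training images."""
    X = edge_features(train_images)
    pca = PCA(n_components=n_components).fit(X)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(pca.transform(X), labels)       # labels: 1 = cancerous, 0 = normal
    return pca, clf

# prediction: clf.predict(pca.transform(edge_features(new_images)))
```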
Procedia PDF Downloads 503
27881 Waste Derived from Refinery and Petrochemical Plants Activities: Processing of Oil Sludge through Thermal Desorption
Authors: Anna Bohers, Emília Hroncová, Juraj Ladomerský
Abstract:
Oil sludge, whose main characteristic is high acidity, is a waste product generated from the operation of refinery and petrochemical plants. A former refinery and petrochemical plant, Petrochema Dubová, exists in Slovakia as well. Its activity was to process crude oil through sulfonation and adsorption technology for the production of lubricating and special oils, synthetic detergents and special white oils for cosmetic and medical purposes. Seventy years ago, the period when this historical acid sludge burden was created, production took precedence over environmental awareness. That is the reason why, as in many countries, a historical environmental burden is still present in Slovakia: 229,211 m3 of oil sludge in the middle of the Nízke Tatry National Park. None of the treatment methods tried, biological or non-biological, proved suitable for processing or recovery, owing to various factors: strong aggressiveness, difficulty of handling because of its sludgy, liquid state, and the like. Incineration was also tested as a potential solution, but it proved unsuitable because the concentration of SO2 in the combustion gases was too high and could not be decreased below the acceptable value of 2000 mg.mn-3. That is why the operation of the incineration plant was terminated, and the acid sludge landfills remain to this day. The objective of this paper is to present a new possibility for the processing and valorization of this acidic sludge waste. The processing of oil sludge was performed through effective separation, thermal desorption technology, by which it is possible to split the sludgy material into the matrix (soil, sediments) and organic contaminants. In order to boost the efficiency of processing acid sludge through thermal desorption, the work presents the possibility of applying an original technology, the Method of Blowing Decomposition, for recovering organic matter as technological lubricating oil.
Keywords: hazardous waste, oil sludge, remediation, thermal desorption
Procedia PDF Downloads 200
27880 AI Software Algorithms for Drivers Monitoring within Vehicles Traffic - SiaMOTO
Authors: Ioan Corneliu Salisteanu, Valentin Dogaru Ulieru, Mihaita Nicolae Ardeleanu, Alin Pohoata, Bogdan Salisteanu, Stefan Broscareanu
Abstract:
Creating a personalized statistic for an individual within the population using IT systems, based on the searches and intercepted spheres of interest they manifest, is just one 'atom' of the artificial intelligence analysis network. However, the ability to generate statistics based on individual data intercepted from large demographic areas leads to reasoning like that issued by a human mind with global strategic ambitions. The DiaMOTO device is a technical sensory system that allows the interception of car events caused by a driver, positioning them in time and space. The device's connection to the vehicle allows the creation of a data source whose analysis can produce psychological and behavioural profiles of the drivers involved. The SiaMOTO system collects data from many vehicles equipped with DiaMOTO, driven by many different drivers, each with a unique fingerprint in their approach to driving. In this paper, we aim to explain the software infrastructure of the SiaMOTO system, a system designed to monitor and improve driver behaviour, as well as the criteria and algorithms underlying the intelligent analysis process.
Keywords: artificial intelligence, data processing, driver behaviour, driver monitoring, SiaMOTO
Procedia PDF Downloads 91
27879 Analysis of Translational Ship Oscillations in a Realistic Environment
Authors: Chen Zhang, Bernhard Schwarz-Röhr, Alexander Härting
Abstract:
To acquire accurate ship motions at the center of gravity, a single low-cost inertial sensor is utilized on board to measure the ship's oscillating motions. The three-axis accelerations and three-axis rotational rates provided by the sensor are used as observations. The mathematical model for processing the observation data includes determination of the distance vector between the sensor and the center of gravity in the x, y, and z directions. After setting up the transfer matrix from the sensor's own coordinate system to the ship's body frame, an extended Kalman filter is applied to deal with the nonlinearities between the ship motion in the body frame and the observation information in the sensor's frame. As a side effect, the method eliminates sensor noise and other unwanted errors. The results are not only roll and pitch, but also linear motions, in particular heave and surge at the center of gravity. For testing, we resort to measurements recorded on a small vessel in a well-defined sea state. With response amplitude operators computed numerically by commercial software (Seaway), motion characteristics are estimated. These agree well with the measurements after processing with the suggested method.
Keywords: extended Kalman filter, nonlinear estimation, sea trial, ship motion estimation
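The filter at the heart of the method can be summarized in a generic predict/update skeleton. The state transition f, measurement model h and their Jacobians, which would encode the lever-arm distance vector and the sensor-to-body transfer matrix, are deliberately left abstract here; this is a sketch of the estimator family, not the authors' formulation.

```python
import numpy as np

class ExtendedKalmanFilter:
    """Minimal EKF skeleton for fusing IMU observations into ship
    motions at the centre of gravity."""
    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, F):
        self.x = f(self.x)                       # nonlinear state propagation
        self.P = F @ self.P @ F.T + self.Q       # covariance propagation

    def update(self, z, h, H):
        y = z - h(self.x)                        # innovation in sensor frame
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```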
Procedia PDF Downloads 523
27878 Lock in, Lock Out: A Double Lens Analysis of Local Media Paywall Strategies and User Response
Authors: Mona Solvoll, Ragnhild Kr. Olsen
Abstract:
Background and significance of the study: Newspapers are going through radical changes, with increased competition, eroding readerships and declining advertising resulting in plummeting overall revenues. This has led to a quest for new business models focused on monetizing content. This research paper investigates both how local online newspapers have introduced user payment and how the audience has received these changes. Given the role of local media in keeping their communities informed and those in power accountable, and their potential impact on civic engagement and cultural integration in local communities, the business model innovations of local media deserve far more research interest. Empirically, the findings are interesting for local journalists, local media managers and local advertisers. Basic methodologies: The study is based on interviews with commercial leaders in 20 Norwegian local newspapers, in addition to national survey data from 1600 respondents among local media users. The interviews were conducted in the second half of 2015, while the survey was conducted in September 2016. Theoretically, the study draws on the business model framework. Findings: The analysis indicates that paywalls aim more at reducing digital cannibalisation of print revenue than at creating new digital income. The newspapers are mostly concerned with retaining "old" print subscribers and transforming them into digital subscribers. However, this strategy may come at a high price for newspapers if their defensive print strategy drives away younger digital readers and hampers their recruitment of new audiences, as some previous studies have indicated. Analysis of young readers' news habits indicates that attracting the younger audience to traditional local news providers is particularly challenging and that they are more prone to seek alternative news sources than the older audience is. Conclusion: The paywall strategy applied by the local newspapers may be well suited to stabilising print subscription figures and facilitating more tailored and better services for already existing customers, but far less suited to attracting new ones. The paywall is a short-sighted strategy, which drives away younger readers and paves the road for substitute offerings, particularly Facebook.
Keywords: business model, newspapers, paywall, user payment
Procedia PDF Downloads 277
27877 Image Segmentation Techniques: Review
Authors: Lindani Mbatha, Suvendi Rimer, Mpho Gololo
Abstract:
Image segmentation is the process of dividing an image into several sections, such as an object's background and foreground. It is a critical technique in both image processing tasks and computer vision. Most image segmentation algorithms have been developed for gray-scale images, and comparatively little research and few algorithms have addressed color images. Most image segmentation algorithms or techniques vary based on the input data and the application. Nearly all of the techniques are unsuitable for noisy environments. Much of the work that has been done uses the Markov Random Field (MRF), which involves heavy computation but is said to be robust to noise. In recent years, image segmentation has been applied to problems such as simplifying the processing of an image, interpreting the contents of an image, and easing the analysis of an image. This article reviews and summarizes some of the image segmentation techniques and algorithms that have been developed over the past years. The techniques include convolutional neural networks (CNNs), edge-based techniques, region growing, clustering, thresholding techniques, and so on. The advantages and disadvantages of medical ultrasound image segmentation techniques are also discussed. The article also addresses the applications of image segmentation and potential future developments around it. This review article concludes that no technique is perfectly suitable for segmenting all the different types of images, but that the use of hybrid techniques yields more accurate and efficient results.
Keywords: clustering-based, convolution-network, edge-based, region-growing
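Two of the reviewed technique families, thresholding and clustering, fit in a few lines; the sketch below uses Otsu thresholding and intensity k-means on a sample image. The cluster count is an arbitrary illustrative choice.

```python
from skimage import data, filters
from sklearn.cluster import KMeans

img = data.coins()                          # sample grayscale image

# Thresholding: Otsu separates foreground objects from background
mask = img > filters.threshold_otsu(img)

# Clustering: k-means groups pixels by intensity into k regions
k = 3
labels = (KMeans(n_clusters=k, n_init=10, random_state=0)
          .fit_predict(img.reshape(-1, 1))
          .reshape(img.shape))
```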
Procedia PDF Downloads 96
27876 Detecting Indigenous Languages: A System for Maya Text Profiling and Machine Learning Classification Techniques
Authors: Alejandro Molina-Villegas, Silvia Fernández-Sabido, Eduardo Mendoza-Vargas, Fátima Miranda-Pestaña
Abstract:
The automatic detection of indigenous languages in digital texts is essential to promote their inclusion in digital media. Underrepresented languages, such as Maya, are often excluded from language detection tools like Google’s language-detection library, LANGDETECT. This study addresses these limitations by developing a hybrid language detection solution that accurately distinguishes Maya (YUA) from Spanish (ES). Two strategies are employed: the first focuses on creating a profile for the Maya language within the LANGDETECT library, while the second involves training a Naive Bayes classification model with two categories, YUA and ES. The process includes comprehensive data preprocessing steps, such as cleaning, normalization, tokenization, and n-gram counting, applied to text samples collected from various sources, including articles from La Jornada Maya, a major newspaper in Mexico and the only media outlet that includes a Maya section. After the training phase, a portion of the data is used to create the YUA profile within LANGDETECT, which achieves an accuracy rate above 95% in identifying the Maya language during testing. Additionally, the Naive Bayes classifier, trained and tested on the same database, achieves an accuracy close to 98% in distinguishing between Maya and Spanish, with further validation through F1 score, recall, and logarithmic scoring, without signs of overfitting. This strategy, which combines the LANGDETECT profile with a Naive Bayes model, highlights an adaptable framework that can be extended to other underrepresented languages in future research. This fills a gap in Natural Language Processing and supports the preservation and revitalization of these languages.
Keywords: indigenous languages, language detection, Maya language, Naive Bayes classifier, natural language processing, low-resource languages
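The classifier half of the hybrid can be sketched with scikit-learn: character n-gram counts feeding a multinomial Naive Bayes model. The toy sentences below merely stand in for the La Jornada Maya and Spanish training corpora.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for the study's YUA / ES training data
texts = ["in k'aaba'e' Juan", "bix a beel", "le ko'olelo' jats'uts",
         "me llamo Juan", "cómo estás", "la mujer es hermosa"]
labels = ["YUA", "YUA", "YUA", "ES", "ES", "ES"]

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3), lowercase=True),
    MultinomialNB(),
)
clf.fit(texts, labels)
# glottal-stop apostrophes are a telling character n-gram for Maya
print(clf.predict(["ba'ax ka wa'alik"]))   # expected: ['YUA']
```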
Procedia PDF Downloads 16
27875 Assessing the Social Comfort of the Russian Population with Big Data
Authors: Marina Shakleina, Konstantin Shaklein, Stanislav Yakiro
Abstract:
The digitalization of modern human life over the last decade has facilitated the acquisition, storage, and processing of data, which are used to detect changes in consumer preferences and to improve the internal efficiency of the production process. This emerging trend has attracted academic interest in the use of big data in research. The study focuses on modeling the social comfort of the Russian population for the period 2010-2021 using big data. Big data provides enormous opportunities for understanding human interactions at the scale of society, with rich spatial and temporal dynamics. One of the most popular big data sources is Google Trends. The methodology for assessing social comfort using big data involves several steps: 1. 574 words were selected based on the Harvard IV-4 Dictionary, adjusted to fit the reality of everyday Russian life. The set of keywords was further cleansed by excluding queries consisting of verbs and words with several lexical meanings. 2. Search queries were processed to ensure comparability of results: transformation of the data to a 10-point scale, elimination of popularity peaks, detrending, and deseasoning. The proposed methodology for keyword search and Google Trends processing was implemented as a script in the Python programming language. 3. Block and summary integral indicators of social comfort were constructed using the first modified principal component, yielding the weighting coefficients of the block components. According to the study, social comfort is described by 12 blocks: ‘health’, ‘education’, ‘social support’, ‘financial situation’, ‘employment’, ‘housing’, ‘ethical norms’, ‘security’, ‘political stability’, ‘leisure’, ‘environment’, ‘infrastructure’. According to the model, the summary integral indicator increased by 54% to 4.631 points; the average annual growth rate was 3.6%, which is 2.7 percentage points higher than the rate of economic growth. The value of the indicator describing social comfort in Russia is determined 26% by ‘social support’, 24% by ‘education’, 12% by ‘infrastructure’, 10% by ‘leisure’, and the remaining 28% by the other blocks. Among the 25% most popular searches, 85% are negative in nature and mainly related to the blocks ‘security’, ‘political stability’ and ‘health’, for example, ‘crime rate’ and ‘vulnerability’. Among the 25% least popular queries, 99% were positive and mostly related to the blocks ‘ethical norms’, ‘education’ and ‘employment’, for example, ‘social package’ and ‘recycling’. In conclusion, the introduction of the latent category ‘social comfort’ into the scientific vocabulary deepens the theory of the quality of life of the population, allowing the study of an individual's involvement in society and expanding the subjective aspect of the measurement of various indicators. An integral assessment of social comfort shows the overall development of the phenomenon over time and space and quantitatively evaluates ongoing socio-economic policy. The application of big data to the assessment of latent categories gives stable results, which opens up possibilities for their practical implementation.
Keywords: big data, Google trends, integral indicator, social comfort
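A hedged sketch of steps 2 and 3: rescaling a Google Trends series to a 10-point scale, dampening peaks, detrending and deseasoning, then weighting a block's keyword series by the first principal component. The exact transformations used in the study (for instance, its "modified" principal component) are assumptions here.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def prepare_series(s, sigma=3.0):
    """s: monthly Google Trends series (pandas Series, DatetimeIndex,
    values 0-100). Rescale, clip peaks, detrend, deseasonalize."""
    s = s / 10.0                                         # 0-100 -> 10-point scale
    s = s.clip(upper=s.mean() + sigma * s.std())         # dampen popularity peaks
    t = np.arange(len(s))
    s = s - np.polyval(np.polyfit(t, s, 1), t)           # remove linear trend
    return s - s.groupby(s.index.month).transform("mean")  # remove seasonality

def block_indicator(block_df):
    """Weight a block's keyword series by the first principal component."""
    pca = PCA(n_components=1).fit(block_df)
    w = np.abs(pca.components_[0])
    w /= w.sum()
    return block_df @ w

# usage on a synthetic series
idx = pd.date_range("2010-01", periods=144, freq="MS")
raw = pd.Series(np.random.default_rng(0).integers(0, 101, 144), index=idx)
clean = prepare_series(raw.astype(float))
```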
Procedia PDF Downloads 200
27874 The Effectiveness of Cash Flow Management by SMEs in the Mafikeng Local Municipality of South Africa
Authors: Ateba Benedict Belobo, Faan Pelser, Ambe Marcus
Abstract:
Aims: This study arose from repeated complaints, received by electronic mail, about the underperformance of Mafikeng small and medium-sized enterprises after the global financial crisis. The authors were of the view that this poor performance could be a result of negative effects on the cash flow of these businesses due to volatility in the general business environment prior to the global crisis. Thus, the paper was mainly aimed at determining the shortcomings experienced by these SMEs with regard to cash flow management. It was also aimed at suggesting possible measures to improve the cash flow management of these SMEs in this tough time. Methods: A case study was conducted on 3 beverage suppliers, 27 bottle stores, the 3 largest fast-moving consumer goods supermarkets and 7 automobile enterprises in the Mafikeng local municipality. A mixed-method research design was employed, and purposive sampling was used to select the SMEs that participated. The views and experiences of the participants were captured through in-depth interviews. Data from the empirical investigation were interpreted using open coding and a simple percentage formula. Results: Findings from the empirical research reflected that the majority of Mafikeng SMEs suffered poor operational performance prior to the global financial crisis, primarily as a result of poor cash flow management. However, the empirical outcome also indicated other secondary factors contributing to this poor operational performance. Conclusion: Finally, the authors proposed possible measures that could be used to improve cash flow management and to address the other factors affecting the operational performance of SMEs in the Mafikeng local municipality in order to achieve better business performance.
Keywords: cash flow, business performance, global financial crisis, SMEs
Procedia PDF Downloads 439
27873 Strategic Business Solutions for an Ageing SME
Authors: N. G. Teik Hiang, Fathyah Hashim
Abstract:
This is a case study of how strategic management techniques can be used to help resolve the problems faced by an ageing Small and Medium Enterprise (SME). A strategic approach to resolving problems proved possible in this case, despite the general belief that strategic management is useful mostly for large corporations. Small and Medium Enterprises (SMEs) can also use strategic management in managing their business and determining their future course of action and strategies in order to survive in this highly competitive world. Strategic orientation is the key to the survival and development of small and medium enterprises. In order to adapt to fierce market competition, ageing SMEs should improve their competitiveness and operational efficiency. They must therefore establish a sense of strategic management, improve their strategic management skills and, drawing on their own unique characteristics, work out practical strategies to develop core competitiveness in fierce market competition in order to be sustainable. In this case, the internal strengths and weaknesses of an SME were identified. Strategic internal and external factors were classified and further utilized to formulate potential strategies for countering the various problems faced by the SME. These strategies were then matched so as to take advantage of the opportunities, overcome the weaknesses and minimize the threats the SME is facing. Tan, a consultant who was given the opportunity to formulate a plan for the business, started with environmental scanning (internal and external environmental analysis), assessing the company's strengths and weaknesses, strategy generation, analysis and evaluation. He had numerous discussions with the owner of the business and the senior management in order to match the key internal and external factors and formulate alternative strategies for solving the problems the company is facing. Some of the recommendations or solutions were inspired by the owner of the business, who is a very enterprising and experienced businessman.
Keywords: strategic orientation, strategic management, SME, core competitiveness, sustainable
Procedia PDF Downloads 419
27872 Q-Map: Clinical Concept Mining from Clinical Documents
Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala
Abstract:
Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, image analytics, etc. Most of the data in the field are well structured and available in numerical or categorical formats which can be used for experiments directly. But on the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature; it can be found in the form of discharge summaries, clinical notes and procedural notes, which are in human-written narrative format and have neither a relational model nor any standard grammatical structure. An important step in utilizing these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map in this paper, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
Keywords: information retrieval, unified medical language system, syntax based analysis, natural language processing, medical informatics
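The flavour of string matching Q-Map describes, dictionary lookup indexed on a curated knowledge source, can be illustrated with a greedy longest-match over a toy lexicon; the entries and the indexing scheme below are illustrative assumptions, not Q-Map's actual implementation.

```python
import re

# Tiny stand-in for a curated knowledge source such as UMLS
LEXICON = {
    "myocardial infarction": "C0027051",
    "diabetes mellitus": "C0011849",
    "hypertension": "C0020538",
}
# Index: first token -> candidate phrases, longest first
INDEX = {}
for phrase in sorted(LEXICON, key=len, reverse=True):
    INDEX.setdefault(phrase.split()[0], []).append(phrase)

def extract_concepts(text):
    """Greedy longest-match over an indexed lexicon; a sketch of the
    string matching approach, not Q-Map's code."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits, i = [], 0
    while i < len(tokens):
        for phrase in INDEX.get(tokens[i], []):
            words = phrase.split()
            if tokens[i:i + len(words)] == words:
                hits.append((phrase, LEXICON[phrase]))
                i += len(words) - 1
                break
        i += 1
    return hits

print(extract_concepts("Pt has hypertension and diabetes mellitus."))
```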
Procedia PDF Downloads 133
27871 Using Non-Negative Matrix Factorization Based on Satellite Imagery for the Collection of Agricultural Statistics
Authors: Benyelles Zakaria, Yousfi Djaafar, Karoui Moussa Sofiane
Abstract:
Agriculture is fundamental and remains an important sector of the Algerian economy; based on traditional techniques and structures, it generally serves consumption. The collection of agricultural statistics in Algeria is done using traditional methods, which consist of investigating land use through surveys and field work. These statistics suffer from problems such as poor data quality, the long delay between collection and final availability, and high cost compared to their limited use. The objective of this work is to develop a processing chain for a reliable inventory of agricultural land by developing and implementing a new method of extracting information. This methodology allowed us to combine remote sensing data and field data to collect statistics on the areas of different land-use classes. The contribution of remote sensing to the improvement of agricultural statistics, in terms of area, has been studied in the wilaya of Sidi Bel Abbes. It is in this context that we applied a method for extracting information from satellite images. This method, called non-negative matrix factorization, does not consider the pixel as a single entity, but looks for the components that make up the pixel itself. The results obtained by the application of the NMF were compared with field data and with the results obtained by the maximum likelihood method. We observed close agreement between the most important results of the NMF and the field data. We believe that this method of extracting information from satellite data leads to interesting results for different types of land use.
Keywords: blind source separation, hyper-spectral image, non-negative matrix factorization, remote sensing
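A minimal sketch of the unmixing idea with scikit-learn's NMF: each pixel spectrum is decomposed into non-negative abundances of non-negative endmember spectra, so a pixel is treated as a mixture rather than a single entity. The endmember count and initialization are illustrative choices, not the authors' settings.

```python
from sklearn.decomposition import NMF

def unmix(X, n_endmembers=4):
    """X: hyperspectral cube reshaped to (pixels, bands), values >= 0.
    Returns per-pixel abundances W, endmember spectra H, and each
    pixel's dominant land-cover component."""
    model = NMF(n_components=n_endmembers, init="nndsvd", max_iter=500)
    W = model.fit_transform(X)         # (pixels, endmembers) abundances
    H = model.components_              # (endmembers, bands) spectra
    return W, H, W.argmax(axis=1)      # dominant component per pixel
```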
Procedia PDF Downloads 423
27870 A Method to Evaluate and Compare Web Information Extractors
Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman
Abstract:
Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data that is gathered from the Web. Sometimes, these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined is the data that is displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components that are typically configured by means of rules that are tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information is exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents. b) It provides a method that relies on statistically sound tests to support the conclusions drawn; the previous work does not provide clear guidelines or recommend statistically sound tests, but rather a survey that collects many features to take into account as well as related work. c) We provide a novel method to compute the performance measures for unsupervised proposals; otherwise, these would require the intervention of a user to compute them using the annotations on the evaluation sets and the information extracted. Our contributions will definitely help researchers in this area make sure that they have advanced the state of the art not only conceptually, but from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers.
Keywords: web information extractors, information extraction evaluation method, Google scholar, web
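For the supervised case, the standard effectiveness measures reduce to set comparisons between extracted slot-value pairs and gold annotations, as in this sketch; the paper's statistical testing layer and its handling of unsupervised proposals are not reproduced here.

```python
def precision_recall_f1(extracted, gold):
    """Effectiveness measures over sets of extracted (slot, value)
    pairs versus hand-crafted annotations."""
    tp = len(extracted & gold)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {("price", "9.99"), ("title", "Foo"), ("author", "Bar")}
extracted = {("price", "9.99"), ("title", "Foo"), ("title", "Baz")}
print(precision_recall_f1(extracted, gold))  # (0.667, 0.667, 0.667)
```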
Procedia PDF Downloads 248
27869 Business Process Management and Organizational Culture in Big Companies: Cross-Country Analysis
Authors: Dalia Suša Vugec
Abstract:
Business process management (BPM) is a widely used approach focused on designing, mapping, changing, managing and analyzing the business processes of an organization, which eventually leads to better performance and many other benefits. Since every organization strives to improve its performance in order to be sustainable and to remain competitive on the market in the long term, numerous organizations are nowadays adopting and implementing BPM. However, not all organizations are equally successful in doing so. One way of measuring BPM success is to measure its maturity by calculating the Process Performance Index (PPI) from ten BPM success factors. Still, although BPM is a holistic concept, organizational culture is not taken into consideration in calculating the PPI. Hence, the aim of this paper is twofold: first, to explore and analyze the current state of the BPM success factors within big organizations from Slovenia, Croatia, and Austria, and second, to analyze the structure of organizational culture within the observed companies, focusing on its link with the BPM success factors. The presented study is based on the results of a questionnaire conducted as part of the PROSPER project (IP-2014-09-3729), financed by the Croatian Science Foundation. The results of the questionnaire reveal differences between the three observed countries in the achieved levels of the BPM success factors, and therefore in overall BPM maturity. Moreover, the structure of organizational culture also differs across the three countries. This paper discusses the revealed differences between the countries as well as the link between organizational culture and the BPM success factors.
Keywords: business process management, BPM maturity, BPM success factors, organizational culture, process performance index
Procedia PDF Downloads 119