Search results for: open source data
27183 A New Source on Ottoman Self-Narratives: Kulakzade Mahmud Pasha’s Dream Diary
Authors: Semra Çörekçi
Abstract:
In this study, a new source on Ottoman self-narratives, Kulakzâde Mahmud Paşa’s Düşname (Dreambook), will be introduced to illustrate how dreams can provide a ground for historical analysis. The manuscript appears to be the private notebook of an Ottoman official, Mahmud Pasha, who lived and operated in Rumelia in the early eighteenth century. It provides insight into the ordinary, daily concerns of a bureaucrat who had the knowledge and tools to record them in writing. On one side of the notebook, Mahmud Pasha recorded his travels and appointments in 1730-1731, noting the places he reached and stayed each day. On the reverse side, he kept a record of his dreams and named that part of his notebook Düşname. Because he recorded his dreams daily and in writing, they were well preserved in a dream diary. This study aims to portray the social, cultural, and psychic life of an early modern Ottoman bureaucrat. It will uncover the ways and means by which he interpreted his environment, as well as how he made meaning of his dreams within the social milieu and historical context in which he lived. The first part will focus on 'official dreams', uncovering how his official life and ambitions coincided with his spiritual life. Related to this, the connection between anxiety and dream narratives will be evaluated, as dreams in which the mundane concerns of securing a post occupied the most central place in the construction of his narrative. A further point will be made by questioning Mahmud Pasha’s possible Sufi connections and his familiarity with the tradition of dream interpretation. Finally, considering Mahmud Pasha’s inclusion of others’ dreams in his Düşnâme, the issue of dream-telling will be examined in order to reveal how dreams were interconnected and how they created a space for social gathering.
Keywords: Ottoman self-narratives, dreams, diary, Ottoman cultural history
Procedia PDF Downloads 250
27182 Improving Security in Healthcare Applications Using Federated Learning System With Blockchain Technology
Authors: Aofan Liu, Qianqian Tan, Burra Venkata Durga Kumar
Abstract:
Data security is of the utmost importance in the healthcare area, as sensitive patient information is constantly transmitted and analyzed by many different parties. The use of federated learning, which enables data to be evaluated locally on devices rather than being transferred to a central server, has emerged as a potential solution for protecting the privacy of user information. To protect against data breaches and unauthorized access, however, federated learning alone might not be adequate. In this context, the application of blockchain technology could provide the system with extra protection. This study proposes a distributed federated learning system built on blockchain technology in order to enhance security in healthcare. This makes it possible for a wide variety of healthcare providers to work together on data analysis without raising concerns about the confidentiality of the data. The technical aspects of the system, including the design and implementation of distributed learning algorithms, consensus mechanisms, and smart contracts, are also investigated as part of this process. The proposed technique is a workable alternative that addresses healthcare security concerns while also fostering collaborative research and the exchange of data.
Keywords: data privacy, distributed system, federated learning, machine learning
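As a minimal illustration of the federated-averaging idea this abstract builds on (models trained locally, only parameters shared and aggregated), here is a rough NumPy sketch; the function names and averaging scheme are assumptions, not the authors' system, and the blockchain layer is omitted.

```python
# Minimal federated-averaging sketch (assumed, not the authors' implementation).
# Each "hospital" trains a logistic-regression-style model locally; only the
# weights leave the client, and a coordinator averages them (FedAvg-style).
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent pass; raw data never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)        # logistic-loss gradient step
    return w

def federated_round(global_w, clients):
    """Average locally updated weights, weighted by client data size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
w = np.zeros(4)
for _ in range(10):                             # ten communication rounds
    w = federated_round(w, clients)
print("global model weights:", w)
```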
Procedia PDF Downloads 133
27181 A Concept of Data Mining with XML Document
Authors: Akshay Agrawal, Anand K. Srivastava
Abstract:
The increasing amount of XML datasets available to casual users increases the necessity of investigating techniques to extract knowledge from these data. Data mining is widely applied in the database research area in order to extract frequent correlations of values from both structured and semi-structured datasets. The increasing availability of heterogeneous XML sources has raised a number of issues concerning how to represent and manage these semi-structured data. In recent years, owing to the importance of managing these resources and extracting knowledge from them, many methods have been proposed to represent and cluster them in different ways.
Keywords: XML, similarity measure, clustering, cluster quality, semantic clustering
Procedia PDF Downloads 381
27180 Determinants of Rural Household Effective Demand for Biogas Technology in Southern Ethiopia
Authors: Mesfin Nigussie
Abstract:
The objectives of the study were to identify factors affecting rural households’ willingness to install a biogas plant and the amount they are willing to pay, in order to examine determinants of effective demand for biogas technology. A multistage sampling technique was employed to select 120 respondents for the study. A binary probit regression model was employed to identify factors affecting rural households’ decision to install biogas technology. The probit model results revealed that household size, total household income, access to extension services related to biogas, access to credit services, proximity to water sources, households’ perception of the quality of biogas, a perception index about the attributes of biogas, households’ perception of the installation cost of biogas, and the availability of other energy sources were statistically significant in determining a household’s decision to install biogas. A Tobit model was employed to examine determinants of a rural household’s amount of willingness to pay. Based on the model results, the age of the household head, total annual income of the household, access to extension services, and the availability of other energy sources were significant variables influencing willingness to pay. Giving due consideration to extension services, the availability of credit or subsidies, improving the quality of biogas technology design, and minimizing installation costs by using locally available materials are the main suggestions of this research to help create effective demand for biogas technology.
Keywords: biogas technology, effective demand, probit model, tobit model, willingness to pay
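For readers unfamiliar with the first modelling step, the sketch below shows how a binary probit of the installation decision could be fitted with statsmodels; the column names and synthetic data are hypothetical stand-ins, not the study's variables, and the Tobit stage would require a separate censored-regression routine.

```python
# Hypothetical probit sketch for an adoption decision, using statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "household_size": rng.integers(1, 10, n),
    "income": rng.normal(20000, 5000, n),
    "extension_access": rng.integers(0, 2, n),
    "credit_access": rng.integers(0, 2, n),
})
# Synthetic adoption decision, only so the example runs end to end.
latent = 0.2 * df["household_size"] + 0.0001 * df["income"] + rng.normal(size=n)
df["install_biogas"] = (latent > latent.mean()).astype(int)

X = sm.add_constant(df[["household_size", "income", "extension_access", "credit_access"]])
probit_res = sm.Probit(df["install_biogas"], X).fit(disp=False)
print(probit_res.summary())
```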
Procedia PDF Downloads 140
27179 Evaluating the Total Costs of a Ransomware-Resilient Architecture for Healthcare Systems
Authors: Sreejith Gopinath, Aspen Olmsted
Abstract:
This paper is based on our previous work that proposed a risk-transference-based architecture for healthcare systems to store sensitive data outside the system boundary, rendering the system unattractive to would-be bad actors. This architecture also allows a compromised system to be abandoned and a new system instance spun up in place to ensure business continuity without paying a ransom or engaging with a bad actor. This paper delves into the details of various attacks we simulated against the prototype system. In the paper, we discuss at length the time and computational costs associated with storing and retrieving data in the prototype system, abandoning a compromised system, and setting up a new instance with existing data. Lastly, we simulate some analytical workloads over the data stored in our specialized data storage system and discuss the time and computational costs associated with running analytics over data in a specialized storage system outside the system boundary. In summary, this paper discusses the total costs of data storage, access, and analytics incurred with the proposed architecture.Keywords: cybersecurity, healthcare, ransomware, resilience, risk transference
Procedia PDF Downloads 132
27178 The Effects of Above-Average Precipitation after Extended Drought on Phytoplankton in Southern California Surface Water Reservoirs
Authors: Margaret K. Spoo-Chupka
Abstract:
The Metropolitan Water District of Southern California (MWDSC) manages surface water reservoirs that are a source of drinking water for more than 19 million people in Southern California. These reservoirs experience periodic planktonic cyanobacteria blooms that can impact water quality. MWDSC imports water from two sources – the Colorado River (CR) and the State Water Project (SWP). The SWP brings supplies from the Sacramento-San Joaquin Delta that are characterized as having higher nutrients than CR water. Above average precipitation in 2017 after five years of drought allowed the majority of the reservoirs to fill. Phytoplankton was analyzed during the drought and after the drought at three reservoirs: Diamond Valley Lake (DVL), which receives SWP water exclusively, Lake Skinner, which can receive a blend of SWP and CR water, and Lake Mathews, which generally receives only CR water. DVL experienced a significant increase in water elevation in 2017 due to large SWP inflows, and there were no significant changes to total phytoplankton biomass, Shannon-Wiener diversity of the phytoplankton, or cyanobacteria biomass in 2017 compared to previous drought years despite the higher nutrient loads. The biomass of cyanobacteria that could potentially impact DVL water quality (Microcystis spp., Aphanizomenon flos-aquae, Dolichospermum spp., and Limnoraphis birgei) did not differ significantly between the heavy precipitation year and drought years. Compared to the other reservoirs, DVL generally has the highest concentration of cyanobacteria due to the water supply having greater nutrients. Lake Mathews’ water levels were similar in drought and wet years due to a reliable supply of CR water and there were no significant changes in the total phytoplankton biomass, phytoplankton diversity, or cyanobacteria biomass in 2017 compared to previous drought years. The biomass of cyanobacteria that could potentially impact water quality at Lake Mathews (L. birgei and Microcystis spp.) did not differ significantly between 2017 and previous drought years. Lake Mathews generally had the lowest cyanobacteria biomass due to the water supply having lower nutrients. The CR supplied most of the water to Lake Skinner during drought years, while the SWP was the primary source during 2017. This change in water source resulted in a significant increase in phytoplankton biomass in 2017, no significant change in diversity, and a significant increase in cyanobacteria biomass. Cyanobacteria that could potentially impact water quality at Skinner included: Microcystis spp., Dolichospermum spp., and A.flos-aquae. There was no significant difference in Microcystis spp. biomass in 2017 compared to previous drought years, but biomass of Dolichospermum spp. and A.flos-aquae were significantly greater in 2017 compared to previous drought years. Dolichospermum sp. and A. flos-aquae are two cyanobacteria that are more sensitive to nutrients than Microcystis spp., which are more sensitive to temperature. Patterns in problem cyanobacteria abundance among Southern California reservoirs as a result of above-average precipitation after more than five years of drought were most closely related to nutrient loading.Keywords: drought, reservoirs, cyanobacteria, and phytoplankton ecology
Procedia PDF Downloads 285
27177 A DEA Model in a Multi-Objective Optimization with Fuzzy Environment
Authors: Michael Gidey Gebru
Abstract:
Most DEA models operate in a static environment with input and output parameters chosen as deterministic data. However, due to the ambiguity brought on by shifting market conditions, input and output data are not always precisely gathered in real-world scenarios. Fuzzy numbers can be used to address this kind of ambiguity in input and output data. Therefore, this work aims to extend crisp DEA to DEA in a fuzzy environment. In this study, the input and output data are regarded as triangular fuzzy numbers. The DEA model with a fuzzy environment is then solved using a multi-objective method to gauge the efficiency of the decision-making units. Finally, the developed DEA model is illustrated with an application to real data from 50 educational institutions.
Keywords: efficiency, DEA, fuzzy, decision making units, higher education institutions
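For reference, a generic input-oriented CCR multiplier model with triangular fuzzy data is sketched below; this is an illustrative form, not necessarily the exact fuzzy DEA model solved in the paper. A common multi-objective treatment evaluates the model at the lower, modal, and upper values of the fuzzy numbers (or over α-cuts) and aggregates the resulting efficiencies.

```latex
% Generic input-oriented CCR multiplier model with triangular fuzzy data
% (illustrative form; the paper's exact model may differ).
\begin{align*}
\max_{u,v}\quad & \theta_o = \sum_{r=1}^{s} u_r\,\tilde{y}_{ro},
  && \tilde{y}_{rj} = (y_{rj}^{l},\, y_{rj}^{m},\, y_{rj}^{u}) \\
\text{s.t.}\quad & \sum_{i=1}^{m} v_i\,\tilde{x}_{io} = 1,
  && \tilde{x}_{ij} = (x_{ij}^{l},\, x_{ij}^{m},\, x_{ij}^{u}) \\
 & \sum_{r=1}^{s} u_r\,\tilde{y}_{rj} - \sum_{i=1}^{m} v_i\,\tilde{x}_{ij} \le 0,
  && j = 1,\dots,n, \\
 & u_r \ge 0,\; v_i \ge 0 .
\end{align*}
```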
Procedia PDF Downloads 52
27176 A View of Flexible Housing in China
Authors: L. I. Shanshan
Abstract:
Beginning with a debate over the concept, this essay explains the historical sources and development of flexible housing in China. The first part explores the flexibility contained in traditional houses, while the second systematically analyzes the relevant practices in modern times as three phases: the Embryonic Period (1949-1980), the Systematic Practice (1981-2000), and the Integrated Trend and Prosperity (2001-present). As a conclusion, the generalized notion of flexibility is tentatively discussed.
Keywords: flexibility, long-term effectiveness, variety, social background
Procedia PDF Downloads 284
27175 Comparing Student Performance on Paper-Based versus Computer-Based Formats of Standardized Tests
Authors: Jin Koo
Abstract:
During the coronavirus pandemic, there has been a further increase in demand for computer-based tests (CBT), and CBT has now become an important test mode. The main purpose of this study is to investigate the comparability of student scores obtained from paper-based and computer-based formats of a standardized test in the two subject areas of reading and mathematics. This study also investigates whether there is an interaction effect between test mode (CBT versus paper-based tests, PBT) and gender/ability level in each subject area. The test used in this study is a multiple-choice standardized test for students in grades 8-11. Data were collected during four test administrations across the 2015-16, 2017-18, and 2020-21 school years. This research used a one-factor between-subjects ANOVA to compare the PBT and CBT groups’ test means in each subject area (reading and mathematics). In addition, two-factor between-subjects ANOVAs were conducted to investigate examinee characteristics: gender (male and female), ethnicity (African-American, Asian, Hispanic, multi-racial, and White), and ability level (low, average, and high ability). The author found that students’ test scores in the two subject areas varied across CBT and PBT by gender and ability level, meaning that gender, ethnicity, and ability level were related to the score difference. These results will be discussed in relation to current testing systems. In addition, this study’s results will make school teachers and test developers aware of the possible influence that gender, ethnicity, and ability level have on a student’s score depending on whether the student takes the CBT or PBT.
Keywords: ability level, computer-based, gender, paper-based, test
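For reference, a two-factor between-subjects ANOVA of the kind described (test mode by gender) can be run with statsmodels as sketched below; the data frame is synthetic and the variable names are illustrative.

```python
# Illustrative two-factor between-subjects ANOVA (test mode x gender); the data
# are synthetic placeholders, not the study's scores.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "score": rng.normal(500, 50, n),
    "mode": rng.choice(["CBT", "PBT"], n),
    "gender": rng.choice(["female", "male"], n),
})
model = ols("score ~ C(mode) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and mode x gender interaction
```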
Procedia PDF Downloads 100
27174 Evaluating the Implementation of Public Procurement Principles at Tendering Stage: SME Contractors' Perspective
Authors: Charles Poleni Mukumba, Kahilu Kajimo-Shakantu
Abstract:
Purpose: Principles of public procurement are the foundation of good public procurement, representing best practices in the delivery of public services by the government and its organs. They provide guidance throughout the public procurement cycle to achieve the best value for public resources. The tendering stage of the procurement cycle is the most critical, as tendering information is made available to bidders. This paper evaluates the implementation of public procurement principles at the tendering stage. Design/Methodology/Approach: The research was conducted using qualitative methods with a sample of 18 SME contractors in Lusaka. The sample comprised business owners and managers of purposively selected SME contractors. The collected data were analysed using thematic and content analysis. Findings: The findings indicate inconsistency in bidders' access to information critical for tendering success. Further, the findings suggest that adjustments to technical specifications are made by procuring officials to suit certain preferred bidders. Research Limitations/Implications: The interviews were limited to SME contractors registered with the National Council for Construction and involved in public sector construction works in Lusaka, Zambia. Practical Implications: Implementing the principles of public procurement at the tendering stage creates equal, open, and fair competition for bidders in cost terms to deliver standardised and quality works to the public sector. Originality/Value: The findings reveal how principles of public procurement play a critical role in enhancing the efficient performance of the procurement cycle at the tendering stage.
Keywords: evaluating, implementation, public procurement principles, tendering stage, SME contractors
Procedia PDF Downloads 84
27173 Data-Driven Decision Making: Justification of Not Leaving Class without It
Authors: Denise Hexom, Judith Menoher
Abstract:
Teachers and administrators across America are being asked to use data and hard evidence to inform practice as they begin the task of implementing the Common Core State Standards. Yet the courses they take in schools of education are not preparing teachers or principals to understand the data-driven decision making (DDDM) process or to utilize data in a more sophisticated fashion. DDDM has been around for quite some time; however, it has only recently become systematically and consistently applied in the field of education. This paper discusses the theoretical framework of DDDM; empirical evidence supporting the effectiveness of DDDM; a process a department in a school of education has utilized to implement DDDM; and recommendations for other schools of education that attempt to implement DDDM in their decision-making processes and in their students’ coursework.
Keywords: data-driven decision making, institute of higher education, special education, continuous improvement
Procedia PDF Downloads 387
27172 Quantile Coherence Analysis: Application to Precipitation Data
Authors: Yaeji Lim, Hee-Seok Oh
Abstract:
Coherence analysis measures the linear time-invariant relationship between two data sets and has been studied in various fields such as signal processing, engineering, and medical science. However, classical coherence analysis tends to be sensitive to outliers and focuses only on the mean relationship. In this paper, we generalize the cross periodogram to the quantile cross periodogram, which provides a richer description of the inter-relationship between two data sets. This is a general version of the Laplace cross periodogram. We derive its asymptotic distribution under long-range dependent processes and compare it with ordinary coherence through numerical examples. We also present a real-data example to confirm the usefulness of quantile coherence analysis.
Keywords: coherence, cross periodogram, spectrum, quantile
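The core construction can be sketched as follows: each series is replaced by a centered quantile indicator before the usual cross-periodogram is formed. The helper below is an assumed, unsmoothed illustration, not the estimator analyzed in the paper.

```python
# Rough sketch of a quantile cross-periodogram at level tau: each series is
# replaced by the centered indicator 1{X_t <= q_tau} - tau, then the usual
# cross-periodogram is computed via the FFT.  No smoothing is applied here.
import numpy as np

def quantile_cross_periodogram(x, y, tau):
    n = len(x)
    ix = (x <= np.quantile(x, tau)).astype(float) - tau   # clipped/indicator series
    iy = (y <= np.quantile(y, tau)).astype(float) - tau
    fx, fy = np.fft.rfft(ix), np.fft.rfft(iy)
    freqs = np.fft.rfftfreq(n)
    return freqs, (fx * np.conj(fy)) / (2 * np.pi * n)     # raw cross-periodogram

rng = np.random.default_rng(3)
x = rng.normal(size=512)
y = 0.6 * x + rng.normal(scale=0.8, size=512)              # linearly related series
freqs, qcp = quantile_cross_periodogram(x, y, tau=0.5)
print(freqs[:5], np.abs(qcp[:5]))
```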
Procedia PDF Downloads 390
27171 Interactive Glare Visualization Model for an Architectural Space
Authors: Florina Dutt, Subhajit Das, Matthew Swartz
Abstract:
Lighting design and its impact on indoor comfort conditions are an integral part of good interior design. The impact of lighting in an interior space is manifold, involving many subcomponents such as glare, color, tone, luminance, control, energy efficiency, and flexibility. While other components have been researched and discussed many times, this paper discusses research conducted to understand the glare component produced by artificial lighting sources in an indoor space. Consequently, the paper discusses a parametric model that conveys real-time glare levels in an interior space to the designer/architect. Our end users are architects, and for them it is of utmost importance to know what impression the proposed lighting arrangement and furniture layout will have on indoor comfort quality. This especially involves those furniture elements (or surfaces) that strongly reflect light around the space. Essentially, the designer needs to know the ramifications of discomfort glare at an early stage of the design cycle, when changes to the proposed design can still be made and different solution routes considered for the client. Unfortunately, most existing lighting analysis tools perform rigorous computation and analysis on the back end, eventually making it challenging for the designer to analyze and understand the glare from interior lighting quickly. Moreover, many of them do not focus on the glare aspect of artificial light. That is why, in this paper, we explain a novel approach to approximating interior glare data. In addition, we visualize these data in a color-coded format, expressing the implications of the proposed interior design layout. We focus on making this analysis process computationally fast and fluid, enabling complete user interaction with the capability to vary different ranges of user inputs, adding more degrees of freedom for the user. We test our proposed parametric model on a case study, a computer lab space in our college facility.
Keywords: computational geometry, glare impact in interior space, info visualization, parametric lighting analysis
Procedia PDF Downloads 350
27170 Conception of a Predictive Maintenance System for Forest Harvesters from Multiple Data Sources
Authors: Lazlo Fauth, Andreas Ligocki
Abstract:
For the cost-effective use of harvesters, expensive repairs and unplanned downtimes must be reduced as far as possible. The predictive detection of failing systems and the calculation of intelligent service intervals, necessary to avoid these factors, require in-depth knowledge of the machines' behavior. Such know-how requires permanent monitoring of the machine state from different technical perspectives. In this paper, three approaches will be presented as they are currently pursued in the publicly funded project PreForst at Ostfalia University of Applied Sciences. These include the intelligent linking of workshop and service data, sensors on the harvester, and a special online hydraulic oil condition monitoring system. Furthermore, the paper shows potentials as well as challenges for the use of these data in the conception of a predictive maintenance system.
Keywords: predictive maintenance, condition monitoring, forest harvesting, forest engineering, oil data, hydraulic data
Procedia PDF Downloads 145
27169 Empowering Persons with Disabilities in Indonesia: Translating the Disability Law into Practice
Authors: Marthella Rivera Roidatua
Abstract:
Since the release of the Convention on the Rights of Persons with Disabilities in 2006, disability has become a mainstreamed global issue. Many developed countries have shown continuous effort to improve their disability employment policies, for example, the US and the UK with their integrated support systems through disability benefits. Relatively little recent research on developing countries is available. Surprisingly, Indonesia recently enacted Law No. 8/2016 on Disability, which boldly highlights integrating disabled people into the workforce. This shows positive progress, shifting the traditional perspective towards what Tom Shakespeare describes as the social model of disability. But the main question is how this law can support disabled people in accessing and maintaining paid work. Thus, besides earlier literature reviews, interviews with the leading sectors, the Ministry of Social Affairs and the Ministry of Manpower, were conducted to examine the government's attitude towards disabled workers. Insights from two local social enterprises on disability were also engaged to build a better perspective. The various sources of data were triangulated and then analysed with a thematic approach. The results encourage the Indonesian government to collaborate more closely with other impactful local organisations in promoting disability employment. In the end, this paper also recommends that the government make reasonable adjustments and provide a practical guideline for companies in hiring disabled workers.
Keywords: disability, employment, policy, Indonesia, collaboration, guidelines
Procedia PDF Downloads 241
27168 Sampled-Data Control for Fuel Cell Systems
Authors: H. Y. Jung, Ju H. Park, S. M. Lee
Abstract:
A sampled-data controller is presented for solid oxide fuel cell systems expressed by a sector-bounded nonlinear model. Sector-bounded nonlinear systems have a feedback connection between a linear dynamical system and a nonlinearity satisfying certain sector-type constraints. The sampled-data control scheme is also very useful because it can handle digital controllers, and increasing research effort has been devoted to sampled-data control systems with the development of modern high-speed computers. The proposed control law is obtained by solving a convex problem satisfying several linear matrix inequalities. Simulation results are given to show the effectiveness of the proposed design method.
Keywords: sampled-data control, fuel cell, linear matrix inequalities, nonlinear control
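To illustrate only the LMI machinery, the sketch below checks feasibility of a basic Lyapunov LMI with CVXPY; the actual sampled-data LMIs derived in the paper are more involved and are not reproduced here.

```python
# Illustrative LMI feasibility check with CVXPY: find P = P^T > 0 such that
# A^T P + P A < 0 (a basic Lyapunov LMI), as a stand-in for the paper's LMIs.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])               # an example stable system matrix
n = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),                       # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]        # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, "\nP =\n", P.value)
```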
Procedia PDF Downloads 565
27167 Developing House’s Model to Assess the Translation of Key Cultural Texts
Authors: Raja Al-Ghamdi
Abstract:
This paper aims to systematically assess the translation of key cultural texts. The paper therefore proposes a modification of the discourse analysis model for translation quality assessment introduced by the linguist Juliane House (1977, 1997, 2015). The data for analysis were chosen from a religious text that has never been investigated before: an overt translation of the biography of Prophet Mohammad. The book was written originally in Arabic and translated into English. A soft copy of the translation, entitled The Sealed Nectar, is posted on numerous websites, including the Internet Archive library, which offers free access to everyone. The text abounds with linguistic and cultural phenomena relevant to the Islamic and Arab lingua-cultural context, which make its translation, as well as its assessment, a challenge. Interesting findings show that (1) culturemes are rich points, and both the translator’s subjectivity and intervention are apparent in mediating them; (2) given the nature of historical narration, the source text reflects the author’s positive shading, whereas the target text reflects the translator’s axiological orientation as neutrally shaded; and (3) linguistic gaps, metaphorical expressions, and intertextuality are major stimuli for compensation strategies.
Keywords: Arabic-English discourse analysis, key cultural texts, overt translation, quality assessment
Procedia PDF Downloads 282
27166 Deterioration Prediction of Pavement Load Bearing Capacity from FWD Data
Authors: Kotaro Sasai, Daijiro Mizutani, Kiyoyuki Kaito
Abstract:
Expressways in Japan have been built at an accelerating pace since the 1960s, supported by rapid economic growth. About 40 percent of the length of expressways in Japan is now 30 years old or older and has become superannuated. Time-related deterioration has therefore reached a degree at which administrators, from the standpoint of operation and maintenance, are forced to take prompt, large-scale measures aimed at repairing inner damage deep in pavements. Such measures have already been implemented for bridge management in Japan and are also expected to be embodied in pavement management, so planning methods for these measures are increasingly in demand. Deterioration of the layers around the road surface, such as the surface course and binder course, occurs at the early stages of the whole pavement deterioration process, around 10 to 30 years after construction. These layers have been repaired primarily because inner damage usually becomes significant only after outer damage, and because surveys for measuring inner damage, such as Falling Weight Deflectometer (FWD) surveys and open-cut surveys, are costly and time-consuming, which has made it difficult for administrators to focus on inner damage as much as they should. As expressways today suffer serious time-related deterioration deriving from the long time span over which they have been used, the idea of repairing layers deep in pavements, such as the base course and subgrade, must clearly be taken into consideration when planning maintenance on a large scale. This sort of maintenance requires precisely predicting degrees of deterioration as well as grasping the present condition of pavements. Methods for predicting deterioration are either mechanical or statistical. While few mechanical models have been presented, as far as the authors know, previous studies have presented statistical methods for predicting deterioration in pavements. One describes the deterioration process by estimating a Markov deterioration hazard model, while another illustrates it by estimating a proportional deterioration hazard model. Both studies analyze deflection data obtained from FWD surveys and present statistical methods for predicting the deterioration process of the layers around the road surface; however, the base course and subgrade layers remain unanalyzed. In this study, data collected from FWD surveys are analyzed to predict the deterioration process of layers deep in pavements, in addition to the surface layers, by estimating a deterioration hazard model that uses continuous indexes. This model can prevent the loss of information that occurs when rating categories are set in a Markov deterioration hazard model for evaluating degrees of deterioration in roadbeds and subgrades. By employing continuous indexes, the model can predict deterioration in each layer of the pavement and evaluate it quantitatively. Additionally, as the model can also depict the probability distribution of the indexes at an arbitrary point and establish a risk control level arbitrarily, this study is expected to provide knowledge, such as life cycle cost, that is informative when deciding where and when to perform maintenance.
Keywords: deterioration hazard model, falling weight deflectometer, inner damage, load bearing capacity, pavement
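For orientation, the exponential hazard form underlying Markov-type deterioration hazard models is sketched below in generic notation; the continuous-index model proposed in this study generalizes this idea rather than reproducing it.

```latex
% Generic exponential hazard form behind Markov deterioration hazard models
% (illustrative notation, not the paper's continuous-index formulation).
\begin{align*}
\lambda_i &= \exp(\boldsymbol{x}'\boldsymbol{\beta}_i)
  && \text{hazard rate of leaving condition state } i,\ \boldsymbol{x}:\ \text{covariates} \\
\tilde{F}_i(t) &= \exp(-\lambda_i t)
  && \text{probability that state } i \text{ survives beyond elapsed time } t \\
\pi_{ii}(z) &= \exp(-\lambda_i z)
  && \text{Markov probability of remaining in state } i \text{ over inspection interval } z
\end{align*}
```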
Procedia PDF Downloads 390
27165 How Western Donors Allocate Official Development Assistance: New Evidence From a Natural Language Processing Approach
Authors: Daniel Benson, Yundan Gong, Hannah Kirk
Abstract:
Advances in natural language processing techniques have increased data processing speeds and reduced the need for the cumbersome, manual data processing that is often required when processing data from multilateral organizations for specific purposes. Using named entity recognition (NER) modeling and the OECD (Organisation for Economic Co-operation and Development) Creditor Reporting System database, we present the first geotagged dataset of OECD donor Official Development Assistance (ODA) projects on a global, subnational basis. The resulting data contain 52,086 ODA projects geocoded to subnational locations across 115 countries, worth a combined $87.9bn. This represents the first global OECD donor ODA project database with geocoded projects. We use these new data to revisit old questions of how ‘well’ donors allocate ODA to the developing world. This understanding is imperative for policymakers seeking to improve ODA effectiveness.
Keywords: international aid, geocoding, subnational data, natural language processing, machine learning
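A minimal sketch of the NER-then-geocode idea is shown below using spaCy and a toy gazetteer; the model name, example text, and lookup table are illustrative assumptions, not the paper's pipeline or data.

```python
# Sketch: extract place-name entities from a project description with spaCy,
# then look them up in a (toy) gazetteer to obtain coordinates.
import spacy

nlp = spacy.load("en_core_web_sm")      # requires: python -m spacy download en_core_web_sm
gazetteer = {"Mombasa": (-4.05, 39.67), "Kisumu": (-0.09, 34.77)}   # toy lookup table

text = "Rehabilitation of rural feeder roads around Mombasa and Kisumu counties."
doc = nlp(text)
places = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")]
geocoded = {p: gazetteer.get(p) for p in places}
print(geocoded)                          # e.g. {'Mombasa': (-4.05, 39.67), 'Kisumu': (-0.09, 34.77)}
```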
Procedia PDF Downloads 78
27164 Similar Script Character Recognition on Kannada and Telugu
Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy
Abstract:
This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities between their characters. Recognizing these characters requires exhaustive datasets, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it on the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on images with noise and varying lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations. Manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a Convolutional Neural Network (CNN) and a Visual Attention neural network (VAN), were used to experiment with the dataset. A VAN architecture was adopted, incorporating additional channels for the Canny edge features, as the results obtained with this approach were good. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better when the Canny edge features were applied than with a model that used only the original grayscale images. The accuracy of the model was found to be 80.11% for Telugu characters and 98.01% for Kannada characters when tested on these languages separately. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy when identifying and categorizing characters from these scripts.
Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN
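The Canny-edge-as-extra-channel idea can be sketched as follows: the grayscale glyph and its edge map are stacked into a two-channel input for a small CNN. The architecture and input size below are illustrative and much simpler than the VAN model reported in the paper.

```python
# Sketch of stacking a Canny edge map with the grayscale glyph as a 2-channel
# input to a small CNN (illustrative architecture, not the paper's VAN).
import cv2
import numpy as np
from tensorflow.keras import layers, models

def to_two_channels(gray_img):
    """gray_img: uint8 H x W array.  Returns H x W x 2 float input."""
    edges = cv2.Canny(gray_img, 100, 200)                  # Canny edge map
    return np.stack([gray_img, edges], axis=-1).astype("float32") / 255.0

num_classes = 49                                           # e.g. base characters
model = models.Sequential([
    layers.Input(shape=(64, 64, 2)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```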
Procedia PDF Downloads 53
27163 Climate Change and Food Security in Nigeria: The World Bank Assisted Third National Fadama Development Programme (NFDP III) Approach in Rivers State, Niger Delta, Nigeria
Authors: Temple Probyne Abali
Abstract:
Port Harcourt, Rivers State, in the Niger Delta region of Nigeria, is bedeviled by the phenomenon of climate change, which poses a threat to food security and livelihoods. This study examined a four-decade (1980-2020) trend of climate change as well as its socio-economic impact on food security in the region. Furthermore, to achieve sustainable food security and livelihoods amidst the phenomenon, the study adopted the World Bank Assisted Third National Fadama Development Programme approach. The data source for climate change involved secondary data from the Nigeria Meteorological Agency (NIMET). Consequently, the results for climate change over the four-decade period were displayed in tables, charts, and maps for the expected changes. Data on the socio-economic impact on food security and livelihoods were acquired through a questionnaire design. A purposive random sampling technique was used in selecting five coastal communities in the region known for viable economic potential for agricultural development, and the results were analyzed using Analysis of Variance (ANOVA). The Participatory Rural Appraisal (PRA) technique of the World Bank for needs assessment was adopted in selecting five agricultural sub-project proposals/activities based on groups’ common economic interest, from a total of 1,000 farmers each drawn from the five communities across different age groups, including men, women, youths, and the vulnerable. Based on the farmers’ sub-project interests, the various groups’ Strengths, Weaknesses, Opportunities and Threats (SWOT), Problem Listing Matrix, Skill Gap Analysis, as well as EIAs on their sub-project proposals/activities, were analyzed with substantial Monitoring and Evaluation (M & E), using the Specific, Measurable, Attributable, Reliable and Time-bound (SMART) approach. Based on the findings from the PRA technique, the farmers recorded a considerable increase in income of over 200% within the 5-year project plan (2008-2013). The study recommends capacity building and advisory services on this PRA innovation. By so doing, there would be a sustainable increase in agricultural production and assured food security in an environmentally friendly manner, in line with the United Nations’ Sustainable Development Goals (SDGs).
Keywords: climate change, food security, fadama, world bank, agriculture, sdgs
Procedia PDF Downloads 93
27162 Crooked Wood: Finding Potential in Local Hardwood
Authors: Livia Herle
Abstract:
A large part of the Principality of Liechtenstein is covered by forest. Three-quarters of this forest is classified as protective due to the alpine landscape of the country, which reduces the quality of the wood. Nevertheless, the forest is one of the country's most important sources of raw material. However, of the wood harvested annually in Liechtenstein, about two-thirds is used directly as an energy source, drastically shortening the carbon storage cycle of wood. Furthermore, due to climate change, forest structures are changing. Predictions for the forest in Liechtenstein indicate that spruce will largely vanish at low altitudes, surviving only in the higher regions. In contrast, hardwood species will increase, resulting in a more mixed forest. Thus, the main research focus is placed on the potential of hardwood, as well as on prolonging the lifespan of a timber log before it ends up as an energy source. An analysis of the local occurrence of hardwood species and their quality will serve as a tool for translating this knowledge into constructional solutions. As a system that works with short-span timber and thus suits the regional conditions of hardwood, reciprocal frame systems will be further investigated. These can be defined as load-bearing structures in which only two beams connect at a time, avoiding complex joining situations, and in which the beams are mutually supporting. This allows the use of short pieces of preferably massive wood. As a result, the system permits easy assembly but also disassembly. To promote a more circular application of wood, possible cascading scenarios for the structural solutions will be added. In a workshop at the School of Architecture of the University of Liechtenstein in the summer semester of 2024, 1:1 prototypes of reciprocal frame systems using only local hardwood will serve as a tool to further test the theoretical analyses.
Keywords: hardwood, cascading wood, reciprocal frames, crooked wood, forest structures, climate change
Procedia PDF Downloads 74
27161 Compressed Suffix Arrays to Self-Indexes Based on Partitioned Elias-Fano
Abstract:
A practical and simple self-indexing data structure, Partitioned Elias-Fano (PEF) Compressed Suffix Arrays (CSA), is built in linear time for the CSA based on PEF indexes. The PEF-CSA is compared with two classical compressed indexing methods, the Ferragina and Manzini implementation (FMI) and Sad-CSA, on files of different types and sizes from the Pizza & Chili corpus. The PEF-CSA performs better on the existing data in terms of compression ratio, count time, and locate time, except for evenly distributed data such as protein data. The experiments show that the distribution of φ is more important than the alphabet size for the compression ratio: unevenly distributed φ yields a better compression effect, and the larger the number of hits, the longer the count and locate times.
Keywords: compressed suffix array, self-indexing, partitioned Elias-Fano, PEF-CSA
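As background, the sketch below encodes a sorted integer sequence with plain (non-partitioned) Elias-Fano, the representation that PEF splits into blocks with per-block encodings; it is a didactic example, not the PEF-CSA implementation evaluated here.

```python
# Minimal (non-partitioned) Elias-Fano encoder/decoder for a sorted sequence:
# low bits stored verbatim, high bits as a unary-coded bitvector.
import math

def elias_fano_encode(values):
    n, u = len(values), values[-1] + 1
    l = max(0, int(math.floor(math.log2(u / n))))            # low-bit width
    low = [v & ((1 << l) - 1) for v in values]               # fixed-width low parts
    high_bits = [0] * (n + (u >> l) + 1)
    for i, v in enumerate(values):
        high_bits[(v >> l) + i] = 1                          # unary gaps of high parts
    return l, low, high_bits

def elias_fano_access(l, low, high_bits, i):
    """Recover the i-th value: find the (i+1)-th set bit, subtract i, re-attach low bits."""
    ones = -1
    for pos, bit in enumerate(high_bits):
        ones += bit
        if ones == i:
            return ((pos - i) << l) | low[i]

seq = [3, 4, 7, 13, 14, 15, 21, 43]
l, low, high = elias_fano_encode(seq)
print([elias_fano_access(l, low, high, i) for i in range(len(seq))])   # -> original seq
```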
Procedia PDF Downloads 252
27160 Data, Digital Identity and Antitrust Law: An Exploratory Study of Facebook’s Novi Digital Wallet
Authors: Wanjiku Karanja
Abstract:
Facebook has monopoly power in the social networking market. It has grown and entrenched this power through the capture of its users’ data value chains. However, antitrust law’s consumer welfare roots have prevented it from effectively addressing the role of data capture in Facebook’s market dominance. These regulatory blind spots are augmented by Facebook’s proposed Diem cryptocurrency project and its Novi digital wallet. Novi, which is Diem’s digital identity component, would enable Facebook to collect an unprecedented volume of consumer data. Consequently, Novi has seismic implications for internet identity, as the network effects of Facebook’s large user base could establish it as the de facto internet identity layer. Moreover, the large tracts of data Facebook would collect through Novi would further entrench Facebook's market power. As such, the attendant lock-in effects of this project would be very difficult to reverse. Urgent regulatory action is therefore required to prevent this expansion of Facebook’s data resources and monopoly power. This research thus highlights the importance of data capture to competition and market health in the social networking industry. It utilizes interviews with key experts to empirically interrogate the impact of Facebook’s data capture, and its control of its users’ data value chains, on its market power. This inquiry is contextualized against Novi’s expansive effect on Facebook’s data value chains. It thus addresses the novel antitrust issues arising at the nexus of Facebook’s monopoly power and the privacy of its users’ data. It also explores the impact of platform design principles, specifically data interoperability and data portability, in mitigating Facebook’s anti-competitive practices. This study finds that Facebook is a powerful monopoly that dominates the social media industry to the detriment of potential competitors. Facebook derives its power from its size, its annexation of the consumer data value chain, and its control of its users’ social graphs. Additionally, the platform design principles of data interoperability and data portability are not a panacea for restoring competition in the social networking market; their success depends on the establishment of robust technical standards and regulatory frameworks.
Keywords: antitrust law, data protection law, data portability, data interoperability, digital identity, Facebook
Procedia PDF Downloads 123
27159 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data
Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca
Abstract:
In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are built for data quality filtering of opportunistic species occurrence data that are used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. Findings show that model performance is affected by i) the quality of the filtered data, ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and iii) a species ‘quality profile’, resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.Keywords: citizen science, data quality filtering, species distribution models, trait profiles
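The filter-then-refit comparison at the heart of this study can be sketched as follows, with logistic regression standing in for Maxent and entirely synthetic, hypothetical columns; the actual analysis uses real occurrence records and the Maxent algorithm.

```python
# Schematic of the comparison: apply a record-quality filter, refit a
# presence/background model, and compare AUC.  All columns are synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 2000
records = pd.DataFrame({
    "temperature": rng.normal(15, 5, n),
    "elevation": rng.normal(300, 100, n),
    "validated": rng.random(n) < 0.7,                 # post-entry validation flag
    "coord_precision_m": rng.choice([10, 100, 1000], n),
})
records["presence"] = (rng.random(n) <
                       1 / (1 + np.exp(-0.3 * (records["temperature"] - 15)))).astype(int)

def fit_and_score(df, predictors=("temperature", "elevation")):
    X_tr, X_te, y_tr, y_te = train_test_split(
        df[list(predictors)], df["presence"],
        test_size=0.3, random_state=0, stratify=df["presence"])
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

filtered = records[records["validated"] & (records["coord_precision_m"] <= 100)]
print("AUC unfiltered:", round(fit_and_score(records), 3),
      "| AUC filtered:", round(fit_and_score(filtered), 3))
```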
Procedia PDF Downloads 203
27158 Data Quality Enhancement with String Length Distribution
Authors: Qi Xiu, Hiromu Hota, Yohsuke Ishii, Takuya Oda
Abstract:
Recently, the amount of collectable manufacturing data has been increasing rapidly. At the same time, mega recalls are becoming a serious social problem. Under such circumstances, there is a growing need to prevent mega recalls through defect analysis, such as root cause analysis and anomaly detection, utilizing manufacturing data. However, the time needed to classify strings in manufacturing data by traditional methods is too long to meet the requirements of quick defect analysis. Therefore, we present the String Length Distribution Classification (SLDC) method to classify strings correctly in a short time. This method learns character features, especially the string length distribution, from Product IDs and Machine IDs in BOMs and asset lists. By applying the proposal to strings in actual manufacturing data, we verified that the classification time for strings can be reduced by 80%. As a result, it can be expected that the requirement of quick defect analysis can be fulfilled.
Keywords: string classification, data quality, feature selection, probability distribution, string length
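A toy version of classifying strings by their length distribution might look like the following; SLDC itself learns richer character features, so this only conveys the core idea, and the example strings are invented.

```python
# Toy sketch: learn a string-length histogram per known class, then assign a new
# set of strings to the class whose distribution it matches best.
from collections import Counter

def length_distribution(strings, max_len=32):
    counts = Counter(min(len(s), max_len) for s in strings)
    total = sum(counts.values())
    return [counts.get(l, 0) / total for l in range(max_len + 1)]

def l1_distance(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

training = {
    "product_id": ["A12-3345-XX", "B77-1020-ZQ", "C01-9983-KL"],
    "machine_id": ["M0001", "M0417", "M1203"],
}
profiles = {cls: length_distribution(vals) for cls, vals in training.items()}

unknown = ["D45-7781-QP", "E02-1189-MM"]
query = length_distribution(unknown)
best = min(profiles, key=lambda cls: l1_distance(query, profiles[cls]))
print("classified as:", best)            # -> product_id
```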
Procedia PDF Downloads 318
27157 Stability Analysis of Tumor-Immune Fractional Order Model
Authors: Sadia Arshad, Yifa Tang, Dumitru Baleanu
Abstract:
A fractional-order mathematical model is proposed that incorporates CD8+ cells, natural killer (NK) cells, cytokines, and tumor cells. Tumor cell growth in the absence of an immune response is modeled by a logistic law, as it is the simplest form whose predictions also agree with experimental data. Natural killer cells are our first line of defense: NK cells directly kill tumor cells through several mechanisms, including the release of cytoplasmic granules containing perforin and granzyme and the expression of tumor necrosis factor (TNF) family members. The effect of the NK cells on the tumor cell population is expressed with a product term, while a rational form is used to describe the interaction between CD8+ cells and tumor cells. A number of cytokines are produced by NK cells, including TNF, IFN, and interleukin (IL-10). The source term for cytokines is modeled in Michaelis-Menten form to capture the saturating effect of the immune response. Stability of the equilibrium points is discussed for biologically significant values of the bifurcation parameters. We study the treatment of the fractional-order system by investigating analytical conditions for tumor eradication. Numerical simulations are presented to illustrate the analytical results.
Keywords: cancer model, fractional calculus, numerical simulations, stability analysis
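As a reading aid, one possible arrangement of the terms named above (logistic tumor growth, an NK-tumor product kill term, a rational CD8+ interaction, and a Michaelis-Menten cytokine source) is sketched in LaTeX under a Caputo derivative; the symbols and exact forms are assumptions, not the paper's equations.

```latex
% Illustrative structure only: T = tumor cells, N = NK cells, L = CD8+ cells,
% I = cytokines, D^q a Caputo fractional derivative of order 0 < q <= 1.
\begin{align*}
D^{q}T &= r\,T\,(1 - T/K) \;-\; c_{1}\,N T \;-\; \frac{c_{2}\,L\,T}{a + T}, \\
D^{q}N &= s_{N} \;-\; d_{N}\,N \;-\; c_{3}\,N T, \\
D^{q}L &= s_{L} \;+\; \frac{\rho\,L\,T}{b + T} \;-\; d_{L}\,L, \\
D^{q}I &= \frac{\alpha\,N T}{\beta + T} \;-\; d_{I}\,I .
\end{align*}
```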
Procedia PDF Downloads 315
27156 Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data
Authors: Salam Khalifa, Naveed Ahmed
Abstract:
We present a new method to reconstruct a temporally coherent 3D animation from single- or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match two 3D point clouds in consecutive frames, independent of their resolution. Our new motion-vector-based dynamic alignment method then fully reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that, despite the limiting factors of temporal and spatial noise associated with RGB-D data, it is possible to exploit temporal coherence to faithfully reconstruct a temporally coherent 3D animation from RGB-D video data.
Keywords: 3D video, 3D animation, RGB-D video, temporally coherent 3D animation
Procedia PDF Downloads 373
27155 Determining Abnormal Behaviors in UAV Robots for Trajectory Control in Teleoperation
Authors: Kiwon Yeom
Abstract:
Change points are abrupt variations in a data sequence. Detection of change points is useful in modeling, analyzing, and predicting time series in application areas such as robotics and teleoperation. In this paper, a change point is defined as a discontinuity in one of the derivatives of the data. This paper presents a reliable method for detecting discontinuities within three-dimensional trajectory data. The problem of determining one or more discontinuities is considered for regular and irregular trajectory data from teleoperation. We examine a geometric detection algorithm and illustrate the use of the method on real data examples.
Keywords: change point, discontinuity, teleoperation, abrupt variation
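A simple way to flag such discontinuities is to look for jumps in a finite-difference derivative, as in the assumed sketch below; the paper's geometric algorithm is more refined, and the threshold rule here is illustrative.

```python
# Sketch: flag change points as jumps in the finite-difference velocity of a
# 3-D trajectory (illustrative threshold rule, not the paper's algorithm).
import numpy as np

def derivative_change_points(traj, dt=1.0, k=4.0):
    """traj: (N, 3) array of positions.  Returns indices where the velocity jumps."""
    vel = np.diff(traj, axis=0) / dt                        # first derivative
    jumps = np.linalg.norm(np.diff(vel, axis=0), axis=1)    # change between consecutive velocities
    thresh = jumps.mean() + k * jumps.std()
    return np.where(jumps > thresh)[0] + 1                  # offset back to trajectory index

t = np.linspace(0, 4, 200)
traj = np.c_[t, np.sin(t), np.zeros_like(t)]
traj[120:, 1] += 0.8 * (t[120:] - t[120])                   # inject an abrupt slope change
print(derivative_change_points(traj))                        # ~index 120
```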
Procedia PDF Downloads 167
27154 Multidimensional Item Response Theory Models for Practical Application in Large Tests Designed to Measure Multiple Constructs
Authors: Maria Fernanda Ordoñez Martinez, Alvaro Mauricio Montenegro
Abstract:
This work presents a statistical methodology for measuring and identifying constructs in latent semantic analysis. The approach uses the qualities of factor analysis for binary data together with interpretations drawn from item response theory. More precisely, we propose first reducing dimensionality by applying principal component analysis to the linguistic data and then producing axes of groups obtained from a clustering analysis of the semantic data. This approach allows the user to give meaning to the resulting clusters and to uncover the real latent structure present in the data. The methodology is applied to a set of real semantic data, with impressive results for coherence, speed, and precision.
Keywords: semantic analysis, factorial analysis, dimension reduction, penalized logistic regression
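The two-stage idea (dimension reduction, then clustering in the reduced space) can be sketched as follows on a synthetic binary matrix; the data, dimensions, and cluster count are placeholders, not the study's settings.

```python
# Sketch: reduce a binary item-response/term matrix with PCA, then cluster the
# reduced representation to obtain interpretable groups.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(300, 40)).astype(float)    # binary response matrix

Z = PCA(n_components=5).fit_transform(X)                # dimensionality reduction
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))                               # cluster sizes
```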
Procedia PDF Downloads 443