Search results for: data sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25163

24233 Li-Fi Technology: Data Transmission through Visible Light

Authors: Shahzad Hassan, Kamran Saeed

Abstract:

People are always in search of Wi-Fi hotspots because the Internet is a major demand nowadays. Like all other technologies, however, Wi-Fi still has room for improvement in the speed and quality of connectivity. To address these aspects, Harald Haas, a professor at the University of Edinburgh, proposed what is now known as Li-Fi (Light Fidelity). Li-Fi is a new wireless communication technology that provides connectivity within a network environment. It is a two-way mode of wireless communication using light: data is transmitted through light-emitting diodes (LEDs), which can vary their intensity very fast, faster than the blink of an eye. The research and experiments conducted so far suggest that Li-Fi can increase the speed and reliability of data transfer. This paper pays particular attention to the assessment of the performance of this technology. Li-Fi uses LEDs as the medium of data transfer and has been proposed as a candidate 5G technology. For coverage within buildings, Wi-Fi is good, but Li-Fi is favorable in situations where large amounts of data are to be transferred in areas with electromagnetic interference. It brings efficiency, security, and large throughput to wireless communication. All in all, Li-Fi points to a future in which the presence of light will mean access to the Internet as well as speedy data transfer.
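
The fast intensity switching described above can be illustrated with a toy on-off keying simulation (a hedged sketch, not drawn from the paper; the function names and the ON/OFF symbols are invented for illustration):

```python
def to_bits(text):
    """Encode text as a list of bits (MSB first per byte)."""
    return [(byte >> i) & 1 for byte in text.encode("utf-8") for i in range(7, -1, -1)]

def transmit(bits):
    """Model the LED: bit 1 -> light on (high intensity), bit 0 -> light off."""
    return ["ON" if b else "OFF" for b in bits]

def receive(levels):
    """Model the photodetector: map intensity levels back to bits."""
    return [1 if level == "ON" else 0 for level in levels]

def to_text(bits):
    """Reassemble bits into bytes and decode."""
    data = bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits), 8)
    )
    return data.decode("utf-8")

# Round trip: text -> bits -> light pulses -> bits -> text
message = "Li-Fi"
recovered = to_text(receive(transmit(to_bits(message))))
```

A real Li-Fi link modulates intensity at megahertz-to-gigahertz rates; the round trip here only illustrates the encoding principle.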

Keywords: communication, LED, Li-Fi, Wi-Fi

Procedia PDF Downloads 332
24232 Poor Proficiency of English Language among Tertiary Level Students in Bangladesh and Its Effect on Employability: An Investigation to Find Facts and Solutions

Authors: Tanvir Ahmed, Nahian Fyrose Fahim, Subrata Majumder, Sarker Kibria

Abstract:

English is widely recognized as the standard second language of the world, a fact no one can deny. Many people believe that English proficiency is the key to effective global communication, especially for developing countries, where it can bring success on many fronts by ensuring people worldwide access to education, business, and technology. Bangladesh is a developing country of about 160 million people. A notable number of students in Bangladesh are currently pursuing higher education at the tertiary level in more than 150 public and private universities, where English is the dominant medium of instruction and lectures. However, many students who completed their primary and secondary education in the Bangla medium find themselves in an awkward position when, as university freshmen, they must suddenly take and complete many unfamiliar requirements, struggling through at least 18 courses intended to build their proficiency in English. After obtaining a tertiary education certificate, students should have the opportunity to secure a sustainable position in the job market; unfortunately, many fail because of poor English proficiency. Our study focuses on students in both public and private universities (N=150) as well as education experts (N=30) in Bangladesh. We prepared two sets of questionnaires based on a literature review of the subject, collected data, identified the underlying reasons, and arrived at probable solutions to these problems. After statistical analysis, the study suggests remedial measures that could be taken to increase students' proficiency in English and to ensure their employability potential.

Keywords: tertiary education, English language proficiency, employability, unemployment problems

Procedia PDF Downloads 90
24231 An Analysis of Humanitarian Data Management of Polish Non-Governmental Organizations in Ukraine Since February 2022 and Its Relevance for Ukrainian Humanitarian Data Ecosystem

Authors: Renata Kurpiewska-Korbut

Abstract:

Assuming that the use and sharing of data generated in humanitarian action constitute a core function of humanitarian organizations, this paper analyzes the position of the largest Polish humanitarian non-governmental organizations in the humanitarian data ecosystem in Ukraine and their approach to non-personal and personal data management since February 2022. Expert interviews and document analysis of the non-profit organizations providing a direct response in the Ukrainian crisis context (the Polish Humanitarian Action, Caritas, the Polish Medical Mission, the Polish Red Cross, and the Polish Center for International Aid) are combined with the theoretical perspective of contingency theory, whose central point is that the context, or a specific set of conditions, determines the way of behavior and the choice of methods of action. Together these help to examine the significance of data complexity and of an adaptive approach to data management by relief organizations in the humanitarian supply chain network. The purpose of this study is to determine how well-established and accurate internal procedures and good practices of using and sharing data (including safeguards for sensitive data) are implemented by the surveyed organizations, which have comparable human and technological capabilities, and how they are adjusted to Ukrainian humanitarian settings and data infrastructure. The study also poses the fundamental question of whether this crisis experience will have a determining effect on their future performance. The findings indicate that Polish humanitarian organizations in Ukraine, which have their own unique codes of conduct and effective managerial data practices determined by contingencies, have limited influence on improving the situational awareness of other assistance providers in the data ecosystem, despite their attempts to undertake interagency work in the area of data sharing.

Keywords: humanitarian data ecosystem, humanitarian data management, Polish NGOs, Ukraine

Procedia PDF Downloads 83
24230 An Approach for Estimation in Hierarchical Clustered Data Applicable to Rare Diseases

Authors: Daniel C. Bonzo

Abstract:

Practical considerations lead to the use of units of analysis within subjects, e.g., bleeding episodes or treatment-related adverse events, in rare disease settings. This is coupled with data augmentation techniques such as extrapolation to enlarge the subject base. In general, one can think of extrapolation of data as extending information and conclusions from one estimand to another. This approach induces hierarchical clustered data with varying cluster sizes. Extrapolation of clinical trial data is increasingly accepted by regulatory agencies as a means of generating data in diverse situations during the drug development process. Under certain circumstances, data can be extrapolated to a different population, a different but related indication, or a different but similar product. We consider here the problem of estimation (point and interval) using a mixed-models approach under extrapolation. It is proposed that estimators (point and interval) be constructed using weighting schemes for the clusters, e.g., equal weights and weights proportional to cluster size. Simulated data generated under varying scenarios are then used to evaluate the performance of this approach. In conclusion, the evaluation results showed that the approach is a useful means of improving statistical inference in rare disease settings and thus aids not only signal detection but risk-benefit evaluation as well.
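
The two cluster-weighting schemes mentioned above can be sketched as follows (illustrative code, not the authors' implementation; the toy data are invented):

```python
def cluster_weighted_mean(clusters, weighting="equal"):
    """Combine within-cluster means using the chosen weighting scheme.

    clusters: list of lists, each inner list holding one subject's
    repeated observations (e.g., bleeding episodes).
    """
    means = [sum(c) / len(c) for c in clusters]
    if weighting == "equal":
        # Every subject (cluster) counts once, regardless of size.
        weights = [1.0 / len(clusters)] * len(clusters)
    elif weighting == "size":
        # Clusters with more observations receive proportionally more weight.
        total = sum(len(c) for c in clusters)
        weights = [len(c) / total for c in clusters]
    else:
        raise ValueError("weighting must be 'equal' or 'size'")
    return sum(w * m for w, m in zip(weights, means))

data = [[2, 4], [1, 1, 1, 1], [6]]   # three subjects, varying cluster sizes
equal = cluster_weighted_mean(data, "equal")
size = cluster_weighted_mean(data, "size")
```

With unbalanced clusters the two schemes disagree, which is exactly the sensitivity the simulations in the paper are designed to probe.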

Keywords: clustered data, estimand, extrapolation, mixed model

Procedia PDF Downloads 128
24229 Authorization of Commercial Communication Satellite Grounds for Promoting Turkish Data Relay System

Authors: Celal Dudak, Aslı Utku, Burak Yağlioğlu

Abstract:

Uninterrupted and continuous satellite communication through the whole orbit is becoming more indispensable every day. Data relay systems are developed and built for various high/low data rate information exchanges, such as the TDRSS of the USA and the EDRSS of Europe. In these missions, several task-dedicated communication satellites exist. In this regard, a data relay system is proposed for Turkey that exchanges low data rate information (i.e., TTC) for Earth-observing LEO satellites by appointing commercial GEO communication satellites all over the world. First, a justification of this approach is given, demonstrating the duration enhancements in the link. The preference for RF communication over laser communication is also discussed. The preferred communication GEOs, including TURKSAT4A, which already belongs to Turkey, are then given, together with the coverage enhancements obtained through STK simulations and the corresponding link budget. A block diagram of the communication system on the LEO satellite is also given.
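
A link budget of the kind mentioned above typically starts from free-space path loss. The sketch below uses the standard FSPL formula; the S-band frequency and the LEO-to-GEO distance are illustrative assumptions, not figures from the paper:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

def received_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, distance_m, freq_hz):
    """Simplified link budget: Pr = Pt + Gt + Gr - FSPL (other losses omitted)."""
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_m, freq_hz)

# Illustrative LEO-to-GEO relay leg: ~40,000 km at 2.2 GHz (S-band TTC)
loss = fspl_db(40_000e3, 2.2e9)   # roughly 191 dB
```

Doubling the distance adds about 6 dB of loss, which is why the GEO relay leg, not the ground leg, dominates such a budget.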

Keywords: communication, GEO satellite, data relay system, coverage

Procedia PDF Downloads 430
24228 The Development of Encrypted Near Field Communication Data Exchange Format Transmission in an NFC Passive Tag for Checking the Genuine Product

Authors: Tanawat Hongthai, Dusit Thanapatay

Abstract:

This paper presents the development of encrypted Near Field Communication (NFC) Data Exchange Format transmission in an NFC passive tag to assess the feasibility of implementing genuine-product authentication. We organize the research on encryption and genuine-product checking into four major categories: concept, infrastructure, development, and applications. The results show that a passive NFC Forum Type 2 tag can be configured to be compatible with the NFC Data Exchange Format (NDEF) and that its data can be automatically and partially updated whenever an NFC field is present.
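
A minimal sketch of wrapping an encrypted payload in an NDEF short record is shown below. The XOR stream is a deliberately weak placeholder cipher for illustration only (a real tag would use a standard cipher such as AES), and the record omits the status and language bytes of a full NFC Forum Text record:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Placeholder cipher for illustration only; NOT secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def make_ndef_record(payload: bytes, record_type: bytes = b"T") -> bytes:
    """Build a minimal NDEF short record.

    Layout: [header][type length][payload length][type][payload].
    Header 0xD1 = MB|ME|SR flags set, TNF 0x01 (well-known type).
    """
    header = bytes([0xD1])
    return header + bytes([len(record_type), len(payload)]) + record_type + payload

key = b"secret"                     # shared secret (hypothetical)
serial = b"PRODUCT-0001"            # product identifier (hypothetical)
record = make_ndef_record(xor_cipher(serial, key))
```

A verifier with the shared key parses the record, decrypts the payload, and compares it against the expected serial to decide whether the product is genuine.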

Keywords: near field communication, NFC data exchange format, checking the genuine product, encrypted NFC

Procedia PDF Downloads 269
24227 High-Risk Gene Variant Profiling Models Ethnic Disparities in Diabetes Vulnerability

Authors: Jianhua Zhang, Weiping Chen, Guanjie Chen, Jason Flannick, Emma Fikse, Glenda Smerin, Yanqin Yang, Yulong Li, John A. Hanover, William F. Simonds

Abstract:

Ethnic disparities in many diseases are well recognized and reflect the consequences of genetic, behavioral, and environmental factors. However, direct scientific evidence connecting ethnic genetic variation with disease disparities has been elusive, which may have contributed to ethnic inequalities in large-scale genetic studies. Through a genome-wide analysis of data representing 185,934 subjects, including 14,955 from our own studies of African American diabetes mellitus, we discovered sets of genetic variants either unique to or conserved across all ethnicities. We further developed a quantitative, gene-function-based high-risk variant index (hrVI) of 20,428 genes to establish profiles that strongly correlate with subjects' self-identified ethnicities. With respect to the ability to detect human essential and pathogenic genes, the hrVI analysis method is both comparable with and complementary to the well-known genetic analysis methods pLI and VIRlof. Application of the ethnicity-specific hrVI analysis to the type 2 diabetes mellitus (T2DM) national repository, containing 20,791 cases and 24,440 controls, identified 114 candidate T2DM-associated genes, 8.8-fold more than ethnicity-blind analysis. All the identified genes are defined as either pathogenic or likely pathogenic in the ClinVar database, with 33.3% diabetes-associated and 54.4% obesity-associated genes. These results demonstrate the utility of hrVI analysis and provide the first genetic evidence, through clustering patterns, of how genetic variation among ethnicities may impede the discovery of diabetes-associated and, foreseeably, other disease-associated genes.
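
The abstract does not give the hrVI formula, so the following is a purely hypothetical stand-in that only illustrates the general idea of a per-gene score built from high-risk variants:

```python
def high_risk_variant_index(variants):
    """Toy gene-level index: frequency-weighted share of high-risk variants.

    variants: list of (is_high_risk, allele_frequency) tuples for one gene.
    The real hrVI in the paper is a quantitative gene-function-based score;
    this weighting is a hypothetical illustration, not the published method.
    """
    total = sum(freq for _, freq in variants)
    if total == 0:
        return 0.0
    return sum(freq for risky, freq in variants if risky) / total

# Hypothetical gene with three observed variants
score = high_risk_variant_index([(True, 0.01), (False, 0.30), (True, 0.05)])
```

Ranking genes by such a score within each ethnicity is the kind of profile the paper correlates with self-identified ethnicity.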

Keywords: diabetes-associated genes, ethnic health disparities, high-risk variant index, hrVI, T2DM

Procedia PDF Downloads 130
24226 Environmental Impact of a New-Build Educational Building in England: Life-Cycle Assessment as a Method to Calculate Whole Life Carbon Emissions

Authors: Monkiz Khasreen

Abstract:

In the context of the global trend towards reducing new buildings’ carbon footprints, the design team is required to make early decisions that have a major influence on embodied and operational carbon. Sustainability strategies should be clear during the early stages of the building design process, as changes made later can be extremely costly. Life-Cycle Assessment (LCA) can be used as the vehicle to carry other tools and processes towards achieving the requested improvement. Although LCA is the ‘gold standard’ for evaluating buildings from 'cradle to grave', the lack of detail available at the concept design stage makes LCA very difficult, if not impossible, to use as an estimation tool at early stages. Issues related to the transparency and accessibility of information in the building industry affect the credibility of LCA studies. A verified database derived from LCA case studies needs to be accessible to researchers, design professionals, and decision makers in order to offer guidance on specific areas of significant impact. This database could be built up from data from multiple sources within a pool of research held in this context. One of the most important factors affecting the reliability of such data is the temporal factor, as building materials, components, and systems change rapidly with the advancement of technology, making production more efficient and less environmentally harmful. Recent LCA studies on different building functions, types, and structures are therefore always needed to update research-derived databases and to form case bases for comparison studies. There is also a need to make these studies transparent and accessible to designers. The work in this paper sets out to address this need. The paper presents a life-cycle case study of a new-build educational building in England. The building utilised current construction methods and technologies and is rated BREEAM Excellent.
Carbon emissions of different life-cycle stages and of different building materials and components were modelled. Scenario and sensitivity analyses were used to estimate the future of new educational buildings in England. The study attempts to form an indicator for the early design stages of similar buildings. The carbon dioxide emissions of this case study building, when normalised by floor area, lie towards the lower end of the range of worldwide data reported in the literature. Sensitivity analysis shows that life-cycle assessment results are highly sensitive to assumptions made at the design stage, such as future changes in the electricity generation mix over time, refurbishment processes, and recycling. The analyses also show that large savings in carbon dioxide emissions can result from very small changes at the design stage.
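
The floor-area normalisation and one-at-a-time sensitivity analysis mentioned above can be sketched as follows (all numbers are illustrative assumptions, not results from the study):

```python
def normalised_emissions(total_kg_co2e, floor_area_m2):
    """Whole-life carbon per square metre of floor area (kg CO2e/m2)."""
    return total_kg_co2e / floor_area_m2

def sensitivity(base_emissions, factor_changes):
    """One-at-a-time sensitivity: apply each fractional change independently.

    factor_changes: dict of scenario name -> fractional change, e.g. -0.20
    for a 20% reduction under that scenario.
    """
    return {name: base_emissions * (1 + delta)
            for name, delta in factor_changes.items()}

# Hypothetical building: 1,500 t CO2e over the life cycle, 5,000 m2 floor area
per_m2 = normalised_emissions(1.5e6, 5000)
scenarios = sensitivity(per_m2, {
    "grid_decarbonisation": -0.20,   # assumed cleaner electricity mix
    "extra_refurbishment": 0.05,     # assumed additional refurbishment cycle
})
```

The normalised figure (kg CO2e/m2) is what allows the comparison against the worldwide ranges reported in the literature.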

Keywords: architecture, building, carbon dioxide, construction, educational buildings, England, environmental impact, life-cycle assessment

Procedia PDF Downloads 108
24225 Data Hiding by Vector Quantization in Color Image

Authors: Yung Gi Wu

Abstract:

With the growth of computers and networks, digital data can spread to anywhere in the world quickly. In addition, digital data can be copied or tampered with easily, so security has become an important topic in the protection of digital data. A digital watermark is a method of protecting the ownership of digital data, but embedding the watermark inevitably influences image quality. In this paper, Vector Quantization (VQ) is used to embed a watermark into an image to fulfill the goal of data hiding. This kind of watermarking is invisible, meaning that users will not be conscious of the embedded watermark's existence even though the embedded image differs only slightly from the original. VQ carries a heavy computational burden, however, so we adopt a fast VQ encoding scheme based on partial distortion searching (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks hidden in the image can be grayscale, bi-level, or color images; text can also be embedded as a watermark. To test the robustness of the system, we use Photoshop to apply sharpening, cropping, and alteration and check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist these three kinds of tampering in general cases.
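
The partial distortion search (PDS) speed-up works by abandoning a codeword as soon as its accumulated distortion exceeds the best distortion found so far, yet it returns the same result as a full search. A minimal sketch (not the authors' implementation; the codebook is invented):

```python
def pds_nearest(vector, codebook):
    """Nearest-codeword lookup with partial distortion search.

    Accumulates squared error term by term and abandons a codeword as
    soon as its partial distortion exceeds the best distortion so far.
    """
    best_index, best_dist = -1, float("inf")
    for idx, codeword in enumerate(codebook):
        dist = 0.0
        for v, c in zip(vector, codeword):
            dist += (v - c) ** 2
            if dist >= best_dist:   # early exit: cannot beat current best
                break
        else:                       # ran to completion: new best codeword
            best_index, best_dist = idx, dist
    return best_index, best_dist

best = pds_nearest([3, 3], [[0, 0], [10, 10], [3, 4]])
```

For the second codeword the search stops after the first term (49 already exceeds 18), which is where the computational saving comes from.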

Keywords: data hiding, vector quantization, watermark, color image

Procedia PDF Downloads 355
24224 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors, selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model achieves an F1-score of 83.60%, outperforming the state-of-the-art reconstruction method, which uses a single autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error (F1-score of 24.16%).
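
The reconstruction-error principle shared by the proposed model and the baseline can be sketched as follows. The sketch replaces the trained LSTM autoencoders and the random forest with a trivial reconstruction function and a fixed threshold, purely for illustration:

```python
def reconstruction_errors(inputs, reconstruct):
    """Absolute difference signal between each input and its reconstruction."""
    return [abs(x - reconstruct(x)) for x in inputs]

def classify(errors, threshold):
    """Flag a sample as anomalous when its reconstruction error is large.

    The paper replaces this fixed threshold with feature extraction and a
    random forest on the difference signal; thresholding is the simpler
    baseline the proposed model is compared against.
    """
    return [e > threshold for e in errors]

# Stand-in "autoencoder" trained on normal (integer-valued) samples:
# it reconstructs the nearest integer, so normal readings have tiny error.
flags = classify(
    reconstruction_errors([1.02, 2.01, 5.6], lambda x: round(x)),
    threshold=0.1,
)
```

A well-trained autoencoder reconstructs normal samples accurately, so a large difference signal is evidence the sample lies outside the learned normal regime.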

Keywords: anomaly detection, autoencoder, data centers, deep learning

Procedia PDF Downloads 184
24223 Calcitonin Gene-Related Peptide Receptor Antagonists for Chronic Migraine – Real-World Outcomes

Authors: B. J. Mahen, N. E. Lloyd-Gale, S. Johnson, W. P. Rakowicz, M. J. Harris, A. D. Miller

Abstract:

Background: Migraine is a leading cause of disability in the world. Calcitonin gene-related peptide (CGRP) receptor antagonists offer an approach to migraine prophylaxis by inhibiting the inflammatory and vasodilatory effects of CGRP. In recent years, NICE licensed the use of three CGRP-receptor antagonists: Fremanezumab, Galcanezumab, and Erenumab. Here, we present the outcomes of CGRP-antagonist treatment in a cohort of patients who suffer from episodic or chronic migraine and have failed at least three oral prophylactic therapies. Methods: We offered CGRP antagonists to 86 patients who met the NICE criteria to start therapy. We recorded the number of headache days per month (HDPM) at 0 weeks, 3 months, and 12 months. Of those patients, 26 were switched to an alternative treatment due to poor response or side effects, giving 112 treatment cases in total. Of these 112 cases, 9 did not sufficiently maintain their headache diary, and 5 were not followed up at 3 months; we have therefore included 98 sets of data in our analysis. Results: Fremanezumab achieved a reduction in HDPM of 51.7% at 3 months (p<0.0001), with 63.7% of patients meeting NICE criteria to continue therapy. Patients trialed on Galcanezumab attained a reduction in HDPM of 47.0% (p=0.0019), with 51.6% of patients meeting NICE criteria to continue therapy. Erenumab, however, only achieved a reduction in HDPM of 17.0% (p=0.29), which was not statistically significant. Furthermore, 34.4%, 9.7%, and 4.9% of patients taking Fremanezumab, Galcanezumab, and Erenumab, respectively, continued therapy beyond 12 months. Of those who attempted drug holidays following 12 months of treatment, migraine symptoms relapsed in 100% of cases. Conclusion: We observed a significant improvement in HDPM amongst episodic and chronic migraine patients following treatment with Fremanezumab or Galcanezumab.
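
The headache-day reductions reported above reduce to a simple percentage calculation. In the sketch below, the 30% continuation cut-off is an assumption for illustration (NICE applies different thresholds for chronic and episodic migraine) and is not taken from the abstract:

```python
def percent_reduction(baseline_hdpm, followup_hdpm):
    """Percentage reduction in headache days per month (HDPM)."""
    return 100.0 * (baseline_hdpm - followup_hdpm) / baseline_hdpm

def meets_continuation_criterion(baseline_hdpm, followup_hdpm,
                                 required_reduction=30.0):
    """Continuation check against an assumed reduction cut-off (in %).

    The 30% default is a hypothetical stand-in for the NICE criterion,
    which differs between chronic and episodic migraine.
    """
    return percent_reduction(baseline_hdpm, followup_hdpm) >= required_reduction

# Hypothetical patient: 20 HDPM at baseline, 13 HDPM at 3 months
continues = meets_continuation_criterion(20, 13)
```

Applying such a per-patient rule across the cohort yields the continuation percentages (63.7%, 51.6%, etc.) reported in the results.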

Keywords: migraine, CGRP, fremanezumab, galcanezumab, erenumab

Procedia PDF Downloads 88
24222 Inferring Cognitive Skill in Concept Space

Authors: Rania A. Aboalela, Javed I. Khan

Abstract:

This research presents a learning assessment theory of Cognitive Skill in Concept Space (CS2) that measures assessed knowledge in terms of the cognitive skill levels of concepts. Cognitive skill levels refer to levels such as whether a student has acquired a concept at the level of understanding, applying, or analyzing. The theory comprises three constructs: a graph paradigm of a semantic/ontological scheme; the concept states of the theory; and the assessment analytics, i.e., the process of estimating the sets of concepts in a given state at a certain skill level. A concept state indicates whether a student has already learned, is ready to learn, or is not ready to learn a concept at a certain skill level. An experiment is conducted to validate the CS2 theory.
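
The notion of concept states over a prerequisite graph can be sketched as follows (a toy stand-in for the theory's ontological scheme; the example concepts are invented):

```python
def concept_states(prerequisites, learned):
    """Classify each concept as 'learned', 'ready', or 'not ready'.

    prerequisites: dict mapping each concept to the set of concepts
    that must be learned first (a toy stand-in for the semantic graph).
    learned: set of concepts the student has already acquired.
    """
    states = {}
    for concept, prereqs in prerequisites.items():
        if concept in learned:
            states[concept] = "learned"
        elif all(p in learned for p in prereqs):
            states[concept] = "ready"       # all prerequisites satisfied
        else:
            states[concept] = "not ready"   # some prerequisite missing
    return states

graph = {"variables": set(), "loops": {"variables"}, "recursion": {"loops"}}
states = concept_states(graph, learned={"variables"})
```

In the full theory each edge and state is additionally qualified by a cognitive skill level (understanding, applying, analyzing, etc.), which this sketch omits.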

Keywords: cognitive skill levels, concept states, concept space, knowledge assessment theory

Procedia PDF Downloads 315
24221 Concrete-Wall-Climbing Testing Robot

Authors: S. Tokuomi, K. Mori, Y. Tsuruzono

Abstract:

A concrete-wall-climbing testing robot has been developed. The robot adheres to and climbs concrete walls using two sets of suction cups and can rotate through the alternating motion of the suction cups. The maximum climbing speed is about 60 cm/min. Each suction cup has a pressure sensor that monitors its adhesion. The impact acoustic method is used to test concrete walls: the robot carries an impact acoustic device and four microphones for the acquisition of the impact sound. The effectiveness of the impact acoustic system was tested by applying it to the inspection of specimens with artificial circular void defects; a circular void defect with a diameter of 200 mm at a depth of 50 mm was successfully detected. The weight and dimensions of the robot are about 17 kg and 1.0 m by 1.3 m, respectively. The upper limit of testing is about 10 m above the ground due to the length of the power cable.

Keywords: concrete wall, nondestructive testing, climbing robot, impact acoustic method

Procedia PDF Downloads 650
24220 Techno-Economic Optimization and Evaluation of an Integrated Industrial Scale NMC811 Cathode Active Material Manufacturing Process

Authors: Usama Mohamed, Sam Booth, Aliysn J. Nedoma

Abstract:

As part of the transition to electric vehicles, there has been a recent increase in demand for battery manufacturing. Cathodes typically account for approximately 50% of the total lithium-ion battery cell cost and are a pivotal factor in determining the viability of new industrial infrastructure. Cathodes that offer lower costs whilst maintaining or increasing performance, such as nickel-rich layered cathodes, have a significant competitive advantage when scaling up the manufacturing process. This project evaluates the techno-economic value proposition of an integrated industrial-scale cathode active material (CAM) production process, closing the mass and energy balances and optimizing the operating conditions using a sensitivity analysis. This is done by developing a process model of a co-precipitation synthesis route in Aspen Plus, validated against experimental data. The reaction chemistry and equilibrium conditions were established based on previous literature and HSC Chemistry software. This is followed by integrating the energy streams, adding waste recovery and treatment processes, and testing the effect of key parameters (temperature, pH, reaction time, etc.) on CAM production yield and emissions. Finally, an economic analysis estimates the fixed and variable costs (including capital expenditure, labor costs, raw materials, etc.) to calculate the cost of CAM ($/kg and $/kWh), the total plant cost ($), and the net present value (NPV). This work sets the foundational blueprint for future research into sustainable industrial-scale processes for CAM manufacturing.
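
The NPV and unit-cost calculations at the end of the analysis can be sketched as follows (the discount rate, cash flows, and cost figures are illustrative assumptions, not results from the study):

```python
def npv(discount_rate, cash_flows):
    """Net present value of a cash-flow series; cash_flows[0] is year 0."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

def cam_unit_cost(fixed_cost, variable_cost, annual_output_kg):
    """Cost of cathode active material in $/kg (simplified fixed/variable split)."""
    return (fixed_cost + variable_cost) / annual_output_kg

# Hypothetical plant: $1M fixed + $4M variable cost for 1,000 t/yr of CAM
unit_cost = cam_unit_cost(1_000_000, 4_000_000, 1_000_000)

# Hypothetical project: $100 outlay in year 0, $110 return in year 1, 10% rate
project_npv = npv(0.10, [-100, 110])
```

A positive NPV under the assumed discount rate is the usual go/no-go signal for the plant investment; the sensitivity analysis then varies the inputs to see how robust that signal is.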

Keywords: cathodes, industrial production, nickel-rich layered cathodes, process modelling, techno-economic analysis

Procedia PDF Downloads 92
24219 Temporal Changes Analysis (1960-2019) of a Greek Rural Landscape

Authors: Stamatia Nasiakou, Dimitrios Chouvardas, Michael Vrahnakis, Vassiliki Kleftoyanni

Abstract:

Recent research in the mountainous and semi-mountainous rural landscapes of Greece shows that they have changed significantly over the last 80 years. These changes take the form of structural modification of land cover/use patterns, the main characteristic being the extensive expansion of dense forests and shrubs at the expense of grasslands and extensive agricultural areas. The aim of this research was to study the 60-year changes (1960-2019) of land cover/use units in the rural landscape of Mouzaki (Karditsa Prefecture, central Greece). Relevant cartographic material, such as forest land use maps, digital maps (Corine Land Cover 2018), 1960 aerial photos from the Hellenic Military Geographical Service, and satellite imagery (Google Earth Pro 2014, 2016, 2017 and 2019), was collected and processed in order to study landscape evolution. ArcGIS v10.2.2 software was used to process the cartographic material and to produce several sets of data. The main products of the analysis were a digitized photo-mosaic of the 1960 aerial photographs, a digitized photo-mosaic of the recent satellite images (2014, 2016, 2017 and 2019), and diagrams and maps of the temporal transformation of the rural landscape (1960-2019). Maps and diagrams were produced by applying photointerpretation techniques and a suitable land cover/use classification system to the two photo-mosaics. Demographic and socioeconomic inventory data were also collected, mainly from diachronic census reports of the Hellenic Statistical Authority and local sources. Analysis of the temporal transformation of the land cover/use units showed that the changes are mainly located in the central and south-eastern part of the study area, which includes the mountainous part of the landscape. The most significant change is the expansion of the dense forests that currently dominate the southern and eastern parts of the landscape.
In conclusion, the produced diagrams and maps of land cover/use evolution suggest that woody vegetation in the rural landscape of Mouzaki has increased significantly over the past 60 years at the expense of open areas, especially grasslands and agricultural areas. Demographic changes, land abandonment, and the transformation of traditional farming practices (e.g., agroforestry) were recognized as the main causes of the landscape change. This study is part of a broader research project entitled “Perspective of Agroforestry in Thessaly region: A research on social, environmental and economic aspects to enhance farmer participation”. The project is funded by the General Secretariat for Research and Technology (GSRT) and the Hellenic Foundation for Research and Innovation (HFRI).

Keywords: agroforestry, forest expansion, land cover/use changes, mountainous and semi-mountainous areas

Procedia PDF Downloads 95
24218 The Dialectic of Law and Politics for Georg Wilhelm Friedrich Hegel

Authors: Djehich Mohamed Yousri

Abstract:

This paper aims to address the dialectic of law and politics in Hegel's philosophy of the state. It begins from the concept of law, which in its general meaning refers to the set of rules and legislation that people establish to apply within society; law is considered one of the primary and necessary conditions for the functioning and organization of social life, since it defines the rights and duties of every individual belonging to the state. The paper approaches this concept through central concepts in political philosophy such as the state, freedom, and the people. The most prominent result of our analysis concerns the relationship between law and politics in Hegel's philosophical system: on the one hand, the state is rational only to the extent that it resorts to the law and works under it; on the other, the law realizes its essence and effectiveness only when it is drawn from the customs, traditions, and culture of the people, so that it does not conflict with the ideal goal of its existence, which is to achieve freedom and protect it from all possible threats. A state does not mean at all a reduction of the people's freedom, so there is no conflict between law and freedom.

Keywords: Hegel, the law, country, freedom, citizen

Procedia PDF Downloads 71
24217 Transforming Data into Knowledge: Mathematical and Statistical Innovations in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in various domains has created a pressing need for effective methods to transform this data into meaningful knowledge. In this era of big data, mathematical and statistical innovations play a crucial role in unlocking insights and facilitating informed decision-making in data analytics. This abstract aims to explore the transformative potential of these innovations and their impact on converting raw data into actionable knowledge. Drawing upon a comprehensive review of existing literature, this research investigates the cutting-edge mathematical and statistical techniques that enable the conversion of data into knowledge. By evaluating their underlying principles, strengths, and limitations, we aim to identify the most promising innovations in data analytics. To demonstrate the practical applications of these innovations, real-world datasets will be utilized through case studies or simulations. This empirical approach will showcase how mathematical and statistical innovations can extract patterns, trends, and insights from complex data, enabling evidence-based decision-making across diverse domains. Furthermore, a comparative analysis will be conducted to assess the performance, scalability, interpretability, and adaptability of different innovations. By benchmarking against established techniques, we aim to validate the effectiveness and superiority of the proposed mathematical and statistical innovations in data analytics. Ethical considerations surrounding data analytics, such as privacy, security, bias, and fairness, will be addressed throughout the research. Guidelines and best practices will be developed to ensure the responsible and ethical use of mathematical and statistical innovations in data analytics. 
The expected contributions of this research include advancements in mathematical and statistical sciences, improved data analysis techniques, enhanced decision-making processes, and practical implications for industries and policymakers. The outcomes will guide the adoption and implementation of mathematical and statistical innovations, empowering stakeholders to transform data into actionable knowledge and drive meaningful outcomes.

Keywords: data analytics, mathematical innovations, knowledge extraction, decision-making

Procedia PDF Downloads 65
24216 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking

Authors: Trevor Toy, Josef Langerman

Abstract:

Around a quarter of the world’s data is generated by the financial industry, with global non-cash transactions estimated at 708.5 billion. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately and consensually share the data required to enable it. Integration and data sharing of anonymised transactional data are still operated in silos and centralised between the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data and businesses looking to consume data are largely excluded from the process. There is therefore a growing demand for accessible transactional data, both for analytical purposes and to support the rapid global adoption of Open Banking. This research provides a solution framework that aims to deliver a secure decentralised marketplace for 1.) data providers to list their transactional data, 2.) data consumers to find and access that data, and 3.) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also provides an integrated system for downstream transactional-related data from merchants, enriching the data product available to build a comprehensive view of a data subject’s spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. 
This core component of the platform is developed as a decentralised blockchain contract with a market layer that manages transaction, user, pricing, payment, tagging, contract, control, and lineage features that pertain to the user interactions on the platform. One of the platform’s key features is enabling the participation and management of personal data by the individuals from whom the data is being generated. The framework was demonstrated with a proof-of-concept on the Ethereum blockchain, in which an individual can securely manage access to their own personal data and to that individual’s identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour in correlation with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.
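The consent-gated access logic at the heart of this design can be sketched in a few lines. The following is a hypothetical, simplified Python model of the grant/revoke/read flow; the class, method names, and sample records are invented for illustration, and the actual platform implements this logic as an Ethereum smart contract rather than in-process code:

```python
# Hypothetical sketch of consent-gated access: data subjects grant or revoke
# access to their own transaction records, and consumers can only read
# records for which consent is on file.

class DataMarketplace:
    def __init__(self):
        self.records = {}        # record_id -> (subject, payload)
        self.grants = set()      # (record_id, consumer) pairs with consent

    def list_record(self, record_id, subject, payload):
        self.records[record_id] = (subject, payload)

    def grant_access(self, record_id, subject, consumer):
        owner, _ = self.records[record_id]
        if owner != subject:
            raise PermissionError("only the data subject may grant access")
        self.grants.add((record_id, consumer))

    def revoke_access(self, record_id, subject, consumer):
        owner, _ = self.records[record_id]
        if owner != subject:
            raise PermissionError("only the data subject may revoke access")
        self.grants.discard((record_id, consumer))

    def read(self, record_id, consumer):
        if (record_id, consumer) not in self.grants:
            raise PermissionError("no consent on file for this consumer")
        return self.records[record_id][1]

market = DataMarketplace()
market.list_record("tx1", "alice", {"merchant": "grocer", "amount": 42.0})
market.grant_access("tx1", "alice", "fintech_a")
print(market.read("tx1", "fintech_a"))  # consented read succeeds
```

On-chain, the grants set would live in contract storage and the permission checks in contract modifiers, but the control flow is the same.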

Keywords: big data markets, open banking, blockchain, personal data management

Procedia PDF Downloads 66
24215 Maintaining Experimental Consistency in Geomechanical Studies of Methane Hydrate Bearing Soils

Authors: Lior Rake, Shmulik Pinkert

Abstract:

Methane hydrate has been found in significant quantities in soils offshore within continental margins and in permafrost within arctic regions, where low temperature and high pressure are present. The mechanical parameters for geotechnical engineering are commonly evaluated in geomechanical laboratories adapted to simulate the environmental conditions of methane hydrate-bearing sediments (MHBS). Due to the complexity and high cost of natural MHBS sampling, most laboratory investigations are conducted on artificially formed samples. Artificial MHBS samples can be formed using different hydrate formation methods in the laboratory, in which methane gas and water are supplied into the soil pore space under methane hydrate phase conditions. The most commonly used formation method is the excess gas method, which is considered relatively simple, time-saving, and repeatable. However, there are several differences in the procedures and techniques used to produce the hydrate with the excess gas method. As a result of the differences between the test facilities and experimental approaches of previous studies, different measurement criteria and analyses have been proposed for MHBS geomechanics. The lack of uniformity among the various experimental investigations may adversely impact the reliability of integrating different data sets for unified mechanical model development. In this work, we address some fundamental aspects relevant to reliable MHBS geomechanical investigation, such as hydrate homogeneity in the sample, the hydrate formation duration criterion, the hydrate-saturation evaluation method, and the effect of temperature measurement accuracy. Finally, a set of recommendations for repeatable and reliable MHBS formation is suggested for future standardization of MHBS geomechanical investigation.

Keywords: experimental study, laboratory investigation, excess gas, hydrate formation, standardization, methane hydrate-bearing sediment

Procedia PDF Downloads 49
24214 Experimental Evaluation of Succinct Ternary Tree

Authors: Dmitriy Kuptsov

Abstract:

Tree data structures, such as binary or, more generally, k-ary trees, are essential in computer science. The applications of these data structures range from data search and retrieval to sorting and ranking algorithms. Naive implementations of these data structures can consume prohibitively large volumes of random access memory, limiting their applicability in certain solutions. In these cases, a more advanced representation of the data structure is essential. In this paper we present the design of a compact version of the ternary tree data structure and report the results of an experimental evaluation using the static dictionary problem. We compare these results with results for binary and regular ternary trees. The evaluation study shows that our design, in the best case, consumes up to 12 times less memory (for the dictionary used in our experimental evaluation) than a regular ternary tree, and in certain configurations shows performance comparable to regular ternary trees. We have evaluated the performance of the algorithms on both 32- and 64-bit operating systems.
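To make the baseline concrete, here is a sketch of a plain pointer-based ternary search tree for the static-dictionary problem. This is an assumed textbook implementation, not the paper's succinct design; the succinct variant would replace the three per-node pointers with an array-packed, bit-vector-indexed layout to cut the memory footprint:

```python
# Pointer-based ternary search tree: each node holds one character and three
# children (less-than, equal/next-character, greater-than). This is the
# structure whose per-node pointer overhead a succinct encoding eliminates.

class TSTNode:
    __slots__ = ("ch", "lo", "eq", "hi", "terminal")

    def __init__(self, ch):
        self.ch = ch
        self.lo = self.eq = self.hi = None
        self.terminal = False      # True if a dictionary word ends here

def tst_insert(node, word, i=0):
    ch = word[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.lo = tst_insert(node.lo, word, i)
    elif ch > node.ch:
        node.hi = tst_insert(node.hi, word, i)
    elif i + 1 < len(word):
        node.eq = tst_insert(node.eq, word, i + 1)
    else:
        node.terminal = True
    return node

def tst_search(node, word, i=0):
    while node is not None:
        ch = word[i]
        if ch < node.ch:
            node = node.lo
        elif ch > node.ch:
            node = node.hi
        elif i + 1 < len(word):
            node = node.eq
            i += 1
        else:
            return node.terminal
    return False

root = None
for w in ["cat", "cap", "car", "dog", "do"]:
    root = tst_insert(root, w)
print(tst_search(root, "cap"), tst_search(root, "cab"))  # True False
```

Each `TSTNode` carries three machine-word references plus bookkeeping; a succinct layout stores the same tree shape in a few bits per node, which is where the reported memory savings come from.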

Keywords: algorithms, data structures, succinct ternary tree, performance evaluation

Procedia PDF Downloads 154
24213 The Digital Desert in Global Business: Digital Analytics as an Oasis of Hope for Sub-Saharan Africa

Authors: David Amoah Oduro

Abstract:

In the ever-evolving terrain of international business, a profound revolution is underway, guided by the swift integration and advancement of disruptive technologies like digital analytics. In today's international business landscape, where competition is fierce, and decisions are data-driven, the essence of this paper lies in offering a tangible roadmap for practitioners. It is a guide that bridges the chasm between theory and actionable insights, helping businesses, investors, and entrepreneurs navigate the complexities of international expansion into sub-Saharan Africa. This practitioner paper distils essential insights, methodologies, and actionable recommendations for businesses seeking to leverage digital analytics in their pursuit of market entry and expansion across the African continent. What sets this paper apart is its unwavering focus on a region ripe with potential: sub-Saharan Africa. The adoption and adaptation of digital analytics are not mere luxuries but essential strategic tools for evaluating countries and entering markets within this dynamic region. With the spotlight firmly fixed on sub-Saharan Africa, the aim is to provide a compelling resource to guide practitioners in their quest to unearth the vast opportunities hidden within sub-Saharan Africa's digital desert. The paper illuminates the pivotal role of digital analytics in providing a data-driven foundation for market entry decisions. It highlights the ability to uncover market trends, consumer behavior, and competitive landscapes. By understanding Africa's incredible diversity, the paper underscores the importance of tailoring market entry strategies to account for unique cultural, economic, and regulatory factors. For practitioners, this paper offers a set of actionable recommendations, including the creation of cross-functional teams, the integration of local expertise, and the cultivation of long-term partnerships to ensure sustainable market entry success. 
It advocates for a commitment to continuous learning and flexibility in adapting strategies as the African market evolves. This paper represents an invaluable resource for businesses, investors, and entrepreneurs who are keen on unlocking the potential of digital analytics for informed market entry in Africa. It serves as a guiding light, equipping practitioners with the essential tools and insights needed to thrive in this dynamic and diverse continent. With these key insights, methodologies, and recommendations, this paper is a roadmap to prosperous and sustainable market entry in Africa. It is vital for anyone looking to harness the transformational potential of digital analytics to create prosperous and sustainable ventures in a region brimming with promise. In the ever-advancing digital age, this practitioner paper becomes a lodestar, guiding businesses and visionaries toward success amidst the unique challenges and rewards of sub-Saharan Africa's international business landscape.

Keywords: global analytics, digital analytics, sub-Saharan Africa, data analytics

Procedia PDF Downloads 64
24212 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement

Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti

Abstract:

Data centers have been growing in size and demand continuously over the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning. Data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted, with little thought, is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models that cover a wide spectrum of machine learning categories. Then we establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
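The connection between quantile prediction and SLA-aware provisioning can be made concrete: the constant that minimizes the pinball (quantile) loss at level τ over observed demand is the empirical τ-quantile, so provisioning at, say, the 0.95-quantile bounds under-provisioning to roughly 5% of intervals without allocating multiples of peak demand. A stdlib-only sketch with invented demand figures (the paper's actual models and data differ):

```python
import math

def pinball_loss(c, demand, tau):
    # asymmetric loss: under-provisioning (d >= c) weighted tau,
    # over-provisioning weighted (1 - tau)
    return sum(tau * (d - c) if d >= c else (1 - tau) * (c - d) for d in demand)

def empirical_quantile(demand, tau):
    s = sorted(demand)
    k = max(0, math.ceil(tau * len(s)) - 1)   # nearest-rank quantile
    return s[k]

# invented per-interval resource demand (e.g. CPU cores)
demand = [10, 12, 11, 30, 13, 12, 14, 11, 25, 12]

cap = empirical_quantile(demand, 0.95)
# the quantile minimizes pinball loss among candidate capacities
best = min(demand, key=lambda c: pinball_loss(c, demand, 0.95))
print(cap, best)  # 30 30
```

A quantile-regression model generalizes this by predicting the τ-quantile conditionally on features (time of day, workload class), rather than as one unconditional constant.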

Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing

Procedia PDF Downloads 102
24211 Regularity and Maximal Congruence in Transformation Semigroups with Fixed Sets

Authors: Chollawat Pookpienlert, Jintana Sanwong

Abstract:

An element a of a semigroup S is called left (right) regular if there exists x in S such that a = xa² (a = a²x), and is said to be intra-regular if there exist u, v in S such that a = ua²v. Let T(X) be the semigroup of all full transformations on a set X under composition of maps. For a fixed nonempty subset Y of X, let Fix(X,Y) = {α ∈ T(X) : yα = y for all y ∈ Y}, where yα is the image of y under α. Then Fix(X,Y) is a semigroup of full transformations on X which fix all elements in Y. Here, we characterize the left regular, right regular and intra-regular elements of Fix(X,Y); the characterizations are as follows. For α ∈ Fix(X,Y), (i) α is left regular if and only if Xα\Y = Xα²\Y, (ii) α is right regular if and only if πα = πα², (iii) α is intra-regular if and only if |Xα\Y| = |Xα²\Y|, where Xα = {xα : x ∈ X} and πα = {xα⁻¹ : x ∈ Xα}, in which xα⁻¹ = {a ∈ X : aα = x}. Moreover, these regularities are equivalent if Xα\Y is a finite set. In addition, we count the number of such elements of Fix(X,Y) when X is a finite set. Finally, we determine the maximal congruence ρ on Fix(X,Y) when X is finite and Y is a nonempty proper subset of X. If we let |X\Y| = n, then we obtain ρ = (Fixₙ × Fixₙ) ∪ (Hε × Hε), where Fixₙ = {α ∈ Fix(X,Y) : |Xα\Y| < n} and Hε is the group of units of Fix(X,Y). Furthermore, we show that this maximal congruence is unique.
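The characterizations above can be checked exhaustively on a small X. The following brute-force Python sketch (illustrative only, with maps written as tuples and composed on the right, so x(fg) = (xf)g) verifies characterization (i), that α is left regular in Fix(X,Y) if and only if Xα\Y = Xα²\Y:

```python
from itertools import product

X = (0, 1, 2, 3)
Y = (0,)                     # fixed nonempty subset of X

def compose(f, g):           # right action: x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

# Fix(X,Y): all full transformations on X fixing every element of Y
fix = [f for f in product(X, repeat=len(X)) if all(f[y] == y for y in Y)]

def left_regular(a):
    a2 = compose(a, a)
    return any(compose(x, a2) == a for x in fix)   # a = x a² for some x

def image_minus_Y(a):
    return set(a) - set(Y)

# characterization (i): left regular  <=>  Xa \ Y == Xa² \ Y
for a in fix:
    assert left_regular(a) == (image_minus_Y(a) == image_minus_Y(compose(a, a)))
print("characterization (i) verified for |X| = 4, |Y| = 1")
```

The same exhaustive pattern extends to (ii) and (iii) by computing the kernel partition πα and comparing image cardinalities, at the cost of another nested loop.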

Keywords: intra-regular, left regular, maximal congruence, right regular, transformation semigroup

Procedia PDF Downloads 223
24210 Prosperous Digital Image Watermarking Approach by Using DCT-DWT

Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar

Abstract:

Every day, tons of data are embedded in digital media or distributed over the internet. The data is so widely distributed that it can easily be replicated without error, putting the rights of its owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information, known as a watermark, into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges have motivated researchers to carry out intense research in the field of watermarking. A watermark is a form, image or text that is impressed onto paper and provides evidence of its authenticity. Digital watermarking is an extension of the same concept. There are two types of watermarks: visible and invisible. In this project, we have concentrated on embedding watermarks in images. The main consideration for any watermarking scheme is its robustness to various attacks.
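The transform-domain embedding idea can be illustrated with a stdlib-only toy: embed one bit into the approximation band of a one-level Haar DWT via coefficient quantization, then invert the transform. This is a deliberately simplified 1-D stand-in for the paper's 2-D DCT-DWT scheme, and the step size and host signal are invented for demonstration:

```python
# Toy transform-domain watermarking: modify a low-frequency DWT coefficient
# rather than raw samples, so the mark is spread across the signal.

def haar_dwt(signal):                  # one level, even-length input
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

def haar_idwt(avg, det):
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def embed_bit(signal, bit, step=4.0):
    avg, det = haar_dwt(signal)
    # quantization-index modulation on the first approximation coefficient:
    # snap to a multiple of step for bit 0, offset by step/2 for bit 1
    q = round(avg[0] / step) * step
    avg[0] = q + (step / 2 if bit else 0.0)
    return haar_idwt(avg, det)

def extract_bit(signal, step=4.0):
    avg, _ = haar_dwt(signal)
    return int(round(avg[0] / (step / 2))) % 2

host = [52.0, 55.0, 61.0, 59.0, 70.0, 61.0, 76.0, 61.0]
marked = embed_bit(host, 1)
print(extract_bit(marked))  # 1
```

A real DCT-DWT scheme applies the same principle to 2-D image sub-bands and spreads many watermark bits over mid-frequency coefficients to balance robustness against visibility.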

Keywords: watermarking, digital, DCT-DWT, security

Procedia PDF Downloads 414
24209 An Application of Quantile Regression to Large-Scale Disaster Research

Authors: Katarzyna Wyka, Dana Sylvan, JoAnn Difede

Abstract:

Background and significance: Following a disaster, population-based screening programs are routinely established to assess the physical and psychological consequences of exposure. These data sets are highly skewed, as only a small percentage of trauma-exposed individuals develop health issues. Commonly used statistical methodology in post-disaster mental health generally involves population-averaged models. Such models aim to capture the overall response to the disaster and its aftermath; however, they may not be sensitive enough to accommodate population heterogeneity in symptomatology, such as post-traumatic stress or depressive symptoms. Methods: We use an archival longitudinal data set from the Weill-Cornell 9/11 Mental Health Screening Program established following the World Trade Center (WTC) terrorist attacks in New York in 2001. Participants are rescue and recovery workers who participated in the site cleanup and restoration (n=2960). The main outcome is post-traumatic stress disorder (PTSD) symptom severity assessed via clinician interviews (CAPS). For a detailed understanding of response to the disaster and its aftermath, we adapt quantile regression methodology with particular focus on predictors of extreme distress and resilience to trauma. Results: The response variable was defined as the quantile of the CAPS score for each individual under two different scenarios, specifying the unconditional quantiles based on: 1) clinically meaningful CAPS cutoff values and 2) the CAPS distribution in the population. We present graphical summaries of the differential effects. For instance, we found that WTC exposures, namely seeing bodies and feeling that one's life was in danger during rescue/recovery work, were associated with very high PTSD symptoms. A similar effect was apparent in individuals with a prior psychiatric history. Differential effects were also present for age and education level. 
Conclusion: We evaluate the utility of quantile regression in disaster research in contrast to the commonly used population-averaged models. We focused on assessing the distribution of risk factors for post-traumatic stress symptoms across quantiles. This innovative approach provides a comprehensive understanding of the relationship between dependent and independent variables and could be used for developing tailored training programs and response plans for different vulnerability groups.

Keywords: disaster workers, post traumatic stress, PTSD, quantile regression

Procedia PDF Downloads 277
24208 Machine Learning Data Architecture

Authors: Neerav Kumar, Naumaan Nayyar, Sharath Kashyap

Abstract:

Most companies see an increase in the adoption of machine learning (ML) applications across internal and external-facing use cases. ML applications vend output in either batch or real-time patterns. A complete batch ML pipeline architecture comprises data sourcing, feature engineering, model training, model deployment, and model output vending into a data store for downstream applications. Due to unclear role expectations, we have observed that scientists specializing in building and optimizing models invest significant effort into building the other components of the architecture, which we do not believe is the best use of scientists’ bandwidth. We propose a system architecture, built from AWS services, that brings industry best practices to managing the workflow and simplifies the process of model deployment and end-to-end data integration for an ML application. This narrows the scope of scientists’ work to model building and refinement, while specialized data engineers take over deployment, pipeline orchestration, data quality, the data permission system, etc. The pipeline infrastructure is built and deployed as code (using Terraform, CDK, CloudFormation, etc.), which makes it easy to replicate and/or extend the architecture to other models used in an organization.
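The stage separation the architecture argues for can be sketched as composable functions, with only the training stage owned by scientists. This is a hypothetical, framework-free illustration of the division of labor; the actual system wires AWS services together with infrastructure-as-code rather than in-process calls:

```python
# Batch ML pipeline as five stages. Only train() is scientist-owned; the
# surrounding stages are the engineer-owned plumbing the paper factors out.

def source_data():                      # data sourcing (engineer-owned)
    return [{"x": 1.0, "y": 2.1}, {"x": 2.0, "y": 3.9}, {"x": 3.0, "y": 6.2}]

def engineer_features(rows):            # feature engineering (engineer-owned)
    return [(r["x"], r["y"]) for r in rows]

def train(samples):                     # model training (scientist-owned)
    # toy model: least-squares slope through the origin
    num = sum(x * y for x, y in samples)
    den = sum(x * x for x, _ in samples)
    return num / den

def deploy(model):                      # model deployment (engineer-owned)
    return lambda x: model * x

def vend(predict, inputs, store):       # output vending into a data store
    store.update({x: predict(x) for x in inputs})

store = {}
vend(deploy(train(engineer_features(source_data()))), [4.0], store)
print(store)
```

Because each stage only consumes the previous stage's output, swapping `train` for a different model leaves the rest of the pipeline, and its infrastructure definition, untouched.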

Keywords: data pipeline, machine learning, AWS, architecture, batch machine learning

Procedia PDF Downloads 55
24207 The Influence of Celebrity Endorsement on Consumers’ Attitude and Purchase Intention Towards Skincare Products in Malaysia

Authors: Tew Leh Ghee

Abstract:

The study's goal is to determine how celebrity endorsement affects Malaysian consumers' attitudes and intentions to buy skincare products. Celebrity endorsement is not, in reality, a new phenomenon, since customers now largely rely on it to inform purchasing decisions in almost every industry. Yet even though the market for skincare products has vast potential, corporations have yet to seize this niche via celebrity endorsement, and little research has been conducted to establish the significance of celebrity endorsement in this industry. This research combined descriptive and quantitative methods with a self-administered survey as the primary data-gathering tool. All of the characteristics under study were measured using a 5-point Likert scale, and the questionnaire was written in English. A convenience sampling method was used to choose respondents, and 360 sets of valid questionnaires were gathered for the study's statistical analysis. Preliminary statistical analyses were conducted using SPSS version 20.0 (Statistical Package for the Social Sciences). The demographic backdrop of the respondents was examined using descriptive analysis. The validity and reliability of all concept assessments were examined using exploratory factor analysis, item-total statistics, and reliability statistics. Pearson correlation and regression analysis were used, respectively, to assess relationships and impacts between the variables under study. The research showed that, apart from competence, celebrity endorsements of skincare products in Malaysia had a favorable impact on attitudes and purchase intentions as evaluated by attractiveness and dependability. The research indicated that the most significant element influencing attitude and purchase intention was the credibility of a celebrity endorsement. The study offers implications for potential improvements of celebrity endorsement of skincare products in Malaysia. 
The final portion of the study discusses its limitations and suggestions for future research.
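The Pearson correlation at the core of the analysis is straightforward to reproduce. A stdlib-only sketch relating endorsement credibility to purchase intention, where the ten paired 5-point Likert responses are invented for demonstration and are not the study's data:

```python
from math import sqrt

# Invented Likert responses (1-5) for ten hypothetical respondents:
# perceived endorser credibility vs. stated purchase intention.
credibility = [4, 5, 3, 4, 2, 5, 3, 4, 1, 5]
intention   = [4, 5, 3, 5, 2, 4, 3, 4, 2, 5]

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(credibility, intention)
print(round(r, 3))  # → 0.909
```

An r near 1 on such data would indicate, as the study reports for credibility, a strong positive association between the endorsement attribute and purchase intention.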

Keywords: trustworthiness, influential, phenomenon, celebrity endorsement

Procedia PDF Downloads 70
24206 An Intelligent Search and Retrieval System for Mining Clinical Data Repositories Based on Computational Imaging Markers and Genomic Expression Signatures for Investigative Research and Decision Support

Authors: David J. Foran, Nhan Do, Samuel Ajjarapu, Wenjin Chen, Tahsin Kurc, Joel H. Saltz

Abstract:

The large-scale data and computational requirements of investigators throughout the clinical and research communities demand an informatics infrastructure that supports both existing and new investigative and translational projects in a robust, secure environment. In some subspecialties of medicine and research, the capacity to generate data has outpaced the methods and technology used to aggregate, organize, access, and reliably retrieve this information. Leading health care centers now recognize the utility of establishing an enterprise-wide, clinical data warehouse. The primary benefits that can be realized through such efforts include cost savings, efficient tracking of outcomes, advanced clinical decision support, improved prognostic accuracy, and more reliable clinical trial matching. The overarching objective of the work presented here is the development and implementation of a flexible Intelligent Retrieval and Interrogation System (IRIS) that exploits the combined use of computational imaging, genomics, and data-mining capabilities to facilitate clinical assessments and translational research in oncology. The proposed System includes a multi-modal, Clinical & Research Data Warehouse (CRDW) that is tightly integrated with a suite of computational and machine-learning tools to provide insight into underlying tumor characteristics that are not apparent by human inspection alone. A key distinguishing feature of the System is a configurable Extract, Transform and Load (ETL) interface that enables it to adapt to different clinical and research data environments. This project is motivated by the growing emphasis on establishing Learning Health Systems in which cyclical hypothesis generation and evidence evaluation become integral to improving the quality of patient care. 
To facilitate iterative prototyping and optimization of the algorithms and workflows for the System, the team has already implemented a fully functional Warehouse that can reliably aggregate information originating from multiple data sources including EHRs, Clinical Trial Management Systems, Tumor Registries, Biospecimen Repositories, Radiology PACS, Digital Pathology archives, Unstructured Clinical Documents, and Next Generation Sequencing services. The System enables physicians to systematically mine and review the molecular, genomic, image-based, and correlated clinical information about patient tumors individually or as part of large cohorts to identify patterns that may influence treatment decisions and outcomes. The CRDW core system has facilitated peer-reviewed publications and funded projects, including an NIH-sponsored collaboration to enhance the cancer registries in Georgia, Kentucky, New Jersey, and New York with machine-learning based classifications and quantitative pathomics feature sets. The CRDW has also resulted in a collaboration with the Massachusetts Veterans Epidemiology Research and Information Center (MAVERIC) at the U.S. Department of Veterans Affairs to develop algorithms and workflows to automate the analysis of lung adenocarcinoma. Those studies showed that combining computational nuclear signatures with traditional WHO criteria through the use of deep convolutional neural networks (CNNs) led to improved discrimination among tumor growth patterns. The team has also leveraged the Warehouse to support studies investigating the potential of utilizing a combination of genomic and computational imaging signatures to characterize prostate cancer. The results of those studies show that integrating image biomarkers with genomic pathway scores is more strongly correlated with disease recurrence than using standard clinical markers.
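The configurable ETL interface is the System's key adaptation mechanism, and its essence can be sketched as a declarative field mapping per source: each config says which source field feeds which warehouse field and how to cast it. The configs, field names, and records below are invented for illustration; the real interface handles far messier clinical sources:

```python
# Hypothetical per-source ETL configs: warehouse field -> (source field, cast).
# Adapting to a new source means writing a new config, not new ETL code.

EHR_CONFIG = {
    "patient_id": ("mrn", str),
    "diagnosis":  ("icd10_code", str),
    "age":        ("age_years", int),
}

REGISTRY_CONFIG = {
    "patient_id": ("registry_id", str),
    "diagnosis":  ("dx", str),
    "age":        ("age", int),
}

def etl(record, config):
    # extract the configured source field, transform via the configured cast,
    # and load into the unified warehouse schema
    return {target: cast(record[src]) for target, (src, cast) in config.items()}

ehr_row = {"mrn": 1007, "icd10_code": "C61", "age_years": "66"}
reg_row = {"registry_id": "R-33", "dx": "C61", "age": 59}
print(etl(ehr_row, EHR_CONFIG))
print(etl(reg_row, REGISTRY_CONFIG))
```

Both differently-shaped rows land in the same warehouse schema, which is what lets downstream mining and cohort queries ignore where each record originated.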

Keywords: clinical data warehouse, decision support, data-mining, intelligent databases, machine-learning

Procedia PDF Downloads 107
24205 A Comparison of Image Data Representations for Local Stereo Matching

Authors: André Smith, Amr Abdel-Dayem

Abstract:

The stereo matching problem, while having been studied for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experiments with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While, at its core, the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how effectively these data representations reduce the cost of the correct correspondence relative to other possible matches.
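Two of the most common matching costs, sum of absolute differences (SAD) and sum of squared differences (SSD), can be compared on a toy 1-D scanline. The pixel intensities and the simulated disparity below are invented for illustration; a correct cost function should reach its minimum at the true shift:

```python
# Local stereo matching on a toy scanline: the right view is the left view
# shifted by a true disparity of 2 pixels, so both costs should bottom out
# at a shift of 2 when comparing windows.

left  = [10, 12, 30, 31, 15, 14, 90, 91, 20, 18]
right = left[2:] + [17, 16]          # simulated disparity of 2

def sad(l, r):
    return sum(abs(a - b) for a, b in zip(l, r))

def ssd(l, r):
    return sum((a - b) ** 2 for a, b in zip(l, r))

def best_disparity(cost, x=2, w=6, max_d=2):
    # compare the left window at x against right windows shifted by d
    window = left[x:x + w]
    return min(range(max_d + 1),
               key=lambda d: cost(window, right[x - d:x - d + w]))

print(best_disparity(sad), best_disparity(ssd))  # 2 2
```

Swapping the raw intensities for another representation (a different colour space, gradients, census-transformed bits) changes only the inputs to `sad`/`ssd`, which is precisely the comparison the paper's analysis performs.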

Keywords: colour data, local stereo matching, stereo correspondence, disparity map

Procedia PDF Downloads 361
24204 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System

Authors: Karima Qayumi, Alex Norta

Abstract:

The rapid generation of high volumes and a broad variety of data by new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods to develop their business. The recently decentralized data-management environment therefore relies on a distributed computing paradigm. While data are stored in highly distributed systems, the implementation of distributed data-mining techniques is a challenge. The aim of these techniques is to gather knowledge from every domain and all the datasets stemming from distributed resources. As agent technologies offer significant contributions for managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.
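The local-mine/global-merge split behind distributed data mining can be sketched with two agent roles: mining agents compute sufficient statistics on their local datasets, and a coordinator fuses them into a global model without centralizing the raw records. A minimal, hypothetical Python illustration (the paper's AOM-modeled multi-agent artifact is far richer):

```python
# Each mining agent reports only (count, sum, sum of squares); the coordinator
# reconstructs the global mean and variance exactly, so raw data never moves.

class MiningAgent:
    def __init__(self, local_data):
        self.local_data = local_data            # stays on this agent

    def local_stats(self):
        n = len(self.local_data)
        s = sum(self.local_data)
        ss = sum(x * x for x in self.local_data)
        return n, s, ss

class CoordinatorAgent:
    def merge(self, stats):
        n = sum(t[0] for t in stats)
        s = sum(t[1] for t in stats)
        ss = sum(t[2] for t in stats)
        mean = s / n
        var = ss / n - mean ** 2                # population variance
        return mean, var

agents = [MiningAgent([2.0, 4.0]), MiningAgent([6.0]), MiningAgent([8.0, 10.0])]
mean, var = CoordinatorAgent().merge([a.local_stats() for a in agents])
print(mean, var)  # identical to mining the pooled data directly
```

The same pattern generalizes to richer models (frequent itemsets, clustering centroids) whenever the local computation yields mergeable summaries, which is what makes agent-based mining of decentralized multimedia datasets tractable.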

Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)

Procedia PDF Downloads 421