Search results for: data center
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26452

25552 Exploring the Feasibility of Utilizing Blockchain in Cloud Computing and AI-Enabled BIM for Enhancing Data Exchange in Construction Supply Chain Management

Authors: Tran Duong Nguyen, Marwan Shagar, Qinghao Zeng, Aras Maqsoodi, Pardis Pishdad, Eunhwa Yang

Abstract:

Construction supply chain management (CSCM) involves the collaboration of many disciplines and actors, which generates vast amounts of data. However, inefficient, fragmented, and non-standardized data storage often hinders this data exchange. The industry has adopted building information modeling (BIM), a digital representation of a facility's physical and functional characteristics, to improve collaboration, enhance transmission security, and provide a common data exchange platform. Still, the volume and complexity of the data require tailored information categorization, aligned with stakeholders' preferences and demands. To address this, artificial intelligence (AI) can be integrated to handle this data's magnitude and complexity. This research aims to develop an integrated and efficient approach for data exchange in CSCM by utilizing AI. The paper covers five main objectives: (1) investigate existing frameworks and BIM adoption; (2) identify challenges in data exchange; (3) propose an integrated framework; (4) enhance data transmission security; and (5) develop data exchange in CSCM. The proposed framework demonstrates how integrating BIM with other technologies, such as cloud computing, blockchain, and AI applications, can significantly improve the efficiency and accuracy of data exchange in CSCM.

Keywords: construction supply chain management, BIM, data exchange, artificial intelligence

Procedia PDF Downloads 26
25551 Representation Data without Lost Compression Properties in Time Series: A Review

Authors: Nabilah Filzah Mohd Radzuan, Zalinda Othman, Azuraliza Abu Bakar, Abdul Razak Hamdan

Abstract:

Uncertain data is believed to be an important issue in building a prediction model. The main objective of time series uncertainty analysis is to formulate uncertain data in order to gain knowledge and fit a low-dimensional model prior to a prediction task. This paper discusses the performance of a number of techniques for dealing with uncertain data, specifically those which handle the uncertain-data condition by minimizing the loss of compression properties.

Keywords: compression properties, uncertainty, uncertain time series, mining technique, weather prediction

Procedia PDF Downloads 428
25550 Data Mining As A Tool For Knowledge Management: A Review

Authors: Maram Saleh

Abstract:

Knowledge has become an essential resource in today's economy and the most important asset for maintaining competitive advantage in organizations. The importance of knowledge has led organizations to manage their knowledge assets and resources through multiple knowledge management stages: knowledge creation, knowledge storage, knowledge sharing, and knowledge use. Research on data mining has continued to grow over recent years in both business and educational fields. Data mining is one of the most important steps of the knowledge discovery in databases process, aiming to extract implicit, unknown but useful knowledge, and it is considered a significant subfield of knowledge management. Data mining has great potential to help organizations focus on extracting the most important information from their data warehouses. Data mining tools and techniques can predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. This review paper explores the applications of data mining techniques in supporting the knowledge management process as an effective knowledge discovery technique. In this paper, we identify the relationship between data mining and knowledge management, and then focus on introducing some applications of data mining techniques in knowledge management for some real-life domains.

Keywords: data mining, knowledge management, knowledge discovery, knowledge creation

Procedia PDF Downloads 208
25549 Anomaly Detection Based Fuzzy K-Mode Clustering for Categorical Data

Authors: Murat Yazici

Abstract:

Anomalies are irregularities found in data that do not adhere to a well-defined standard of normal behavior. The identification of outliers or anomalies in data has been a subject of study within the statistics field since the 1800s. Over time, a variety of anomaly detection techniques have been developed in several research communities. Cluster analysis can be used to detect anomalies: it is the process of grouping data into clusters whose members are as similar as possible, while the clusters themselves are as dissimilar from each other as possible. Many traditional clustering algorithms have limitations in dealing with data sets containing categorical attributes. To detect anomalies in categorical data, a fuzzy clustering approach can be used to advantage. The fuzzy k-modes (FKM) clustering algorithm, one of the fuzzy clustering approaches and an extension of the k-means algorithm to data sets with categorical values, is a form of soft clustering: each point can be associated with more than one cluster. In this paper, anomaly detection is performed on two simulated data sets using the FKM clustering algorithm. A significant feature of the study is that the FKM algorithm determines anomalies together with their degree of abnormality, in contrast to numerous anomaly detection algorithms. According to the results, the FKM algorithm showed good performance in detecting anomalies in data containing both a single anomaly and more than one anomaly.
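As a rough illustration of the idea (not the paper's implementation), the sketch below computes fuzzy memberships for categorical records using the simple-matching dissimilarity typical of k-modes variants, and derives an illustrative abnormality degree from the distance to the nearest cluster mode. The data, the hand-fixed modes, and the abnormality formula are all assumptions made for the example:

```python
# Illustrative sketch only: simple-matching dissimilarity, fuzzy memberships,
# and an assumed abnormality degree for categorical records.

def dissim(a, b):
    """Simple-matching dissimilarity: number of mismatched attributes."""
    return sum(x != y for x, y in zip(a, b))

def memberships(point, modes, m=2.0):
    """Fuzzy membership of a point in each cluster (fuzzifier m > 1)."""
    d = [dissim(point, mode) for mode in modes]
    if 0 in d:  # point coincides with a mode: crisp membership
        return [1.0 if di == 0 else 0.0 for di in d]
    return [1.0 / sum((d[j] / d[l]) ** (1.0 / (m - 1.0)) for l in range(len(d)))
            for j in range(len(d))]

def abnormality(point, modes):
    """Assumed abnormality degree: normalized dissimilarity to the nearest mode."""
    return min(dissim(point, mode) for mode in modes) / len(point)

# Two tight categorical groups plus one record matching neither (the anomaly);
# the modes are fixed by hand here rather than fitted iteratively.
modes = [("red", "small", "round"), ("blue", "large", "square")]
records = [("red", "small", "round"), ("red", "medium", "round"),
           ("blue", "large", "square"), ("blue", "large", "oval"),
           ("green", "tiny", "star")]
scores = [(r, abnormality(r, modes)) for r in records]
```

The last record scores abnormality 1.0 (no attribute matches either mode), while the in-cluster records score at most 1/3, which mirrors the graded "abnormality degree" the abstract describes.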

Keywords: fuzzy k-mode clustering, anomaly detection, noise, categorical data

Procedia PDF Downloads 54
25548 The Effect of Parathyroid Hormone on Aldosterone Secretion in Patients with Primary Hyperparathyroidism

Authors: Branka Milicic Stanic, Romana Mijovic

Abstract:

In primary hyperparathyroidism, an increased risk of developing cardiovascular disease may exist due to increased activity of the renin-angiotensin-aldosterone system (RAAS). In adenomatous parathyroid tissue, compared to normal tissue, there is a two- to fourfold increase in the expression of type 1 angiotensin II receptors. As there is clear evidence of an independent role of aldosterone in the cardiovascular system, the aim of this study was to evaluate the association between aldosterone secretion and parathyroid hormone in patients with primary hyperparathyroidism. This study included 48 patients with elevated parathyroid hormone who had come to the Department of Nuclear Medicine, Clinical Center of Vojvodina, for parathyroid scintigraphy. The control group consisted of 30 healthy subjects matched for age and gender to the study group. The survey was conducted between February 2017 and April 2018 at the Department of Nuclear Medicine and the Department for Endocrinology Diagnostics, Clinical Center of Vojvodina, Novi Sad. All results were statistically processed with the statistical package STATISTICA 14 (StatSoft Inc., Tulsa, OK, USA). Compared to the control group, the study group had statistically significantly higher values of aldosterone (p=0.028), total calcium (p=0.01), ionized calcium (p=0.003), and parathyroid hormone (N-TACT PTH) (p=0.00), while levels of phosphorus (p=0.003) and vitamin D (p=0.04) were statistically significantly lower in the study group. A linear correlation analysis in the study group revealed a statistically significant positive correlation between renin and N-TACT PTH (r=0.688, p<0.05), renin and calcium (r=0.673, p<0.05), and renin and ionized calcium (r=0.641, p<0.05). Serum aldosterone and parathyroid hormone (N-TACT) levels were positively correlated in patients with primary hyperparathyroidism (r=0.509, p<0.05). According to the linear correlation analysis in the control group, aldosterone showed no positive correlation with N-TACT PTH (r=-0.285, p>0.05), nor with total or ionized calcium (r=-0.200, p>0.05; r=-0.313, p>0.05). In multivariate regression analysis of the study group, the strongest predictive variable of aldosterone secretion was N-TACT PTH (p=0.011). Aldosterone correlated positively with PTH levels in patients with primary hyperparathyroidism, and in these patients aldosterone might be a key mediator of cardiovascular symptoms. This knowledge should help in finding new treatments to prevent cardiovascular disease.

Keywords: aldosterone, hyperparathyroidism, parathyroid hormone, parathyroid gland

Procedia PDF Downloads 140
25547 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption Scheme

Authors: Victor Onomza Waziri, John K. Alhassan, Idris Ismaila, Noel Dogonyara

Abstract:

This paper addresses the problem of building secure computational services for encrypted information in the Cloud, that is, computing without decrypting the encrypted data. Such a capability meets the aspiration for a computational encryption model that could enhance the security of big data with respect to privacy or confidentiality, availability, and integrity of the data and the user's security. The cryptographic model applied for the computational processing of the encrypted data is the Fully Homomorphic Encryption Scheme. We contribute a theoretical presentation of the high-level computational processes, based on number theory derivable from abstract algebra, which can easily be integrated and leveraged in the Cloud computing interface, together with detailed mathematical concepts underlying the fully homomorphic encryption models. This contribution supports the full implementation of big data analytics based on a cryptographic security algorithm.
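Fully homomorphic encryption supports arbitrary computation on ciphertexts. As a minimal, deliberately insecure illustration of computing on encrypted data (not the scheme used in the paper), textbook RSA with toy parameters is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts:

```python
# Toy illustration, NOT secure and NOT the paper's scheme: textbook RSA is
# multiplicatively homomorphic, a far weaker property than fully homomorphic
# encryption, but it shows computation on ciphertexts without decryption.
p, q = 61, 53                # small demo primes
n = p * q                    # modulus (3233)
phi = (p - 1) * (q - 1)      # Euler totient (3120)
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

# Multiplying ciphertexts multiplies plaintexts: Enc(6) * Enc(7) decrypts to 42.
product_cipher = (encrypt(6) * encrypt(7)) % n
```

An FHE scheme extends this idea to both addition and multiplication (and hence arbitrary circuits), at the cost of noise management techniques such as the bootstrapping mentioned in the keywords.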

Keywords: big data analytics, security, privacy, bootstrapping, Fully Homomorphic Encryption Scheme

Procedia PDF Downloads 480
25546 Risk Factors and Outcome of Free Tissue Transfer at a Tertiary Care Referral Center

Authors: Majid Khan

Abstract:

Introduction: In this era of microsurgery, the free flap holds a remarkable place in reconstructive surgery. A free flap is well suited for composite defects as it provides sufficient, well-vascularized tissue for coverage. We report our experience with the use of free flaps for the reconstruction of composite defects. Methods: This is a retrospective case series (chart review) of patients who underwent reconstruction of composite defects with a free flap at Aga Khan University Hospital, Karachi (Pakistan) from January 01, 2015, to December 31, 2019. Data were collected on patient demographics, size of the defect, size of the flap, recipient vessels, postoperative complications, and outcome of the free flap. Results: Over this period, 532 free flaps were included in this study. The overall success rate was 95.5%. The mean age of the patients was 44.86 years. Of the 532 procedures, 448 addressed defects from tumor ablation of head and neck cancer. The most frequent free flap was the anterolateral thigh flap, used in 232 procedures. In this study, hypertension (p=0.004) was found to be a significant risk factor for wound dehiscence, while preoperative radiation/chemotherapy (p=0.003) and malnutrition (p=0.005) were significant for fistula formation. Malnutrition (p=0.02) and the use of vein grafts (p=0.025) were significant factors for flap failure. Conclusion: Free tissue transfer is a reliable option for the reconstruction of large and composite defects. Hypertension, malnutrition, and preoperative radiotherapy can cause significant morbidity.

Keywords: free flap, free flap failure, risk factors for flap failure, free flap outcome

Procedia PDF Downloads 113
25545 Impact of Meteorological Factors on Influenza Activity in Pakistan; A Tale of Two Cities

Authors: Nadia Nisar

Abstract:

Background: In temperate regions, influenza activity occurs sporadically all year round, with peaks coinciding with the cold months. Meteorological and environmental conditions play a significant role in the transmission of influenza globally. In this study, we assessed the relationship between meteorological parameters and influenza activity in two geographical areas of Pakistan. Methods: Influenza data were collected from the Islamabad (north) and Multan (south) regions of the national influenza surveillance system during 2010-2015. The meteorological database was obtained from the National Climatic Data Center (Pakistan). A logistic regression model with a stepwise approach was used to explore the relationship between meteorological parameters and influenza peaks. In the statistical model, we used the weekly proportion of laboratory-confirmed influenza-positive samples to represent influenza activity, with the meteorological parameters (temperature, humidity, and precipitation) as covariates. We also evaluated the link between the environmental conditions associated with seasonal influenza epidemics: 'cold-dry' and 'humid-rainy'. Results: We found that temperature and humidity were positively associated with influenza in both the north and south locations (OR = 0.927 (0.88-0.97)) & (OR = 1.078 (1.027-1.132)) and (OR = 1.023 (1.008-1.037)) & (OR = 0.978 (0.964-0.992)), respectively, whilst precipitation was negatively associated with influenza (OR = 1.054 (1.039-1.070)) & (OR = 0.949 (0.935-0.963)). In both regions, temperature and humidity contributed more to the model than precipitation. By independent-sample t-test, the p-value for each climate parameter was <0.05. These results demonstrate significant relationships between climate factors and influenza infection, with correlation coefficients of 0.52-0.90. The total contribution of these three climatic variables accounted for 89.04%. The reported number of influenza cases increased sharply during the cold-dry season (i.e., winter), when humidity and temperature are at minimal levels. Conclusion: Our findings showed that measures of temperature, humidity, and the cold-dry season (winter) can be used as indicators to forecast influenza infections. Therefore, integrating meteorological parameters into influenza forecasting in the surveillance system may benefit public health efforts to reduce the burden of seasonal influenza. More studies are necessary to understand the role of these parameters in viral transmission and host susceptibility.
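A minimal sketch of the kind of model described above, using entirely hypothetical weekly numbers rather than the study's surveillance data: a plain gradient-descent logistic regression relating weekly influenza positivity (binarized) to temperature, humidity, and precipitation. The data, the threshold labeling, and the hyperparameters are all assumptions made for the example:

```python
import math

# Hypothetical weekly records: (mean temperature C, relative humidity %,
# precipitation mm) -> 1 if lab-confirmed influenza positivity was high that week.
weeks = [
    ((8.0, 35.0, 2.0), 1), ((6.5, 30.0, 0.0), 1), ((10.0, 40.0, 5.0), 1),
    ((9.0, 38.0, 1.0), 1), ((30.0, 70.0, 40.0), 0), ((32.0, 75.0, 60.0), 0),
    ((28.0, 65.0, 30.0), 0), ((27.0, 60.0, 25.0), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(data, lr=0.1, epochs=2000):
    """Plain stochastic-gradient logistic regression with feature standardization."""
    xs = [list(x) for x, _ in data]
    ys = [y for _, y in data]
    k = len(xs[0])
    means = [sum(r[j] for r in xs) / len(xs) for j in range(k)]
    stds = [(sum((r[j] - means[j]) ** 2 for r in xs) / len(xs)) ** 0.5 for j in range(k)]
    scaled = [[(r[j] - means[j]) / stds[j] for j in range(k)] for r in xs]
    w, b = [0.0] * k, 0.0
    for _ in range(epochs):
        for row, y in zip(scaled, ys):
            g = sigmoid(b + sum(wj * xj for wj, xj in zip(w, row))) - y
            b -= lr * g
            w = [wj - lr * g * xj for wj, xj in zip(w, row)]
    def predict(x):
        row = [(x[j] - means[j]) / stds[j] for j in range(k)]
        return sigmoid(b + sum(wj * xj for wj, xj in zip(w, row)))
    return predict

predict = fit_logistic(weeks)
p_cold_dry = predict((7.0, 33.0, 1.0))     # winter-like week
p_hot_humid = predict((31.0, 72.0, 50.0))  # monsoon-like week
```

On these invented data the fitted model assigns a much higher probability of an influenza peak to the cold-dry week, consistent with the cold-dry seasonality the abstract reports.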

Keywords: influenza, climate, meteorological, environmental

Procedia PDF Downloads 200
25544 An Approximation of Daily Rainfall by Using a Pixel Value Data Approach

Authors: Sarisa Pinkham, Kanyarat Bussaban

Abstract:

The research aims to approximate the amount of daily rainfall by using a pixel value data approach. The daily rainfall maps from the Thailand Meteorological Department for the period from January to December 2013 were the data used in this study. The results showed that this approach can approximate the amount of daily rainfall with RMSE = 3.343.
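For reference, the RMSE figure quoted above is, under the usual definition, the square root of the mean squared difference between observed and approximated daily values; a minimal sketch with made-up numbers:

```python
import math

def rmse(observed, estimated):
    """Root-mean-square error between paired daily rainfall series (mm)."""
    assert len(observed) == len(estimated)
    return math.sqrt(sum((o - x) ** 2 for o, x in zip(observed, estimated)) / len(observed))

# Made-up daily rainfall (mm): gauge observations vs pixel-value estimates.
obs = [0.0, 12.5, 3.2, 0.0, 7.8]
est = [0.5, 10.0, 4.0, 0.0, 9.0]
error = rmse(obs, est)  # about 1.31 mm for these invented numbers
```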

Keywords: daily rainfall, image processing, approximation, pixel value data

Procedia PDF Downloads 387
25543 A Next-Generation Blockchain-Based Data Platform: Leveraging Decentralized Storage and Layer 2 Scaling for Secure Data Management

Authors: Kenneth Harper

Abstract:

The rapid growth of data-driven decision-making across various industries necessitates advanced solutions to ensure data integrity, scalability, and security. This study introduces a decentralized data platform built on blockchain technology to improve data management processes in high-volume environments such as healthcare and financial services. The platform integrates blockchain networks using Cosmos SDK and Polkadot Substrate alongside decentralized storage solutions like IPFS and Filecoin, coupled with decentralized computing infrastructure built on top of Avalanche. By leveraging advanced consensus mechanisms, we create a scalable, tamper-proof architecture that supports both structured and unstructured data. Key features include secure data ingestion, cryptographic hashing for robust data lineage, and Zero-Knowledge Proof mechanisms that enhance privacy while ensuring compliance with regulatory standards. Additionally, we implement performance optimizations through Layer 2 scaling solutions, including ZK-Rollups, which provide low-latency data access and trustless data verification across a distributed ledger. The findings from this exercise demonstrate significant improvements in data accessibility, reduced operational costs, and enhanced data integrity when tested in real-world scenarios. This platform reference architecture offers a decentralized alternative to traditional centralized data storage models, providing scalability, security, and operational efficiency.
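One ingredient mentioned above, cryptographic hashing for data lineage, can be sketched as a tamper-evident hash chain in which each record's hash covers both its payload and its predecessor's hash. The payload strings below are invented for the example, and this is far simpler than the platform the abstract describes:

```python
import hashlib
import json

def record_hash(payload, prev_hash):
    """Hash a record together with its predecessor's hash, chaining records
    so that altering any earlier record invalidates every later hash."""
    blob = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Build a small lineage chain from a genesis marker (payloads are hypothetical).
chain = []
prev = "0" * 64
for payload in ["ingest:record-42", "transform:anonymize", "export:analytics"]:
    prev = record_hash(payload, prev)
    chain.append((payload, prev))

def verify(chain):
    """Recompute every hash from the genesis marker and compare."""
    prev = "0" * 64
    for payload, h in chain:
        if record_hash(payload, prev) != h:
            return False
        prev = h
    return True
```

Blockchain data lineage generalizes this pattern: the chain of hashes is what makes retroactive modification detectable.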

Keywords: blockchain, cosmos SDK, decentralized data platform, IPFS, ZK-Rollups

Procedia PDF Downloads 28
25542 The Effect of Measurement Distribution on System Identification and Detection of Behavior of Nonlinearities of Data

Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri

Abstract:

In this paper, we consider and apply parametric modeling to experimental data from a dynamical system. We investigate different distributions of the output measurements from several dynamical systems. Using variance processing on the experimental data, we obtain the region of nonlinearity in the data, and identification of the output section is then applied under different situations and data distributions. Finally, the effect of the spread of the measurements, such as the variance, on identification, and the limitations of this approach, are explained.
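The variance processing mentioned above can be illustrated (as an assumption about the general approach, not the authors' exact procedure) by a sliding-window variance that flags regions where the output measurements spread out, for example around a nonlinearity or level shift:

```python
def rolling_variance(signal, window):
    """Population variance over a sliding window; spikes in the output can
    flag regions where measurements depart from a locally steady regime."""
    out = []
    for i in range(len(signal) - window + 1):
        chunk = signal[i:i + window]
        mu = sum(chunk) / window
        out.append(sum((x - mu) ** 2 for x in chunk) / window)
    return out

# A flat output followed by a level shift: the variance peaks at the transition.
measured = [0.0, 0.0, 0.0, 0.0, 5.0, 5.0, 5.0, 5.0]
profile = rolling_variance(measured, window=4)
```

The index of the variance peak localizes the transition region, which could then be excluded from or treated separately in a parametric identification step.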

Keywords: Gaussian process, nonlinearity distribution, particle filter, system identification

Procedia PDF Downloads 516
25541 Economic Valuation of Environmental Services Sustained by Flamboyant Park in Goiania-Go, Brazil

Authors: Brenda R. Berca, Jessica S. Vieira, Lucas G. Candido, Matheus C. Ferreira, Paulo S. A. Lopes Filho, Rafaella O. Baracho

Abstract:

This study aims to estimate the economic value of the environmental services sustained by Flamboyant Lourival Louza Municipal Park in Goiânia, Goiás, Brazil. Flamboyant Park is one of the most relevant urban parks, and it is located near a stadium, a shopping center, and two supercenters. In order to define the methods used for the valuation of Flamboyant Park, the first step was bibliographical research with a view to better understanding which method is most feasible for valuing the Park. Thus, the following direct methods were selected: travel cost, hedonic pricing, and contingent valuation. In addition, an indirect method (replacement cost) was applied at Flamboyant Park. The second step was creating and applying two surveys. The first survey was aimed at the visitors of the Park, addressing socio-economic issues, the use of the Park, as well as its importance and the visitors' willingness to pay for its existence. The second survey was aimed at the existing traders in the Park, in order to collect data regarding the profits obtained by them. In the end, the profile of the visitors was characterized and the methods of contingent valuation, travel cost, replacement cost, and hedonic pricing were applied, thus monetarily valuing the various ecosystem services sustained by the Park. Some services were not valued due to difficulties encountered during the process.

Keywords: contingent valuation, ecosystem services, economic environmental valuation, hedonic pricing, travel cost

Procedia PDF Downloads 226
25540 A Retrospective Study of Suicidal Deaths in Madinah for Ten Years

Authors: Radah Yousuf, Ashraf Shebl

Abstract:

Suicide is a tragic event with strong emotional repercussions for its survivors and for the families of its victims. There are thousands of cases all over the world. The many risk factors include mental disorders such as depression and substance abuse, including alcoholism and the use of benzodiazepines. Other suicides are impulsive acts due to stress, such as from financial difficulties, troubles with relationships, or bullying. The aim of this study is to survey the archives of suicide cases that underwent medicolegal examination at the forensic medicine center in Al Madinah Almunawarah, KSA, over the ten years between 1428 and 1438 H. For each case, data such as age, sex, time and place of the act, method of suicide, presence of a witness, and medical history were collected. This study demonstrates that suicide is more common in males than females, and the fourth decade was the most common age period. The most common method of suicide was hanging, followed by falling from a height. These results indicate that cultural and religious beliefs that discourage suicide and support the self-preservation instinct, together with suicide education programs that provide information to high school students and build awareness, are among the most important factors in addressing the problem. From the forensic point of view, the circumstantial evidence of every forensic case must be taken and recorded; a full history of social, medical, and psychological problems should be obtained; attending the scene of death is very important; a complete medicolegal investigation should be performed for every case; and a full autopsy with highly skilled techniques and facilities can help in diagnosing the type of crime.

Keywords: suicide, age, sex, hanging

Procedia PDF Downloads 148
25539 Building a Scalable Telemetry Based Multiclass Predictive Maintenance Model in R

Authors: Jaya Mathew

Abstract:

Many organizations are faced with the challenge of how to analyze and build machine learning models using their sensitive telemetry data. In this paper, we discuss how users can leverage the power of R without having to move their big data around, as well as a cloud-based solution for organizations willing to host their data in the cloud. By using ScaleR technology to benefit from parallelization and remote computing, or R Services on premises or in the cloud, users can leverage the power of R at scale without having to move their data around.

Keywords: predictive maintenance, machine learning, big data, cloud based, on premise solution, R

Procedia PDF Downloads 379
25538 Trusting the Big Data Analytics Process from the Perspective of Different Stakeholders

Authors: Sven Gehrke, Johannes Ruhland

Abstract:

Data is the oil of our time; without it, progress would come to a halt [1]. On the other hand, mistrust of data mining is increasing [2]. The paper at hand examines different aspects of the concept of trust and describes the information asymmetry among the typical stakeholders of a data mining project using the CRISP-DM phase model. Based on the identified influencing factors in relation to trust, problematic aspects of the current approach are verified using interviews with the stakeholders. The results of the interviews confirm the theoretically identified weak points of the phase model with regard to trust and point to potential research areas.

Keywords: trust, data mining, CRISP-DM, stakeholder management

Procedia PDF Downloads 94
25537 Wireless Transmission of Big Data Using Novel Secure Algorithm

Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha

Abstract:

This paper presents a novel algorithm for the secure, reliable, and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay, and destination nodes. Big data has to be transmitted from source to relay and from relay to destination, with security deployed at the physical layer. The cooperative jamming scheme allows transmission of big data in a more secure manner by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data-transmitting region, segmenting the selected region, determining a probability ratio for each node (capture node, non-capture node, and eavesdropper node) in every segment, and evaluating the probability using binary-based evaluation. If the transmission is secure, the two-hop transmission of big data resumes; otherwise, the attackers are countered by the cooperative jamming scheme and the data is then transmitted in two hops.

Keywords: big data, two-hop transmission, physical layer wireless security, cooperative jamming, energy balance

Procedia PDF Downloads 490
25536 Interlanguage Acquisition of a Postposition ‘e’ in Korean: Analysis of the Korean Novice Learners’ Output

Authors: Eunjung Lee

Abstract:

This study aims to analyze sentences generated by beginners learning 'e', a postposition in Korean, and to find the regularity of learners' interlanguage by investigating the uses of 'e' that appear, by meaning and function, in their interlanguage, and the conditions under which 'e' is used. The study was conducted under two main assumptions: first, that the learner's language constitutes a specific type of interlanguage; and second, that there is regularity in the interlanguage when students produce 'e' under specific conditions. Learners' output has various values and can be used as useful data for understanding interlanguage. Therefore, all sentences containing the postposition 'e' produced by English-speaking learners were searched in the Learners' Corpus Sharing Center of the National Institute of Korean Language in Korea, and the data were collected by limiting the learners to Levels 1 and 2. The 789 sentences in which 'e' was used were selected as the final subjects of the analysis. First, to understand the environmental characteristics of the postposition 'e', after summarizing the 13 meaning functions of 'e' that appear in three Korean grammar reference books, 1) the meaning function of 'e' used in each sentence was classified; 2) the nouns combined with 'e', the keywords of the sentences, and the characteristics of the modifiers, linkers, and predicates appearing before 'e' were analyzed; 3) the regularity of the novice learners' meaning functions was reviewed; and 4) the differences in regularity between Level 1 and Level 2 learners' meaning functions were examined.
From the study results, the novice learners showed that 1) they mainly used nouns related to 'time(시간), before(전), after(후), next(다음), the next(그다음), then(때), day of the week(요일), and season(계절)' in front of 'e' when they used 'e' with the meaning function of time; 2) they mainly used the verbs 'go(가다)', 'come(오다)', and 'go round(다니다)' as the predicate matching 'e' with the meaning function of direction and destination; and 3) they mainly used nouns related to locations or countries in front of 'e' as a meaning function postposition of place, mainly used the verbs 'be(있다), not be(없다), live(살다), be many(많다)' after 'e', with 'i(이) or ka(가)' mainly combined with the subject words in the case of 'be(있다), not be(없다)' or 'be many(많다)', and 'eun(은) or nun(는)' mainly combined with the subject words in front of 'live(살다)'. In addition, 4) they used 'e' indicating cause or reason in the form of 'because(때문에)', and 5) used 'e' with subjects matching predicates such as 'treat(대하다), like(들다), and catch(걸리다)'. From these results, the 'e' usage patterns of the Korean novice learners differed considerably by meaning function, and the learners' interlanguage regularity could be deduced. However, little difference in interlanguage regularity was found between Levels 1 and 2. The significance of this study lies in its attempt to understand the interlanguage system and regularity in learners' acquisition of the postposition 'e', which can be utilized to reduce their errors.

Keywords: interlanguage, interlagnage anaylsis, postposition ‘e’, Korean acquisition

Procedia PDF Downloads 129
25535 One Step Further: Pull-Process-Push Data Processing

Authors: Romeo Botes, Imelda Smit

Abstract:

In today's modern age of technology, vast amounts of data need to be processed in real time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices. These make use of different protocols, including TCP, UDP, and HTTP/S, for data communication to web servers and eventually to users. The data obtained from these devices may provide valuable information to users, but is mostly in an unreadable format which needs to be processed to provide information and business intelligence. This data is not always current; it is mostly historical data, and it is not subject to the consistency and redundancy measures that most other data usually is. Most important to the users is that the data be pre-processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers use various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One technique generally used is to pull data from the database server, process it, and push it back to the database server in one single step. Since the processing of the data usually takes some time, this keeps the database busy and locked for the period that the processing takes place, which decreases the overall performance of the database server and therefore of the system. This paper follows on a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on CPU, storage, and processing-time performance.
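The three-step pull-process-push pattern can be sketched as follows, using an in-memory SQLite database with hypothetical table and column names. The point is that the database is only touched briefly in steps 1 and 3, while the decoding in step 2 happens in an in-memory list, keeping locks short:

```python
import sqlite3

# Sketch of the three-step pull-process-push pattern (schema is hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("CREATE TABLE decoded (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO raw (payload) VALUES (?)",
                 [("12.5",), ("7.25",), ("30.0",)])
conn.commit()

# Step 1: pull -- one short read, then the database is free for other clients.
rows = conn.execute("SELECT id, payload FROM raw").fetchall()

# Step 2: process -- decode in an in-memory list, with no database locks held.
processed = [(rid, float(payload)) for rid, payload in rows]

# Step 3: push -- one short batched write.
conn.executemany("INSERT INTO decoded (id, value) VALUES (?, ?)", processed)
conn.commit()
```

The single-step alternative the paper critiques would instead read, decode, and write row by row inside one long transaction, holding the database locked for the entire processing time.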

Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list

Procedia PDF Downloads 244
25534 Extreme Temperature Forecast in Mbonge, Cameroon Through Return Level Analysis of the Generalized Extreme Value (GEV) Distribution

Authors: Nkongho Ayuketang Arreyndip, Ebobenow Joseph

Abstract:

In this paper, temperature extremes are forecast by employing the block maxima method of the generalized extreme value (GEV) distribution to analyse temperature data from the Cameroon Development Corporation (CDC). Considering two sets of data (raw data and simulated data) and two models (stationary and non-stationary) of the GEV distribution, return level analysis is carried out. It is found that in the stationary model, the return values are constant over time for the raw data, while for the simulated data the return values show an increasing trend with an upper bound. In the non-stationary model, the return levels of both the raw data and the simulated data show an increasing trend with an upper bound. This clearly shows that although temperatures in the tropics show a sign of increase in the future, there is a maximum temperature beyond which there is no exceedance. The results of this paper are very important for agricultural and environmental research.
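For reference, the T-block return level of a GEV distribution with location mu, scale sigma, and shape xi follows directly from inverting the GEV distribution function. A minimal sketch (the parameter values in the usage line are invented, not fitted to the CDC data):

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """Level exceeded on average once every T blocks (e.g. years) for a GEV
    distribution with location mu, scale sigma, and shape xi."""
    y = -math.log(1.0 - 1.0 / T)  # -log of the non-exceedance probability
    if abs(xi) < 1e-12:           # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Invented parameters (not fitted to the CDC data): a 50-year temperature level.
z50 = gev_return_level(mu=25.0, sigma=1.5, xi=0.1, T=50)
```

For xi < 0, the return levels increase with T but approach the finite upper endpoint mu - sigma/xi, which matches the bounded increasing trend the abstract describes.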

Keywords: forecasting, generalized extreme value (GEV), meteorology, return level

Procedia PDF Downloads 478
25533 An Analysis of the Impact of Immunosuppression upon the Prevalence and Risk of Cancer

Authors: Aruha Khan, Brynn E. Kankel, Paraskevi Papadopoulou

Abstract:

In recent years, extensive research on 'stress' has provided insight into its two distinct guises, namely the short-term (fight-or-flight) response versus the long-term (chronic) response. Specifically, the long-term or chronic response is associated with the suppression or dysregulation of immune function. It is also widely noted that the occurrence of cancer is strongly correlated with suppression of the immune system. It is thus necessary to explore the impact of long-term or chronic stress upon the prevalence and risk of cancer. To what extent can the dysregulation of immune function caused by long-term exposure to stress be controlled or minimized? This study focuses explicitly upon immunosuppression due to its ability to increase disease susceptibility, including to cancer itself. Based upon an analysis of the literature relating to the fundamental structure of the immune system alongside the prospective linkage of chronic stress and the development of cancer, immunosuppression may not necessarily correlate directly with the acquisition of cancer, although it remains a contributing factor. A cross-sectional analysis of the survey data from the University of Tennessee Medical Center (UTMC) and Harvard Medical School (HMS) will provide additional supporting evidence (or otherwise) for the study's hypothesis that immunosuppression (caused by the chronic stress response) notably impacts the prevalence of cancer. Finally, a multidimensional framework for education on chronic stress and its effects is proposed.

Keywords: immune system, immunosuppression, long–term (chronic) stress, risk of cancer

Procedia PDF Downloads 134
25532 Impact of Stack Caches: Locality Awareness and Cost Effectiveness

Authors: Abdulrahman K. Alshegaifi, Chun-Hsi Huang

Abstract:

Treating data according to its location in memory has received much attention in recent years because different data regions have different properties that matter for cache utilization. Stack data and non-stack data may interfere with each other's locality in the data cache, and an important property of stack data is its high spatial and temporal locality. In this work, we simulate a non-unified cache design that splits the data cache into separate stack and non-stack caches, keeping the two kinds of data apart. We observe that the overall hit rate of the non-unified design is sensitive to the size of the non-stack cache. We then investigate the size and associativity the stack cache needs to achieve a high hit ratio, especially when over 99% of accesses are directed to the stack cache. The results show that, on average, a stack cache hit rate of more than 99% is achieved with 2KB of capacity and 1-way associativity. Further, we analyze the improvement in hit rate when a small, fixed-size stack cache is added at level 1 to a unified cache architecture. The results show that adding a 1KB stack cache improves the overall hit rate of the unified design by approximately 3.9% on average for the Rijndael benchmark. The stack cache is simulated using the SimpleScalar toolset.
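The split-cache idea can be sketched with a toy trace-driven simulation. The synthetic trace, cache sizes, and direct-mapped policy below are illustrative assumptions, not the SimpleScalar configuration used in the paper; the point is simply that a small dedicated cache captures the stack's tight locality while random heap accesses do not pollute it.

```python
import random

class DirectMappedCache:
    """Minimal direct-mapped (1-way) cache model counting hits and accesses."""
    def __init__(self, size_bytes, line_size=32):
        self.line_size = line_size
        self.n_sets = size_bytes // line_size
        self.tags = [None] * self.n_sets
        self.hits = self.accesses = 0

    def access(self, addr):
        line = addr // self.line_size
        idx, tag = line % self.n_sets, line // self.n_sets
        self.accesses += 1
        if self.tags[idx] == tag:
            self.hits += 1
        else:
            self.tags[idx] = tag   # fill on miss

    @property
    def hit_rate(self):
        return self.hits / self.accesses

rng = random.Random(42)
STACK_BASE, HEAP_BASE = 0x7FF00000, 0x10000000
trace = []
for _ in range(50_000):
    if rng.random() < 0.7:   # most accesses touch a small, hot stack region
        trace.append(("stack", STACK_BASE + rng.randrange(1024)))
    else:                    # heap accesses scattered over 1 MB
        trace.append(("heap", HEAP_BASE + rng.randrange(1 << 20)))

unified = DirectMappedCache(8 * 1024)
stack_c, heap_c = DirectMappedCache(2 * 1024), DirectMappedCache(8 * 1024)
for kind, addr in trace:
    unified.access(addr)                                  # unified design
    (stack_c if kind == "stack" else heap_c).access(addr) # split design

print(f"unified hit rate: {unified.hit_rate:.3f}")
print(f"stack cache     : {stack_c.hit_rate:.3f}")
print(f"heap cache      : {heap_c.hit_rate:.3f}")
```

With the stack's working set fitting entirely in the 2KB dedicated cache, its hit rate sits near 100% after cold misses, mirroring the over-99% figure reported for a 2KB, 1-way stack cache.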

Keywords: hit rate, locality of program, stack cache, stack data

Procedia PDF Downloads 303
25531 Autonomic Threat Avoidance and Self-Healing in Database Management System

Authors: Wajahat Munir, Muhammad Haseeb, Adeel Anjum, Basit Raza, Ahmad Kamran Malik

Abstract:

Databases are key components of software systems. Given the exponential growth of data, a central concern is that data remain accurate and available. The data in databases are vulnerable to internal and external threats, especially in applications containing sensitive data, such as medical or military systems. Whenever data are changed with malicious intent, analysis of those data may lead to disastrous decisions. Autonomic self-healing in computer systems is modelled on the autonomic system of the human body. To guarantee the accuracy and availability of data, we propose a technique that, on a priority basis, tries to prevent malicious transactions from executing; if a malicious transaction does affect the system, the technique heals the system in an isolated mode so that availability is not compromised. Using this autonomic system, the management cost and time of DBAs can be minimized. Finally, we test our model and present the findings.

Keywords: autonomic computing, self-healing, threat avoidance, security

Procedia PDF Downloads 504
25530 Information Extraction Based on Search Engine Results

Authors: Mohammed R. Elkobaisi, Abdelsalam Maatuk

Abstract:

Search engines are large-scale information retrieval tools for the Web that are currently freely available to all. This paper explains how to convert the raw result counts returned by search engines into useful information, which represents a new method of data gathering compared with traditional methods. Submitting queries for a large number of keywords by hand takes considerable time and effort, so we developed a user interface program that searches for multiple keywords at once and collects the desired data automatically. The collected raw data are then processed using mathematical and statistical techniques to eliminate unwanted data and convert the remainder into usable form.
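The querying step depends on a particular engine's interface, so the sketch below assumes the raw hit counts have already been gathered and shows only a statistical cleaning step of the kind described: a robust median-absolute-deviation (MAD) rule that drops counts far from the rest. The keyword counts and threshold are made-up illustrations, not the authors' data or method.

```python
import statistics

def filter_outliers(counts, thresh=3.0):
    """Drop hit counts far from the median, using the robust MAD z-score rule."""
    values = list(counts.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:          # all counts (nearly) identical: nothing to drop
        return dict(counts)
    # 0.6745 rescales MAD to be comparable to a standard deviation.
    return {k: v for k, v in counts.items()
            if 0.6745 * abs(v - med) / mad <= thresh}

# Hypothetical raw result counts gathered for a batch of keywords.
raw_counts = {"data mining": 120_000, "text mining": 95_000,
              "web mining": 88_000, "asdfjkl": 3}   # obvious noise entry
clean = filter_outliers(raw_counts)
print(clean)
```

The MAD rule is used here instead of a plain z-score because, in small batches, a single extreme count inflates the standard deviation enough to mask itself.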

Keywords: search engines, information extraction, agent system

Procedia PDF Downloads 430
25529 A Modelling Study to Compare the Storm Surge along Oman Coast Due to Ashobaa and Nanauk Cyclones

Authors: R. V. Suresh Reddi, Vishnu S. Das, Mathew Leslie

Abstract:

The weather systems of the Arabian Sea are very dynamic in terms of monsoons and cyclones. Storms generated in the Arabian Sea are most likely to progress in a north-westerly or westerly direction towards Oman. The database of the Joint Typhoon Warning Center (JTWC) records a noteworthy number of cyclones that have hit the Oman coast or passed within close vicinity, so they must be considered in coastal and port engineering design and development projects. This paper provides a case study of two cyclones, Nanauk (2014) and Ashobaa (2015), to assess their impact on storm surge off the Oman coast. These two cyclones were selected because they are comparable in maximum wind, duration, central pressure, and month of occurrence: they are of similar strength but differ in track, allowing the impact of proximity to the coast to be considered. Of the two, Ashobaa is the 'extreme' case, passing in close proximity, while Nanauk remained further offshore and is considered the more typical case. The available 'best-track' data from the JTWC were obtained for the two selected cyclones, and the cyclone winds were generated using the Cyclone Wind Generation Tool of the MIKE modelling software from DHI (Danish Hydraulic Institute). Using the MIKE 21 hydrodynamic model, the storm surge was estimated at selected offshore locations along the Oman coast.

Keywords: coastal engineering, cyclone, storm surge, modelling

Procedia PDF Downloads 145
25528 Implementation and Performance Analysis of Data Encryption Standard and RSA Algorithm with Image Steganography and Audio Steganography

Authors: S. C. Sharma, Ankit Gambhir, Rajeev Arya

Abstract:

In today's era, data security is an important and demanding concern because it is essential for people using online banking, e-shopping, reservations, etc. The two major techniques used for secure communication are cryptography and steganography. Cryptographic algorithms scramble data so that an intruder cannot retrieve it, whereas steganography hides the data in a cover file so that the very presence of the communication is concealed. This paper presents the implementation of the Rivest-Shamir-Adleman (RSA) algorithm and the Data Encryption Standard (DES) algorithm, each combined with image and audio steganography. Both algorithms were coded in MATLAB, and the combined techniques are observed to perform better than the individual techniques, alleviating the risk of unauthorized access to a certain extent. These techniques could be used in banks, agencies such as RAW, and other settings where highly confidential data are transferred. Finally, comparisons of the two techniques are given in tabular form.
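The steganographic half of such a pipeline can be illustrated independently of the cipher: the sketch below hides an already-encrypted byte string in the least-significant bits of a cover buffer, which stands in for raw image pixel or audio sample bytes. This is a generic LSB scheme in Python, not the authors' MATLAB implementation, and the 2-byte length header is an assumption of this sketch.

```python
def embed(cover, payload):
    """Hide payload bytes in the least-significant bits of cover bytes.
    Uses 8 cover bytes per payload byte; a 2-byte length header is prepended."""
    data = len(payload).to_bytes(2, "big") + payload
    bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite only the LSB
    return bytes(stego)

def extract(stego):
    """Recover the hidden payload from the LSBs (header first, then body)."""
    def read_bytes(offset, n):
        out = bytearray()
        for b in range(n):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | (stego[offset + b * 8 + i] & 1)
            out.append(byte)
        return bytes(out)
    length = int.from_bytes(read_bytes(0, 2), "big")
    return read_bytes(16, length)

cover = bytes(range(256)) * 8            # stand-in for pixel/sample bytes
ciphertext = b"output of DES or RSA"     # any encrypted payload
stego = embed(cover, ciphertext)
recovered = extract(stego)
```

Because only the lowest bit of each cover byte changes, the stego buffer is visually and audibly close to the original, which is what conceals the presence of the communication.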

Keywords: audio steganography, data security, DES, image steganography, intruder, RSA, steganography

Procedia PDF Downloads 290
25527 Data Monetisation by E-commerce Companies: A Need for a Regulatory Framework in India

Authors: Anushtha Saxena

Abstract:

This paper examines the process of data monetisation by e-commerce companies operating in India. Data monetisation is the collecting, storing, and analysing of consumers' data in order to use the data generated for profit, revenue, etc. It enables e-commerce companies to obtain better business opportunities, innovative products and services, and a competitive edge over others, and to generate millions in revenue. This paper analyses the issues and challenges that the process of data monetisation raises; among those highlighted are the right to privacy and the protection of e-commerce consumers' data. At the same time, data monetisation cannot be prohibited, but it can be regulated and monitored by stringent laws and regulations. The right to privacy is a fundamental right guaranteed to the citizens of India through Article 21 of the Constitution of India, and the Supreme Court of India recognized it as such in the landmark judgment of Justice K.S. Puttaswamy (Retd) and Another v. Union of India. This paper highlights the legal issue of how e-commerce businesses violate individuals' right to privacy by using the data they collect and store for economic gain and monetisation. The researcher has focused mainly on e-commerce companies such as online shopping websites to analyse this issue. In the age of the Internet of Things, people have shifted to online shopping because it is convenient, easy, flexible, comfortable, and time-saving. At the same time, however, e-commerce companies store their consumers' data and use it, whether by selling it to third parties or by generating further data from what they already hold. This violates individuals' right to privacy, because consumers do not know how their data will be used when they provide it online, and data are often collected without individuals' consent.
The data used by analytics for monetisation may be structured, unstructured, etc. Indian legislation such as the Information Technology Act, 2000 does not effectively protect e-consumers with respect to their data and how e-commerce businesses use it to monetise and generate revenue. The paper also examines the draft Data Protection Bill, 2021, pending in the Parliament of India, and the impact this Bill could have on data monetisation. Finally, it studies the European Union General Data Protection Regulation and how such legislation could be helpful in the Indian scenario concerning e-commerce businesses and data monetisation.

Keywords: data monetization, e-commerce companies, regulatory framework, GDPR

Procedia PDF Downloads 120
25526 Research on the Overall Protection of Historical Cities Based on the 'City Image' in Ancient Maps: Take the Ancient City of Shipu, Zhejiang, China as an Example

Authors: Xiaoya Yi, Yi He, Zhao Lu, Yang Zhang

Abstract:

In the process of rapid urbanization, many historical cities have undergone excessive demolition and construction under the protection-and-renewal mechanism. The original patterns of these cities have been changed, their urban context cut off, and their historical features gradually lost, leaving the historical city decentralized and fragmented. The ancient city can be understood on two levels: the first refers to the ancient city as a physical space, defined by its historic walls; the second refers to the public's perceived image, derived from people's spatial identification of the ancient city. In ancient China, people drew maps to record their way of understanding the city. Starting from ancient maps and exploring the spatial characteristics of traditional Chinese cities from the perspective of urban imagery is therefore a key clue to understanding the spatial characteristics of historical cities at an overall level. Using typology, the spatial characteristics of the urban image presented by ancient maps are summarized at two levels: first, the spatial pattern composed of the center, axes, and boundary; second, the spatial elements comprising the city, the streets, and the sign system. Taking the ancient city of Shipu as a typical case, the 'city image' in the ancient map is analysed as a prototype and projected onto the current urban space. The research finds that, after a long period of evolution, the historical spatial pattern of the ancient city has shifted from 'dominant' to 'recessive control', and the historical spatial elements have become decentralized and fragmented: the wall that marks the boundary of the ancient city survives only as fragmentary remains, the streets and lanes that form its axes as structural remains, and the symbols of the ancient city center as site remains.
Based on this, the paper proposes methods for controlling the protection of land boundaries, protecting the streets and lanes, and selectively restoring the city wall and sign systems on the basis of accurate assessment. In addition, the paper emphasizes the continuity of the ancient city's traditional spatial pattern and attempts to explore a holistic conservation method for the ancient city in the modern context.

Keywords: ancient city protection, ancient maps, Shipu ancient city, urban image

Procedia PDF Downloads 128
25525 Experiments on Weakly-Supervised Learning on Imperfect Data

Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler

Abstract:

Supervised predictive models require labeled data for training. Complete and accurately labeled data, i.e., a 'gold standard', are not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data whose annotations are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine with a linear kernel performed best, achieving an area under the curve of 89.3% and an accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated simulated data and carried out a series of experiments demonstrating that models trained on imperfect data can (but do not always) outperform the accuracy of the training data; e.g., the area under the curve for some models exceeds 80% even when trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
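The simulation finding, that a linear model trained on labels with a 40% error rate can still beat the label accuracy when scored against the truth, can be reproduced in miniature. The sketch below uses synthetic linearly separable data and plain logistic regression, not the paper's SVM or clinical corpus; symmetric label noise shrinks the gradient but leaves the optimal decision boundary in place, which is why the model can outperform its own training labels.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4000, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y_true = (X @ w_true > 0).astype(float)

# Corrupt 40% of the training labels: the "imperfect gold standard".
flip = rng.random(n) < 0.40
y_noisy = np.where(flip, 1 - y_true, y_true)

X_tr, y_tr = X[:3000], y_noisy[:3000]   # train on noisy labels
X_te, y_te = X[3000:], y_true[3000:]    # evaluate against the truth

w = np.zeros(d)
for _ in range(500):                    # plain gradient descent on log-loss
    p = 1 / (1 + np.exp(-(X_tr @ w)))
    w -= 0.1 * X_tr.T @ (p - y_tr) / len(y_tr)

test_acc = float(np.mean((X_te @ w > 0) == (y_te > 0.5)))
label_acc = float(np.mean(y_tr == y_true[:3000]))
print(f"training-label accuracy: {label_acc:.2f}, "
      f"model accuracy on clean labels: {test_acc:.2f}")
```

Because the noise is symmetric across classes, averaging over many noisy examples still points the fitted weight vector along the true boundary, so the model's clean-data accuracy exceeds the roughly 60% accuracy of its training labels.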

Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation

Procedia PDF Downloads 199
25524 Transforming Healthcare Data Privacy: Integrating Blockchain with Zero-Knowledge Proofs and Cryptographic Security

Authors: Kenneth Harper

Abstract:

Blockchain technology offers solutions for managing healthcare data, addressing critical challenges in privacy, integrity, and access. This paper explores how privacy-preserving technologies such as zero-knowledge proofs (ZKPs) and homomorphic encryption (HE) enhance decentralized healthcare platforms by enabling secure computations and patient data protection. It examines the mathematical foundations of these methods, their practical applications, and how they meet the evolving demands of healthcare data security. Using real-world examples, the research highlights industry-leading implementations and offers a roadmap for future applications in secure, decentralized healthcare ecosystems.
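To make the homomorphic-encryption idea concrete, here is a toy Paillier cryptosystem in Python. Paillier is additively homomorphic: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, which is the property that lets an untrusted party aggregate encrypted health values without seeing them. The tiny primes and fixed randomness below are for illustration only and offer no real security.

```python
from math import gcd

# Toy Paillier keypair (insecure, illustrative primes).
p, q = 17, 19
n = p * q                                       # public modulus
n2 = n * n
g = n + 1                                       # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1), private
mu = pow(lam % n, -1, n)                        # valid because g = n + 1

def encrypt(m, r):
    """E(m) = g^m * r^n mod n^2, with blinding factor r coprime to n."""
    assert 0 <= m < n and gcd(r, n) == 1
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

c1, c2 = encrypt(42, 23), encrypt(101, 29)
total = decrypt(c1 * c2 % n2)   # homomorphic addition: decrypts to 42 + 101
print(total)
```

A real deployment would use primes of at least 1024 bits each and fresh random blinding per ciphertext; the algebra, however, is exactly the one shown.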

Keywords: blockchain, cryptography, data privacy, decentralized data management, differential privacy, healthcare, healthcare data security, homomorphic encryption, privacy-preserving technologies, secure computations, zero-knowledge proofs

Procedia PDF Downloads 19
25523 Operating Speed Models on Tangent Sections of Two-Lane Rural Roads

Authors: Dražen Cvitanić, Biljana Maljković

Abstract:

This paper presents models for predicting operating speeds on tangent sections of two-lane rural roads, developed from continuous speed data. The data correspond to 20 drivers of different ages and driving experience, driving their own cars along an 18 km section of a state road. The data were first used to determine maximum operating speeds on tangents and to compare them with speeds at the middle of tangents, i.e., the speed data used in most operating speed studies. Analysis of the continuous speed data indicated that spot speed data are not reliable indicators of the relevant speeds. Operating speed models for tangent sections were then developed. There was no significant difference between models based on speeds at the middle of tangent sections and models based on maximum operating speeds on tangent sections, and all of the developed models have a higher coefficient of determination than models developed from spot speed data. It can thus be concluded that the measurement method has a more significant impact on the quality of an operating speed model than the location of measurement.
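A linear operating-speed model of the kind described can be sketched with ordinary least squares. The synthetic predictors (tangent length and preceding curve radius), coefficients, and noise level below are illustrative assumptions, not the calibrated models from this study.

```python
import numpy as np

rng = np.random.default_rng(7)
tangent_len = rng.uniform(100, 800, 40)    # tangent lengths, metres
prev_curve_r = rng.uniform(100, 500, 40)   # preceding curve radii, metres
# Synthetic "maximum operating speed" on each tangent, km/h, with noise.
v85 = 60 + 0.03 * tangent_len + 0.02 * prev_curve_r + rng.normal(0, 2, 40)

# Fit V85 = b0 + b1*L + b2*R by ordinary least squares.
A = np.column_stack([np.ones_like(tangent_len), tangent_len, prev_curve_r])
coef, *_ = np.linalg.lstsq(A, v85, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((v85 - pred) ** 2) / np.sum((v85 - np.mean(v85)) ** 2)
print(f"V85 = {coef[0]:.1f} + {coef[1]:.3f}*L + {coef[2]:.3f}*R,  R^2 = {r2:.2f}")
```

The coefficient of determination R^2 computed this way is the statistic the abstract uses to compare models built from continuous data against those built from spot speeds.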

Keywords: operating speed, continuous speed data, tangent sections, spot speed, consistency

Procedia PDF Downloads 452