Search results for: minimum data set
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26530

24370 Determination of Safety Distance Around Gas Pipelines Using Numerical Methods

Authors: Omid Adibi, Nategheh Najafpour, Bijan Farhanieh, Hossein Afshin

Abstract:

Energy transmission pipelines are among the most vital infrastructures of any country, and several strict regulations have been enacted to enhance the safety of these lines and their vicinity. One of these regulations is the safety distance around high-pressure gas pipelines. Safety distance refers to the minimum distance from the pipeline at which people and equipment are not exposed to serious damage. In the present study, safety distances around high-pressure gas transmission pipelines were determined using numerical methods. For this purpose, gas leakage from a cracked pipeline and the resulting jet fire were simulated as a continuously ignited, three-dimensional, unsteady and turbulent case. The numerical simulations were based on the finite volume method, and the turbulence of the flow was modelled with the k-ω SST model. The combustion of the natural gas and air mixture was modelled using the eddy dissipation method. The results show that, due to the high pressure difference between the pipeline and the environment, the flow chokes at the cracked area and the velocity of the exhausted gas reaches the speed of sound. Analysis of the incident radiation results shows that the safety distances around a 42-inch high-pressure natural gas pipeline, based on the 5 and 15 kW/m² criteria, are 205 and 272 meters, respectively.

Keywords: gas pipelines, incident radiation, numerical simulation, safety distance

Procedia PDF Downloads 332
24369 BFDD-S: Big Data Framework to Detect and Mitigate DDoS Attack in SDN Network

Authors: Amirreza Fazely Hamedani, Muzzamil Aziz, Philipp Wieder, Ramin Yahyapour

Abstract:

Software-defined networking has in recent years caught the attention of many network designers as a successor to traditional networking. Unlike traditional networks, where the control and data planes reside together within a single device in the network infrastructure such as a switch or router, the two planes are kept separate in software-defined networks (SDNs). All critical decisions about packet routing are made on the network controller, and the data-plane devices forward packets based on these decisions. This type of network is vulnerable to DDoS attacks, which degrade the overall functioning and performance of the network by continuously injecting fake flows into it. This places a substantial burden on the controller side and ultimately leads to the inaccessibility of the controller and the loss of network service for legitimate users. Thus, protecting this novel network architecture against denial-of-service attacks is essential. In the world of cybersecurity, attacks and new threats emerge every day, and it is essential to have tools capable of managing and analyzing all this new information to detect possible attacks in real time. These tools should provide a comprehensive solution to automatically detect, predict and prevent abnormalities in the network. Big data encompasses a wide range of studies, but it mainly refers to the massive amounts of structured and unstructured data that organizations deal with on a regular basis; it concerns not only the volume of the data but also how data-driven information can be used to enhance decision-making processes, security, and the overall efficiency of a business. This paper presents an intelligent big data framework as a solution to handle the illegitimate traffic burden placed on the SDN network by numerous DDoS attacks. The framework entails an efficient defence and monitoring mechanism against DDoS attacks by employing state-of-the-art machine learning techniques.
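
A minimal sketch of the machine-learning detection stage, assuming per-flow features (packet rate, byte rate, flow duration, distinct destination ports) have already been collected from the controller; the synthetic data, the feature names and the random-forest classifier are illustrative stand-ins for whatever learner the framework actually runs on its big data platform:

```python
# Illustrative only: a tiny flow classifier standing in for the framework's ML stage.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical per-flow features: packets/s, bytes/s, flow duration (s), distinct dst ports
normal = rng.normal(loc=[50, 4e4, 12.0, 3], scale=[10, 1e4, 4.0, 1], size=(500, 4))
attack = rng.normal(loc=[900, 6e5, 1.5, 40], scale=[200, 1e5, 0.5, 10], size=(500, 4))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)          # 0 = legitimate flow, 1 = DDoS-like flow

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```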

Keywords: apache spark, apache kafka, big data, DDoS attack, machine learning, SDN network

Procedia PDF Downloads 169
24368 Total Chromatic Number of Δ-Claw-Free 3-Degenerated Graphs

Authors: Wongsakorn Charoenpanitseri

Abstract:

The total chromatic number χ"(G) of a graph G is the minimum number of colors needed to color the elements (vertices and edges) of G such that no incident or adjacent pair of elements receives the same color. Let G be a graph with maximum degree Δ(G), and consider a total coloring of G focusing on a vertex of maximum degree. This vertex needs one color, and the Δ(G) edges incident to it need Δ(G) further distinct colors. Hence, coloring all vertices and all edges of G requires at least Δ(G) + 1 colors; that is, χ"(G) is at least Δ(G) + 1. However, no graph G is known whose total chromatic number is greater than Δ(G) + 2. The Total Coloring Conjecture states that for every graph G, χ"(G) is at most Δ(G) + 2. In this paper, we prove the Total Coloring Conjecture for Δ-claw-free 3-degenerated graphs; that is, we prove that the total chromatic number of every Δ-claw-free 3-degenerated graph is at most Δ(G) + 2.
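
The lower bound argued above, and the conjectured upper bound established here for Δ-claw-free 3-degenerated graphs, can be written compactly:

```latex
% A maximum-degree vertex v and its \Delta(G) incident edges are pairwise
% adjacent or incident, so they need \Delta(G)+1 distinct colors:
\[
  \chi''(G) \;\ge\; \Delta(G) + 1 .
\]
% Total Coloring Conjecture (Behzad, Vizing), proved in this paper for
% \Delta-claw-free 3-degenerated graphs:
\[
  \Delta(G) + 1 \;\le\; \chi''(G) \;\le\; \Delta(G) + 2 .
\]
```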

Keywords: total colorings, total chromatic number, 3-degenerated, claw-free

Procedia PDF Downloads 175
24367 Welding Process Selection for Storage Tank by Integrated Data Envelopment Analysis and Fuzzy Credibility Constrained Programming Approach

Authors: Rahmad Wisnu Wardana, Eakachai Warinsiriruk, Sutep Joy-A-Ka

Abstract:

Selecting the most suitable welding process usually depends on experience or on common practice in similar companies. However, this approach generally ignores many criteria that can affect the selection of a suitable welding process. Therefore, knowledge automation through knowledge-based systems will significantly improve the decision-making process. This research proposes an integrated data envelopment analysis (DEA) and fuzzy credibility constrained programming approach for identifying the best welding process for stainless steel storage tanks in the food and beverage industry. The proposed approach uses the fuzzy concept and a credibility measure to deal with uncertain data from experts' judgment. Furthermore, 12 parameters are used to determine the most appropriate welding process among six competing welding processes.

Keywords: welding process selection, data envelopment analysis, fuzzy credibility constrained programming, storage tank

Procedia PDF Downloads 167
24366 Optimizing Power in Sequential Circuits by Reducing Leakage Current Using Enhanced Multi Threshold CMOS

Authors: Patikineti Sreenivasulu, K. Srinivasa Rao, A. Vinaya Babu

Abstract:

The demand for portability, performance and high functional integration density of digital devices makes the scaling of complementary metal oxide semiconductor (CMOS) devices inevitable. The increase in power consumption, coupled with the increasing demand for portable/hand-held electronics, has made power consumption a dominant concern in the design of VLSI circuits today. MTCMOS technology provides low-leakage and high-performance operation by utilizing high-speed, low-Vt (LVT) transistors for logic cells and low-leakage, high-Vt (HVT) devices as sleep transistors. Sleep transistors disconnect logic cells from the supply and/or ground to reduce leakage in sleep mode. In this technology, the energy consumed during mode transitions and the minimum time required to turn the circuit on upon receiving the wake-up signal are issues to be considered, because they can adversely impact the performance of the VLSI circuit. In this paper, we introduce an enhanced MTCMOS technique to optimize power in MTCMOS sequential circuits.

Keywords: power consumption, ultra-low power, leakage, sub threshold, MTCMOS

Procedia PDF Downloads 407
24365 On the Estimation of Crime Rate in the Southwest of Nigeria: Principal Component Analysis Approach

Authors: Kayode Balogun, Femi Ayoola

Abstract:

Crime is at an alarming rate in this part of the world, and many factors contribute to this antisocial behaviour among both the young and the old. In this work, principal component analysis (PCA) was used as a tool to reduce the dimensionality of the data and to identify the variables most associated with crime in the study region. Data were collected on twenty-eight crime variables from the National Bureau of Statistics (NBS) databank for a period of fifteen years, while retaining as much of the information as possible. We use PCA in this study to determine the number of major variables and contributors to crime in Southwest Nigeria. The results of our analysis revealed that eight principal components were retained, using the scree plot and loading plot, which implies that an eight-component solution is appropriate for the data. The eight components explained 93.81% of the total variation in the data set. We also found that the crimes most frequently committed in Southwestern Nigeria were assault, grievous harm and wounding, theft/stealing, burglary, house breaking, false pretence, unlawful arms possession and breach of public peace.
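
A minimal sketch of the dimensionality-reduction step, assuming the twenty-eight crime variables are available as a numeric table; scikit-learn's PCA and a synthetic matrix stand in for the authors' data and statistical package:

```python
# Illustrative sketch: retain the components explaining roughly 94% of the variance.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(15, 28))               # 15 years x 28 crime variables (synthetic stand-in)

X_std = StandardScaler().fit_transform(X)   # crime counts are on very different scales
pca = PCA().fit(X_std)

cum = np.cumsum(pca.explained_variance_ratio_)
n_keep = int(np.searchsorted(cum, 0.94)) + 1
print(f"components retained: {n_keep}, variance explained: {cum[n_keep - 1]:.2%}")

# Loadings show which crime variables dominate each retained component.
loadings = pca.components_[:n_keep]
```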

Keywords: crime rates, data, Southwest Nigeria, principal component analysis, variables

Procedia PDF Downloads 444
24364 On-Line Data-Driven Multivariate Statistical Prediction Approach to Production Monitoring

Authors: Hyun-Woo Cho

Abstract:

Detection of incipient abnormal events in production processes is important to improve the safety and reliability of manufacturing operations and to reduce losses caused by failures. The construction of calibration models for predicting faulty conditions is essential in deciding when to perform preventive maintenance. This paper presents a multivariate calibration monitoring approach based on the statistical analysis of process measurement data. The calibration model is used to predict faulty conditions from historical reference data. The approach utilizes variable selection techniques, and the predictive performance of several prediction methods is evaluated using real data. The results show that the calibration model based on a supervised probabilistic model yielded the best performance in this work. By adopting a proper variable selection scheme in calibration models, the prediction performance can be improved by excluding non-informative variables from the model building steps.

Keywords: calibration model, monitoring, quality improvement, feature selection

Procedia PDF Downloads 356
24363 Effect of Time of Planting on Powdery Mildew Development on Cucumber

Authors: H. Parameshwar Naik, Shripad Kulkarni

Abstract:

Powdery mildew is a serious fungal disease in highly humid areas with varied temperature conditions. In recent years the disease has become very severe due to uncertain weather conditions; a distinctive character of the disease is that it produces white mycelial growth on the upper and lower leaf surfaces, and in severe conditions it leads to defoliation. Results of the experiment revealed that sowing the crop in the first fortnight (I FN) of July recorded the minimum mean disease severity (7.96%), followed by the crop sown in the II FN of July (13.19%), as against the crop sown in the II FN of August (41.44%), the I FN of September (33.78%) and the I FN of October (33.77%). In the first date of sowing, infection started at 45 DAS and progressed until 73 DAS, reaching 14.66 percent; in the second date of sowing the disease progressed up to 22.66 percent, and in the third date of sowing up to 59.35 percent. In the subsequent sowings, the disease started earlier and progressed up to 66.15 percent, and in the sixth and seventh dates of sowing the disease progressed up to 43.15 percent and 59.85 percent, respectively. Disease progress is very fast after 45 days after sowing, and the highest disease incidence was noticed at 73 DAS irrespective of the date of sowing. From the results of the present study, it is clear that disease development will be very high if the crop is sown between the first fortnight of August and the first fortnight of September.

Keywords: cucumber, India, Karnataka, powdery mildew

Procedia PDF Downloads 263
24362 Multilevel Gray Scale Image Encryption through 2D Cellular Automata

Authors: Rupali Bhardwaj

Abstract:

Cryptography is the science of using mathematics to encrypt and decrypt data; the data are converted into some other, gibberish form, and then the encrypted data are transmitted. The primary purpose of this paper is to provide two levels of security through a two-step process: rather than transmitting the message bits directly, the message is first encrypted using 2D cellular automata and then scrambled with the Arnold Cat Map transformation; this provides an additional layer of protection and reduces the chance of the transmitted message being detected. A comparative analysis of the effectiveness of the scrambling technique is provided using scrambling-degree measurement parameters, i.e., the Gray Difference Degree (GDD) and the correlation coefficient.
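
A minimal sketch of the second (scrambling) step only, assuming a square grayscale image stored as a NumPy array; the 2D cellular-automata encryption stage and the GDD/correlation measurements are omitted:

```python
# Illustrative sketch: Arnold cat map scrambling of an N x N grayscale image.
import numpy as np

def arnold_cat_map(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Apply (x, y) -> ((x + y) mod N, (x + 2y) mod N) to the pixel coordinates."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "image must be square"
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

img = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)   # synthetic stand-in image
scrambled = arnold_cat_map(img, iterations=5)
# The map is periodic, so enough further iterations recover the original image.
```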

Keywords: scrambling, cellular automata, Arnold cat map, game of life, gray difference degree, correlation coefficient

Procedia PDF Downloads 377
24361 Survey Based Data Security Evaluation in Pakistan Financial Institutions against Malicious Attacks

Authors: Naveed Ghani, Samreen Javed

Abstract:

In today’s heterogeneous network environment, there is a growing demand for mutually distrusting clients to jointly operate secure networks that prevent malicious attacks, since the defining task of propagating malicious code is to locate new targets to attack. Residual risk always remains, no matter what solutions are implemented or whatever security methodology or standards are adopted. Security is a first and crucial concern in the field of computer science, and the main aim of computer security is the gathering of information over a secure network. No one need wonder what all that malware is trying to do: it is trying to steal money through data theft, bank transfers, stolen passwords, or swiped identities. With the help of our survey, we learn about the importance of whitelisting, anti-malware programs, security patches, log files, honey pots, and other measures used in banks for financial data protection. However, there is also a need to implement IPv6 tunneling with cryptographic data transformation, according to the requirements of new technology, to protect organizations from new malware attacks that craft their own messages and send them to the target. In this paper, the writer proposes implementing IPv6 tunneling sessions for private data transmission from financial organizations whose secrecy needs to be safeguarded.

Keywords: network worms, malware infection propagating malicious code, virus, security, VPN

Procedia PDF Downloads 358
24360 Interactive IoT-Blockchain System for Big Data Processing

Authors: Abdallah Al-ZoubI, Mamoun Dmour

Abstract:

The spectrum of IoT devices is becoming widely diversified, entering almost all possible fields and finding applications in industry, health, finance, logistics, and education, to name a few. The number of active IoT endpoint sensors and devices exceeded the 12 billion mark in 2021 and is expected to reach 27 billion in 2025, with over $34 billion in total market value. This sheer rise in the number and use of IoT devices brings with it considerable concerns regarding data storage, analysis, manipulation and protection. IoT blockchain-based systems have recently been proposed as a decentralized solution for large-scale data storage and protection. COVID-19 has actually accelerated the desire to utilize IoT devices, as it impacted both demand and supply and significantly affected several regions due to logistic reasons such as supply chain interruptions, a shortage of shipping containers and port congestion. An IoT-blockchain system is proposed to handle big data generated by a distributed network of sensors and controllers in an interactive manner. The system is designed using the Ethereum platform, which utilizes smart contracts, programmed in Solidity, to execute and manage data generated by IoT sensors and devices such as the Raspberry Pi 4 running Raspbian with add-on hardware security modules. The proposed system runs a number of applications hosted by a local machine used to validate transactions. It then sends data to the rest of the network through the InterPlanetary File System (IPFS) and Ethereum Swarm, forming a closed IoT ecosystem run by blockchain, in which a number of distributed IoT devices can communicate and interact, thus forming a closed, controlled environment. A prototype has been deployed with three IoT handling units distributed over a wide geographical space in order to examine its feasibility, performance and costs. Initial results indicated that big IoT data retrieval and storage is feasible and interactivity is possible, provided that certain conditions of cost, speed and throughput are met.

Keywords: IoT devices, blockchain, Ethereum, big data

Procedia PDF Downloads 150
24359 Keynote Talk: The Role of Internet of Things in the Smart Cities Power System

Authors: Abdul-Rahman Al-Ali

Abstract:

As the number of mobile devices grows exponentially, about 50 billion devices are estimated to be connected to the Internet by the year 2020; by the end of this decade, an average of eight connected devices per person worldwide is expected. These 50 billion devices are not only mobile phones and data-browsing gadgets, but also machine-to-machine and man-to-machine devices. With such growing numbers of devices, the Internet of Things (IoT) concept has recently become one of the emerging technologies. Within smart grid technologies, smart home appliances, Intelligent Electronic Devices (IED) and Distributed Energy Resources (DER) are major IoT objects that can be addressed using IPv6. These objects are called the smart grid Internet of Things (SG-IoT). The SG-IoT generates big data that requires a high-speed computing infrastructure, widespread computer networks, big data storage, software, and platform services. A utility company's control and data centers cannot handle such a large number of devices, high-speed processing, and massive data storage. Building a large data center's infrastructure takes a long time; it also requires widespread communication networks and huge capital investment. Maintaining and upgrading control and data centers' infrastructure and communication networks, as well as updating and renewing software licenses, collectively requires additional cost. This can be overcome by utilizing emerging computing paradigms such as cloud computing, which can be used as a smart grid enabler to replace legacy utility data centers. The talk will highlight the role of IoT and cloud computing services and their development models within smart grid technologies.

Keywords: intelligent electronic devices (IED), distributed energy resources (DER), internet, smart home appliances

Procedia PDF Downloads 324
24358 Statistical Analysis of Interferon-γ for the Effectiveness of an Anti-Tuberculous Treatment

Authors: Shishen Xie, Yingda L. Xie

Abstract:

Tuberculosis (TB) is a potentially serious infectious disease that remains a health concern. The Interferon Gamma Release Assay (IGRA) is a blood test to find out whether an individual is tuberculosis positive or negative. This study applies statistical analysis to the clinical data on interferon-gamma levels of seventy-three subjects diagnosed with pulmonary TB and undergoing anti-tuberculous treatment. Data analysis is performed to determine whether there is a significant decline in interferon-gamma levels for the subjects over a period of six months, and to infer whether the anti-tuberculous treatment is effective.
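
A minimal sketch of the kind of comparison involved, assuming each subject's interferon-gamma level is available at baseline and after six months of treatment; the synthetic values and the paired t-test stand in for the authors' full analysis:

```python
# Illustrative sketch: test whether IFN-gamma levels decline over six months.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
baseline = rng.normal(loc=10.0, scale=3.0, size=73)              # synthetic baseline levels
six_months = baseline - rng.normal(loc=2.0, scale=1.5, size=73)  # simulated decline

t_stat, p_value = stats.ttest_rel(baseline, six_months)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3g}")
# A small p-value would support a significant decline, i.e. treatment effectiveness.
```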

Keywords: data analysis, interferon gamma release assay, statistical methods, tuberculosis infection

Procedia PDF Downloads 306
24357 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components

Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea

Abstract:

Students' textual feedback can hold unique patterns and useful information about the learning process: it can carry information about the advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can form a key input for institutions’ decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using a part-of-speech (PoS) feature with four machine learning algorithms: support vector machines, decision trees, random forests, and naive Bayes. The proposed framework has two tasks: first, to use the above algorithms to build an optimal model that automatically classifies the whole data set into two subsets, one tailored to assessment practices (assessment-related) and the other containing the non-assessment-related data; second, to use the same algorithms to build an optimal model for the whole data set and the new data subsets to automatically detect their sentiment. The significance of this paper is in comparing the performance of the above four algorithms using the part-of-speech feature with the performance of the same algorithms using an n-gram feature. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models: understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithms, interpreting mined patterns, and consolidating the discovered knowledge. The experimental results show that both kinds of models performed very well on the first task, but on the second task the models that used the part-of-speech feature underperformed in comparison with the models that used unigram and bigram features.
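
A minimal sketch of the part-of-speech feature idea, assuming NLTK's tagger and a scikit-learn SVM; the feedback strings and labels below are invented placeholders, and the full framework also compares decision trees, random forests, naive Bayes and an n-gram baseline:

```python
# Illustrative sketch: represent each feedback text by its PoS-tag sequence,
# then classify it as assessment-related or not.
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def pos_string(text: str) -> str:
    tags = nltk.pos_tag(nltk.word_tokenize(text))
    return " ".join(tag for _, tag in tags)        # e.g. "DT NN VBD RB JJ"

texts = ["The exam was too long", "The lab rooms are cold"]       # placeholder feedback
labels = [1, 0]                                                    # 1 = assessment-related

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit([pos_string(t) for t in texts], labels)
print(model.predict([pos_string("The quiz questions were unclear")]))
```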

Keywords: assessment, part of speech, sentiment analysis, student feedback

Procedia PDF Downloads 142
24356 CT Images Based Dense Facial Soft Tissue Thickness Measurement by Open-source Tools in Chinese Population

Authors: Ye Xue, Zhenhua Deng

Abstract:

Objectives: Facial soft tissue thickness (FSTT) data can be obtained from CT scans by measuring face-to-skull distances at sparsely distributed anatomical landmarks manually located on the face and skull. However, automated measurement over dense points on 3D facial and skull models using open-source software has become a viable option due to the development of computer-assisted imaging technologies. By utilizing dense FSTT information, it becomes feasible to generate plausible automated facial approximations. Therefore, establishing a comprehensive, detailed and densely calculated FSTT database is crucial to enhancing the accuracy of facial approximation. Materials and methods: This study utilized head CT scans from 250 Chinese adults of Han ethnicity, with 170 participants originally born and residing in northern China and 80 participants in southern China. The age of the participants ranged from 14 to 82 years, and all samples were divided into five non-overlapping age groups. Additionally, samples were divided into three categories based on BMI information. The 3D Slicer software was utilized to segment bone and soft tissue based on different Hounsfield Unit (HU) thresholds, and surface models of the face and skull were reconstructed for all samples from the CT data. The following procedures were performed using MeshLab: converting the face models into hollowed cropped surface models and automatically measuring the Hausdorff distance (referred to as FSTT) between the skull and face models. Hausdorff point clouds were colorized based on depth value and exported as PLY files. A histogram of the depth distributions could be viewed and subdivided into smaller increments. All PLY files were visualized to show the Hausdorff distance value of each vertex. Basic descriptive statistics (i.e., mean, maximum, minimum, standard deviation, etc.) and the distribution of FSTT were analysed considering sex, age, BMI and birthplace. Statistical methods employed included multiple regression analysis, ANOVA and principal component analysis (PCA). Results: The distribution of FSTT is mainly influenced by BMI and sex, as further supported by the results of the PCA. Additionally, FSTT values exceeding 30 mm were found to be more sensitive to sex. Birthplace-related differences were observed in the forehead, orbital, mandibular and zygomatic regions. Specifically, there are distribution variances in the depth range of 20-30 mm, particularly in the mandibular region. Northern males exhibit thinner FSTT in the frontal region of the forehead compared to southern males, while females show fewer distribution differences between the north and the south, except for the zygomatic region. The observed distribution variance in the orbital region could be attributed to differences in orbital size and shape. Discussion: This study provides a database of the distribution of FSTT in Chinese individuals and suggests that open-source tools perform well for FSTT measurement. By incorporating birthplace as an influential factor in the distribution of FSTT, a greater level of detail can be achieved in facial approximation.
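
A minimal sketch of the distance computation only, assuming the reconstructed face and skull surfaces have been exported as point clouds; SciPy's nearest-neighbour query on synthetic coordinates stands in for MeshLab's Hausdorff-distance filter:

```python
# Illustrative sketch: per-vertex face-to-skull distances (FSTT) from point clouds.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
skull_pts = rng.normal(size=(5000, 3)) * 80.0                # synthetic skull surface (mm)
face_pts = skull_pts[:4000] + rng.normal(loc=6.0, scale=3.0, size=(4000, 3))  # offset "face"

tree = cKDTree(skull_pts)
fstt, _ = tree.query(face_pts)       # nearest skull point for every face vertex

print(f"mean {fstt.mean():.1f} mm, sd {fstt.std():.1f} mm, "
      f"min {fstt.min():.1f} mm, max {fstt.max():.1f} mm")
```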

Keywords: forensic anthropology, forensic imaging, cranial facial reconstruction, facial soft tissue thickness, CT, open-source tool

Procedia PDF Downloads 58
24355 Fast Fourier Transform-Based Steganalysis of Covert Communications over Streaming Media

Authors: Jinghui Peng, Shanyu Tang, Jia Li

Abstract:

Steganalysis seeks to detect the presence of secret data embedded in cover objects, and there is an imminent demand to detect hidden messages in streaming media. This paper shows how a steganalysis algorithm based on the Fast Fourier Transform (FFT) can be used to detect the existence of secret data embedded in streaming media. The proposed algorithm uses machine parameter characteristics and a network sniffer to determine whether the Internet traffic contains streaming channels. The detected streaming data is then transferred from the time domain to the frequency domain through the FFT. The distributions of power spectra in the frequency domain of original VoIP streams and stego VoIP streams are then compared using a t-test, yielding a p-value of 7.5686E-176, which is below the threshold. The results indicate that the proposed FFT-based steganalysis algorithm is effective in detecting secret data embedded in VoIP streaming media.
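
A minimal sketch of the core comparison, assuming frames from cover and stego streams have already been captured; the synthetic signals and the crude embedding below only illustrate the FFT-then-t-test idea, not the full sniffer-based pipeline:

```python
# Illustrative sketch: compare power spectra of cover vs. stego frames with a t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_frames, frame_len = 200, 512

cover = rng.normal(size=(n_frames, frame_len))
stego = cover.copy()
stego[:, ::8] += rng.normal(scale=0.05, size=(n_frames, frame_len // 8))  # tiny embedding

def mean_power(frames):
    spectra = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return spectra.mean(axis=1)              # one power figure per frame

t_stat, p_value = stats.ttest_ind(mean_power(cover), mean_power(stego))
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")   # a small p suggests hidden data is present
```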

Keywords: steganalysis, security, Fast Fourier Transform, streaming media

Procedia PDF Downloads 147
24354 Study the Effect of Tolerances for Press Tool Assembly: Computer Aided Tolerance Analysis

Authors: Subodh Kumar, Ramkisan Pawar, Gopal D. Belurkar

Abstract:

This paper describes a study of a simple blanking tool. In a blanking or piercing operation, the punch and die should be concentric for proper cutting. In this study, a tolerance analysis method is used to analyze the variation in the press tool assembly. This variation results in eccentricity between the die and punch due to the cumulative tolerance of the parts used in the assembly. A 1D variation analysis was performed with the CREO Parametric computer-aided design (CAD) software powered by the CETOL 6σ computer-aided tolerance analysis software. Use of the CAD analysis software gave the opportunity to find the cause of variation in the tool assembly. Accordingly, a new specification of tolerances and process settings for die set manufacturing was determined. Tolerance allocation and tolerance analysis were performed iteratively to conclude that the position tolerance and size tolerance of the hole in the top plate for the bush, as well as the size tolerance of the guide pillar, were most responsible for the eccentricity between punch and die. This work proposes optimum tolerances for the press tool assembly parts to achieve 100% yield for the specified 0.015 mm minimum tolerance zone.
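
A minimal sketch of a stack-up check, assuming the contributing tolerances behave as independent normal variables with each tolerance taken as ±3σ; the Monte Carlo run and the invented dimensions stand in for the CETOL 6σ analysis:

```python
# Illustrative sketch: Monte Carlo stack-up of punch-die eccentricity contributors.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Hypothetical contributors (mm), each tolerance interpreted as +/- 3 sigma:
hole_position = rng.normal(0.0, 0.010 / 3, n)    # position tolerance of hole for bush
hole_size     = rng.normal(0.0, 0.008 / 3, n)    # size tolerance of hole in top plate
pillar_size   = rng.normal(0.0, 0.006 / 3, n)    # size tolerance of guide pillar

eccentricity = np.abs(hole_position + hole_size + pillar_size)
yield_pct = np.mean(eccentricity <= 0.015) * 100   # spec: 0.015 mm zone
print(f"predicted yield: {yield_pct:.2f}%")
```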

Keywords: blanking, GD&T (Geometric Dimension and Tolerancing), DPMU (defects per million unit), press tool, stackup analysis, tolerance allocation, yield percentage

Procedia PDF Downloads 361
24353 Privacy-Preserving Model for Social Network Sites to Prevent Unwanted Information Diffusion

Authors: Sanaz Kavianpour, Zuraini Ismail, Bharanidharan Shanmugam

Abstract:

Social Network Sites (SNSs) can serve as an invaluable platform to transfer information across a large number of individuals. A substantial component of communicating and managing information is to identify which individuals will influence others in propagating information, and whether dissemination of information will occur in the absence of social signals about that information. Classifying the final audience of social data is difficult, as fully controlling the social contexts in which it is transferred among individuals is not possible. Hence, undesirable diffusion of information to an unauthorized individual on SNSs can threaten individuals’ privacy. This paper highlights information diffusion in SNSs and emphasizes the most significant privacy issues for individuals on SNSs. The goal of this paper is to propose a privacy-preserving model that pays close regard to individuals’ data in order to control the availability of data and improve privacy by providing access to the data for appropriate third parties without compromising the advantages of information sharing through SNSs.

Keywords: anonymization algorithm, classification algorithm, information diffusion, privacy, social network sites

Procedia PDF Downloads 321
24352 Transport Properties of Alkali Nitrites

Authors: Y. Mateyshina, A. Ulihin, N. Uvarov

Abstract:

Electrolytes with different types of charge carriers find wide application in various devices, e.g. sensors, electrochemical equipment, batteries and others. One of the important components ensuring stable functioning of such equipment is the electrolyte, which has to be characterized by high conductivity, thermal stability, and a wide electrochemical window. In addition to the many advantageous characteristics of liquid electrolytes, solid-state electrolytes offer good mechanical stability and a wide working temperature range. Thus the search for new systems of solid electrolytes with high conductivity is an actual task of solid-state chemistry. Families of alkali perchlorates and nitrates have been investigated by us earlier. In the literature, data about the transport properties of alkali nitrites are absent. Nevertheless, alkali nitrites MeNO2 (Me = Li+, Na+, K+, Rb+ and Cs+), except for the lithium salt, have high-temperature phases with a crystal structure of the NaCl type. High-temperature phases of nitrites are orientationally disordered, i.e. the non-spherical anions are reoriented over several equivalent directions in the crystal lattice. Pure lithium nitrite LiNO2 is characterized by an ionic conductivity near 10⁻⁴ S/cm at 180°C, is more stable than lithium nitrate, and can be used as a component for the synthesis of composite electrolytes. In this work, composite solid electrolytes in the binary systems LiNO2 - A (A = MgO, γ-Al2O3, Fe2O3, CeO2, SnO2, SiO2) were synthesized and their structural, thermodynamic and electrical properties investigated. The alkali nitrite was obtained by an exchange reaction from water solutions of barium nitrite and alkali sulfate. The synthesized salt was characterized by the X-ray powder diffraction technique using a D8 Advance X-ray diffractometer with Cu Kα radiation. Using thermal analysis, the temperatures of dehydration and thermal decomposition of the salt were determined. The conductivity was measured using a two-electrode scheme in a fore-vacuum (6.7 Pa) with an HP 4284A precision LCR meter in the frequency range 20 Hz < ν < 1 MHz. Solid composite electrolytes LiNO2 - A (A = MgO, γ-Al2O3, Fe2O3, CeO2, SnO2, SiO2) have been synthesized by mixing of preliminarily dehydrated components followed by sintering at 250°C. In the series of alkali metal nitrites Li+-Cs+, the conductivity varies non-monotonically with increasing cation radius. The minimum conductivity is observed for KNO2; however, with further increase in the cation radius, the conductivity tends to increase. The work was supported by the Russian Foundation for Basic Research, grant #14-03-31442.

Keywords: conductivity, alkali nitrites, composite electrolytes, transport properties

Procedia PDF Downloads 319
24351 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

The logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data on the time leading up to an event when censored cases exist, whereas the logistic regression model is mostly applicable when the independent variables consist of numerical as well as nominal values and the outcome variable is binary (dichotomous). Arguments and findings of many researchers have focused on the overview of the Cox and logistic regression models and their different applications in different areas. In this work, the analysis is done on secondary data whose source is the SPSS exercise data on breast cancer, with a sample size of 1121 women, where the main objective is to show the difference in application between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some analysis was done manually, i.e. on lymph node status, and SPSS software was used to analyse the rest of the data. This study found that there is a difference in application between the Cox and logistic regression models: the Cox regression model is used if one wishes to analyse data that also include the follow-up time, whereas the logistic regression model analyses data without follow-up time. They also have different measures of association: the hazard ratio and the odds ratio for the Cox and logistic regression models, respectively. A similarity between the two models is that both are applicable to the prediction of the outcome of a categorical variable, i.e. a variable that can accommodate only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. The two models can be applied in many other research settings since both are suitable methods for analysing data, but the Cox regression model is the more recommended.
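
A minimal sketch of the two fits side by side, assuming a table with follow-up time, an event indicator and lymph node status; the lifelines and scikit-learn packages and the simulated data stand in for the SPSS workflow described above:

```python
# Illustrative sketch: Cox model (uses follow-up time) vs. logistic model (ignores it).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 300
nodes = rng.integers(0, 2, n)                                        # lymph node status (0/1)
time = rng.exponential(scale=np.where(nodes == 1, 40, 80), size=n)   # months to event
event = (time < 60).astype(int)                                      # death observed in follow-up
time = np.minimum(time, 60)                                          # administrative censoring

df = pd.DataFrame({"nodes": nodes, "time": time, "event": event})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()                                     # reports the hazard ratio for nodes

logit = LogisticRegression().fit(df[["nodes"]], df["event"])
print("odds ratio:", np.exp(logit.coef_[0][0]))         # ignores the follow-up time entirely
```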

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 455
24350 Text Mining of Twitter Data Using a Latent Dirichlet Allocation Topic Model and Sentiment Analysis

Authors: Sidi Yang, Haiyi Zhang

Abstract:

Twitter is a microblogging platform where millions of users daily share their attitudes, views, and opinions. Using a probabilistic Latent Dirichlet Allocation (LDA) topic model to discern the most popular topics in the Twitter data is an effective way to analyze a large set of tweets and find a set of topics in a computationally efficient manner. Sentiment analysis provides an effective method to show the emotions and sentiments found in each tweet and an efficient way to summarize the results in a manner that is clearly understood. The primary goal of this paper is to explore text mining and to extract and analyze useful information from unstructured text using two approaches: LDA topic modelling and sentiment analysis, by examining Twitter plain text data in English. These two methods allow people to mine data more effectively and efficiently. The LDA topic model and sentiment analysis can also be applied to provide insightful views in business and scientific fields.
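
A minimal sketch combining the two approaches on a few placeholder tweets, assuming scikit-learn's LDA implementation and NLTK's VADER sentiment analyser; the real study works on a much larger English Twitter corpus:

```python
# Illustrative sketch: LDA topics plus VADER sentiment on placeholder tweets.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

nltk.download("vader_lexicon", quiet=True)

tweets = [
    "Loving the new phone camera, photos look amazing",
    "Traffic this morning was awful, missed my meeting",
    "Great camera but the battery drains far too fast",
]

# Topic model: term counts -> LDA with a small number of topics.
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    print(f"topic {k}:", [terms[i] for i in comp.argsort()[-3:]])

# Sentiment: VADER compound score per tweet (-1 negative ... +1 positive).
sia = SentimentIntensityAnalyzer()
for t in tweets:
    print(t[:30], sia.polarity_scores(t)["compound"])
```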

Keywords: text mining, Twitter, topic model, sentiment analysis

Procedia PDF Downloads 179
24349 Navigating Government Finance Statistics: Effortless Retrieval and Comparative Analysis through Data Science and Machine Learning

Authors: Kwaku Damoah

Abstract:

This paper presents a methodology and software application (App) designed to empower users in accessing, retrieving, and comparatively exploring data within the hierarchical network framework of the Government Finance Statistics (GFS) system. It explores the ease of navigating the GFS system and identifies the gaps filled by the new methodology and App. The GFS embodies a complex Hierarchical Network Classification (HNC) structure, encapsulating institutional units, revenues, expenses, assets, liabilities, and economic activities. Navigating this structure demands specialized knowledge, experience, and skill, posing a significant challenge for effective analytics and fiscal policy decision-making. Many professionals encounter difficulties deciphering these classifications, hindering confident utilization of the system. This accessibility barrier prevents a vast number of professionals, students, policymakers, and the public from leveraging the abundant data and information within the GFS. Leveraging the R programming language, data science analytics and machine learning, an efficient methodology was developed that enables users to access, navigate, and conduct exploratory comparisons. The machine learning Fiscal Analytics App (FLOWZZ) democratizes access to advanced analytics through its user-friendly interface, breaking down expertise barriers.

Keywords: data science, data wrangling, drilldown analytics, government finance statistics, hierarchical network classification, machine learning, web application.

Procedia PDF Downloads 70
24348 Gender Identity: Omani College Students Negotiate Their Cultural Expectations

Authors: Mohammed Alkharusi

Abstract:

This study addresses issues of gender identity faced by female and male Omani students studying at higher education institutions. The study interviewed 16 male and female students to understand how cultural expectations of gender influence these students’ communication and, as a result, how these students negotiate their gender identity to facilitate communication practices (or not) with the opposite sex. The context, focus, and theoretical underpinnings of the study are presented. Given that the researcher is also an Omani Arab, methodological and ethical challenges (e.g., recruiting and engaging with participants, and conducting semi-structured face-to-face interviews) are discussed reflexively. The analysis found that students continued to follow cultural expectations. They kept interaction with the opposite sex to a minimum, as illustrated by preferring to work only with the same sex on group assignments, avoiding sitting alone with the opposite sex, and not participating in academic activities. In the social context, the students started negotiating their gender identity and adopted communication practices that facilitated their social communication with the opposite sex; for example, they agreed to work with the opposite sex in various mixed social activities. In conclusion, students desired to maintain their cultural expectations but adopted certain communication practices to interact with the opposite sex.

Keywords: communication, cultural expectations, gender, identity, negotiation

Procedia PDF Downloads 389
24347 Optimizing Oil Production through 30-Inch Pipeline in Abu-Attifel Field

Authors: Ahmed Belgasem, Walid Ben Hussin, Emad Krekshi, Jamal Hashad

Abstract:

Waxy crude oil, characterized by its high paraffin wax content, poses significant challenges in the oil and gas industry due to its increased viscosity and semi-solid state at reduced temperatures. The wax formation process, which includes precipitation, crystallization, and deposition, becomes problematic when crude oil temperatures fall below the wax appearance temperature (WAT), or cloud point. Addressing these issues, this paper introduces a technical solution designed to mitigate wax appearance and enhance the oil production process in the Abu-Attifel Field via a 30-inch crude oil pipeline. A comprehensive flow assurance study validates the feasibility and performance of this solution across various production rates, temperatures, and operational scenarios. The study's findings indicate that maintaining the crude oil's temperature above a minimum threshold of 63°C is achievable through the strategic placement of two heating stations along the pipeline route. This approach effectively prevents wax deposition, gelling, and subsequent mobility complications, thereby bolstering the overall efficiency, reliability, safety, and economic viability of the production process. Moreover, this solution significantly curtails the environmental repercussions traditionally associated with wax deposition, which can accumulate up to 7,500 kg. The research methodology involves a comprehensive flow assurance study to validate the feasibility and performance of the proposed solution. The study considers various production rates, temperatures, and operational scenarios. It includes crude oil analysis to determine the wax appearance temperature (WAT), as well as the evaluation and comparison of operating options for the heating stations. The findings indicate that the proposed solution effectively prevents wax deposition, gelling, and subsequent mobility complications. By maintaining the crude oil's temperature above the specified threshold, the solution improves the overall efficiency, reliability, safety, and economic viability of the oil production process, and it contributes to reducing the environmental repercussions associated with wax deposition. In conclusion, the paper presents a technical solution that optimizes oil production in the Abu-Attifel Field by addressing wax formation problems through the strategic placement of two heating stations. The solution effectively prevents wax deposition, improves overall operational efficiency, and contributes to environmental sustainability. Further research is suggested for field data validation and cost-benefit analysis.

Keywords: oil production, wax depositions, solar cells, heating stations

Procedia PDF Downloads 73
24346 Value Chain Based New Business Opportunity

Authors: Seonjae Lee, Sungjoo Lee

Abstract:

Excavation of new business opportunities is necessary to remain competitive in the current business environment; companies survive rapidly changing industry conditions by adapting their business strategy and reducing technology challenges. Traditionally, two methods are used to excavate new businesses. The first is qualitative analysis, in which opportunities are gathered through expert opinion; in the second, new technologies are discovered through quantitative analysis of patent data. The second method increases time and cost, and patent data are restricted in their use for the purpose of discovering business opportunities. This study presents new business opportunities in a form customized to a company's characteristics (sector, size, etc.) by adopting the value chain perspective, and thereby contributes a model for creating new business opportunities. It utilizes the trademark database of the Korean Intellectual Property Office (KIPO) and the proprietary company information database of the Korea Enterprise Data (KED). These data are key to discovering new business opportunities through analysis of competitors and advanced business trademarks (Module 1) and trading analysis of competitors found in the KED (Module 2).

Keywords: value chain, trademark, trading analysis, new business opportunity

Procedia PDF Downloads 372
24345 An Efficient Hybrid Approach Based on Multi-Agent System and Emergence Method for the Integration of Systematic Preventive Maintenance Policies

Authors: Abdelhadi Adel, Kadri Ouahab

Abstract:

This paper proposes a hybrid algorithm for the integration of systematic preventive maintenance policies into hybrid flow shop scheduling to minimize makespan. We have implemented a problem-solving approach for optimizing the processing time based on metaheuristic methods. The proposed approach is inspired by the behavior of the human body: it is a hybridization of a multi-agent system with mechanisms inspired by the human body, especially genetics. The effectiveness of our approach has been demonstrated repeatedly in this paper. To solve such a complex problem, we proposed an approach in which we use advanced operators such as uniform crossover and single-point mutation. The proposed approach is applied to three preventive maintenance policies. These policies are intended to maximize availability or to maintain a minimum level of reliability during the production chain. The results show that our algorithm outperforms existing algorithms. We assumed that the machines might be periodically unavailable during the production scheduling.
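
A minimal sketch of the two genetic operators named above (uniform crossover and single-point mutation) inside a plain genetic algorithm on bit-string chromosomes with a toy objective; the multi-agent hybridization, the flow shop encoding and the maintenance policies are omitted:

```python
# Illustrative sketch: uniform crossover and single-point mutation in a plain GA.
import numpy as np

rng = np.random.default_rng(7)
CHROM_LEN, POP, GENS = 20, 40, 100

def fitness(chrom):
    return chrom.sum()                      # toy objective; the paper minimises makespan

def uniform_crossover(a, b):
    mask = rng.random(CHROM_LEN) < 0.5      # each gene taken from either parent
    return np.where(mask, a, b)

def single_point_mutation(chrom):
    child = chrom.copy()
    i = rng.integers(CHROM_LEN)
    child[i] = 1 - child[i]                 # flip one randomly chosen gene
    return child

pop = rng.integers(0, 2, size=(POP, CHROM_LEN))
for _ in range(GENS):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]          # keep the better half
    children = [single_point_mutation(uniform_crossover(*rng.choice(parents, 2)))
                for _ in range(POP - len(parents))]
    pop = np.vstack([parents, children])

print("best fitness:", max(fitness(c) for c in pop))
```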

Keywords: multi-agent systems, emergence, genetic algorithm, makespan, systematic maintenance, scheduling, hybrid flow shop scheduling

Procedia PDF Downloads 336
24344 Towards Addressing the Cultural Snapshot Phenomenon in Cultural Mapping Libraries

Authors: Mousouris Spiridon, Kavakli Evangelia

Abstract:

This paper focuses on Digital Libraries (DLs) that contain and geovisualise cultural data, highlighting the need to define them as a separate category termed Cultural Mapping Libraries, based on their inherent connection of culture with geographic location and their design requirements in support of visual representation of cultural data on the map. An exploratory analysis of DLs that conform to the above definition brought forward the observation that existing Cultural Mapping Libraries fail to geovisualise the entirety of cultural data per point of interest, thus resulting in a Cultural Snapshot phenomenon. The existence of this phenomenon was reinforced by the results of a systematic bibliographic search. In order to address the Cultural Snapshot, this paper proposes the use of Semantic Web principles to efficiently interconnect spatial cultural data through time, per geographic location. In this way points of interest are transformed into scenery where culture evolves over time. This evolution is expressed as occurrences taking place chronologically, in an event-oriented approach, a conceptualization also endorsed by the CIDOC Conceptual Reference Model (CIDOC CRM). In particular, we posit the use of CIDOC CRM as the baseline for defining the logic of Cultural Mapping Libraries as part of the Culture Domain, in accordance with the Digital Library Reference Model, in order to define the rules of cultural data management by the system. Our future goal is to transform this conceptual definition into inferencing rules that resolve the Cultural Snapshot and lead to a more complete geovisualisation of cultural data.

Keywords: digital libraries, semantic web, geovisualization, CIDOC-CRM

Procedia PDF Downloads 109
24343 An Evaluation of the Impact of E-Banking on Operational Efficiency of Banks in Nigeria

Authors: Ibrahim Rabiu Darazo

Abstract:

The research was conducted on the impact of e-banking on the operational efficiency of banks in Nigeria, a case study of selected banks (Diamond Bank Plc, GTBank Plc, and Fidelity Bank Plc). The research is quantitative and uses both primary and secondary sources of data collection. Questionnaires were used to obtain accurate data: 150 questionnaires were distributed among staff and customers of the three banks, and the data collected were analysed using the chi-square test, whereas the secondary data were obtained from relevant textbooks, journals and websites. It is clear from the findings that the use of e-banking has improved the efficiency of these banks in terms of providing efficient services to customers electronically, using Internet banking, telephone banking and ATMs, and reducing the time taken to serve customers; e-banking allows new customers to open an account online, and customers have access to their accounts at all times, 24/7. E-banking provides access to customer information from the database, and the cost of cheques and postage was eliminated using e-banking. The recommendations at the end of the research include: the banks should try to update their electronic gadgets; e-fraud (internal and external) should be controlled; banks should employ qualified manpower; and biometric ATMs should be introduced to reduce fraud using ATM cards, as is done in other countries such as the USA.
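
A minimal sketch of the chi-square step, assuming the questionnaire responses are summarised as a contingency table; the counts below are invented placeholders, not the study's data:

```python
# Illustrative sketch: chi-square test on a respondent-group vs. response contingency table.
from scipy.stats import chi2_contingency

# Rows: respondent group (staff, customers); columns: Agree, Neutral, Disagree.
observed = [[60, 10, 5],
            [55, 12, 8]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A small p-value would indicate that responses depend on the respondent group.
```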

Keywords: banks, electronic banking, operational efficiency of banks, biometric ATMs

Procedia PDF Downloads 332
24342 The Search of Possibility of Running Six Sigma Process in IT Education Center

Authors: Mohammad Amini, Aliakbar Alijarahi

Abstract:

This research, titled 'The Search of Possibility of Running Six Sigma Process in IT Education Center', aims to test the possibility of running and using the Six Sigma process in an IT education center system. This process is a good method for reducing process errors. To evaluate running Six Sigma in the IT education center, some variables relevant to this process were selected: the amount of support from the organization's top management for the process; the current expertise; the ability of the training system to compensate for shortfalls; the degree of match between the current culture and the Six Sigma culture; and the current quality compared with the quality gained from running Six Sigma. To evaluate these variables, questions were selected and a questionnaire with 28 questions was prepared and distributed in our target population. Since our working environment is highly competitive, the organization needs to reduce errors to a minimum, otherwise it loses its customers. The questionnaire was given to 55 persons, and the forms were filled in and returned by 50 persons. After analysing the forms, the following results were obtained: the IT education center needs to use and run this system (Six Sigma) to improve its process quality; most of the factors needed to run Six Sigma exist in the IT education center, but there is a need for more support.

Keywords: education, customer, self-action, quality, continuous improvement process

Procedia PDF Downloads 340
24341 Optimize Data Evaluation Metrics for Fraud Detection Using Machine Learning

Authors: Jennifer Leach, Umashanger Thayasivam

Abstract:

The use of technology has benefited society in more ways than one ever thought possible. Unfortunately, though, as society’s knowledge of technology has advanced, so has its knowledge of ways to use technology to manipulate people. This has led to a simultaneous advancement in the world of fraud. Machine learning techniques can offer a possible solution to help counter this advancement. This research explores how the use of various machine learning techniques can aid in detecting fraudulent activity across two different types of fraudulent data, and the accuracy, precision, recall, and F1 score were recorded for each method. Each machine learning model was also tested across five different training and testing splits in order to discover which testing split and technique would lead to the best results.
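
A minimal sketch of the evaluation loop, assuming a labelled fraud dataset and scikit-learn; the synthetic imbalanced data and the single classifier below only mirror the procedure of scoring accuracy, precision, recall and F1 across five train/test splits:

```python
# Illustrative sketch: score one classifier over five different train/test splits.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)  # imbalanced labels

for test_size in (0.1, 0.2, 0.3, 0.4, 0.5):          # five different splits
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size,
                                              stratify=y, random_state=0)
    pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)
    print(f"test={test_size:.0%}  acc={accuracy_score(y_te, pred):.3f}  "
          f"prec={precision_score(y_te, pred, zero_division=0):.3f}  "
          f"rec={recall_score(y_te, pred, zero_division=0):.3f}  "
          f"f1={f1_score(y_te, pred, zero_division=0):.3f}")
```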

Keywords: data science, fraud detection, machine learning, supervised learning

Procedia PDF Downloads 196