Search results for: solar–climatic data
22876 A Similar Image Retrieval System for Auroral All-Sky Images Based on Local Features and Color Filtering
Authors: Takanori Tanaka, Daisuke Kitao, Daisuke Ikeda
Abstract:
The aurora is an attractive phenomenon, but its whole mechanism is difficult to understand. A data-intensive approach might be effective for elucidating such a phenomenon. To pursue it, we need labeled data showing when auroras appeared and of what types. In this paper, we propose an image retrieval system for auroral all-sky images, some of which include discrete and diffuse auroras while the others contain no aurora. The proposed system retrieves images similar to a query image by using a popular image recognition method. Using 300 all-sky images obtained at Tromsø, Norway, we evaluate two image recognition methods, with and without our original color filtering method. The best performance is achieved when SIFT is combined with the color filtering: accuracy is 81.7% for discrete auroras and 86.7% for diffuse auroras.
Keywords: data-intensive science, image classification, content-based image retrieval, aurora
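As a rough illustration of the retrieval approach described above, here is a minimal Python/OpenCV sketch of SIFT matching preceded by a color filter. The abstract does not specify the filter's parameters, so the HSV band for auroral green and the ratio-test threshold are assumptions, not the authors' values.

```python
import cv2
import numpy as np

def color_filter(img_bgr, lo=(35, 60, 60), hi=(90, 255, 255)):
    # Keep greenish auroral pixels; the HSV band here is a guess, since
    # the abstract does not give the filter's actual parameters.
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    return cv2.bitwise_and(img_bgr, img_bgr, mask=mask)

def sift_similarity(img_a, img_b):
    # Similarity = number of SIFT matches surviving Lowe's ratio test.
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY), None)
    _, des_b = sift.detectAndCompute(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY), None)
    if des_a is None or des_b is None:
        return 0
    pairs = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < 0.75 * p[1].distance)

# Retrieval: rank all database images against the filtered query, e.g.
# query = color_filter(cv2.imread("query.png"))
# ranked = sorted(db, key=lambda im: sift_similarity(query, color_filter(im)),
#                 reverse=True)
```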
Procedia PDF Downloads 449
22875 Effect of Diamagnetic Additives on Defects Level of Soft LiTiZn Ferrite Ceramics
Authors: Andrey V. Malyshev, Anna B. Petrova, Anatoly P. Surzhikov
Abstract:
The article presents the results of the influence of diamagnetic additives on the defects level of ferrite ceramics. For this purpose, we use a previously developed method based on mathematical analysis of experimental temperature dependences of the initial permeability. A phenomenological expression for describing such dependences was suggested, and an interpretation of its main parameters was given. It was shown that the main criterion of the integral defects level of ferrite ceramics is the ratio of two parameters correlating with the elastic stress value in the material. Model samples containing a controlled number of intergranular phase inclusions served to prove the validity of the proposed method, as well as to assess its sensitivity in comparison with traditional XRD (X-ray diffraction) analysis. The broadening data of the diffraction peaks of the model samples served for this comparison. The defects level data obtained by the proposed method are in good agreement with the X-ray data, and the method showed high sensitivity. Therefore, the legitimacy of selecting the β/α parameter ratio of the phenomenological expression as a characteristic of the elastic state of the ferrite ceramics is confirmed. In addition, the obtained data can be used in the detection of non-magnetic phases and in testing the optimal sintering technology for the production of soft magnetic ferrites.
Keywords: Curie point, initial permeability, integral defects level, homogeneity
Procedia PDF Downloads 134
22874 TAXAPRO, A Streamlined Pipeline to Analyze Shotgun Metagenomes
Authors: Sofia Sehli, Zainab El Ouafi, Casey Eddington, Soumaya Jbara, Kasambula Arthur Shem, Islam El Jaddaoui, Ayorinde Afolayan, Olaitan I. Awe, Allissa Dillman, Hassan Ghazal
Abstract:
The ability to promptly sequence whole genomes at a relatively low cost has revolutionized the way we study the microbiome. Microbiologists are no longer limited to studying what can be grown in a laboratory and instead have the opportunity to rapidly identify the makeup of microbial communities in a wide variety of environments. Analyzing whole genome sequencing (WGS) data is a complex process that involves multiple moving parts and can be rather unintuitive for scientists who don't typically work with this type of data. Thus, to help lower the barrier for less computationally inclined individuals, TAXAPRO was developed at the first Omics Codeathon, held virtually by the African Society for Bioinformatics and Computational Biology (ASBCB) in June 2021. TAXAPRO is an advanced metagenomics pipeline that accurately assembles organelle genomes from whole-genome sequencing data. TAXAPRO seamlessly combines WGS analysis tools to create a pipeline that automatically processes raw WGS data and presents organism abundance information in both a tabular and a graphical format. TAXAPRO was evaluated using COVID-19 patient gut microbiome data. Analysis performed by TAXAPRO demonstrated a high abundance of Clostridia and Bacteroidia and a low relative abundance of Proteobacteria in the gut microbiome of patients hospitalized with COVID-19, consistent with the original findings derived using a different analysis methodology. This provides crucial evidence that the TAXAPRO workflow delivers reliable organism abundance information overnight without the hassle of performing the analysis manually.
Keywords: metagenomics, shotgun metagenomic sequence analysis, COVID-19, pipeline, bioinformatics
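For a rough sense of what such pipeline automation looks like in practice, the Python sketch below chains a read-trimming step into a taxonomic classifier and tabulates the classifier's report. The specific tools (fastp, Kraken2), their flags, and the file names are stand-ins chosen for illustration; the abstract does not list TAXAPRO's actual components.

```python
import subprocess
import pandas as pd

def run(cmd):
    """Run one pipeline stage, failing loudly if the tool errors out."""
    subprocess.run(cmd, check=True)

# Hypothetical stage wiring: quality-trim paired reads, then classify.
run(["fastp", "-i", "r1.fq.gz", "-I", "r2.fq.gz",
     "-o", "r1.clean.fq.gz", "-O", "r2.clean.fq.gz"])
run(["kraken2", "--db", "k2_db", "--paired",
     "r1.clean.fq.gz", "r2.clean.fq.gz",
     "--report", "report.txt", "--output", "classified.txt"])

# Standard Kraken2 report columns -> tabular and graphical abundance.
cols = ["pct", "clade_reads", "direct_reads", "rank", "taxid", "name"]
report = pd.read_csv("report.txt", sep="\t", names=cols)
genera = report[report["rank"] == "G"].nlargest(10, "pct")
genera.plot.barh(x="name", y="pct")  # top-10 genus abundance chart
```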
Procedia PDF Downloads 221
22873 Investigation of External Pressure Coefficients on Large Antenna Parabolic Reflector Using Computational Fluid Dynamics
Authors: Varun K, Pramod B. Balareddy
Abstract:
Estimation of wind forces plays a significant role in the design of large antenna parabolic reflectors. The gain of the antenna system at higher frequencies is very sensitive to reflector surface accuracy, so accurate estimation of wind forces becomes important as the primary input for design and analysis of the reflector system. In the present work, numerical simulation of wind flow using Computational Fluid Dynamics (CFD) software is used to investigate the external pressure coefficients. An extensive comparative study has been made between the CFD results and published wind tunnel data for different wind angles of attack (α) acting over the concave and convex surfaces, respectively. Flow simulations using CFD are carried out to estimate the coefficients of drag, lift, and moment for the parabolic reflector. Pressure coefficients (Cp) over the front and rear faces of the reflector are extracted to study the net pressure variations. These resultant pressure variations are compared with the published wind tunnel data for different angles of attack. It was observed from the CFD simulations that both the convex and concave faces of the reflector system experience a band of pressure variations for positive and negative angles of attack, respectively, whereas in the published wind tunnel data, pressure variations over convex surfaces are assumed to be uniform and vice versa. Chordwise and spanwise pressure variations were calculated and compared with the published experimental data. In the present work, it was observed that the maximum pressure coefficients for α ranging from +30° to -90° and at α = +90° were lower, while for α ranging from +45° to +75° they were higher than the wind tunnel data. This variation is due to the non-uniform pressure distribution observed over the front and back faces of the reflector. Variations in Cd, Cl, and Cm over α = +90° to α = -90° were in close agreement with the experimental data.
Keywords: angle of attack, drag coefficient, lift coefficient, pressure coefficient
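For reference, the pressure coefficient extracted from such simulations follows the standard definition Cp = (p - p_inf) / (0.5 * rho * U_inf^2). A minimal post-processing sketch, with illustrative numbers rather than values from this study:

```python
import numpy as np

def pressure_coefficient(p, p_inf, rho, u_inf):
    # Cp = (p - p_inf) / (0.5 * rho * U_inf**2), the definition used
    # when post-processing CFD surface pressures.
    return (p - p_inf) / (0.5 * rho * u_inf**2)

# Net loading on the reflector: front-face Cp minus rear-face Cp at
# matching surface points (gauge pressures in Pa, air at sea level).
cp_front = pressure_coefficient(np.array([180.0, 95.0]), 0.0, 1.225, 20.0)
cp_rear = pressure_coefficient(np.array([-40.0, -60.0]), 0.0, 1.225, 20.0)
cp_net = cp_front - cp_rear
print(cp_net)
```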
Procedia PDF Downloads 257
22872 Beliefs on Reproduction of Women in Fish Port Community: An Explorative Study on the Beliefs on Conception, Childbirth, and Maternal Care of Women in Navotas Fish Port Community
Authors: Marie Kristel A. Gabawa
Abstract:
The accessibility of health programs, specifically family planning and maternal and child health care (FP/MCH) programs, is generally low in urban poor communities. Moreover, most FP/MCH programs are framed in medical terms that are usually not part of how urban poor dwellers conceive of the body. This study aims to explore beliefs on reproduction encompassing, but not limited to, beliefs on conception, pregnancy, and maternal and child health care. The sites of study will be the 2 barangays of North Bay Boulevard South 1 (NBBS1) and North Bay Boulevard South 2 (NBBS2). These 2 barangays are the residential communities nearest to the Navotas Fish Port Complex (NFPC). Data gathered will be analyzed using the grounded-theory method of analysis, with the theories of cultural materialism and equity feminism as foundation. Survey questionnaires, key informant interviews, and focus group discussions will be utilized in gathering data. Further, it will be recommended that the data be presented to health program initiators and used as a tool to customize FP/MCH programs to the perceptions and beliefs of women residing in NBBS1 and NBBS2, and to correct any misinformation about FP/MCH techniques.
Keywords: beliefs on reproduction, fish port community, family planning, maternal and child health care, Navotas
Procedia PDF Downloads 259
22871 Estimation of the Upper Tail Dependence Coefficient for Insurance Loss Data Using an Empirical Copula-Based Approach
Authors: Adrian O'Hagan, Robert McLoughlin
Abstract:
Considerable focus in the world of insurance risk quantification is placed on modeling loss values from lines of business (LOBs) that possess upper tail dependence. Copulas such as the Joe, Gumbel and Student-t copula may be used for this purpose. The copula structure imparts a desired level of tail dependence on the joint distribution of claims from the different LOBs. Alternatively, practitioners may possess historical or simulated data that already exhibit upper tail dependence, through the impact of catastrophe events such as hurricanes or earthquakes. In these circumstances, it is not desirable to induce additional upper tail dependence when modeling the joint distribution of the loss values from the individual LOBs. Instead, it is of interest to accurately assess the degree of tail dependence already present in the data. The empirical copula and its associated upper tail dependence coefficient are presented in this paper as robust, efficient means of achieving this goal.
Keywords: empirical copula, extreme events, insurance loss reserving, upper tail dependence coefficient
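A minimal sketch of the empirical plug-in estimate described here: ranks give pseudo-observations on (0, 1), and the upper tail dependence coefficient lambda_U = lim_{u->1} P(V > u | U > u) is approximated by the joint exceedance frequency over a threshold u divided by 1 - u. The Pareto-style simulated losses and the choice u = 0.95 are illustrative assumptions.

```python
import numpy as np

def upper_tail_dependence(x, y, u=0.95):
    # Pseudo-observations on (0, 1) via normalized ranks.
    n = len(x)
    ru = np.argsort(np.argsort(x)) / (n + 1)
    rv = np.argsort(np.argsort(y)) / (n + 1)
    joint = np.mean((ru > u) & (rv > u))   # empirical P(U > u, V > u)
    return joint / (1.0 - u)               # approximates P(V > u | U > u)

# Two LOBs sharing a heavy-tailed catastrophe component; in practice
# one inspects the estimate's stability over a range of thresholds u.
rng = np.random.default_rng(1)
shock = rng.pareto(3, 10_000)
lob1 = shock + rng.pareto(3, 10_000)
lob2 = shock + rng.pareto(3, 10_000)
print(upper_tail_dependence(lob1, lob2))
```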
Procedia PDF Downloads 284
22870 Forest Degradation and Implications for Rural Livelihood in Kaimur Reserve Forest of Bihar, India
Authors: Shashi Bhushan, Sucharita Sen
Abstract:
In India, forests and people are inextricably linked, since millions of people live adjacent to or within protected areas and harvest forest products. Indian forests sustain, through their climatic nature, a range of social, economic, and cultural activities. People surrounding forest areas depend on this resource not only for their livelihoods but also for religious ceremonies, social customs, and herbal medicines, as well as for other resources determined by the forest, such as agricultural land, groundwater level, and soil fertility. The assumption that fuelwood and fodder extraction, which is part of local livelihood, leads to deforestation has so far been the dominant mainstream view in deforestation discourses. Given the occupational division across social groups in Kaimur reserve forest, it is important to understand the differential nature of dependence on forest resources. This paper attempts to assess the nature of dependence and the impact of forest degradation on rural households across various social groups. An additional element added to the enquiry is the way degradation of forests, leading to scarcity of forest-based resources, impacts the patterns of dependence across various social groups. Change in forest area was calculated through land use/land cover analysis using remote sensing techniques, and data on the different forest-based economic activities carried out by households were collected through a primary survey in Kaimur reserve forest in the state of Bihar, India. The general finding indicates that the Scheduled Tribe and Scheduled Caste communities, the most socially and economically deprived sections of rural society, are involved in a significant way in the collection of fuelwood, fodder, and fruits, both for self-consumption and for sale in the market, while other groups use fuelwood, fruit, and fodder for self-use only. Dependence on local forest resources for fuelwood consumption was a primary need for all social groups due to easy accessibility and the lack of alternative energy sources. In the last four decades, degradation of the forest has made a direct impact on the rural community, mediated through the socio-economic structure and resulting in a shift from forest-based occupations to cultivation and manual labour in agricultural and non-agricultural activities. Thus there is a need to review policies with respect to 'community forest management', since this study clearly shows that engagement with and dependence on forest resources is socially differentiated. Tying the degree of dependence to forest management thus becomes extremely important from the view of sustainable forest resource management. The statization of forest resources also has to keep in view the intrinsic way in which the forest-dependent population interacts with the forest.
Keywords: forest degradation, livelihood, social groups, tribal community
Procedia PDF Downloads 169
22869 Blockchain in Saudi E-Government: A Systematic Literature Review
Authors: Haitham Assiri, Priyadarsi Nanda
Abstract:
The world is gradually entering the fourth industrial revolution, and e-Government services are scaling government operations across the globe. However, as promising as an e-Government system may be, it is also susceptible to malicious attacks if not properly secured. This study found that, in Saudi Arabia, the e-Government website Yesser is vulnerable to external attacks, which can lead to a breach of data integrity and privacy. In this paper, a Systematic Literature Review (SLR) was conducted to explore possible ways the Kingdom of Saudi Arabia can take the necessary measures to strengthen its e-Government system using blockchain. Blockchain is one of the emerging technologies shaping the world through its applications in finance, elections, healthcare, etc.; it secures systems and brings more transparency. A total of 28 papers were selected for this SLR, and 19 of them significantly showed that blockchain could enhance the security and privacy of Saudi Arabia's e-Government system. Other papers also concluded that blockchain is effective, albeit with the integration of other technologies like IoT, AI, and big data. These papers have been analysed to sieve out the findings and set the stage for future research into the subject.
Keywords: blockchain, data integrity, e-government, security threats
Procedia PDF Downloads 250
22868 Geospatial Information for Smart City Development
Authors: Simangele Dlamini
Abstract:
Smart city development is seen as a way of facing the challenges brought about by the growing urban population the world over. Research indicates that cities have a role to play in combating urban challenges like crime, waste disposal, greenhouse gas emissions, and resource efficiency. Solutions to these challenges should not make city management less sustainable; they should be solutions-driven, cost- and resource-efficient, and smart. This study explores opportunities for the City of Johannesburg, South Africa, to use Geographic Information Systems (GIS), big data, and the Internet of Things (IoT) in identifying opportune areas to initiate smart city initiatives such as smart safety, smart utilities, smart mobility, and smart infrastructure in an integrated manner. The study will combine big data, using real-time data sources, to identify hotspot areas that will benefit from ICT interventions. The GIS intervention will assist the city in avoiding a silo approach in its smart city development initiatives, an approach that has led to the failure of smart city development in other countries.
Keywords: smart cities, internet of things, geographic information systems, Johannesburg
Procedia PDF Downloads 148
22867 Language Errors Used in “The Space between Us” Movie and Their Effects on Translation Quality: Translation Study toward Discourse Analysis Approach
Authors: Mochamad Nuruz Zaman, Mangatur Rudolf Nababan, M. A. Djatmika
Abstract:
Both society and education teach people to communicate well in order to build up interpersonal skills. Everyone has the capacity to understand something new, whether with good comprehension or poor understanding, and poor understanding produces language errors, particularly in first encounters between people who do not yet know each other. The movie “The Space between Us” delivers a love-adventure story between a Mars boy and an Earth girl, and many of their conversations miscarry because of their different climates and environments. Moviegoers, meanwhile, must rely on the subtitles to enjoy the movie, and the Indonesian subtitles still overlap in meaning with the English conversation. Translation here consists of the source language (SL; the English conversation) and the target language (TL; the Indonesian subtitles). This research gap is formulated in the research question of how language errors occur in the movie and how they affect translation quality, analyzed in depth through a translation study with a discourse analysis approach. The research goal is to lay out the language errors and their translation qualities in order to create a good atmosphere in movie media. The study uses an embedded qualitative design. The research locus consists of setting, participants, and events as a focused, determined boundary. The sources of data are the movie “The Space between Us” and informants (translation quality raters). The sampling is criterion-based (purposive) sampling. Data collection techniques use content analysis and questionnaires; data validation applies data-source and method triangulation; and data analysis proceeds through domain, taxonomy, componential, and cultural theme analysis. The language errors found in the movie are referential, register, society, textual, receptive, expressive, individual, group, analogical, transfer, local, and global errors. Their effects on translation quality are discussed through the translation techniques found in the data: amplification, borrowing, description, discursive creation, established equivalent, generalization, literal translation, modulation, particularization, reduction, substitution, and transposition.
Keywords: discourse analysis, language errors, The Space between Us movie, translation techniques, translation quality instruments
Procedia PDF Downloads 219
22866 A Coupling Study of Public Service Facilities and Land Price Based on Big Data Perspective in Wuxi City
Authors: Sisi Xia, Dezhuan Tao, Junyan Yang, Weiting Xiong
Abstract:
Against the background of Chinese urbanization shifting from incremental development to stock development, the completeness of urban public service facilities is essential to urban spatial quality. As public service facilities form a huge and complicated system, clarifying the various types of internal rules associated with land market price is key to optimizing spatial layout. This paper takes Wuxi City as a representative sample location and establishes a digital analysis platform using urban land price data and several high-precision big data acquisition methods. On this basis, it analyzes the coupling relationship between different public service categories and land price, summarizing the coupling patterns between the distribution of urban public facilities and fluctuations in urban land price. Finally, the internal mechanism within each of the two elements is explored, providing a reference for the optimal layout of urban planning and public service facilities.
Keywords: public service facilities, land price, urban spatial morphology, big data
Procedia PDF Downloads 215
22865 Structural Damage Detection Using Modal Data Employing Teaching Learning Based Optimization
Authors: Subhajit Das, Nirjhar Dhang
Abstract:
Structural damage detection is a challenging task in the field of structural health monitoring (SHM). Damage detection methods mainly focus on determining the location and severity of damage. Model updating is a well-known method to locate and quantify damage. In this method, an error function is defined in terms of the difference between the signal measured in an 'experiment' and the signal obtained from an undamaged finite element model. This error function is minimised with a proper algorithm, and the finite element model is updated accordingly to match the measured response; the damage location and severity can then be identified from the updated model. In this paper, an error function is defined in terms of modal data, viz. frequencies and the modal assurance criterion (MAC), the latter derived from eigenvectors. This error function is minimized by the teaching-learning-based optimization (TLBO) algorithm, and the finite element model is updated accordingly to locate and quantify the damage. Damage is introduced in the model by reducing the stiffness of a structural member. The 'experimental' data are simulated by finite element modelling, and measurement error is introduced into the synthetic 'experimental' data by adding random noise following a Gaussian distribution. The efficiency and robustness of the method are demonstrated through three examples: a truss, a beam, and a frame problem. The results show that the TLBO algorithm efficiently detects the damage location as well as the severity of damage using modal data.
Keywords: damage detection, finite element model updating, modal assurance criteria, structural health monitoring, teaching learning based optimization
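A minimal sketch of the MAC and of one plausible form of the modal error function: the exact weighting the paper minimizes with TLBO is not given in the abstract, so the combination of frequency residuals and (1 - MAC) terms below is an assumption.

```python
import numpy as np

def mac(phi_a, phi_e):
    # Modal Assurance Criterion between an analytical and an
    # 'experimental' mode shape: |phi_a . phi_e|^2 / (|phi_a|^2 |phi_e|^2).
    return np.dot(phi_a, phi_e) ** 2 / (np.dot(phi_a, phi_a) * np.dot(phi_e, phi_e))

def updating_error(freq_model, freq_meas, modes_model, modes_meas, w=1.0):
    # Relative frequency residuals plus (1 - MAC) terms; TLBO would
    # search the stiffness parameters that minimize this value.
    f_term = sum(((fm - fe) / fe) ** 2
                 for fm, fe in zip(freq_model, freq_meas))
    m_term = sum(1.0 - mac(pm, pe)
                 for pm, pe in zip(modes_model, modes_meas))
    return f_term + w * m_term
```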
Procedia PDF Downloads 215
22864 Inkjet Printed Silver Nanowire Network as Semi-Transparent Electrode for Organic Photovoltaic Devices
Authors: Donia Fredj, Marie Parmentier, Florence Archet, Olivier Margeat, Sadok Ben Dkhil, Jorg Ackerman
Abstract:
Transparent conductive electrodes (TCEs), or transparent electrodes (TEs), are a crucial part of many electronic and optoelectronic devices such as touch panels, liquid crystal displays (LCDs), organic light-emitting diodes (OLEDs), solar cells, and transparent heaters. The indium tin oxide (ITO) electrode is the most widely utilized transparent electrode due to its excellent optoelectrical properties. However, drawbacks of ITO, such as the high cost of the material, the scarcity of indium, and its fragile nature, limit its application in large-scale flexible electronic devices. Importantly, flexibility is becoming more and more attractive, since flexible electrodes have the potential to open up new applications that require transparent electrodes to be flexible, cheap, and compatible with large-scale manufacturing methods. So far, several materials have been developed as alternatives to ITO, including metal nanowires, conjugated polymers, carbon nanotubes, graphene, etc., which have been extensively investigated for use as flexible and low-cost electrodes. Among them, silver nanowires (AgNWs) are one of the most promising alternatives to ITO thanks to their excellent properties: high electrical conductivity as well as desirable light transmittance. In recent years, inkjet printing has become a promising technique for large-scale printed flexible and stretchable electronics. However, inkjet printing of AgNWs still presents many challenges. In this study, a synthesis of stable AgNWs that could compete with ITO was developed, and this material was printed by inkjet technology directly onto a flexible substrate. Additionally, we analyzed the surface microstructure and the optical and electrical properties of the printed AgNW layers. Our further research focused on the study of all-inkjet-printed organic modules with high efficiency.
Keywords: transparent electrodes, silver nanowires, inkjet printing, formulation of stable inks
Procedia PDF Downloads 222
22863 Carbon, Nitrogen Doped TiO2 Macro/Mesoporous Monoliths with High Visible Light Absorption for Photocatalytic Wastewater Treatment
Authors: Paolo Boscaro, Vasile Hulea, François Fajula, Francis Luck, Anne Galarneau
Abstract:
TiO2-based monoliths with hierarchical macropores and mesopores have been synthesized following a novel one-pot sol-gel synthesis method. Taking advantage of the spinodal separation that occurs between titanium isopropoxide and an acidic solution in the presence of polyethylene oxide polymer, monoliths with homogeneous interconnected macropores of 3 μm in diameter and mesopores of ca. 6 nm (surface area 150 m²/g) are obtained. Furthermore, these monoliths contain some carbon and nitrogen (as shown by XPS and elemental analysis), which considerably reduce the titanium oxide energy gap and enable light to be absorbed up to a 700 nm wavelength. XRD shows that anatase is the dominant phase, with a small amount of brookite. The enhanced light absorption and high porosity of the monoliths are responsible for a remarkable photocatalytic activity. Wastewater treatment has been performed in a closed reactor under sunlight using the Orange G dye as the target molecule. The glass reactors guarantee that most of the UV radiation (to almost 300 nm) of the solar spectrum is excluded. TiO2 nanoparticles P25 (usually used in photocatalysis under UV) and un-doped TiO2 monoliths with similar porosity were used for comparison. The C,N-doped TiO2 monolith allowed complete colorant degradation in less than 1 hour, whereas 10 h were necessary for 40% colorant degradation with P25 and the un-doped monolith. An experiment performed in the dark shows that only 3% of the molecules were adsorbed in the C,N-doped TiO2 monolith within 1 hour. The much higher efficiency of the C,N-doped TiO2 monolith in comparison to P25 and the un-doped monolith proves that doping TiO2 is an essential issue and that nitrogen and carbon are effective dopants. Monoliths offer multiple advantages with respect to nanometric powders: the sample can be easily removed from the batch (no need to filter or centrifuge), and flow reactions can be set up with cylindrical or flat monoliths by simply sheathing them or locking them with O-rings.
Keywords: C-N doped, sunlight photocatalytic activity, TiO2 monolith, visible absorbance
Procedia PDF Downloads 231
22862 Deployed Confidence: The Testing in Production
Authors: Shreya Asthana
Abstract:
Testers only know that the feature they tested on staging works perfectly in production after the release goes live. Sometimes something breaks in production, and testers find out through a bug raised by an end user. Panic mode starts when staging test results do not reflect current production behavior, and testers begin doubting their skills when a user finally reports a bug. Testers can deploy their confidence on release day by testing in production. Once you start testing in production, you will see more accurate test results, because tests run on real-time data, and execution is a little faster compared to staging due to the elimination of bad data. Feature flagging, canary releases, and data cleanup can help to achieve this technique of testing. This paper lays out the steps to achieve production testing before making a feature live and to modify an IT company's testing procedure, so testers can provide a bug-free experience to end users. This study is beneficial because too many people think that testing should be done on staging but not in production, and it is high time to pull people out of their old testing mindset into a new testing world. At the end of the day, all that matters is whether the features work in production or not.
Keywords: bug free production, new testing mindset, testing strategy, testing approach
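A minimal sketch of the feature-flagging idea mentioned above: deterministic hashing buckets each user, so a small, stable slice of production traffic exercises the new code path first. The feature name, user ID, and rollout percentage are illustrative.

```python
import hashlib

def in_canary(user_id: str, feature: str, rollout_pct: float) -> bool:
    # Deterministic bucketing: the same user always sees the same
    # variant, which keeps production test results reproducible.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100
    return bucket < rollout_pct

# Route 5% of real production users through the new path and compare
# their error rates against the control group before full rollout.
if in_canary("user-4217", "new-checkout", 5.0):
    ...  # new implementation, monitored closely
else:
    ...  # stable implementation
```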
Procedia PDF Downloads 77
22861 Recent Advances in the Valorization of Goat Milk: Nutritional Properties and Production Sustainability
Authors: A. M. Tarola, R. Preti, A. M. Girelli, P. Campana
Abstract:
Goat dairy products are gaining popularity worldwide. In developing countries, but also in many marginal regions of the Mediterranean area, goats represent a great part of the economy and ensure food security. In fact, these small ruminants are able to efficiently convert poor weedy plants and small trees into traditional products of high nutritional quality, showing great resilience to different climatic and environmental conditions. In developed countries, goat milk is appreciated for the presence of health-promoting bioactive compounds such as conjugated linoleic acids, oligosaccharides, sphingolipids, and polyamines. This paper focuses on recent advances in the literature on the nutritional properties of goat milk and on innovative techniques to improve its quality so that it becomes a promising functional food. The environmental sustainability of different production methodologies has also been examined. Goat milk is valued today as a food of high nutritional value and functional properties, as well as of small environmental footprint. It is widely consumed in many countries due to its high nutritional value, lower allergenic potential, and better digestibility compared to bovine milk, which makes this product suitable for infants, the elderly, or sensitive patients. The main differences in chemical composition between cow and goat milk lie in the fat globules, which in goat milk are smaller, and in the fatty acids, which have a shorter chain length, while protein, fat, and lactose concentrations are comparable. Milk nutritional properties have been demonstrated to be strongly influenced by animal diet, genotype, and welfare, but also by season and production system. Furthermore, there is growing interest in the dairy industry in goat milk for its relatively high concentration of prebiotics and its good amount of probiotics, which have recently gained importance for their therapeutic potential. Therefore, goat milk is studied as a promising matrix for developing innovative functional foods. In addition to its economic and nutritional value, goat milk is considered a sustainable product for its small environmental footprint: goats require relatively little water and land and fewer medical treatments compared to cows, characteristics that make goat milk production naturally suited to organic farming. Organic goat milk production has become more and more interesting both for farmers and for consumers, as it can answer several concerns such as environmental protection, animal welfare, and the economic sustainment of rural populations living in marginal lands. This evidence makes goat milk an ancient food with novel properties and advantages to be valorized and exploited.
Keywords: goat milk, nutritional quality, bioactive compounds, sustainable production, animal welfare
Procedia PDF Downloads 149
22860 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling
Authors: Vibha Devi, Shabina Khanam
Abstract:
Hemp (Cannabis sativa) possesses a rich content of ω-6 linoleic and ω-3 α-linolenic essential fatty acids in a ratio of 3:1, a rare and highly desired ratio that enhances the quality of hemp oil. These components are beneficial for cell development and body growth, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and serve as a remedy for arthritis and various disorders. The present study employs a supercritical fluid extraction (SFE) approach on hemp seed at various parameter conditions: temperature (40-80) °C, pressure (200-350) bar, flow rate (5-15) g/min, particle size (0.430-1.015) mm, and amount of co-solvent (0-10) % of solvent flow rate, through a central composite design (CCD). The CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resample to obtain information regarding the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques, which create a large number of data sets through resampling from the original data set and analyze them to check the validity of the obtained data. Jackknife resampling is based on eliminating one observation from the original sample of size N without replacement; for jackknife resampling, the sample size is therefore 31 (eliminating one observation), repeated 32 times. Bootstrap is the frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample; for bootstrap resampling, the sample size is 32, repeated 100 times. The estimands for these resampling techniques are the mean, standard deviation, coefficient of variation, and standard error of the mean. For the ω-6 linoleic acid concentration, the mean value was approx. 58.5 for both resampling methods, which is the average (central value) of the sample means of all data points. Similarly, for the ω-3 α-linolenic acid concentration, the mean was observed as 22.5 through both resamplings. Variance exhibits the spread of the data from its mean; a greater variance indicates a larger range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66%) and 6 for ω-3 α-linolenic acid (ranging from 16.71 to 26.2%). Further, the low standard deviation (approx. 1%), low standard error of the mean (< 0.8), and low coefficient of variation (< 0.2) reflect the accuracy of the sample for prediction. All the estimated values of the coefficient of variation, standard deviation, and standard error of the mean lie within the 95% confidence interval.
Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation
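A minimal sketch of the two resampling schemes as applied here (sample size 32, leave-one-out jackknife, 100 bootstrap replicates), computing the estimands named above; the data vector itself is a placeholder, not the study's measurements.

```python
import numpy as np

def resampling_summary(sample, n_boot=100, seed=0):
    # Jackknife: leave one of the n observations out, n replicates.
    # Bootstrap: draw n with replacement, n_boot replicates.
    rng = np.random.default_rng(seed)
    n = len(sample)
    jack = np.array([np.delete(sample, i).mean() for i in range(n)])
    boot = np.array([rng.choice(sample, size=n, replace=True).mean()
                     for _ in range(n_boot)])
    return {
        "mean_jack": jack.mean(),
        # Textbook jackknife SE carries an (n - 1) inflation factor.
        "sem_jack": np.sqrt((n - 1) / n * ((jack - jack.mean()) ** 2).sum()),
        "mean_boot": boot.mean(),
        "sem_boot": boot.std(ddof=1),   # spread of bootstrap means
        "sd": sample.std(ddof=1),
        "cv": sample.std(ddof=1) / sample.mean(),
    }

# e.g. the 32 measured omega-6 linoleic acid concentrations (%):
# data = np.array([58.1, 60.3, ...]); print(resampling_summary(data))
```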
Procedia PDF Downloads 141
22859 Evaluation of Hydrocarbon Prospects of 'ADE' Field, Niger Delta
Authors: Oluseun A. Sanuade, Sanlinn I. Kaka, Adesoji O. Akanji, Olukole A. Akinbiyi
Abstract:
Prospect evaluation of the 'ADE' field was done using 3D seismic data and well log data. The field is located in the offshore Niger Delta, where the water depth ranges from 450 to 800 m. The objectives of this study are to explore deeper prospects and to ascertain the kinds of traps that are favorable for the accumulation of hydrocarbons in the field. Six horizons with major and minor faults were identified and mapped in the field. Time structure maps of these horizons were generated, and, using the available check-shot data, the maps were converted to top structure maps, which were used to calculate the hydrocarbon volume. The results show that regional structural highs trending northeast-southwest (NE-SW) characterize a large portion of the field. These highs were observed across all horizons, revealing a regional post-depositional deformation. Three prospects were identified and evaluated to understand the different opportunities in the field, including stratigraphic pinch-outs and bi-directional downlaps. The results of this study show that the field has potential for new opportunities that could be explored in further studies.
Keywords: hydrocarbon, play, prospect, stratigraphy
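For context, hydrocarbon volume from top-structure maps is conventionally computed with the deterministic volumetric formula STOOIP = 7758 * GRV * N/G * phi * (1 - Sw) / Bo, with GRV in acre-feet. A sketch with purely illustrative inputs, not values from the 'ADE' study:

```python
def stooip_mmbbl(grv_acre_ft, ntg, porosity, sw, bo):
    # Stock-tank oil initially in place, in millions of barrels;
    # 7758 bbl per acre-foot is the standard unit-conversion constant.
    return 7758 * grv_acre_ft * ntg * porosity * (1 - sw) / bo / 1e6

# Gross rock volume from the top-structure map, plus log-derived
# net-to-gross, porosity, water saturation and formation volume factor.
print(stooip_mmbbl(grv_acre_ft=250_000, ntg=0.7, porosity=0.22,
                   sw=0.3, bo=1.2))
```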
Procedia PDF Downloads 269
22858 Enhancing Information Technologies with AI: Unlocking Efficiency, Scalability, and Innovation
Authors: Abdal-Hafeez Alhussein
Abstract:
Artificial Intelligence (AI) has become a transformative force in the field of information technologies, reshaping how data is processed, analyzed, and utilized across various domains. This paper explores the multifaceted applications of AI within information technology, focusing on three key areas: automation, scalability, and data-driven decision-making. We delve into how AI-powered automation is optimizing operational efficiency in IT infrastructures, from automated network management to self-healing systems that reduce downtime and enhance performance. Scalability, another critical aspect, is addressed through AI’s role in cloud computing and distributed systems, enabling the seamless handling of increasing data loads and user demands. Additionally, the paper highlights the use of AI in cybersecurity, where real-time threat detection and adaptive response mechanisms significantly improve resilience against sophisticated cyberattacks. In the realm of data analytics, AI models, especially machine learning and natural language processing, are driving innovation by enabling more precise predictions, automated insights extraction, and enhanced user experiences. The paper concludes with a discussion on the ethical implications of AI in information technologies, underscoring the importance of transparency, fairness, and responsible AI use. It also offers insights into future trends, emphasizing the potential of AI to further revolutionize the IT landscape by integrating with emerging technologies like quantum computing and IoT.
Keywords: artificial intelligence, information technology, automation, scalability
Procedia PDF Downloads 17
22857 D3Advert: Data-Driven Decision Making for Ad Personalization through Personality Analysis Using BiLSTM Network
Authors: Sandesh Achar
Abstract:
Personalized advertising holds greater potential for higher conversion rates compared to generic advertisements. However, its widespread application in the retail industry faces challenges due to complex implementation processes, and these complexities impede the swift adoption of personalized advertisement on a large scale. Personalized advertisement, being a data-driven approach, necessitates consumer-related data, adding to its complexity. This paper introduces an innovative data-driven decision-making framework, D3Advert, which personalizes advertisements by analyzing personality using a BiLSTM network. The framework was developed using the Myers–Briggs Type Indicator (MBTI) dataset. The BiLSTM network, specifically designed and optimized for D3Advert, classifies user personalities into one of the sixteen MBTI categories based on their social media posts. The classification accuracy is 86.42%, with precision, recall, and F1-score values of 85.11%, 84.14%, and 83.89%, respectively. The D3Advert framework personalizes advertisements based on these personality classifications. Experimental implementation and performance analysis of D3Advert demonstrate a 40% improvement in impressions. D3Advert's innovative and straightforward approach has the potential to transform personalized advertising and foster widespread adoption of personalized advertisement in marketing.
Keywords: personalized advertisement, deep learning, MBTI dataset, BiLSTM network, NLP
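A minimal Keras sketch of a BiLSTM text classifier of the kind described; the vocabulary size, sequence length, layer widths, and training settings are assumptions, as the abstract reports only the architecture family and its results.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical hyperparameters for a 16-class MBTI classifier.
VOCAB, N_TYPES = 20_000, 16

model = models.Sequential([
    layers.Embedding(VOCAB, 128),                  # token embeddings
    layers.Bidirectional(layers.LSTM(64)),         # reads text both ways
    layers.Dropout(0.3),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_TYPES, activation="softmax"),   # one of 16 MBTI types
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# x_train: integer-encoded, padded posts; y_train: MBTI class indices.
# model.fit(x_train, y_train, validation_split=0.1, epochs=5)
```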
Procedia PDF Downloads 44
22856 Communication Infrastructure Required for a Driver Behaviour Monitoring System, ‘SiaMOTO’ IT Platform
Authors: Dogaru-Ulieru Valentin, Sălișteanu Ioan Corneliu, Ardeleanu Mihăiță Nicolae, Broscăreanu Ștefan, Sălișteanu Bogdan, Mihai Mihail
Abstract:
The SiaMOTO system is a communications and data processing platform for vehicle traffic. The human factor is the most important source of this data, as the driver is the one who dictates the trajectory of the vehicle. Like any trajectory, it is described by specific parameters of position, speed, and acceleration, and constant knowledge of these parameters allows complex analyses. Roadways allow many vehicles to travel through their confined space, and the overlapping trajectories of several vehicles increase the likelihood of collision events, known as road accidents. Any such event has causes that lead to its occurrence, so the conditions of its occurrence are known. The human factor is predominant in deciding the trajectory parameters of the vehicle on the road, so monitoring it, by knowing the events reported by the DiaMOTO device over time, will generate a guide to target any potentially high-risk driving behavior and reward those who control the driving phenomenon well. In this paper, we have focused on detailing the communication infrastructure between the DiaMOTO device and the traffic data collection server, the infrastructure through which the database that will be used for complex AI/DLM analysis is built. The central element of this description is the data string in CODEC-8 format sent by the DiaMOTO device to the SiaMOTO collection server database. The data presented are specific to a functional infrastructure implemented at the experimental model stage by installing DiaMOTO devices, each with a unique code and integrating ADAS and GPS functions, on 50 vehicles, through which vehicle trajectories can be monitored 24 hours a day.
Keywords: DiaMOTO, Codec-8, ADAS, GPS, driver monitoring
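For orientation, a minimal parser for the fixed-header portion of a Codec-8 AVL packet, following the field layout commonly documented for this protocol family (zero preamble, data length, codec ID 0x08, record count, then timestamp, priority, and a GPS element). The offsets and field widths here are assumptions to be verified against the device's own protocol specification.

```python
import struct
from datetime import datetime, timezone

def parse_codec8_first_fix(packet: bytes):
    # Assumed layout: 4-byte zero preamble, 4-byte data length,
    # 1-byte codec ID, 1-byte record count, then per record:
    # 8-byte ms timestamp, 1-byte priority, GPS element with
    # lon/lat (signed ints, x1e7), altitude, angle, satellites, speed.
    codec_id, n_records = packet[8], packet[9]
    assert codec_id == 0x08, "not a Codec-8 payload"
    ts_ms, priority = struct.unpack_from(">QB", packet, 10)
    lon, lat, alt, angle, sats, speed = struct.unpack_from(">iihhBH", packet, 19)
    return {
        "records": n_records,
        "utc": datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc),
        "priority": priority,
        "lon": lon / 1e7, "lat": lat / 1e7,
        "alt_m": alt, "angle_deg": angle,
        "satellites": sats, "speed_kmh": speed,
    }
```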
Procedia PDF Downloads 78
22855 Predictive Modeling of Bridge Conditions Using Random Forest
Authors: Miral Selim, May Haggag, Ibrahim Abotaleb
Abstract:
The aging of transportation infrastructure presents significant challenges, particularly concerning the monitoring and maintenance of bridges. This study investigates the application of Random Forest algorithms for predictive modeling of bridge conditions, utilizing data from the US National Bridge Inventory (NBI). The research is significant as it aims to improve bridge management through data-driven insights that can enhance maintenance strategies and contribute to overall safety. Random Forest is chosen for its robustness, ability to handle complex, non-linear relationships among variables, and its effectiveness in feature importance evaluation. The study begins with comprehensive data collection and cleaning, followed by the identification of key variables influencing bridge condition ratings, including age, construction materials, environmental factors, and maintenance history. Random Forest is utilized to examine the relationships between these variables and the predicted bridge conditions. The dataset is divided into training and testing subsets to evaluate the model's performance. The findings demonstrate that the Random Forest model effectively enhances the understanding of factors affecting bridge conditions. By identifying bridges at greater risk of deterioration, the model facilitates proactive maintenance strategies, which can help avoid costly repairs and minimize service disruptions. Additionally, this research underscores the value of data-driven decision-making, enabling better resource allocation to prioritize maintenance efforts where they are most necessary. In summary, this study highlights the efficiency and applicability of Random Forest in predictive modeling for bridge management. Ultimately, these findings pave the way for more resilient and proactive management of bridge systems, ensuring their longevity and reliability for future use.
Keywords: data analysis, random forest, predictive modeling, bridge management
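A minimal scikit-learn sketch of the workflow described: split, fit, evaluate, then read feature importances. The column names are hypothetical, since the NBI uses coded fields and the study summarizes rather than lists its chosen features.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("nbi_bridges.csv")          # hypothetical extract
features = ["age", "material", "adt", "climate_zone", "last_repair_years"]
X = pd.get_dummies(df[features])             # one-hot the categoricals
y = df["condition_rating"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
rf = RandomForestClassifier(n_estimators=300, random_state=42)
rf.fit(X_train, y_train)
print(classification_report(y_test, rf.predict(X_test)))

# Importances point at the strongest drivers of deterioration.
print(sorted(zip(rf.feature_importances_, X.columns), reverse=True)[:5])
```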
Procedia PDF Downloads 22
22854 Validity and Reliability of Competency Assessment Implementation (CAI) Instrument Using Rasch Model
Authors: Nurfirdawati Muhamad Hanafi, Azmanirah Ab Rahman, Marina Ibrahim Mukhtar, Jamil Ahmad, Sarebah Warman
Abstract:
This study was conducted to generate empirical evidence on the validity and reliability of the items of the Competency Assessment Implementation (CAI) instrument, using the Rasch model for polytomous data aided by Winsteps software version 3.68. Construct validity was examined by analyzing the point-measure correlation index (PTMEA) and the infit and outfit MNSQ values, while reliability was examined by analyzing the item reliability index. A survey technique was used as the major method, administering the CAI instrument to 156 teachers from vocational schools. The results show that the reliability of the CAI instrument items was between 0.80 and 0.98. The PTMEA correlations are positive, indicating that the items are able to distinguish between respondents of different ability. The statistical data obtained show that, out of 154 items, 12 items are suggested for omission from the instrument. This study will hopefully bring a new direction to the process of data analysis in educational research.
Keywords: competency assessment, reliability, validity, item analysis
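A minimal sketch of the fit statistics named above, written for the dichotomous Rasch model for brevity; the CAI analysis used a polytomous model, but infit/outfit mean-squares and PTMEA follow the same logic of standardized residuals against model expectations.

```python
import numpy as np

def rasch_p(theta, b):
    # Dichotomous Rasch probability of a correct/endorsed response.
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_fit(x, theta, b):
    # x: 0/1 responses across persons; theta: person measures;
    # b: this item's difficulty.
    p = rasch_p(theta, b)
    w = p * (1 - p)                      # model response variance
    z2 = (x - p) ** 2 / w                # squared standardized residuals
    outfit = z2.mean()                   # unweighted mean-square
    infit = np.sum(w * z2) / np.sum(w)   # information-weighted mean-square
    ptmea = np.corrcoef(x, theta)[0, 1]  # should come out positive
    return infit, outfit, ptmea

# Items with MNSQ far from 1.0 or a negative PTMEA are candidates for
# omission, mirroring the 12 items flagged in the study.
```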
Procedia PDF Downloads 445
22853 Electronic Equipment Failure due to Corrosion
Authors: Yousaf Tariq
Abstract:
There are many factors involved in electronic equipment failure, e.g., temperature, humidity, dust, smoke, etc. Corrosive gases are one more factor that may be involved in equipment failure. The sensitivity of electronic equipment increased when 'lead-free' regulation was enforced on manufacturers. In data centers, equipment like hard disks, servers, printed circuit boards, etc., has been exposed to gaseous contamination due to this increase in sensitivity. There is a worldwide standard to protect industrial electronic equipment from corrosive gases, well known as ANSI/ISA S71.04-1985, 'Environmental Conditions for Control Systems: Airborne Contaminants'. ASHRAE Technical Committee (TC) 9.9 members also recommended the ISA standard in their whitepaper on Gaseous and Particulate Contamination Guidelines for data centers. TC 9.9 members represent some of the major IT equipment manufacturers, e.g., IBM, HP, Cisco, etc. As per standard practice, the first step is to monitor air quality in the data center. If the contamination level is above G1, it means that gas-phase air filtration is required in addition to dust/smoke air filtration. It is important that outside fresh air entering the data center passes through a pressurization/recirculation process in order to absorb corrosive gases and maintain the level within the specified limit. It is also important that air quality monitoring be conducted once a year. Temperature and humidity should also be monitored as per standard practice to maintain levels within the specified limits.
Keywords: corrosive gases, corrosion, electronic equipment failure, ASHRAE, hard disk
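A small sketch of the G-level classification as commonly summarized from ISA-71.04, based on copper coupon reactivity (corrosion-film thickness in angstroms after 30 days' exposure); the thresholds are quoted from secondary sources and should be checked against the standard itself.

```python
def isa_severity(copper_corrosion_angstroms_30d: float) -> str:
    # Commonly cited ISA-71.04 severity bands for copper reactivity.
    a = copper_corrosion_angstroms_30d
    if a < 300:
        return "G1 (mild): corrosion is not a factor"
    if a < 1000:
        return "G2 (moderate): corrosion effects are measurable"
    if a < 2000:
        return "G3 (harsh): high probability of corrosive attack"
    return "GX (severe): only specially designed equipment survives"

print(isa_severity(450))  # above G1 -> gas-phase filtration warranted
```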
Procedia PDF Downloads 330
22852 Evaluation of Diagnosis Performance Based on Pairwise Model Construction and Filtered Data
Authors: Hyun-Woo Cho
Abstract:
It is quite important to utilize timely and intelligent production monitoring and diagnosis of industrial processes in terms of quality and safety issues. Compared with the monitoring task, fault diagnosis is the task of finding the process variables responsible for causing a specific fault in the process. It can help process operators investigate and eliminate root causes more effectively and efficiently. This work focused on the active use of a nonlinear statistical technique combined with a preprocessing method in order to implement practical real-time fault identification schemes for data-rich cases. To compare its performance to that of existing identification schemes, a case study on a benchmark process was performed in several scenarios. The results showed that the proposed fault identification scheme produced more reliable diagnosis results than linear methods. In addition, the use of the filtering step improved the identification results for complicated processes with massive data sets.
Keywords: diagnosis, filtering, nonlinear statistical techniques, process monitoring
Procedia PDF Downloads 243
22851 A Predictive Model of Supply and Demand in the State of Jalisco, Mexico
Authors: M. Gil, R. Montalvo
Abstract:
Business Intelligence (BI) has become a major source of competitive advantage for firms around the world. BI has been defined as the process of data visualization and reporting for understanding what happened and what is happening. Moreover, BI has been studied for its predictive capabilities in the context of trade and financial transactions. The current literature has identified that BI permits managers to identify market trends, understand customer relations, and predict demand for their products and services. This last capability of BI has been of special concern to academics, specifically due to its power to build predictive models adaptable to specific time horizons and geographical regions. However, the current literature on BI focuses on predicting specific markets and industries, because the impact of such predictive models was relevant to specific industries or organizations; it has not yet developed a predictive BI model that takes into consideration the whole economy of a geographical area. This paper seeks to create a predictive BI model that shows the bigger picture of a geographical area. It uses a data set from the Secretary of Economic Development of the state of Jalisco, Mexico, which includes data from all the commercial transactions that occurred in the state in recent years. By analyzing this data set, it is possible to generate a BI model that predicts supply and demand in specific industries around the state of Jalisco. This research makes at least three contributions: first, a methodological contribution to the BI literature by generating the predictive supply and demand model; second, a theoretical contribution to the current understanding of BI, since the model presented in this paper incorporates the whole picture of the economic field instead of focusing on a specific industry; and lastly, a practical contribution that may be relevant to local governments that seek to improve their economic performance by implementing BI in their policy planning.
Keywords: business intelligence, predictive model, supply and demand, Mexico
Procedia PDF Downloads 123
22850 A New Block Cipher for Resource-Constrained Internet of Things Devices
Authors: Muhammad Rana, Quazi Mamun, Rafiqul Islam
Abstract:
In the Internet of Things (IoT), many devices are connected and accumulate vast amounts of data. These Internet-driven raw data need to be transferred securely to end users via dependable networks, so the challenges of IoT security across the various IoT domains are paramount. Cryptography is applied to secure networks for authentication, confidentiality, data integrity, and access control. However, due to the resource-constrained properties of IoT devices, conventional ciphers may not be suitable in all IoT networks. This paper designs a robust and effective lightweight cipher to secure the IoT environment while meeting the resource-constrained nature of IoT devices: a symmetric, block-cipher-based lightweight cryptographic algorithm. The proposed algorithm increases the complexity of the block cipher while maintaining the lowest computational requirements possible. It efficiently constructs the key register updating technique, reduces the number of encryption rounds, and adds a new layer between the encryption and decryption processes.
Keywords: internet of things, cryptography, block cipher, S-box, key management, security, network
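A toy substitution-permutation round in the spirit of the design described (S-box confusion, a cheap diffusion layer, and a rotating key register). This is an illustrative sketch, not the paper's cipher; it borrows PRESENT's well-known 4-bit S-box, and the rotation amounts and round constant are arbitrary.

```python
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT 4-bit S-box

def sub_nibbles(block16: int) -> int:
    # Apply the S-box to each of the four 4-bit nibbles (confusion).
    out = 0
    for i in range(4):
        out |= SBOX[(block16 >> (4 * i)) & 0xF] << (4 * i)
    return out

def rotl16(x: int, r: int) -> int:
    return ((x << r) | (x >> (16 - r))) & 0xFFFF

def encrypt_block(block16: int, key16: int, rounds: int = 4) -> int:
    state, key = block16, key16
    for _ in range(rounds):                  # few rounds = lightweight
        state = sub_nibbles(state ^ key)     # key mixing + S-box layer
        state = rotl16(state, 5)             # cheap diffusion layer
        key = rotl16(key, 3) ^ 0x9E37        # toy key-register update
    return state ^ key                       # final whitening

print(hex(encrypt_block(0x1234, 0xBEEF)))
```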
Procedia PDF Downloads 113
22849 BodeACD: Buffer Overflow Vulnerabilities Detecting Based on Abstract Syntax Tree, Control Flow Graph, and Data Dependency Graph
Authors: Xinghang Lv, Tao Peng, Jia Chen, Junping Liu, Xinrong Hu, Ruhan He, Minghua Jiang, Wenli Cao
Abstract:
Buffer overflows are among the most dangerous vulnerabilities, so their effective detection is extremely necessary. Traditional detection methods are not accurate enough and consume too many resources to cope with today's complex and enormous code environments. To resolve these problems, we propose BodeACD, a method for Buffer overflow detection based on the Abstract syntax tree, Control flow graph, and Data dependency graph in C/C++ programs with source code. BodeACD first constructs buffer-overflow function samples from code available on GitHub, then represents them as code representation sequences, which fuse the control flow, data dependency, and syntax structure of the source code to reduce information loss during code representation. Finally, BodeACD learns vulnerability patterns through deep learning for vulnerability detection. The experimental results show that BodeACD increases precision and recall by 6.3% and 8.5%, respectively, compared with the latest methods, effectively improving vulnerability detection and reducing the false-positive and false-negative rates.
Keywords: vulnerability detection, abstract syntax tree, control flow graph, data dependency graph, code representation, deep learning
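A minimal sketch of the syntax-structure channel of such a representation, using Python's ast module for brevity; BodeACD targets C/C++, where a tool such as Joern or clang would extract the AST together with the CFG and DDG edges that this sketch omits.

```python
import ast

def ast_node_sequences(source: str) -> dict[str, list[str]]:
    # Depth-first sequence of AST node types per function: the raw
    # material that gets embedded into a code representation sequence.
    tree = ast.parse(source)
    sequences = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            sequences[node.name] = [type(n).__name__ for n in ast.walk(node)]
    return sequences

code = """
def copy_input(buf, data):
    for i in range(len(data)):   # no bound check against len(buf)
        buf[i] = data[i]
"""
print(ast_node_sequences(code))
```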
Procedia PDF Downloads 170
22848 Fueling Efficient Reporting And Decision-Making In Public Health With Large Data Automation In Remote Areas, Neno Malawi
Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Julia Huggins, Fabien Munyaneza
Abstract:
Background: Partners In Health – Malawi introduced an operational research project called the Primary Health Care (PHC) Surveys in 2020, which seeks to assess the progress of care delivery in the district. The study consists of 5 long surveys, namely: Facility Assessment, General Patient, Provider, Sick Child, and Antenatal Care (ANC), primarily conducted in 4 health facilities in Neno district. These facilities include Neno district hospital, Dambe health centre, Chifunga, and Matope. These annual surveys are usually conducted from January, with the target of presenting the final report by June. Once data are collected and analyzed, a series of reviews takes place before the final report is reached. Initially, the manual process took over 9 months to produce the final report, and initial findings reported that only about 76.9% of the data added up when cross-checked with paper-based sources. Purpose: The aim of this approach is to move away from manually pulling the data, redoing the analysis, and reporting, which is associated not only with delays and inconsistencies in reporting but also with poor data quality if not done carefully. This automation approach was meant to utilize the features of new technologies to create visualizations, reports, and dashboards in Power BI that are fed directly from the data source, CommCare, and hence require only a single click of a 'refresh' button to populate the visualizations, reports, and dashboards with updated information at once. Methodology: We transformed paper-based questionnaires into electronic ones using the CommCare mobile application, and connected CommCare directly to Power BI using an Application Programming Interface (API) connection as the data pipeline. This provided the chance to create visualizations, reports, and dashboards in Power BI. In contrast to the process of manually collecting data with paper-based questionnaires, entering them into ordinary spreadsheets, and re-running the analysis every time a report is prepared, the team utilized CommCare and Microsoft Power BI technologies: validations and logic in CommCare to capture data with fewer errors, and Power BI features to host the reports online by publishing them via cloud computing. We switched from sharing ordinary report files to sharing links with recipients, giving them the freedom to dig deeper into the findings within Power BI dashboards and to export to any format of their choice. Results: This data automation approach reduced the research timeline from the initial 9 months to 5 and improved the quality of the data findings from the original 76.9% to 98.9%. This brought confidence in drawing conclusions from the findings that help in decision-making and opened opportunities for further research. Conclusion: These results suggest that automating the research data process has the potential to reduce the overall amount of time spent and to improve data quality. On this basis, the concept of data automation should be taken into serious consideration when conducting operational research for efficiency and decision-making.
Keywords: reporting, decision-making, power BI, commcare, data automation, visualizations, dashboards
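A minimal sketch of the API-based pull that replaces manual data entry: page through a CommCare export endpoint and flatten the records for downstream dashboards. The domain name, endpoint version, and auth header format are assumptions to check against current CommCare documentation.

```python
import requests
import pandas as pd

# Hypothetical project space and credentials.
BASE = "https://www.commcarehq.org/a/neno-phc/api/v0.5/form/"
HEADERS = {"Authorization": "ApiKey user@example.org:SECRET_KEY"}

rows, url = [], BASE + "?limit=1000"
while url:                                   # follow server-side paging
    page = requests.get(url, headers=HEADERS, timeout=60).json()
    rows += [obj["form"] for obj in page["objects"]]
    nxt = page["meta"].get("next")
    url = ("https://www.commcarehq.org" + nxt) if nxt else None

df = pd.json_normalize(rows)                 # nested JSON -> flat table
df.to_csv("phc_survey_flat.csv", index=False)
# Power BI refreshes from the same API (or this flattened file), so
# dashboards update with one click instead of a manual re-analysis.
```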
Procedia PDF Downloads 116
22847 Analysis of Financial Time Series by Using Ornstein-Uhlenbeck Type Models
Authors: Md Al Masum Bhuiyan, Maria C. Mariani, Osei K. Tweneboah
Abstract:
In the present work, we develop a technique for estimating the volatility of financial time series using stochastic differential equations. Taking the daily closing prices from developed and emergent stock markets as the basis, we argue that incorporating stochastic volatility into the time-varying parameter estimation significantly improves forecasting performance via maximum likelihood estimation. Using the technique, we observe the long-memory behavior of the data sets and obtain one-step-ahead predicted log-volatility with ±2 standard errors, despite the observed noise varying according to a Normal mixture distribution, because the financial data studied are not fully Gaussian. Also, the Ornstein-Uhlenbeck process followed in this work simulates the financial time series well, which makes our estimation algorithm suitable for large data sets owing to its good convergence properties.
Keywords: financial time series, maximum likelihood estimation, Ornstein-Uhlenbeck type models, stochastic volatility model
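A minimal sketch of the Gaussian case: exact-discretization simulation of an Ornstein-Uhlenbeck process dX = kappa*(mu - X)dt + sigma*dW, and closed-form ML estimates through its AR(1) form. Under the Normal-mixture noise discussed above this is only a first approximation, and the parameter values are illustrative.

```python
import numpy as np

def simulate_ou(kappa, mu, sigma, x0, dt, n, seed=0):
    # Exact discretization: X_{t+dt} = mu + (X_t - mu)e^{-kappa dt} + eps.
    rng = np.random.default_rng(seed)
    a = np.exp(-kappa * dt)
    sd = sigma * np.sqrt((1 - a**2) / (2 * kappa))
    x = np.empty(n); x[0] = x0
    for t in range(1, n):
        x[t] = mu + (x[t - 1] - mu) * a + sd * rng.standard_normal()
    return x

def ou_mle(x, dt):
    # Closed-form ML estimates from the Gaussian AR(1) transition density.
    y, z = x[1:], x[:-1]
    a = np.cov(y, z)[0, 1] / np.var(z, ddof=1)
    mu = (y.mean() - a * z.mean()) / (1 - a)
    kappa = -np.log(a) / dt
    resid = y - mu - a * (z - mu)
    sigma = resid.std(ddof=2) * np.sqrt(2 * kappa / (1 - a**2))
    return kappa, mu, sigma

# Recover the parameters from simulated daily log-volatility:
x = simulate_ou(kappa=2.0, mu=-1.0, sigma=0.5, x0=-1.0, dt=1/252, n=5000)
print(ou_mle(x, dt=1/252))
```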
Procedia PDF Downloads 242