Search results for: intelligent technique
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7254

5874 Isolation and Identification of Salmonella spp. and Salmonella enteritidis from Distributed Chicken Samples in Tehran Province Using Culture and PCR Techniques

Authors: Seyedeh Banafsheh Bagheri Marzouni, Sona Rostampour Yasouri

Abstract:

Salmonella is one of the most important pathogens common to humans and animals worldwide. Globally, the prevalence of the disease in humans is due to the consumption of food contaminated with animal-derived Salmonella. These foods include eggs, red meat, chicken, and milk. Contamination of chicken and its products with Salmonella may occur at any stage of the chicken processing chain. Salmonella infection is usually not fatal. However, its occurrence is considered dangerous in some individuals, such as infants, children, the elderly, pregnant women, or individuals with weakened immune systems. If Salmonella infection enters the bloodstream, the possibility of contamination of tissues throughout the body will arise. Therefore, determining the potential risk of Salmonella at various stages is essential from the perspective of consumers and public health. The aim of this study is to isolate and identify Salmonella from chicken samples distributed in the Tehran market using the gold-standard culture method and PCR techniques based on specific genes, invA and ent. During the years 2022-2023, sampling was performed using swabs from the liver and intestinal contents of distributed chickens in the Tehran province, with a total of 120 samples taken under aseptic conditions. The samples were initially enriched in buffered peptone water (BPW) for pre-enrichment overnight. Then, the samples were incubated in selective enrichment media, including TT broth and RVS medium, at temperatures of 37°C and 42°C, respectively, for 18 to 24 hours. Organisms that grew in the liquid medium and produced turbidity were transferred to selective media (XLD and BGA) and incubated overnight at 37°C for isolation. Suspicious Salmonella colonies were selected for DNA extraction, and the PCR technique was performed using specific primers that targeted the invA and ent genes in Salmonella. The results indicated that 94 samples were positive for Salmonella by the PCR technique. Of these, 71 samples were positive based on the invA gene, and 23 samples were positive based on the ent gene. Although the culture technique is the gold standard, PCR is a faster and more accurate method. Rapid detection through PCR can enable the identification of Salmonella contamination in food items and the implementation of necessary measures for disease control and prevention.

Keywords: culture, PCR, salmonella spp, salmonella enteritidis

Procedia PDF Downloads 69
5873 Performance Evaluation of Wideband Code Division Multiple Access Network

Authors: Osama Abdallah Mohammed Enan, Amin Babiker A/Nabi Mustafa

Abstract:

The aim of this study is to evaluate and analyze different parameters of WCDMA (Wideband Code Division Multiple Access). Moreover, this study also incorporates a brief yet thorough analysis of WCDMA’s components as well as its internal architecture. This study also examines the different power controls, which include open-loop power control, closed-loop (inner-loop) power control, and outer-loop power control. Different handover techniques of WCDMA are also illustrated, including hard handover, inter-system handover, and soft and softer handover. Different duplexing techniques are also described in the paper. This study has also presented an idea about the parameters of WCDMA that lead the system towards QoS issues, which may help the operator in designing and developing an adequate network configuration. In addition to this, the study has also investigated various parameters including bit energy per noise spectral density (Eb/No), noise rise, and Bit Error Rate (BER). After simulating these parameters in the MATLAB environment, it was found that, for a given Eb/No value, the system capacity increases with the reuse factor. It was also observed that noise rise decreases for lower data rates and for lower interference levels. Finally, it was found that the BER is higher for some modulation techniques than for others.
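
As a rough illustration of how BER relates to Eb/No and how uplink noise rise depends on cell load in a WCDMA-style analysis, the short Python sketch below reproduces two textbook relations; it is not the authors' MATLAB model, and the example Eb/No and load values are arbitrary.

```python
import numpy as np
from scipy.special import erfc

def bpsk_ber(eb_no_db):
    """Theoretical BER of coherent BPSK over an AWGN channel: 0.5*erfc(sqrt(Eb/No))."""
    eb_no = 10 ** (np.asarray(eb_no_db) / 10.0)   # dB -> linear
    return 0.5 * erfc(np.sqrt(eb_no))

def uplink_noise_rise_db(load_factor):
    """Classical WCDMA uplink noise rise, -10*log10(1 - eta), for cell load eta in [0, 1)."""
    return -10.0 * np.log10(1.0 - load_factor)

if __name__ == "__main__":
    for db in (0, 4, 8, 12):
        print(f"Eb/No = {db:2d} dB -> BPSK BER = {bpsk_ber(db):.3e}")
    print("noise rise at 75% load:", round(uplink_noise_rise_db(0.75), 1), "dB")
```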

Keywords: duplexing, handover, loop power control, WCDMA

Procedia PDF Downloads 211
5872 Fair Federated Learning in Wireless Communications

Authors: Shayan Mohajer Hamidi

Abstract:

Federated Learning (FL) has emerged as a promising paradigm for training machine learning models on distributed data without the need for centralized data aggregation. In the realm of wireless communications, FL has the potential to leverage the vast amounts of data generated by wireless devices to improve model performance and enable intelligent applications. However, the fairness aspect of FL in wireless communications remains largely unexplored. This abstract presents an idea for fair federated learning in wireless communications, addressing the challenges of imbalanced data distribution, privacy preservation, and resource allocation. Firstly, the proposed approach aims to tackle the issue of imbalanced data distribution in wireless networks. In typical FL scenarios, the distribution of data across wireless devices can be highly skewed, resulting in unfair model updates. To address this, we propose a weighted aggregation strategy that assigns higher importance to devices with fewer samples during the aggregation process. By incorporating fairness-aware weighting mechanisms, the proposed approach ensures that each participating device's contribution is proportional to its data distribution, thereby mitigating the impact of data imbalance on model performance. Secondly, privacy preservation is a critical concern in federated learning, especially in wireless communications where sensitive user data is involved. The proposed approach incorporates privacy-enhancing techniques, such as differential privacy, to protect user privacy during the model training process. By adding carefully calibrated noise to the gradient updates, the proposed approach ensures that the privacy of individual devices is preserved without compromising the overall model accuracy. Moreover, the approach considers the heterogeneity of devices in terms of computational capabilities and energy constraints, allowing devices to adaptively adjust the level of privacy preservation to strike a balance between privacy and utility. Thirdly, efficient resource allocation is crucial for federated learning in wireless communications, as devices operate under limited bandwidth, energy, and computational resources. The proposed approach leverages optimization techniques to allocate resources effectively among the participating devices, considering factors such as data quality, network conditions, and device capabilities. By intelligently distributing the computational load, communication bandwidth, and energy consumption, the proposed approach minimizes resource wastage and ensures a fair and efficient FL process in wireless networks. To evaluate the performance of the proposed fair federated learning approach, extensive simulations and experiments will be conducted. The experiments will involve a diverse set of wireless devices, ranging from smartphones to Internet of Things (IoT) devices, operating in various scenarios with different data distributions and network conditions. The evaluation metrics will include model accuracy, fairness measures, privacy preservation, and resource utilization. The expected outcomes of this research include improved model performance, fair allocation of resources, enhanced privacy preservation, and a better understanding of the challenges and solutions for fair federated learning in wireless communications. The proposed approach has the potential to revolutionize wireless communication systems by enabling intelligent applications while addressing fairness concerns and preserving user privacy.
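
A minimal sketch of the fairness-aware weighted aggregation with differential-privacy noise described above is given below in Python; the interpolation parameter alpha, the Gaussian noise level, and the function name fair_private_aggregate are illustrative assumptions rather than the authors' exact scheme.

```python
import numpy as np

def fair_private_aggregate(updates, sample_counts, noise_std=0.01, alpha=0.5):
    """Aggregate client model updates with fairness-aware weights and Gaussian noise.

    updates       : list of 1-D numpy arrays (flattened model updates), one per device
    sample_counts : number of local samples held by each device
    alpha         : 0 -> plain FedAvg weighting by data size,
                    1 -> uniform weighting (boosts data-poor devices)
    noise_std     : std. dev. of Gaussian noise added for (approximate) privacy protection
    """
    counts = np.asarray(sample_counts, dtype=float)
    fedavg_w = counts / counts.sum()                      # proportional to data size
    uniform_w = np.full_like(fedavg_w, 1.0 / len(counts))
    weights = (1 - alpha) * fedavg_w + alpha * uniform_w  # interpolate toward fairness

    stacked = np.stack(updates)                           # shape: (num_devices, dim)
    aggregated = weights @ stacked                        # weighted average of updates
    return aggregated + np.random.normal(0.0, noise_std, size=aggregated.shape)

# toy usage: three devices with highly skewed data sizes
updates = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
print(fair_private_aggregate(updates, sample_counts=[1000, 50, 10]))
```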

Keywords: federated learning, wireless communications, fairness, imbalanced data, privacy preservation, resource allocation, differential privacy, optimization

Procedia PDF Downloads 73
5871 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture

Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy

Abstract:

Background: Two new, simple and specific methods, a zero-crossing first-derivative technique and a chemometric-assisted spectrophotometric artificial neural network (ANN), were developed and validated in accordance with ICH guidelines. Both methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals, Coenzyme Q10 (Q), also known as Ubidecarenone or Ubiquinone-10, and Vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: For the first method, by applying the first derivative, both Q and E were alternatively determined, each at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curves are linear over the concentration ranges of 10-60 and 5.6-70 μg mL-1 for Q and E, respectively. For the second method, ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set (or a concentration set) of 90 different synthetic mixtures containing Q and E, in wide concentration ranges of 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training sets were recorded in the spectral region of 230–300 nm. A gradient-descent back-propagation ANN chemometric calibration was computed by relating the concentration sets (x-block) to their corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, in a defined range, was used to validate the proposed network. Neither chemical separation, a preparation stage, nor mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied for the assay of Q and E in laboratory-prepared mixtures and a combined pharmaceutical tablet with excellent recoveries. The ANN method was superior to the derivative technique, as the former determined both drugs under non-linear experimental conditions. It also offers rapidity, high accuracy, and savings in effort and cost, and it requires no specialist analyst for its application. Although the ANN technique needed a large training set, it is the method of choice in the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.
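
The zero-crossing first-derivative step can be sketched as follows in Python; the D1 amplitudes in the toy calibration are hypothetical values, not the reported data.

```python
import numpy as np

def d1_amplitude(wavelengths, absorbance, target_nm):
    """First-derivative (D1) amplitude of a spectrum at a chosen wavelength,
    i.e. the reading taken at the zero-crossing point of the other component."""
    d1 = np.gradient(absorbance, wavelengths)
    return d1[int(np.argmin(np.abs(wavelengths - target_nm)))]

# toy calibration: hypothetical D1 amplitudes of Q standards read at 285 nm;
# d1_amplitude(wavelengths, spectrum, 285.0) would produce such readings from measured spectra
conc = np.array([10, 20, 30, 40, 50, 60], dtype=float)          # ug/mL
d1_amp = np.array([0.011, 0.021, 0.032, 0.041, 0.052, 0.061])   # illustrative readings
slope, intercept = np.polyfit(d1_amp, conc, 1)                   # linear calibration
print("predicted conc for D1 = 0.037 ->", round(slope * 0.037 + intercept, 1), "ug/mL")
```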

Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network

Procedia PDF Downloads 441
5870 Machine Learning Algorithms for Rocket Propulsion

Authors: Rômulo Eustáquio Martins de Souza, Paulo Alexandre Rodrigues de Vasconcelos Figueiredo

Abstract:

In recent years, there has been a surge in interest in applying artificial intelligence techniques, particularly machine learning algorithms. Machine learning is a data-analysis technique that automates the creation of analytical models, making it especially useful for modeling complex situations. As a result, this technology aids in reducing human intervention while producing accurate results. This methodology is also extensively used in aerospace engineering, since this is a field that encompasses several high-complexity operations, such as rocket propulsion. Rocket propulsion is a high-risk operation in which engine failure could result in the loss of life. As a result, it is critical to use computational methods capable of precisely representing the spacecraft's analytical model to guarantee its safety and operation. Thus, this paper describes the use of machine learning algorithms for rocket propulsion to demonstrate that this technique is an efficient way to deal with challenging and restrictive aerospace engineering activities. The paper focuses on three machine-learning-aided rocket propulsion applications: set-point control of an expander-bleed rocket engine, supersonic retro-propulsion of a small-scale rocket, and leak detection and isolation on rocket engine data. This paper describes the data-driven methods used for each implementation in depth and presents the obtained results.

Keywords: data analysis, modeling, machine learning, aerospace, rocket propulsion

Procedia PDF Downloads 109
5869 Analysing Causal Effect of London Cycle Superhighways on Traffic Congestion

Authors: Prajamitra Bhuyan

Abstract:

Transport operators have a range of intervention options available to improve or enhance their networks. But often such interventions are made in the absence of sound evidence on what outcomes may result. Cycle superhighways were promoted as a sustainable and healthy travel mode with the aim of cutting traffic congestion. The estimation of the impacts of the cycle superhighways on congestion is complicated due to the non-random assignment of such an intervention over the transport network. In this paper, we analyse the causal effect of cycle superhighways utilising pre-intervention and post-intervention information on traffic and road characteristics along with socio-economic factors. We propose a modeling framework based on the propensity score and an outcome regression model. The method is also extended to a doubly robust set-up. Simulation results show the superiority of the performance of the proposed method over existing competitors. The method is applied to analyse a real dataset on the London transport network, and the results would help effective decision-making to improve network performance.
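
One common doubly robust (AIPW-style) form of the ATT estimator combining a propensity model with an outcome regression is sketched below; the covariates, sample size and simulated effect are illustrative only, not the London data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def doubly_robust_att(X, treat, y):
    """One common doubly robust (AIPW-style) estimate of the average treatment
    effect on the treated (ATT), combining a propensity score model with an
    outcome regression fitted on the control group."""
    # propensity model: probability of receiving the intervention given covariates
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    # outcome model for the untreated potential outcome, fitted on controls only
    m0 = LinearRegression().fit(X[treat == 0], y[treat == 0]).predict(X)

    treated, control = treat == 1, treat == 0
    w = ps[control] / (1.0 - ps[control])          # odds weights for controls
    att = (y[treated] - m0[treated]).mean() \
        - np.sum(w * (y[control] - m0[control])) / np.sum(w)
    return att

# toy data: congestion outcome, binary "superhighway" treatment, two covariates
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
treat = (rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
y = 1.5 * X[:, 0] - 0.5 * X[:, 1] - 0.8 * treat + rng.normal(scale=0.5, size=500)
print("estimated ATT:", doubly_robust_att(X, treat, y))  # true simulated effect is -0.8
```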

Keywords: average treatment effect, confounder, difference-in-difference, intelligent transportation system, potential outcome

Procedia PDF Downloads 238
5868 The Utilization of FSI Technique and Two-Way Particle Coupling System on Particle Dynamics in the Human Alveoli

Authors: Hassan Athari, Abdurrahim Bolukbasi, Dogan Ciloglu

Abstract:

This study represented the respiratory alveolar system and determined the trajectory of inhaled particles more accurately using a modified three-dimensional model with deformable alveolar walls. The study also considered tissue tension in the model to demonstrate the effect of the lung. Tissue tensions are transferred by the lung parenchyma and produce the pressure gradient. This load expands the alveoli and establishes a sub-ambient (vacuum) pressure within the lungs. Thus, at the alveolar level, the flow field and the movement of the alveolar wall lead to an integrated effect. In this research, we assume that the three-dimensional alveolus has visco-elastic tissue (walls). For accurate investigation of the effect of pulmonary tissue mechanical properties on particle transport and the alveolar flow field, the actual relation between tissue movement and airflow is solved by a two-way FSI (Fluid-Structure Interaction) simulation technique in the alveolus. Therefore, the essence of real simulation of pulmonary breathing mechanics can be achieved by developing a coupled FSI computational model. We therefore conduct a series of FSI simulations over a range of tissue models and breathing rates. As a result, the fluid flow and streamlines changed in the present flexible model compared with rigid models, and the two-way-coupled particle trajectories changed compared with one-way particle coupling.

Keywords: FSI, two-way particle coupling, alveoli, CFD

Procedia PDF Downloads 253
5867 Detecting Geographically Dispersed Overlay Communities Using Community Networks

Authors: Madhushi Bandara, Dharshana Kasthurirathna, Danaja Maldeniya, Mahendra Piraveenan

Abstract:

Community detection is an extremely useful technique in understanding the structure and function of a social network. The Louvain algorithm, which is based on the Newman-Girvan modularity optimization technique, is extensively used as a computationally efficient method to extract the communities in social networks. It has been suggested that nodes that are in close geographical proximity have a higher tendency of forming communities. Variants of the Newman-Girvan modularity measure, such as dist-modularity, try to normalize the effect of geographical proximity to extract geographically dispersed communities, at the expense of losing the information about the geographically proximate communities. In this work, we propose a method to extract geographically dispersed communities while preserving the information about the geographically proximate communities, by analyzing the ‘community network’, where the centroids of communities are considered as network nodes. We suggest that the inter-community link strengths, which are normalized over the community sizes, may be used to identify and extract the ‘overlay communities’. The overlay communities would have relatively higher link strengths, despite being relatively far apart in their spatial distribution. We apply this method to the Gowalla online social network, which contains the geographical signatures of its users, and identify the overlay communities within it.
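
A sketch of the community-network construction with size-normalised inter-community link strengths is given below; it assumes a recent networkx release that ships louvain_communities, and uses the karate-club graph as a stand-in for a geo-tagged network such as Gowalla.

```python
import networkx as nx

def community_network(G, communities):
    """Build a 'community network': one node per detected community, with edge
    weights equal to inter-community link counts normalised by community sizes."""
    membership = {n: ci for ci, com in enumerate(communities) for n in com}
    C = nx.Graph()
    C.add_nodes_from(range(len(communities)))
    for u, v in G.edges():
        cu, cv = membership[u], membership[v]
        if cu != cv:
            w = C[cu][cv]["links"] + 1 if C.has_edge(cu, cv) else 1
            C.add_edge(cu, cv, links=w)
    for cu, cv, data in C.edges(data=True):
        data["strength"] = data["links"] / (len(communities[cu]) * len(communities[cv]))
    return C

G = nx.karate_club_graph()                          # stand-in for a geo-tagged network
communities = nx.community.louvain_communities(G, seed=1)
C = community_network(G, communities)
# pairs with the highest normalised strength are candidate 'overlay' community links
print(sorted(C.edges(data="strength"), key=lambda e: -e[2])[:3])
```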

Keywords: social networks, community detection, modularity optimization, geographically dispersed communities

Procedia PDF Downloads 232
5866 Love and Loss: The Emergence of Shame in Romantic Information Communication Technology

Authors: C. Caudwell, R. Syed, C. Lacey

Abstract:

While the development and advancement of information communication technologies (ICTs) offer powerful opportunities for meaningful connections and relationships, shame is a significant barrier to social and cultural acceptance. In particular, artificial intelligence and socially oriented robots are increasingly becoming partners in romantic relationships with people, offering bonding, support, comfort, growth, and reciprocity. However, these relationships suffer from hierarchical, anthropocentric shame, which is a significant barrier to their success and longevity. This paper will present case studies of human and artificially intelligent agent relationships, in the context of internal and external shame, as cultivated, propagated, and communicated through ICT. Using an interdisciplinary methodology, we aim to present a framework for technological shame, building on the experimental and emergent psychoanalytical theories of emotions. Our study finds principally that socialization is a powerful factor in the vectors of shame as experienced by humans. On a wider scale, we contribute understanding of social emotion and the phenomenon of shame proliferated through ICTs, which is at present under-explored but vital, as society and culture are increasingly mediated through this medium.

Keywords: shame, artificial intelligence, romance, society

Procedia PDF Downloads 131
5865 Investigating the Trends in Tourism and Hospitality Industry in Nigeria at Centenary

Authors: Pius Agbebi Alaba

Abstract:

The study focused on the effects of contemporary and prospective trends on the development of hospitality and tourism in Nigeria. Specifically, the study examined globalization, safety and security, diversity, service, technology, demographic changes and price–value as contemporary trends, while prospective trends were equally examined, such as green and eco-lodgings, the development of mega hotels, boutique hotels, intelligent hotels with advanced technology using the guest’s virtual fingerprint in order to perform all operations, increasing employee salaries in order to retain existing staff, more emphasis on the internet and technology, and guests’ virtual and physical social networks. The methodology for the study involved a review of existing related studies, books, journals and the internet. The findings from the exercise showed clearly that the impact of both sets of trends on the development of hospitality and tourism in Nigeria would bring about rapid positive transformation of the country’s socio-economic, political and cultural environment. The implication of the study is that it will prepare both private and corporate individuals in the hospitality and tourism business for the challenges inherent in both sets of trends.

Keywords: hospitality and tourism, Nigeria's centenary, trends, implications

Procedia PDF Downloads 331
5864 Costume Portrayal In K. Asif’s Mughal E Azam

Authors: Anketa Kumar, Rajantheran Al Muniandy, Rishabh Kumar

Abstract:

For centuries, Indian costumes have been admired for their great aesthetic, functional and narrative qualities. The purpose of the current study is to investigate the role of costumes as visual narratives in Hindi cinema, as filmmaking is simply one of the most recent manifestations of the human desire to tell stories, in which costume acts as a tool to be read as an intertext by the viewers watching the films. The problem that prompted this study arose because clothes become an interesting topic when examined within the social structures in which they are worn. It is this visual image of dress worn by the character that is investigated in this research through the Hindi cinema of the 1960s, which was a reflection of society in realistic form. This research integrates the application of Roland Barthes' semiotic theory in analyzing the main characters in the National Award-winning Hindi movie Mughal e Azam (1960). The research helps in filling the gap between a singular level of interpretation and another level that offers a solution towards bridging the gap in viewers' manifold interpretations of a particular movie. This study focuses on how visual appearance communicates in building up perception and can relate to notions of realism, defining cultural identity and status in society. The analytical technique employed in this research is qualitative and descriptive in nature, with the use of the freeze-frame technique. The portrayal of costumes is explained with Barthes' principles of semiotics. The freeze-frame technique stops the motion of the film on a single frame and allows the chosen image to be read as a still photograph. The finding of this research into costume portrayal in the movie was that freezing the frame in the midst of running the film attracted attention towards intricate costume details, allowing nuanced observations of these minutiae to be recorded during the movie. The interpretation applied while watching K. Asif's Mughal e Azam focused on certain aspects of the costumes of the king. On the same idea, further research can be employed to strengthen the relation between costumes and visual narration.

Keywords: character portrayal, costumes, Indian cinema, semiotics, visual significance

Procedia PDF Downloads 182
5863 Bioavailability of Iron in Some Selected Fiji Foods using In vitro Technique

Authors: Poonam Singh, Surendra Prasad, William Aalbersberg

Abstract:

Iron is the most essential trace element in human nutrition. Its deficiency has serious health consequences and is a major public health threat worldwide. The common deficiencies reported in the Fiji population are of Fe, Ca and Zn. It has also been reported that 40% of women in Fiji are iron deficient. Therefore, we have been studying the bioavailability of iron in commonly consumed Fiji foods. To study the bioavailability it is essential to assess the iron contents in raw foods. This paper reports the iron contents and its bioavailability in foods commonly consumed by the multicultural population of Fiji. The food samples (rice, breads, wheat flour and breakfast cereals) were analyzed by atomic absorption spectrophotometer for total iron and its bioavailability. The white rice had the lowest total iron, 0.10±0.03 mg/100g, but a high bioavailability of 160.60±0.03%. The brown rice had 0.20±0.03 mg/100g total iron content but was 85.00±0.03% bioavailable. The white and brown breads showed the highest iron bioavailability at 428.30±0.11 and 269.35±0.02%, respectively. The Weetabix and the rolled oats had iron contents of 2.89±0.27 and 1.24±0.03 mg/100g with bioavailability of 14.19±0.04 and 12.10±0.03%, respectively. The most commonly consumed normal wheat flour had 0.65±0.00 mg/100g iron while the whole meal and the Roti flours had 2.35±0.20 and 0.62±0.17 mg/100g iron, showing bioavailability of 55.38±0.05, 16.67±0.08 and 12.90±0.00%, respectively. The low bioavailability of iron in certain foods may be due to the presence of phytates/oxalates, processing/storage conditions, cooking method or interaction with other minerals present in the food samples.

Keywords: iron, bioavailability, Fiji foods, in vitro technique, human nutrition

Procedia PDF Downloads 526
5862 Estimation and Comparison of Delay at Signalized Intersections Based on Existing Methods

Authors: Arpita Saha, Satish Chandra, Indrajit Ghosh

Abstract:

Delay represents the time loss of a traveler while crossing an intersection. The efficiency of traffic operation at signalized intersections is assessed in terms of the delay caused to an individual vehicle. The Highway Capacity Manual (HCM) method and Webster's method are the most widely used in India for delay estimation. However, in India, traffic is highly heterogeneous in nature with extremely poor lane discipline. Therefore, to explore the best delay estimation technique for Indian conditions, a comparison was made. In this study, seven signalized intersections from three different cities were chosen. Data were collected during both morning and evening peak hours. Only undersaturated cycles were considered for this study. Delay was estimated based on the field data. With the help of Simpson's 1/3 rule, the delay of undersaturated cycles was estimated by measuring the area under the curve of queue length against cycle time. Moreover, the field-observed delay was compared with the delay estimated using the HCM, Webster, probabilistic, Taylor's expansion and regression methods. The drawbacks of the existing delay estimation methods for use in Indian heterogeneous traffic conditions were identified, and the best method was proposed. It was observed that direct estimation of delay using field-measured data is more accurate than the existing conventional and modified methods.
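
The Simpson's 1/3 rule step can be sketched as follows in Python; the queue counts and cycle length are hypothetical field readings, not the study's data.

```python
import numpy as np
from scipy.integrate import simpson

def cycle_delay(queue_lengths, times, vehicles_served):
    """Field delay for one undersaturated signal cycle.

    Total delay (veh-s) is the area under the queue-length vs. time curve,
    evaluated here with Simpson's 1/3 rule; dividing by the number of vehicles
    served in the cycle gives the average delay per vehicle (s/veh).
    """
    total_delay = simpson(queue_lengths, x=times)     # veh-seconds
    return total_delay, total_delay / vehicles_served

# illustrative field data: queue counted every 5 s over a 60 s cycle
times = np.arange(0, 65, 5)
queue = np.array([0, 3, 6, 9, 11, 12, 12, 10, 7, 4, 2, 1, 0])
print(cycle_delay(queue, times, vehicles_served=15))
```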

Keywords: delay estimation technique, field delay, heterogeneous traffic, signalised intersection

Procedia PDF Downloads 296
5861 Smart Lean Manufacturing in the Context of Industry 4.0: A Case Study

Authors: M. Ramadan, B. Salah

Abstract:

This paper introduces a framework to digitalize lean manufacturing tools to enhance smart lean-based manufacturing environments or Lean 4.0 manufacturing systems. The paper discusses the integration between lean tools and the powerful features of recent real-time data capturing systems with the help of Information and Communication Technologies (ICT) to develop an intelligent real-time monitoring and controlling system of production operations concerning lean targets. This integration is represented in the Lean 4.0 system called Dynamic Value Stream Mapping (DVSM). Moreover, the paper introduces the practice of Radio Frequency Identification (RFID) and ICT to smartly support lean tools and practices during daily production runs to keep the lean system alive and effective. This work introduces a practical description of how the lean method tools 5S, standardized work, and poka-yoke can be digitalized and smartly monitored and controlled through DVSM. A framework of the three tools has been discussed and put into practice in a German switchgear manufacturer.

Keywords: lean manufacturing, Industry 4.0, radio frequency identification, value stream mapping

Procedia PDF Downloads 222
5860 Fuzzy Inference System for Risk Assessment Evaluation of Wheat Flour Product Manufacturing Systems

Authors: Yas Barzegaar, Atrin Barzegar

Abstract:

The aim of this research is to develop an intelligent system to analyze the risk level of a wheat flour product manufacturing system. The model consists of five Fuzzy Inference Systems in two different layers to analyse the risk of a wheat flour product manufacturing system. The first layer of the model consists of four Fuzzy Inference Systems, each with three criteria, and the outputs of the Physical, Chemical, Biological and Environmental failure subsystems become the inputs of the final, system-level Fuzzy Inference System. The proposed model, based on Mamdani Fuzzy Inference Systems, gives a performance ranking of wheat flour product manufacturing systems. The first step is obtaining data to identify the failure modes from experts' opinions. The second step is the fuzzification process to convert crisp inputs into fuzzy sets; the IF-THEN fuzzy rules are then applied through the inference engine, and in the final step, the defuzzification process is applied to convert the fuzzy output into real numbers.
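
A stripped-down, two-input Mamdani cycle (fuzzification, min/max rule firing, centroid defuzzification) is sketched below in Python; the membership shapes and the two rules are illustrative assumptions, not the authors' five-FIS rule base.

```python
import numpy as np

def low_mf(x):
    """Left-shoulder 'low' membership on a 0-10 scale (1 below 2, 0 above 6)."""
    return np.clip((6.0 - x) / 4.0, 0.0, 1.0)

def high_mf(x):
    """Right-shoulder 'high' membership on a 0-10 scale (0 below 4, 1 above 8)."""
    return np.clip((x - 4.0) / 4.0, 0.0, 1.0)

def mamdani_risk(physical, chemical):
    """Minimal two-input Mamdani FIS: fuzzify the crisp failure scores, fire
    min/max IF-THEN rules, aggregate, and defuzzify by the centroid method."""
    risk = np.linspace(0.0, 10.0, 201)               # output universe (risk score)

    # rule 1: IF physical is high OR chemical is high THEN risk is high
    w_high = max(high_mf(physical), high_mf(chemical))
    # rule 2: IF physical is low AND chemical is low THEN risk is low
    w_low = min(low_mf(physical), low_mf(chemical))

    aggregated = np.maximum(np.minimum(w_high, high_mf(risk)),
                            np.minimum(w_low, low_mf(risk)))
    return float(np.sum(aggregated * risk) / (np.sum(aggregated) + 1e-12))  # centroid

print("risk score:", round(mamdani_risk(physical=7.0, chemical=3.0), 2))
```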

Keywords: failure modes, fuzzy rules, fuzzy inference system, risk assessment

Procedia PDF Downloads 96
5859 Intelligent Earthquake Prediction System Based On Neural Network

Authors: Emad Amar, Tawfik Khattab, Fatma Zada

Abstract:

Predicting earthquakes is an important issue in the study of geography. Accurate prediction of earthquakes can help people to take effective measures to minimize personal and economic losses, such as large casualties, destruction of buildings and disruption of traffic, which occur within a few seconds. The United States Geological Survey (USGS) science organization provides reliable scientific information on earthquakes recorded throughout history, and the preliminary database from the National Earthquake Information Center (NEIC) shows some useful factors for predicting an earthquake in a seismic area like the Aleutian Arc in the U.S. state of Alaska. The main advantage of this prediction method is that it does not require any assumptions; it makes predictions according to the future evolution of the object's time series. The article compares the simulation results from trained BP and RBF neural networks with the actual output from the system calculations. Therefore, this article focuses on the analysis of data relating to real earthquakes. Evaluation results show better accuracy and higher speed when using the radial basis function (RBF) neural network.
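
For readers unfamiliar with RBF networks, the sketch below shows a minimal Gaussian-RBF regressor with a least-squares output layer on synthetic data; it is not the authors' network, and the toy series merely stands in for an earthquake catalogue.

```python
import numpy as np

class RBFNetwork:
    """Minimal radial basis function network: Gaussian hidden units with fixed
    centres (chosen here from the training data) and a linear output layer
    fitted by regularised least squares."""

    def __init__(self, n_centres=20, gamma=1.0, ridge=1e-6):
        self.n_centres, self.gamma, self.ridge = n_centres, gamma, ridge

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)               # Gaussian activations

    def fit(self, X, y):
        rng = np.random.default_rng(0)
        idx = rng.choice(len(X), size=min(self.n_centres, len(X)), replace=False)
        self.centres = X[idx]
        Phi = self._phi(X)
        A = Phi.T @ Phi + self.ridge * np.eye(Phi.shape[1])
        self.weights = np.linalg.solve(A, Phi.T @ y)  # output-layer weights
        return self

    def predict(self, X):
        return self._phi(X) @ self.weights

# toy regression standing in for a magnitude/occurrence time series
X = np.linspace(0, 10, 200)[:, None]
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(1).normal(size=200)
model = RBFNetwork(n_centres=25, gamma=2.0).fit(X[:150], y[:150])
print("test RMSE:", np.sqrt(np.mean((model.predict(X[150:]) - y[150:]) ** 2)))
```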

Keywords: BP neural network, prediction, RBF neural network, earthquake

Procedia PDF Downloads 492
5858 Evaluation of Oxidative Changes in Soybean Oil During Shelf-Life by Physico-Chemical Methods and Headspace-Liquid Phase Microextraction (HS-LPME) Technique

Authors: Maryam Enteshari, Kooshan Nayebzadeh, Abdorreza Mohammadi

Abstract:

In this study, the oxidative stability of soybean oil under different storage temperatures (4 and 25˚C) and during a 6-month shelf-life was investigated by various analytical methods and headspace-liquid phase microextraction (HS-LPME) coupled to gas chromatography-mass spectrometry (GC-MS). Oxidation changes were monitored by analytical parameters consisting of acid value (AV), peroxide value (PV), p-Anisidine value (p-AV), thiobarbituric acid value (TBA), fatty acids profile, iodine value (IV), and oxidative stability index (OSI). In addition, concentrations of hexanal and heptanal as secondary volatile oxidation compounds were determined by the HS-LPME/GC-MS technique. The rate of oxidation in soybean oil stored at 25˚C was much higher. The AV, p-AV, and TBA increased gradually during the 6 months, while the amount of unsaturated fatty acids, IV, and OSI decreased. Other parameters, namely the concentrations of both hexanal and heptanal and the PV, exhibited an increasing trend during the early months of storage; then, at the end of the third and fourth months, a sudden decrease was observed in the concentrations of hexanal and heptanal and in the PV, simultaneously. The latter parameters increased again until the end of the shelf-life. As a result, temperature and time were effective factors in the oxidative stability of soybean oil. Strong correlations were also found for soybean oil at 4˚C between AV and TBA (r2=0.96), PV and p-AV (r2=0.9), and IV and TBA (-r2=0.9), and for soybean oil stored at 4˚C between p-AV and TBA (r2=0.99).

Keywords: headspace-liquid phase microextraction, oxidation, shelf-life, soybean oil

Procedia PDF Downloads 398
5857 The Impact of Access to Microcredit Programme on Women Empowerment: A Case Study of Cowries Microfinance Bank in Lagos State, Nigeria

Authors: Adijat Olubukola Olateju

Abstract:

Women empowerment is an essential developmental tool in every economy, especially in less developed countries, as it helps to enhance women's socio-economic well-being. Some empirical evidence has shown that microcredit has been an effective tool in enhancing women empowerment, especially in developing countries. This paper, therefore, investigates the impact of a microcredit programme on women empowerment in Lagos State, Nigeria. The study used Cowries Microfinance Bank (CMB) as the case study bank, and a total of 359 women entrepreneurs were selected by simple random sampling technique from the list of Cowries Microfinance Bank. Selection bias, which could arise from non-random selection of participants or non-random placement of the programme, was adjusted for by dividing the data into participant women entrepreneurs and non-participant women entrepreneurs. The data were analyzed with a Propensity Score Matching (PSM) technique. The result of the Average Treatment Effect on the Treated (ATT) obtained from the PSM indicates that the credit programme has a significant effect on the empowerment of women in the study area. It is therefore recommended that microfinance banks should be encouraged to give loans to women, and for the impact of the loan to be felt more by the beneficiaries, the loan programme should be complemented with other programmes such as training, grants, and periodic monitoring.
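
A nearest-neighbour propensity score matching estimate of the ATT can be sketched as follows in Python; the covariates and simulated empowerment outcome are hypothetical stand-ins for the survey data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_att(X, participated, outcome):
    """Nearest-neighbour propensity score matching estimate of the average
    treatment effect on the treated (ATT)."""
    # 1. estimate propensity scores: P(participation | covariates)
    ps = LogisticRegression(max_iter=1000).fit(X, participated).predict_proba(X)[:, 1]
    treated, control = participated == 1, participated == 0

    # 2. match each participant to the non-participant with the closest score
    nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    matched_control_outcomes = outcome[control][idx.ravel()]

    # 3. ATT = mean outcome gap between participants and their matches
    return float(np.mean(outcome[treated] - matched_control_outcomes))

# toy data: empowerment score, participation flag, two covariates (e.g. age, education)
rng = np.random.default_rng(3)
X = rng.normal(size=(359, 2))
participated = (rng.random(359) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
outcome = 2.0 + 0.5 * X[:, 0] + 1.2 * participated + rng.normal(scale=0.7, size=359)
print("estimated ATT:", round(psm_att(X, participated, outcome), 2))  # true simulated effect 1.2
```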

Keywords: empowerment, microcredit, socio-economic wellbeing, development

Procedia PDF Downloads 299
5856 Pattern of Cybercrime Among Adolescents: An Exploratory Study

Authors: Mohamamd Shahjahan

Abstract:

Background: Cybercrime is a common phenomenon at present in both developed and developing countries. The younger generation, especially adolescents, now use the internet frequently, and they frequently commit cybercrime in Bangladesh. Objective: In this regard, the present study on the pattern of cybercrime among young people in Bangladesh has been conducted. Methods and tools: This study was a cross-sectional study, descriptive in nature. A non-probability accidental sampling technique was applied to select the sample because of the non-finite population, and the sample size was 167. A printed semi-structured questionnaire was used to collect data. Results: The study shows that adolescents mainly engage in hacking (94.6%), pornography (88.6%), software piracy (85%), cyber theft (82.6%), credit card fraud (81.4%), cyber defamation (75.6%), sweetheart swindling (social network) (65.9%), etc. as cybercrime. According to the findings, the major causes of cybercrime among the respondents in Bangladesh were weak laws (88.0%), defective socialization (81.4%), peer group influence (80.2%), easy accessibility to the internet (74.3%), corruption (62.9%), unemployment (58.7%), and poverty (24.6%). It is evident from the study that 91.0% of respondents used password crackers as a technique of cybercriminality. About 76.6%, 72.5%, 71.9%, 68.3% and 60.5% of respondents reported using key loggers, network sniffers, exploits, vulnerability scanners and port scanners, respectively. Conclusion: The study concluded that the pattern of cybercrime is changing frequently and increasing dramatically. Finally, it is recommended that public-private partnership and enforcement of existing laws can help control this crime.

Keywords: cybercrime, adolescents, pattern, internet

Procedia PDF Downloads 77
5855 Research on Evaluation Method of Urban Road Section Traffic Safety Status Based on Video Information

Authors: Qiang Zhang, Xiaojian Hu

Abstract:

Aiming at the problems of existing real-time evaluation methods for traffic safety status, an urban road section traffic safety status evaluation method based on video information was established, and a rapid detection method for traffic flow parameters based on video information is analyzed. The concept of the speed dispersion of a road section, which affects the traffic safety state of the urban road section, is proposed, and a method of evaluating the traffic safety state of the urban road section based on the speed dispersion of the road section is established. Experiments show that the proposed method can reasonably evaluate the safety status of urban roads in real time, and the evaluation results can provide a corresponding basis for the traffic management department to formulate an effective urban road section traffic safety improvement plan.
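
The abstract does not give the exact definition of speed dispersion; assuming it is the standard deviation of video-derived spot speeds on the section, a minimal sketch is:

```python
import numpy as np

def speed_dispersion(speeds_kmh):
    """Speed dispersion of a road section from video-derived vehicle speeds:
    the sample standard deviation of individual speeds, plus the coefficient
    of variation as a scale-free variant."""
    speeds = np.asarray(speeds_kmh, dtype=float)
    std = speeds.std(ddof=1)
    return std, std / speeds.mean()

# spot speeds (km/h) extracted for one section over an evaluation interval
speeds = [42, 38, 55, 47, 33, 51, 44, 40, 36, 58]
std, cov = speed_dispersion(speeds)
print(f"dispersion = {std:.1f} km/h, CoV = {cov:.2f}")  # higher values -> lower safety
```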

Keywords: intelligent transportation system, road traffic safety, video information, vehicle speed dispersion

Procedia PDF Downloads 158
5854 A Phishing Email Detection Approach Using Machine Learning Techniques

Authors: Kenneth Fon Mbah, Arash Habibi Lashkari, Ali A. Ghorbani

Abstract:

Phishing e-mails are a security issue that not only annoys online users but has also resulted in significant financial losses for businesses. Phishing advertisements and pornographic e-mails are difficult to detect as attackers are becoming increasingly intelligent and professional. Attackers track users and adjust their attacks based on users’ attractions and hot topics that can be extracted from community news and journals. This research focuses on deceptive phishing attacks and their variants, such as attacks through advertisements and pornographic e-mails. We propose a framework called the Phishing Alerting System (PHAS) to accurately classify e-mails as phishing, advertisements or pornographic. PHAS has the ability to detect and alert users to all types of deceptive e-mails to help users in decision making. A well-known e-mail dataset has been used for these experiments, and based on previously extracted features, 93.11% detection accuracy is obtainable by using the J48 and KNN machine learning techniques. Our proposed framework achieved approximately the same accuracy as the benchmark while using this dataset.
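
A minimal sketch of the KNN classification step on extracted e-mail features is shown below, with scikit-learn's CART decision tree standing in for Weka's J48; the features and labels are synthetic, not the benchmark dataset.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# toy feature matrix standing in for extracted e-mail features
# (e.g. counts of suspicious URLs, HTML forms, money-related keywords)
rng = np.random.default_rng(7)
X = rng.random((1000, 5))
y = (X[:, 0] + 0.8 * X[:, 1] + 0.1 * rng.normal(size=1000) > 1.0).astype(int)  # 1 = phishing

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("KNN accuracy:", round(accuracy_score(y_te, knn.predict(X_te)), 3))
print("decision-tree accuracy:", round(accuracy_score(y_te, tree.predict(X_te)), 3))
```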

Keywords: phishing e-mail, phishing detection, anti phishing, alarm system, machine learning

Procedia PDF Downloads 334
5853 Students’ Attitudes towards Reading as a Determinant of Performance in O’ Level English in Oyo State Secondary Schools, Nigeria

Authors: Adebimpe Olubunmi Adebanjo

Abstract:

This study examined students’ attitudes towards reading as a determinant of performance in O’ Level English in Oyo State secondary schools. A random sampling technique was used to select two schools from each of the five geo-political zones of the state, while a stratified sampling technique was used to select twenty students from each of the ten schools. A researcher-designed questionnaire was used to gather information on students’ attitudes, while a prepared test based on the O’ Level syllabus was stapled to each questionnaire to ascertain their level of achievement in O’ Level English. Percentage, mean, standard deviation, chi-square and the Pearson contingency coefficient were used to answer and test the research questions and hypotheses raised. The findings showed that the general attitude of students towards reading was ambivalent; the general level of achievement was also low. The findings also revealed that there was a significant difference in the attitudes of students to reading on the basis of gender and home background. Students from educated homes also had better attitudes towards reading than their counterparts from illiterate homes. The findings also showed that there was a significant relationship between students’ attitudes to reading and their performance in O’ Level English. Students with a positive attitude to reading had better grades in O’ Level English than students with ambivalent and negative attitudes. Based on the findings, it was recommended that students should change their attitudes to reading; the school and the home were also advised to always encourage students to read.
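
The chi-square test and Pearson contingency coefficient named above can be computed as sketched below; the cross-tabulated counts are hypothetical, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# hypothetical cross-tabulation: attitude (rows) vs. O' Level English grade band (columns)
table = np.array([[34, 21, 10],    # positive attitude
                  [28, 45, 27],    # ambivalent
                  [ 8, 14, 13]])   # negative

chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
contingency_coeff = np.sqrt(chi2 / (chi2 + n))   # Pearson contingency coefficient
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, C = {contingency_coeff:.3f}")
```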

Keywords: positive, ambivalent, negative attitudes, o' level English

Procedia PDF Downloads 250
5852 Hard Disk Failure Predictions in Supercomputing System Based on CNN-LSTM and Oversampling Technique

Authors: Yingkun Huang, Li Guo, Zekang Lan, Kai Tian

Abstract:

Hard disk drive (HDD) failures in an exascale supercomputing system may lead to service interruption, invalidate previous calculations, and cause permanent data loss. Therefore, initiating corrective actions before hard drive failures materialize is critical to the continued operation of jobs. In this paper, a highly accurate analysis model based on CNN-LSTM and an oversampling technique is proposed, which can correctly predict the necessity of a disk replacement even ten days in advance. Generally, learning-based methods perform poorly on a training dataset with a long-tail distribution, and fault prediction is a classic instance of this because of the scarcity of failure data. To overcome this problem, a new oversampling method was employed to augment the data, and then an improved CNN-LSTM with a shortcut was built to learn more effective features. The shortcut transmits the results of the previous CNN layer, which are used as the input of the LSTM model after weighted fusion with the output of the next layer. Finally, a detailed empirical comparison of 6 prediction methods is presented and discussed on a public dataset for evaluation. The experiments indicate that the proposed method predicts disk failure with 0.91 precision, 0.91 recall, 0.91 F-measure, and 0.90 MCC for a 10-day prediction horizon. Thus, the proposed algorithm is an efficient algorithm for predicting HDD failure in supercomputing.
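
A sketch of a CNN-LSTM with a weighted shortcut in the spirit described above is given below (PyTorch); the layer sizes, the learned fusion weight, and the SMART-attribute count are illustrative assumptions rather than the authors' configuration, and the oversampling step is omitted.

```python
import torch
import torch.nn as nn

class CNNLSTMShortcut(nn.Module):
    """Sketch of a CNN-LSTM disk-failure model with a weighted shortcut: the
    output of the first convolution is fused (learned weight) with the output
    of the second convolution before being passed to the LSTM."""

    def __init__(self, n_features, hidden=64, channels=32):
        super().__init__()
        self.conv1 = nn.Conv1d(n_features, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.alpha = nn.Parameter(torch.tensor(0.5))       # fusion weight for the shortcut
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                    # probability of failure in horizon

    def forward(self, x):                                   # x: (batch, time, features)
        z = x.transpose(1, 2)                               # -> (batch, features, time)
        h1 = torch.relu(self.conv1(z))
        h2 = torch.relu(self.conv2(h1))
        fused = self.alpha * h1 + (1 - self.alpha) * h2     # weighted shortcut fusion
        out, _ = self.lstm(fused.transpose(1, 2))           # back to (batch, time, channels)
        return torch.sigmoid(self.head(out[:, -1]))         # last time step -> probability

# smoke test: 8 disks, 10 days of 12 SMART attributes each
model = CNNLSTMShortcut(n_features=12)
x = torch.randn(8, 10, 12)
print(model(x).shape)   # torch.Size([8, 1])
```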

Keywords: HDD replacement, failure, CNN-LSTM, oversampling, prediction

Procedia PDF Downloads 77
5851 Endometrioma Ethanol Sclerotherapy

Authors: Lamia Bensissaid

Abstract:

Goals: Endometriosis affects 6 to 10% of women of childbearing age, and 17 to 44% of them have ovarian endometriomas. Medical and surgical treatments represent the two therapeutic axes, with which medically assisted reproduction can be associated. Laparoscopic intraperitoneal ovarian cystectomy is described as the reference technique in the management of endometriomas by learned societies (CNGOF, ESHRE, NICE). However, it leads to a significant short-term reduction in the AMH level and the number of antral follicles, especially in cases of bilateral cystectomy, large cyst size or cystectomy after recurrence. Often, the disease is at an advanced stage in patients who have already undergone several surgeries. Most have adhesions, which increase the risk of surgical complications and suboptimal resection and, therefore, recurrence of the cyst. These results led to a change of opinion towards a conservative approach. Sclerotherapy is an old technique which acts by fibrinoid necrosis: it consists of injecting a sclerosing agent into the cyst cavity. Results: Recurrence was less than 15% at a 12-month follow-up; these rates are comparable to those of surgery. It does not seem to have a negative impact on ovarian reserve, but this is not sufficiently evaluated. It has an advantage in IVF pregnancy rates compared to cystectomy, particularly in cases of recurrent endometriomas. Its advantages are that it can be done on an outpatient basis, is inexpensive, avoids sometimes difficult and repeated surgery, allows an increase in pregnancy rates and the preservation of the ovarian reserve compared to repeated surgery, and is of great interest in cases of bilateral endometriomas (kissing ovaries) or recurrent endometriomas. Conclusions: Ethanol sclerotherapy could be a good alternative to surgery.

Keywords: endometrioma, sclerotherapy, infertility, ethanol

Procedia PDF Downloads 58
5850 Quantification of Lustre in Textile Fibers by Image Analysis

Authors: Neelesh Bharti Shukla, Suvankar Dutta, Esha Sharma, Shrikant Ralebhat, Gurudatt Krishnamurthy

Abstract:

A key component of the physical attributes of textile fibers is lustre. It is a complex phenomenon arising from the interaction of light with fibers, yarn and fabrics. It is perceived as the contrast difference between bright areas (specular reflection) and duller backgrounds (diffused reflection). The lustre of fibers is affected by their surface structure, morphology and cross-section profile, as well as the presence of any additives/registrants. Due to complexities in measurement, objective instruments such as gloss meters do not give reproducible quantification of lustre. Other instruments, such as SAMBA hair systems, are expensive. In light of this, lustre quantification has largely remained subjective, judged visually by experts but prone to errors. In this development, a physics-based approach was conceptualized and demonstrated. We have developed an image analysis based technique to quantify visually observed differences in the lustre of fibers. Cellulosic fibers, produced with different approaches and with visually different levels of lustre, were photographed under controlled optics. These images were subsequently analyzed using a configured software system. The ratio of the intensity of light from bright (specular reflection) and dull (diffused reflection) areas was used to numerically represent lustre. In the next step, a set of samples that were not easily distinguishable visually was also evaluated by the technique, and it was established that quantification of lustre is feasible.
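
Assuming the lustre index is taken as the mean intensity of the brightest (specular) pixels over the mean intensity of the duller (diffuse) pixels, a minimal Python sketch is shown below; the percentile thresholds and the synthetic image are illustrative, not the configured software used in the study.

```python
import numpy as np

def lustre_ratio(gray_image, bright_percentile=95, dull_percentile=50):
    """Simple lustre index from a fibre image: mean intensity of the brightest
    (specular) pixels divided by mean intensity of the duller (diffuse) background."""
    img = np.asarray(gray_image, dtype=float)
    bright_mask = img >= np.percentile(img, bright_percentile)
    dull_mask = img <= np.percentile(img, dull_percentile)
    return img[bright_mask].mean() / (img[dull_mask].mean() + 1e-9)

# synthetic stand-in for a photographed fibre bundle under controlled optics
rng = np.random.default_rng(5)
image = rng.normal(loc=90, scale=15, size=(256, 256))
image[100:110, :] += 120          # a bright specular streak along the fibre axis
print("lustre index:", round(lustre_ratio(image), 2))   # higher -> more lustrous
```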

Keywords: lustre, fibre, image analysis, measurement

Procedia PDF Downloads 166
5849 Duration of Enterococcus faecalis Penetration in Environmentally Exposed Root Canals Obturated with Lateral Condensation Technique

Authors: N. Thawornwisit, P. Pradoo, S. Nuypree, L. Jarukasetrporn, S. Jitpukdeebodintra

Abstract:

Objective: The aim of this study was to evaluate the duration of the Enterococcus faecalis (E. faecalis) penetration into the gap between root canal wall and filling material at a 3 to 6 mm distance from the cementoenamel junction (CEJ) in the dislodged temporary filling, in vitro. Material and methods: Thirty-four single root canal mandibular premolars were divided into two experimental groups (N = 15) and one negative control (N = 4). Root canals were prepared and obturated with gutta-percha using lateral condensation technique, X-ray checked, and sterilized. Leakages were set up using the modified bacterial leakage model, and E. faecalis was used as a microbial marker. Leakages were evaluated at 3 and 7 days by culturing gutta-percha and dentine drilled from a 3-6 mm distance from CEJ. Broth turbidity was recorded and compared. Result: All four negative control and the 3-day experimental group showed no broth turbidity. For the 7-day experimental group, there was 33.3% leakage. Conclusion: Penetration of E. faecalis into the gap between root canal wall and filling material at a 3 to 6 mm distance from CEJ in the dislodged temporary filling were not found at three days. However, at seven days of exposure, bacteria could penetrate into the interface of the root canal and filling materials.

Keywords: coronal leakage, bacterial leakage model, enterococcus faecalis

Procedia PDF Downloads 89
5848 5G Future Hyper-Dense Networks: An Empirical Study and Standardization Challenges

Authors: W. Hashim, H. Burok, N. Ghazaly, H. Ahmad Nasir, N. Mohamad Anas, A. F. Ismail, K. L. Yau

Abstract:

Future communication networks require devices that are able to work on a single platform but support heterogeneous operations, which lead to service diversity and functional flexibility. This paper proposes two cognitive mechanisms, termed cognitive hybrid functions, which are applied in multiple broadband user terminals in order to maintain reliable connectivity and prevent unnecessary interference. By employing such mechanisms, especially for future hyper-dense networks, we can observe their performance in terms of optimized speed and power-saving efficiency. Results were obtained from several empirical laboratory studies. It was found that selecting a reliable network showed better optimized speed performance, with up to 37% improvement compared with operation without such a function. In terms of power adjustment, our evaluation of this mechanism shows that power can be reduced by 5 dB while maintaining the same level of throughput achieved at higher power. We also discuss the issues impacting future telecommunication standards whenever such devices are deployed.

Keywords: dense network, intelligent network selection, multiple networks, transmit power adjustment

Procedia PDF Downloads 373
5847 Evaluation of Geotechnical Parameters at Nubian Habitations in Kurkur Area, Aswan, Egypt

Authors: R. E. Fat-Helbary, A. A. Abdel-latief, M. S. Arfa, Alaa Mostafa

Abstract:

The Egyptian Government proposed a general plan aiming at constructing new settlements for Nubians in south Aswan, in different places around Nasser Lake; one of these settlements is in the Kurkur area. The Nubian habitations in Wadi Kurkur are located around 30 km southwest of Aswan City. This area is affected by near-distance earthquakes from the Kalabsha fault system. The shallow seismic refraction technique was conducted at the study area to evaluate the soil and rock material quality and geotechnical parameters, in addition to the detection of the subsurface ground model under the study area. The P- and S-wave velocities were calculated. The surface layer has a P-wave velocity ranging from 900 m/sec to 1625 m/sec and an S-wave velocity ranging from 650 m/sec to 1400 m/sec. On the other hand, the bedrock has a P-wave velocity ranging from 1300 m/sec to 1980 m/sec and an S-wave velocity ranging from 1050 m/sec to 1725 m/sec. The measured Vp and Vs velocities, together with the bulk density, are used to extract the mechanical properties and geotechnical parameters of the foundation material at the study area. The output of this study is very important for solving problems associated with the construction of various civil engineering works, for land use planning and for earthquake-resistant structure design.
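
The standard elastic-wave relations that convert Vp, Vs and bulk density into dynamic moduli can be sketched as follows; the density value used in the example is an assumed figure, not a reported measurement.

```python
def elastic_moduli(vp, vs, density):
    """Dynamic elastic moduli from P- and S-wave velocities (m/s) and bulk
    density (kg/m^3), using the standard elastic-wave relations."""
    mu = density * vs ** 2                                    # shear modulus (Pa)
    nu = (vp ** 2 - 2 * vs ** 2) / (2 * (vp ** 2 - vs ** 2))  # Poisson's ratio
    E = 2 * mu * (1 + nu)                                     # Young's modulus (Pa)
    K = density * (vp ** 2 - (4.0 / 3.0) * vs ** 2)           # bulk modulus (Pa)
    return {"shear_GPa": mu / 1e9, "poisson": round(nu, 3),
            "young_GPa": E / 1e9, "bulk_GPa": K / 1e9}

# example using velocities within the reported ranges and an assumed density of 1900 kg/m^3
print(elastic_moduli(vp=1300.0, vs=650.0, density=1900.0))
```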

Keywords: shallow seismic refraction technique, Kurkur area, p and s-wave velocities, geotechnical parameters, bulk density, Kalabsha faults

Procedia PDF Downloads 423
5846 Functional Neural Network for Decision Processing: A Racing Network of Programmable Neurons Where the Operating Model Is the Network Itself

Authors: Frederic Jumelle, Kelvin So, Didan Deng

Abstract:

In this paper, we are introducing a model of artificial general intelligence (AGI), the functional neural network (FNN), for modeling human decision-making processes. The FNN is composed of multiple artificial mirror neurons (AMN) racing in the network. Each AMN has a similar structure programmed independently by the users and composed of an intention wheel, a motor core, and a sensory core racing at a specific velocity. The mathematics of the node’s formulation and the racing mechanism of multiple nodes in the network will be discussed, and the group decision process with fuzzy logic and the transformation of these conceptual methods into practical methods of simulation and in operations will be developed. Eventually, we will describe some possible future research directions in the fields of finance, education, and medicine, including the opportunity to design an intelligent learning agent with application in AGI. We believe that FNN has a promising potential to transform the way we can compute decision-making and lead to a new generation of AI chips for seamless human-machine interactions (HMI).

Keywords: neural computing, human machine interaction, artificial general intelligence, decision processing

Procedia PDF Downloads 121
5845 Towards an Intelligent Ontology Construction Cost Estimation System: Using BIM and New Rules of Measurement Techniques

Authors: F. H. Abanda, B. Kamsu-Foguem, J. H. M. Tah

Abstract:

Construction cost estimation is one of the most important aspects of construction project design. For generations, the process of cost estimating has been manual, time-consuming and error-prone. This has partly led to most cost estimates being unclear and riddled with inaccuracies that at times lead to over- or under-estimation of construction cost. The development of standard sets of measurement rules that are understandable by all those involved in a construction project has not totally solved the challenges. Emerging Building Information Modelling (BIM) technologies can exploit standard measurement methods to automate the cost estimation process and improve accuracy. This requires standard measurement methods to be structured in an ontological and machine-readable format so that BIM software packages can easily read them. Most standard measurement methods are still text-based in textbooks and require manual editing into tables or spreadsheets during cost estimation. The aim of this study is to explore the development of an ontology based on the New Rules of Measurement (NRM) commonly used in the UK for cost estimation. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. The challenges in this exploratory study are also reported and recommendations for future studies proposed.

Keywords: BIM, construction projects, cost estimation, NRM, ontology

Procedia PDF Downloads 547