Search results for: Network coverage
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5195

2675 Impact of Agricultural Infrastructure on Diffusion of Technology of the Sample Farmers in North 24 Parganas District, West Bengal

Authors: Saikat Majumdar, D. C. Kalita

Abstract:

The agriculture sector plays an important role in the rural economy of India. It is the backbone of the Indian economy and the dominant sector in terms of employment and livelihood. Agriculture still contributes significantly to export earnings and is an important source of raw materials, as well as of demand for many industrial products, particularly fertilizers, pesticides, agricultural implements and a variety of consumer goods. The performance of the agricultural sector influences the growth of the Indian economy. According to the 2011 Agricultural Census of India, an estimated 61.5 percent of the rural population depends on agriculture. Proper agricultural infrastructure has the potential to transform existing traditional agriculture into a modern, commercial and dynamic farming system in India through the diffusion of technology. The rate of adoption of modern technology reflects the progress of development in the agricultural sector, and the adoption of any improved agricultural technology also depends on the development of road infrastructure or the road network. The present study consisted of 300 sample farmers, of which 150 were drawn from a developed area and the remaining 150 from an underdeveloped area. The sample farmers in the developed and underdeveloped areas were selected using a multistage random sampling procedure. In the first stage, North 24 Parganas District was selected purposively. From the district, one developed and one underdeveloped block were then selected randomly. In the third stage, 10 villages were selected randomly from each block. Finally, 15 sample farmers were selected randomly from each village. The extent of adoption of technology in the two areas was measured through several parameters: the percentage of area under high-yielding variety (HYV) cereals, the percentage of area under HYV pulses, the area under hybrid vegetables, the irrigated area, the mechanically operated area, and the amount spent on fertilizers and pesticides. The percentage of area under HYV cereals in the developed and underdeveloped areas was 34.86 and 22.59, respectively, and 42.07 and 31.46 percent, respectively, for HYV pulses. The area under irrigation was 57.66 and 35.71 percent, while the mechanically operated area was 10.60 and 3.13 percent, respectively, in the developed and underdeveloped areas of North 24 Parganas District, West Bengal. These figures clearly show that the extent of adoption of technology was significantly higher in the developed area than in the underdeveloped area. A better road network helps farmers increase farm income, farm assets, cropping intensity, marketed surplus and the rate of adoption of new technology. Against this background, this paper studies the impact of agricultural infrastructure on the adoption of modern technology in agriculture in North 24 Parganas District, West Bengal.

Keywords: agricultural infrastructure, adoption of technology, farm income, road network

Procedia PDF Downloads 86
2674 Multiscale Process Modeling Analysis for the Prediction of Composite Strength Allowables

Authors: Marianna Maiaru, Gregory M. Odegard

Abstract:

During the processing of high-performance thermoset polymer matrix composites, chemical reactions occur during elevated pressure and temperature cycles, causing the constituent monomers to crosslink and form a molecular network that can gradually sustain stress. As the crosslinking process progresses, the material naturally experiences a gradual shrinkage due to the increase in covalent bonds in the network. Once the cured composite completes the cure cycle and is brought to room temperature, the thermal expansion mismatch of the fibers and matrix causes additional residual stresses to form. These compounded residual stresses can compromise the reliability of the composite material and affect the composite strength. Composite process modeling is greatly complicated by the multiscale nature of the composite architecture. At the molecular level, the degree of cure controls the local shrinkage and thermal-mechanical properties of the thermoset. At the microscopic level, the local fiber architecture and packing affect the magnitudes and locations of residual stress concentrations. At the macroscopic level, the layup sequence controls the nature of crack initiation and propagation due to residual stresses. The goal of this research is to use molecular dynamics (MD) and finite element analysis (FEA) to predict the residual stresses in composite laminates and the corresponding effect on composite failure. MD is used to predict the polymer shrinkage and thermomechanical properties as a function of degree of cure. This information is used as input into FEA to predict the residual stresses on the microscopic level resulting from the complete cure process. Virtual testing is subsequently conducted to predict strength allowables. Experimental characterization is used to validate the modeling.
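
As a minimal illustration of the degree-of-cure bookkeeping that links the MD and FEA scales, the sketch below integrates an nth-order cure-kinetics rate law and maps the resulting degree of cure to a shrinkage input; the rate constants, hold temperature, and full-cure shrinkage are hypothetical placeholders, not values from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# nth-order cure kinetics: d(alpha)/dt = A * exp(-Ea/(R*T)) * (1 - alpha)^n
A, Ea, n = 1.5e5, 60e3, 1.4    # hypothetical pre-exponential (1/s), activation energy (J/mol), order
R = 8.314                      # gas constant (J/(mol*K))
T_hold = 450.0                 # isothermal hold temperature (K), hypothetical

def cure_rate(t, alpha):
    return A * np.exp(-Ea / (R * T_hold)) * (1.0 - alpha) ** n

sol = solve_ivp(cure_rate, (0.0, 7200.0), [0.0])   # two-hour isothermal hold
alpha_final = sol.y[0, -1]

# Linear map from degree of cure to volumetric shrinkage (coefficient assumed)
full_cure_shrinkage = 0.05
print(f"degree of cure: {alpha_final:.3f}, "
      f"shrinkage passed to FEA: {full_cure_shrinkage * alpha_final:.4f}")
```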

Keywords: molecular dynamics, finite element analysis, process modeling, multiscale modeling

Procedia PDF Downloads 80
2673 MRI Quality Control Using Texture Analysis and Spatial Metrics

Authors: Kumar Kanudkuri, A. Sandhya

Abstract:

Typically, in an MRI clinical setting, several protocols are run, each indicated for a specific anatomy and disease condition. However, these protocols, or parameters within them, can change over time due to changes in the recommendations of physician groups, software updates, or the availability of new technologies. Most of the time, the changes are performed by the MRI technologist to account for time, coverage, physiological, or Specific Absorption Rate (SAR) reasons. However, it is important to give proper guidelines to MRI technologists so that they do not change parameters in ways that negatively impact image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for Quality Control (QC) in order to guarantee that the primary objectives of MRI are met. The visual evaluation of quality depends on the operator/reviewer and may vary among operators, as well as for the same operator at different times. Overcoming these constraints is essential for a more impartial evaluation of quality, which makes quantitative estimation of image quality (IQ) metrics for MRI quality control very important. To solve this problem, we propose a robust, open-source, and automated MRI image quality control tool. We designed and developed an automatic analysis tool that measures MRI IQ metrics such as Signal-to-Noise Ratio (SNR), SNR Uniformity (SNRU), Visual Information Fidelity (VIF), Feature Similarity (FSIM), Gray-Level Co-occurrence Matrix (GLCM) features, slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution, and it provided good accuracy assessment. A standardized quality report is generated that incorporates the metrics that impact diagnostic quality.
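
As a concrete example of the first metric in that list, here is a hedged sketch of an SNR computation on a phantom slice; the synthetic image, ROI placement, and the Rayleigh background correction are illustrative assumptions rather than the tool's actual implementation.

```python
import numpy as np

def snr_acr(image, signal_roi, noise_roi, rayleigh_correction=True):
    """SNR as mean signal over background noise SD; the 0.66 factor compensates
    for the Rayleigh-distributed background of magnitude images."""
    signal = image[signal_roi].mean()
    noise_sd = image[noise_roi].std(ddof=1)
    if rayleigh_correction:
        noise_sd /= 0.66
    return signal / noise_sd

# Hypothetical 256x256 phantom slice: bright disc region on a noisy background
rng = np.random.default_rng(0)
img = rng.normal(20.0, 5.0, (256, 256))
img[96:160, 96:160] += 500.0

sig_roi = (np.s_[110:150], np.s_[110:150])   # inside the "phantom"
bg_roi = (np.s_[0:32], np.s_[0:32])          # air background corner
print(f"SNR = {snr_acr(img, sig_roi, bg_roi):.1f}")
```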

Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy

Procedia PDF Downloads 144
2672 Optimization of Manufacturing Process Parameters: An Empirical Study from Taiwan's Tech Companies

Authors: Chao-Ton Su, Li-Fei Chen

Abstract:

Parameter design is crucial to improving the uniformity of a product or process. In the product design stage, parameter design aims to determine the optimal settings for the parameters of each element in the system, thereby minimizing the functional deviations of the product. In the process design stage, parameter design aims to determine the operating settings of the manufacturing processes so that non-uniformity in manufacturing processes can be minimized. Parameter design, which tries to minimize the influence of noise on the manufacturing system, plays an important role in high-tech companies. Taiwan has many well-known high-tech companies, which play key roles in the global economy. Quality remains the most important factor that enables these companies to sustain their competitive advantage. In Taiwan, however, many high-tech companies face various quality problems. A common challenge is related to root causes and defect patterns: in the R&D stage, root causes are often unknown and defect patterns are difficult to classify. Additionally, data collection is not easy, and even when high-volume data can be collected, data interpretation is difficult. To overcome these challenges, high-tech companies in Taiwan use more advanced quality improvement tools. In addition to traditional statistical methods and quality tools, the new trend is the application of powerful tools such as neural networks, fuzzy theory, data mining, industrial engineering, operations research, and innovation skills. In this study, several examples of optimizing parameter settings for manufacturing processes in Taiwan's tech companies are presented to illustrate the effectiveness of the proposed approach. Finally, traditional experimental design is compared with the proposed approach for process optimization.
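
To make the neural network/genetic algorithm pairing concrete, the sketch below runs a small real-coded GA against a quadratic stand-in for a fitted surrogate of the process response; the loss surface, population size, and operator settings are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a fitted surrogate (e.g., a neural network) of quality loss
# as a function of two normalized process settings; minimum at (0.3, 0.7).
def quality_loss(x):
    return (x[..., 0] - 0.3) ** 2 + 2 * (x[..., 1] - 0.7) ** 2

# Minimal real-coded genetic algorithm over the unit square
pop = rng.random((40, 2))
for gen in range(60):
    fitness = quality_loss(pop)
    parents = pop[np.argsort(fitness)[:20]]            # truncation selection
    pairs = rng.integers(0, 20, (40, 2))
    w = rng.random((40, 1))
    children = w * parents[pairs[:, 0]] + (1 - w) * parents[pairs[:, 1]]  # blend crossover
    children += rng.normal(0, 0.02, children.shape)    # Gaussian mutation
    pop = np.clip(children, 0.0, 1.0)

best = pop[np.argmin(quality_loss(pop))]
print("best parameter settings:", best.round(3))       # should approach (0.3, 0.7)
```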

Keywords: quality engineering, parameter design, neural network, genetic algorithm, experimental design

Procedia PDF Downloads 131
2671 Study of Information Technology Support to Knowledge Sharing in Social Enterprises

Authors: Maria Granados

Abstract:

Information technology (IT) facilitates the management of knowledge in organisations through the effective leverage of the collective experience and knowledge of employees. This supports information processing needs, and it enables and facilitates the sense-making activities of knowledge workers. The study of IT support for knowledge management (KM) has been carried out mainly in larger organisations, where resources and competitive conditions can trigger the use of KM. However, there is still a lack of understanding of how IT can support the management of knowledge under different organisational settings influenced by: constant tensions between social and economic objectives, more focus on sustainability than competitiveness, limited resources, and high levels of democratic participation and intrinsic motivation among employees. All these conditions are present in Social Enterprises (SEs), which are normally micro and small businesses that trade to tackle social problems and improve communities, people's life chances, and the environment. Their importance to society and economies is thus increasing. However, there is still a need for more understanding of how these organisations operate, perform, innovate and scale up. This knowledge is crucial to design and provide accurate strategies to enhance the sector and increase its impact and coverage. To obtain a conceptual and empirical understanding of how IT can facilitate KM in the particular organisational conditions of SEs, a quantitative study was conducted with 432 owners and senior members of SEs in the UK, underpinned by 21 interviews. The findings demonstrated that IT mainly supported the retrieval and storage of necessary information in SEs, and supported collaborative work and communication among enterprise members to a lesser extent. It was established, however, that SEs were using cloud solutions, web 2.0 tools, Skype and centralised shared servers to manage their knowledge informally. The main impediments to SEs relying more on IT solutions can be linked to economic and human constraints. These findings elucidate new perspectives that can contribute not only to SEs and SE supporters, but also to other businesses.

Keywords: social enterprises, knowledge management, information technology, collaboration, small firms

Procedia PDF Downloads 254
2670 Iodine Nutritional Knowledge of Food Handlers: A Capricorn and Waterberg District Study, Limpopo Province, South Africa

Authors: Solomon Ngoako Mabapa, Selekane Ananias Motadi, Nteseng Mailula, Hlekani Vanessa Mbhatsani, Lindelani Fhumudzani Mushaphi

Abstract:

Background: South Africa has made good progress towards eliminating iodine deficiency disorders (IDD) as far as the implementation of salt iodization and the coverage of iodized salt are concerned, but the education and promotion aspects of the iodized salt intervention are seriously lacking. Objective: To determine the iodine nutritional knowledge of food handlers at primary schools under the National School Nutrition Programme in Capricorn and Waterberg districts. Design: This study included 300 food handlers recruited from 95 primary schools in Capricorn district and 105 primary schools in Waterberg district, Limpopo Province, South Africa. Primary schools and study participants were conveniently selected. The data were collected by means of a structured questionnaire covering the socio-demographic characteristics of the participants, general knowledge of salt fortification, and a knowledge test. Results: Iodine knowledge among the food handlers in the two districts was poor, with an overall iodine nutritional knowledge score of 12% on the Likert scale. The mean scores on the Likert scale for Capricorn and Waterberg districts were 17% and 8.6% respectively, indicating poor iodine nutritional knowledge. Conclusion: Both districts had poor iodine nutritional knowledge. Nutrition education for the public on the importance of iodine and the consequences of iodine deficiency disorder, together with continued mass-media advocacy of iodine fortification, is recommended as an intervention strategy to combat the escalating problem of micronutrient malnutrition.

Keywords: food handlers, nutritional knowledge, iodine, National School Nutrition Programme

Procedia PDF Downloads 220
2669 Angiogenesis and Blood Flow: The Role of Blood Flow in Proliferation and Migration of Endothelial Cells

Authors: Hossein Bazmara, Kaamran Raahemifar, Mostafa Sefidgar, Madjid Soltani

Abstract:

Angiogenesis is the formation of new blood vessels from existing vessels. Because blood flows through the vessels during angiogenesis, blood flow plays an important role in regulating the angiogenesis process. Multiple mathematical models of angiogenesis have been proposed to simulate the formation of the complicated network of capillaries around a tumor. In this work, a multi-scale model of angiogenesis is developed to show the effect of blood flow on capillaries and network formation. This model spans multiple temporal and spatial scales, i.e. intracellular (molecular), cellular, and extracellular (tissue) scales. At the intracellular or molecular scale, the signaling cascade of endothelial cells is obtained. Two main stages in the development of a vessel are considered. In the first stage, single sprouts are extended toward the tumor. In this stage, the main regulator of endothelial cell behavior is the signals from the extracellular matrix. After anastomosis and the formation of closed loops, blood flow starts in the capillaries. In this stage, blood flow-induced signals regulate endothelial cell behavior. At the cellular scale, growth and migration of endothelial cells are modeled with a discrete lattice Monte Carlo method called the Cellular Potts Model (CPM). At the extracellular (tissue) scale, diffusion of tumor angiogenic factors in the extracellular matrix, formation of closed loops (anastomosis), and shear stress induced by blood flow are considered. The model is able to simulate the formation of a closed loop and its extension. The results are validated against experimental data. They show that, without blood flow, the capillaries are not able to maintain their integrity.
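
A minimal sketch of the Cellular Potts Model machinery used at the cellular scale follows: one cell on a periodic lattice evolves by Metropolis spin-copy attempts under an adhesion term plus an area constraint. The energies, temperature, and lattice size are illustrative, and real CPM implementations update the energy locally rather than recomputing the full Hamiltonian each step.

```python
import numpy as np

rng = np.random.default_rng(2)
L, T, lam, A_target = 50, 10.0, 1.0, 100   # lattice size, temperature, area weight, target area
J = 16.0                                   # cell-medium adhesion energy (hypothetical)

lattice = np.zeros((L, L), dtype=int)
lattice[20:30, 20:30] = 1                  # one cell (spin 1) embedded in medium (spin 0)
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

def hamiltonian(lat):
    # adhesion: J per unlike neighbor pair (periodic), plus quadratic area constraint
    mismatch = (lat != np.roll(lat, 1, 0)).sum() + (lat != np.roll(lat, 1, 1)).sum()
    return J * mismatch + lam * ((lat == 1).sum() - A_target) ** 2

H = hamiltonian(lattice)
for _ in range(20000):                     # Metropolis spin-copy attempts
    i, j = rng.integers(0, L, 2)
    di, dj = moves[rng.integers(0, 4)]
    source = lattice[(i + di) % L, (j + dj) % L]
    if source == lattice[i, j]:
        continue
    trial = lattice.copy()
    trial[i, j] = source                   # neighbor copies its spin into (i, j)
    dH = hamiltonian(trial) - H
    if dH <= 0 or rng.random() < np.exp(-dH / T):
        lattice, H = trial, H + dH

print("final cell area:", (lattice == 1).sum())   # should hover near A_target
```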

Keywords: angiogenesis, endothelial cells, multi-scale model, cellular Potts model, signaling cascade

Procedia PDF Downloads 409
2668 An Investigation Enhancing E-Voting Application Performance

Authors: Aditya Verma

Abstract:

E-voting using blockchain provides a distributed system in which data is present on every node in the network and is reliable and secure owing to blockchain's immutability. This work compares various blockchain consensus algorithms used for e-voting applications in the past, based on performance and node scalability, chooses the optimal one, and improves on a previous implementation by proposing solutions for the loopholes of that optimally performing consensus algorithm in our chosen application, e-voting.

Keywords: blockchain, parallel BFT, consensus algorithms, performance

Procedia PDF Downloads 155
2667 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review

Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha

Abstract:

Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. Natural language processing techniques have increasingly been used for text classification and unbiased decision-making, yet properly classifying this textual information in a given context remains difficult. As a result, we conducted a systematic review of previous literature on sentiment classification and the AI-based techniques that have been used, in order to better understand how to design and develop a robust, more accurate sentiment classifier that can correctly classify social media text in a given context as hate speech or inverted compliments with a high level of accuracy. We evaluated over 250 articles from digital sources like ScienceDirect, ACM, Google Scholar, and IEEE Xplore and whittled them down to 31 relevant studies. Findings revealed that deep learning approaches such as CNN, RNN, BERT, and LSTM outperformed various machine learning techniques in terms of accuracy. A large dataset is also necessary for developing a robust sentiment classifier and can be obtained from sources like Twitter, movie reviews, Kaggle, SST, and SemEval Task 4. Hybrid deep learning techniques like CNN+LSTM, CNN+GRU, and CNN+BERT outperformed both single deep learning techniques and machine learning techniques. The Python programming language outperformed Java for sentiment analyzer development due to its simplicity and AI library support. Based on some of the important findings from this study, we make recommendations for future research.
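
A hedged sketch of one of the top-performing hybrids named above (CNN+LSTM) follows; the vocabulary size, sequence length, and layer widths are illustrative choices, not settings reported by the reviewed studies.

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, max_len = 20000, 100           # placeholder preprocessing choices
model = tf.keras.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, 128),     # token embeddings
    layers.Conv1D(64, 5, activation="relu"),   # CNN: local n-gram features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                       # LSTM: longer-range dependencies
    layers.Dense(1, activation="sigmoid"),     # hate speech vs. inverted compliment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```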

Keywords: artificial intelligence, natural language processing, sentiment analysis, social network, text

Procedia PDF Downloads 103
2666 Killing Your Children to Hurt Your Partner: Motivations for Revenge Filicide

Authors: Melanie Moen, Christiaan Bezuidenhout

Abstract:

Cases of parents murdering their offspring are incomprehensible but sadly as old as humanity itself. The act of killing one's own child is known as filicide. Revenge filicide is an act in which one parent kills their own offspring in retribution, to hurt and upset the other parent. The true extent of filicide in South Africa is unknown, but in the United States, filicide constitutes roughly 2.5% of all murders. The focus of this contribution is to extend the knowledge of revenge filicide. Data were collected through court documents and newspaper articles. Newspaper coverage of murder cases is between 75% and 100% accurate compared to official sources, and because family-related murders are often violent in nature, these crimes receive extensive media coverage. The cases of twenty revenge filicide murderers (14 male and 6 female) were qualitatively analyzed to determine the motivations and offense characteristics of revenge filicide offenders. Findings related to a loss of social identity due to rejection; extreme rage-type anger; external locus of control; sadism; a desire to cause pain; and a need to inflict harm. The initial emotional response may escalate from mild anger to narcissistic rage, which eventually culminates in the murder of the child to punish and hurt the other parent and to restore control. To our knowledge, this study is the first to systematically examine the motivations for revenge filicides from a South African perspective. Filicide is a complex phenomenon with diverse possibilities and reasons why it occurs. However, it was apparent in this study that the motivations for revenge filicides were often linked to complex personal and interpersonal relationship problems. Further research within this field is imperative.

Keywords: revenge filicide, child murder, rage, anger, narcissistic rage, parent kills child

Procedia PDF Downloads 69
2665 Nursing System Development in Patients Undergoing Operation in 3C Ward: Early Ambulation in Patients with Head and Neck Cancer

Authors: Artitaya Sabangbal, Darawan Augsornwan, Palakorn Surakunprapha, Lalida Petphai

Abstract:

Background: Srinagarind Hospital Ward 3C treats about 180 patients with head and neck cancer per year. Almost all of these patients suffer from pain, fatigue, low self-image and swallowing problems, and when the tumor grows larger they develop breathing problems. Many of them have complications after the operation, such as pressure sores, pneumonia and deep vein thrombosis. Nursing activity, especially promoting patients' early ambulation, is very important in preventing these complications. The objective of this study was to develop an early ambulation protocol for patients with head and neck cancer undergoing operation. Method: This study is one part of nursing system development for patients undergoing operation in Ward 3C. It is participatory action research divided into three phases. Phase 1, situation review: in this phase we reviewed clinical outcomes and the process of care from documents such as nurses' notes, and interviewed nurses, patients and families about early ambulation. Phase 2, protocol development: nursing interventions for early ambulation were identified from previous studies and a protocol was established, including a picture package for early ambulation. Phase 3: implementation and evaluation. Results: All patients with head and neck cancer were able to follow the early ambulation protocol after operation: 85% of patients followed the protocol within 2 days after the operation and 100% within 3 days. No complications occurred. Patient satisfaction was at a very good level for 58% of patients and a good level for 42%. Length of hospital stay was 6 days for patients with wide excision and 16 days for patients with flap coverage. Conclusion: The early ambulation protocol is appropriate for patients with head and neck cancer who undergo operation. It can restore physical health, reduce complications and increase patient satisfaction.

Keywords: nursing system, early ambulation, head and neck cancer, operation

Procedia PDF Downloads 213
2664 A Mathematical Framework for Expanding a Railway’s Theoretical Capacity

Authors: Robert L. Burdett, Bayan Bevrani

Abstract:

Analytical techniques for measuring and planning railway capacity expansion activities are considered in this article. A preliminary mathematical framework involving track duplication and section subdivision is proposed for this task. In railways, these features have a great effect on network performance, and for this reason they are considered here. Additional motivation arises from the limitations of prior models, which have not included them.

Keywords: capacity analysis, capacity expansion, railways, track sub division, track duplication

Procedia PDF Downloads 344
2663 Disease Trajectories in Relation to Poor Sleep Health in the UK Biobank

Authors: Jiajia Peng, Jianqing Qiu, Jianjun Ren, Yu Zhao

Abstract:

Background: Insufficient sleep has come into focus as a public health epidemic. However, a comprehensive analysis of the disease trajectories associated with unhealthy sleep habits is still lacking. Objective: This study sought to comprehensively clarify the disease trajectories related to the overall poor sleep pattern and to individual unhealthy sleep behaviors. Methods: 410,682 participants with available information on sleep behaviors were drawn from the UK Biobank at the baseline visit (2006-2010). These participants were classified as having high or low risk for each sleep behavior and were followed from 2006 to 2020 to identify increased risks of diseases. We used Cox regression to estimate the associations of high-risk sleep behaviors with elevated risks of diseases, and further established disease trajectories using the significant diseases. Low-risk sleep behaviors were defined as the reference. Thereafter, we also examined the trajectory of diseases linked with the overall poor sleep pattern by combining all of these unhealthy sleep behaviors. Network analysis was used to visualize the disease trajectories. Results: During a median follow-up of 12.2 years, we noted 12 medical conditions in relation to unhealthy sleep behaviors and the overall poor sleep pattern among 410,682 participants with a median age of 58.0 years. The majority of participants had unhealthy sleep behaviors; in particular, 75.62% reported frequent sleeplessness and 72.12% had abnormal sleep durations. In addition, a total of 16,032 individuals with an overall poor sleep pattern were identified. In general, three major disease clusters were associated with overall poor sleep status and unhealthy sleep behaviors according to the disease trajectory and network analysis, mainly in the digestive, musculoskeletal and connective tissue, and cardiometabolic systems. Of note, two circulatory disease pairs (I25→I20 and I48→I50) showed the highest risks following these unhealthy sleep habits. Additionally, significant differences in disease trajectories were observed in relation to sex and sleep medication among individuals with poor sleep status. Conclusions: We identified the major disease clusters and high-risk diseases following overall poor sleep health and unhealthy sleep behaviors, respectively. This suggests the need to investigate potential interventions targeting these key pathways.
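
A minimal sketch of the per-outcome Cox step described above, using the lifelines library; the data frame below is a synthetic stand-in for the UK Biobank variables (column names and values are hypothetical).

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hazard of one outcome (e.g., incident I48) vs. baseline sleep behaviors,
# adjusted for age; rows are fabricated participants for illustration only.
df = pd.DataFrame({
    "time_years":    [8.2, 12.0, 5.5, 12.0, 9.7, 3.1, 11.4, 7.8],
    "event":         [1, 0, 1, 0, 1, 1, 0, 1],     # 1 = outcome observed
    "sleeplessness": [1, 0, 1, 1, 1, 0, 0, 1],     # 1 = high-risk behavior
    "abnormal_dur":  [1, 0, 0, 1, 1, 0, 0, 1],
    "age":           [61, 67, 63, 49, 66, 58, 52, 64],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="event")
cph.print_summary()   # hazard ratios per sleep behavior, adjusted for age
```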

Keywords: sleep, poor sleep, unhealthy sleep behaviors, disease trajectory, UK Biobank

Procedia PDF Downloads 70
2662 GRABTAXI: A Taxi Revolution in Thailand

Authors: Danuvasin Charoen

Abstract:

The study investigates the business process and business model of GRABTAXI. The paper also discusses how the company implemented strategies to gain competitive advantages. The data are derived from the analysis of secondary data and in-depth interviews with staff, taxi drivers, and key customers. The findings indicate that the company's competitive advantages come from being the first mover, emphasising the ease of use and tangible benefits of the application, and using a network-effect strategy.

Keywords: taxi, mobile application, innovative business model, Thailand

Procedia PDF Downloads 288
2661 Audit of TPS Photon Beam Dataset for Small Field Output Factors Using OSLDs against RPC Standard Dataset

Authors: Asad Yousuf

Abstract:

Purpose: The aim of the present study was to audit a treatment planning system (TPS) beam dataset for small field output factors against the standard dataset produced by the Radiological Physics Center (RPC) from a multicenter study. Such data are crucial for the validity of special techniques, i.e., IMRT or stereotactic radiosurgery. Materials/Method: In this study, multiple small field output factor datasets were measured and calculated for 6 to 18 MV x-ray beams using the RPC-recommended methods. These beam datasets were measured at 10 cm depth for 10 × 10 cm² to 2 × 2 cm² field sizes, defined by collimator jaws at 100 cm. The measurements were made with Landauer nanoDot OSLDs, whose volume is small enough to gather a full ionization reading even for a 1 × 1 cm² field size. At our institute, the beam data, including output factors, were commissioned at 5 cm depth with an SAD setup. For comparison with the RPC data, the output factors were converted to an SSD setup using tissue phantom ratios; an SSD setup also ensures full coverage of the ion chamber in the 2 × 2 cm² field size. The measured output factors were also compared with those calculated by Eclipse™ treatment planning software. Result: The measured and calculated output factors agree with the RPC dataset within 1% and 4%, respectively. The larger discrepancies in the TPS reflect the increased challenge of converting measured data into a commissioned beam model for very small fields. Conclusion: OSLDs are a simple, durable, and accurate tool to verify doses delivered using small photon fields down to a 1 × 1 cm² field size. The study emphasizes that the treatment planning system should always be evaluated for small field output factors to ensure accurate dose delivery in the clinical setting.
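
The output-factor arithmetic underlying this audit reduces to normalizing each field's reading to the 10 × 10 cm² reference; the sketch below also shows one plausible form of the SAD-to-SSD rescaling with tissue phantom ratios (all readings and TPR values are placeholders, and the conversion form is an assumption rather than the study's exact procedure).

```python
# Hypothetical OSLD readings (nC) at 10 cm depth, jaw-defined fields
readings_nC = {"10x10": 12.40, "4x4": 11.55, "2x2": 10.85}

def output_factor(field, reference="10x10"):
    """Output factor = field reading normalized to the reference field."""
    return readings_nC[field] / readings_nC[reference]

tpr = {"10x10": 0.740, "2x2": 0.690}     # TPR at 10 cm depth, placeholders
of_sad_2x2 = output_factor("2x2")
of_ssd_2x2 = of_sad_2x2 * tpr["2x2"] / tpr["10x10"]   # assumed conversion form
print(f"OF(2x2) SAD: {of_sad_2x2:.3f}; converted to SSD: {of_ssd_2x2:.3f}")
```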

Keywords: small field dosimetry, optically stimulated luminescence, audit treatment, radiological physics center

Procedia PDF Downloads 310
2660 Intrusion Detection Techniques in NaaS in the Cloud: A Review

Authors: Rashid Mahmood

Abstract:

Network as a service (NaaS) has become well known over the last few years in many applications, such as mission-critical applications. In NaaS, prevention methods alone are not adequate where security is concerned, so detection methods should be added to address security issues in NaaS. Authentication and encryption were considered the first solutions to the NaaS security problem, but they are no longer sufficient as NaaS use increases. In this paper, we present the concept of intrusion detection, survey some of the major intrusion detection techniques in NaaS, and compare them across several important criteria.

Keywords: IDS, cloud, NaaS, detection

Procedia PDF Downloads 303
2659 Improving Lane Detection for Autonomous Vehicles Using Deep Transfer Learning

Authors: Richard O’Riordan, Saritha Unnikrishnan

Abstract:

Autonomous Vehicles (AVs) are incorporating an increasing number of ADAS features, including automated lane-keeping systems. In recent years, many research papers on lane detection algorithms have been published, ranging from computer vision techniques to deep learning methods. The transition from the lower levels of autonomy defined in the SAE framework to higher autonomy levels requires increasingly complex models and algorithms that must be highly reliable in their operation and functionality. Furthermore, these algorithms have no room for error when operating at high levels of autonomy. Although current research details existing computer vision and deep learning algorithms, their methodologies and individual results, it also details the challenges faced by these algorithms and the resources needed to operate them, along with shortcomings experienced during lane detection in certain weather and lighting conditions. This paper explores these shortcomings and attempts to implement a lane detection algorithm that could be used to improve AV lane detection systems. The paper uses a pre-trained LaneNet model to detect lane or non-lane pixels using binary segmentation as the base detection method, first on the existing BDD100k dataset and then on a custom, locally generated dataset. The first set of selected roads are modern, well-laid roads with up-to-date infrastructure and lane markings, while the second road network is older, with infrastructure and lane markings reflecting its age. The performance of the proposed method is evaluated on the custom dataset and compared against the BDD100k dataset. In summary, this paper uses transfer learning to provide a fast and robust lane detection algorithm that can handle various road conditions and provide accurate lane detection.
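
A transfer-learning sketch in the spirit of the approach described: pretrained LaneNet weights are not distributed with torchvision, so a DeepLabV3 backbone stands in to show the same recipe of freezing the pretrained encoder and retraining a two-class (lane/non-lane) segmentation head.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# Freeze the pretrained encoder, retrain a binary (lane / non-lane) head.
model = deeplabv3_resnet50(weights="DEFAULT")
for p in model.backbone.parameters():
    p.requires_grad = False
model.classifier[4] = nn.Conv2d(256, 2, kernel_size=1)   # 2-class logits

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

x = torch.randn(2, 3, 256, 512)            # stand-in for a BDD100k batch
y = torch.randint(0, 2, (2, 256, 512))     # stand-in binary lane masks
optimizer.zero_grad()
loss = criterion(model(x)["out"], y)
loss.backward()
optimizer.step()
print(f"one fine-tuning step, loss = {loss.item():.3f}")
```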

Keywords: ADAS, autonomous vehicles, deep learning, LaneNet, lane detection

Procedia PDF Downloads 86
2658 Cinematic Liberty vs. Offending Social, Religious Beliefs: With Special Reference to the Controversial Contents in Cinema and Print Media

Authors: Govind Ji Pandey

Abstract:

Divergent opinions are important for a society's development, subject to reasonable restrictions. The world recently witnessed one of the most violent protests by a group against the editor and publisher of the magazine 'Charlie Hebdo' for publishing a cartoon of their religious leader. Supporters of freedom of speech and expression around the world were shocked and termed it the strongest attack on free speech. People all around the world condemned the killing of the journalists, but quieter voices from several quarters also called for reasonable restrictions on freedom of speech and expression. Of late, Indian society has witnessed many protests against, and defenses of, films with controversial content. It is the beauty of Indian democracy that it gives everyone an opportunity for discussion and debate on any issue that challenges established social norms. However, many organizations as well as individuals misuse this for personal benefit. Many film directors have faced protests from several quarters over their controversial themes. This research analyzes the controversial content published in print media and shown in films. To understand the nature and frequency of such media reports, the content analysis technique is used. The research also highlights public perception of the controversies. To gauge popular opinion on the coverage of controversial content in cinema and print media, five hundred people from Lucknow, UP, India were randomly selected. The findings of this research are important for understanding the response of media and society to the controversial content presented in cinema and print media. The research highlights how a handful of people can curb free speech in a democratic country like India.

Keywords: cinema, censor board, free speech, liberty, social-religious beliefs

Procedia PDF Downloads 246
2657 Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight, multipath, and weather conditions, however, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme, GAILoc, for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results proved that, in comparison to traditional methods, the proposed GAILoc method can significantly improve positioning performance and reduce radio map construction costs.
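
A small sketch of the t-SNE feature-extraction step mentioned above, on synthetic stand-ins for hybrid WLAN/LTE received-signal-strength fingerprints (rows are reference points, columns are access points/eNodeBs; all values are fabricated for illustration).

```python
import numpy as np
from sklearn.manifold import TSNE

# Three clusters of synthetic RSS fingerprints around different mean levels,
# mimicking reference points in different radio environments.
rng = np.random.default_rng(3)
fingerprints = np.vstack([rng.normal(m, 4.0, (50, 20)) for m in (-60, -75, -90)])

emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(fingerprints)
print(emb.shape)   # (150, 2): low-dimensional features fed to the localizer
```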

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 53
2656 Remote Sensing and GIS Based Methodology for Identification of Low Crop Productivity in Gautam Buddha Nagar District

Authors: Shivangi Somvanshi

Abstract:

Poor crop productivity in salt-affected environments in the country is due to insufficient and untimely canal supply to agricultural land and inefficient field water management practices. The situation could degrade further due to inadequate maintenance of the canal network, ongoing secondary soil salinization and waterlogging, and worsening groundwater quality. Large patches of low productivity occur in irrigation commands due to waterlogging and salt-affected soil, particularly in years of scarce rainfall. Satellite remote sensing has been used for mapping areas of low crop productivity, waterlogging and salt in irrigation commands, but the spatial results obtained for these problems so far are less reliable for further use due to rapid changes in soil quality parameters over the years. The existing spatial databases of the canal network and flow data, groundwater quality and salt-affected soil were obtained from central and state line departments/agencies and integrated with GIS. An integrated methodology based on remote sensing and GIS has therefore been developed in the ArcGIS environment on the basis of canal supply status, groundwater quality, salt-affected soils, and the satellite-derived vegetation index (NDVI), salinity index (NDSI) and waterlogging index (NSWI). This methodology was tested for the identification and delineation of areas of low productivity in the Gautam Buddha Nagar district (Uttar Pradesh). It was found that the affected area lies mainly in the Dankaur and Jewar blocks of the district. The problem area was verified with ground data and found to be approximately 78% accurate. The methodology has the potential to be used in other irrigation commands in the country to obtain reliable spatial data on low crop productivity.
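
The index arithmetic behind the methodology is simple band algebra; a sketch on synthetic reflectance rasters follows (the NDSI form shown is one common salinity-index definition, assumed here since index formulas vary between studies).

```python
import numpy as np

# Synthetic red and near-infrared reflectance rasters standing in for the
# satellite bands of the study area.
rng = np.random.default_rng(4)
red, nir = (rng.uniform(0.05, 0.6, (100, 100)) for _ in range(2))

ndvi = (nir - red) / (nir + red)   # vegetation vigor: higher = healthier crop
ndsi = (red - nir) / (red + nir)   # one common salinity-index form (assumed);
                                   # note this two-band form is exactly -NDVI
print(f"mean NDVI {ndvi.mean():.3f}, mean NDSI {ndsi.mean():.3f}")
```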

Keywords: remote sensing, GIS, salt affected soil, crop productivity, Gautam Buddha Nagar

Procedia PDF Downloads 273
2655 The Nexus between Migration and Human Security: The Case of Ethiopian Female Migration to Sudan

Authors: Anwar Hassen Tsega

Abstract:

International labor migration is an integral part of the modern globalized world, although the phenomenon has its roots in earlier periods of human history. This paper discusses the relatively new phenomenon of female migration in Africa. In the past, African women migrants were only spouses or dependent family members. But as modernity swept most African societies, with rising unemployment rates, there is evidence everywhere in Africa that women's labor migration is a growing phenomenon that deserves to be understood in the context of human security research. This work explores these issues further, focusing on the experience of Ethiopian women labor migrants to Sudan. The migration of Ethiopian people to Sudan is historical; nevertheless, labor migration mainly started with the discovery and subsequent exploration of oil in Sudan. Because the paper is concerned with the human security of migrant workers, we need to be certain that the migration process provides a decent wage, good working conditions, the necessary social security coverage, and labor protection as a whole. However, migration to Sudan is not always safe, and female migrants become subject to violence at the hands of brokers, employers and migration officials. The paper therefore argues that identifying the vulnerable stages and the major problems facing female migrant workers at various stages of migration is a prerequisite to combating the problem and securing the lives of migrant workers. The major problems female migrants face include heightened gender-based violence; underpayment; verbal, physical and sexual abuse; and other forms of mistreatment, including beatings and slaps. This peculiar situation can be attributed to the fact that most of these women are irregular migrants who fall into the category of unskilled and/or illiterate migrants.

Keywords: Ethiopia, human security, labor migration, Sudan

Procedia PDF Downloads 232
2654 Green Crypto Mining: A Quantitative Analysis of the Profitability of Bitcoin Mining Using Excess Wind Energy

Authors: John Dorrell, Matthew Ambrosia, Abilash

Abstract:

This paper employs econometric analysis to quantify the potential profit wind farms can earn by allocating excess wind energy to power bitcoin mining machines. Cryptocurrency mining consumes a substantial amount of electricity worldwide, and wind farms produce a significant amount of energy that is lost because of the intermittent nature of the resource: supply does not always match consumer demand. By combining the weaknesses of these two technologies, we can improve efficiency and create a sustainable path to mine cryptocurrencies. This paper uses historical wind energy data from the ERCOT network in Texas and cryptocurrency data from 2000-2021 to create 4-year return-on-investment projections. Our research model incorporates the price of bitcoin, the price of the miner, the hash rate of the miner relative to the network hash rate, the block reward, the bitcoin transaction fees awarded to miners, the mining pool fees, the cost of the electricity and the percentage of time the miner will be running, and demonstrates that wind farms generate enough excess energy to mine bitcoin profitably. Excess wind energy can be used as a financial battery, converting wasted electricity into economic energy. The findings of our research show that wind energy producers can earn a profit while taking away little, if any, electricity from the grid. According to our results, bitcoin mining could give as much as a 1347% and an 805% return on investment with starting dates of November 1, 2021, and November 1, 2022, respectively, using wind farm curtailment. This paper is helpful to policymakers and investors in determining efficient and sustainable ways to power our economic future. It proposes a practical solution to the problem of crypto mining energy consumption and creates a more sustainable energy future for bitcoin.
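
A back-of-envelope version of the ROI accounting enumerated above (hash-rate share times block cadence times reward, net of fees and power cost); every number below is an illustrative placeholder, not data from the study.

```python
# Miner's share of the network hash rate
miner_ths, network_ehs = 100, 400             # 100 TH/s miner; 400 EH/s network
share = miner_ths / (network_ehs * 1e6)       # TH/s over EH/s (1 EH = 1e6 TH)

block_reward, fees_btc = 6.25, 0.05           # BTC per block + avg fees (placeholder era)
pool_fee, uptime = 0.02, 0.85                 # pool cut; fraction of time on curtailed wind
btc_per_day = share * 144 * (block_reward + fees_btc) * (1 - pool_fee) * uptime

power_kw, price_kwh = 3.0, 0.0                # curtailed energy assumed free to the farm
daily_cost = power_kw * 24 * price_kwh * uptime

btc_price, miner_cost = 40_000, 5_000         # placeholders
roi_4y = (4 * 365 * (btc_per_day * btc_price - daily_cost) - miner_cost) / miner_cost
print(f"{btc_per_day:.6f} BTC/day -> 4-year ROI: {roi_4y:.0%}")
```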

Keywords: bitcoin, mining, economics, energy

Procedia PDF Downloads 19
2653 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring

Authors: Daniel Fundi Murithi

Abstract:

Data from economic, social, clinical, and industrial studies are often incomplete or incorrect due to censoring, and such data may have adverse effects if used in estimation problems. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and Newton-Raphson (NR) algorithms. These algorithms are compared because both iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that in most simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller RMSE than those generated via the Newton-Raphson algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization algorithm performs better than the Newton-Raphson algorithm in all simulation cases under the progressive type-II censoring scheme.
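
For orientation, the two-parameter Rayleigh density is f(x; µ, λ) = ((x − µ)/λ²)·exp(−(x − µ)²/(2λ²)) for x > µ. The sketch below numerically maximizes the complete-sample log-likelihood; the survival-function terms that progressive type-II censoring adds for removed units are omitted for brevity, and the generic optimizer stands in for the paper's EM and NR iterations.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate a complete sample via the inverse CDF: x = mu + lam*sqrt(-2 ln(1-U))
rng = np.random.default_rng(5)
mu_true, lam_true = 2.0, 1.5
x = mu_true + lam_true * np.sqrt(-2 * np.log1p(-rng.random(200)))

def negloglik(theta):
    mu, lam = theta
    if lam <= 0 or mu >= x.min():       # support requires x > mu
        return np.inf
    z = x - mu
    return -(np.log(z).sum() - 2 * len(x) * np.log(lam)
             - (z ** 2).sum() / (2 * lam ** 2))

res = minimize(negloglik, x0=[x.min() - 0.5, x.std()], method="Nelder-Mead")
print("MLE (mu, lambda):", res.x.round(3))   # should be near (2.0, 1.5)
```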

Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring

Procedia PDF Downloads 146
2652 Constructing a Probabilistic Ontology from DBLP Data

Authors: Emna Hlel, Salma Jamousi, Abdelmajid Ben Hamadou

Abstract:

Every knowledge representation model for real-world applications must be able to cope with the effects of uncertain phenomena. One of the main defects of classical ontologies is their inability to represent and reason with uncertainty. To remedy this defect, we propose a method for constructing a probabilistic ontology that integrates uncertain information into an ontology modeling a set of basic DBLP (Digital Bibliography & Library Project) publications, using a probabilistic model.

Keywords: classical ontology, probabilistic ontology, uncertainty, Bayesian network

Procedia PDF Downloads 331
2651 Bioinformatic Prediction of Hub Genes by Analysis of Signaling Pathways, Transcriptional Regulatory Networks and DNA Methylation Pattern in Colon Cancer

Authors: Ankan Roy, Niharika, Samir Kumar Patra

Abstract:

An anomalous nexus of complex topological assemblies and spatiotemporal epigenetic choreography at chromosomal territories may form the most sophisticated regulatory layer of gene expression in cancer. Colon cancer is one of the leading malignant neoplasms of the lower gastrointestinal tract worldwide, and there is still a paucity of information about the complex molecular mechanisms of colonic cancerogenesis. Bioinformatics prediction and analysis help to identify essential genes and significant pathways for monitoring and conquering this deadly disease. The present study investigates and explores potential hub genes as biomarkers and effective therapeutic targets for colon cancer treatment. Gene expression profile datasets from colon cancer patient samples, such as GSE44076, GSE20916, and GSE37364, were downloaded from the Gene Expression Omnibus (GEO) database and thoroughly screened using the GEO2R tool and Funrich software to find common differentially expressed genes (DEGs). Other approaches, including Gene Ontology (GO) and KEGG pathway analysis, Protein-Protein Interaction (PPI) network construction and hub gene investigation, Overall Survival (OS) analysis, gene correlation analysis, methylation pattern analysis, and hub gene-transcription factor regulatory network construction, were performed and validated using various bioinformatics tools. Initially, we identified 166 DEGs, including 68 up-regulated and 98 down-regulated genes. Up-regulated genes were mainly associated with cytokine-cytokine receptor interaction, the IL17 signaling pathway, ECM-receptor interaction, focal adhesion and the PI3K-Akt pathway. Down-regulated genes were enriched in metabolic pathways, retinol metabolism, steroid hormone biosynthesis, and bile secretion. From the protein-protein interaction network, thirty hub genes with high connectivity were selected using the MCODE and cytoHubba plugins. Survival analysis, expression validation, correlation analysis, and methylation pattern analysis were further verified using TCGA data. Finally, we predicted COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as potential master regulators in colonic cancerogenesis. Moreover, our experimental data highlight that disruption of lipid rafts and the RAS/MAPK signaling cascade affects this gene hub at the mRNA level. We identified COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as determinant hub genes in colon cancer progression. They can be considered biomarkers for diagnosis and promising therapeutic targets in colon cancer treatment. Additionally, our experimental data suggest that these signaling pathways act as a connecting link between the membrane hub and the gene hub.

Keywords: hub genes, colon cancer, DNA methylation, epigenetic engineering, bioinformatic predictions

Procedia PDF Downloads 112
2650 An Inventory Management Model to Manage the Stock Level for Irregular Demand Items

Authors: Riccardo Patriarca, Giulio Di Gravio, Francesco Costantino, Massimo Tronci

Abstract:

An accurate inventory management policy plays a crucial role in several high-availability sectors. In these sectors, due to the high cost of spares and backorders, an (S-1, S) replenishment policy is necessary for high-availability items. The policy enables the shipment of a substitute item any time the inventory level decreases by one. This policy can be modelled following the Multi-Echelon Technique for Recoverable Item Control (METRIC). METRIC is a system-based technique for defining the optimum stock level in a multi-echelon network, adopting measures in line with the decision-maker's perspective. METRIC defines an availability-cost function with inventory costs and required service levels, using as inputs data about the demand trend, the supply and maintenance characteristics of the network, and the budget/availability constraints. The traditional METRIC relies on the hypothesis that a Poisson distribution represents well the demand distribution of items with a low failure rate. In this research, however, we explore the effects of using a Poisson distribution to model the demand of low-failure-rate items characterized by an irregular demand trend. This characteristic of demand is not included in the traditional METRIC formulation, which therefore needs to be revised. Using the CV (Coefficient of Variation) and ADI (Average inter-Demand Interval) classification, we identify the inherent flaws of Poisson-based METRIC for irregular demand items and define an innovative ad hoc distribution that can better fit irregular demands. This distribution allows proper stock levels to be defined, reducing stocking and backorder costs due to the high irregularity in the demand trend. A case study in the aviation domain clarifies the benefits of this innovative METRIC approach.
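
The Poisson backorder calculation at the heart of METRIC is compact; a sketch follows, with illustrative demand and resupply numbers.

```python
from scipy.stats import poisson

# Expected backorders E[(X - s)^+] for Poisson pipeline demand X: the quantity
# METRIC's availability-cost tradeoff evaluates when choosing stock level s.
def expected_backorders(s, mu):
    # identity: E[(X - s)^+] = mu * P(X >= s) - s * P(X >= s + 1)
    return mu * poisson.sf(s - 1, mu) - s * poisson.sf(s, mu)

demand_rate, resupply_time = 0.4, 2.5   # demands/month, months (illustrative)
mu = demand_rate * resupply_time        # mean pipeline demand
for s in range(5):
    print(f"s={s}: EBO={expected_backorders(s, mu):.4f}")
```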

Keywords: METRIC, inventory management, irregular demand, spare parts

Procedia PDF Downloads 331
2649 Prototype of an Interactive Toy from Lego Robotics Kits for Children with Autism

Authors: Ricardo A. Martins, Matheus S. da Silva, Gabriel H. F. Iarossi, Helen C. M. Senefonte, Cinthyan R. S. C. de Barbosa

Abstract:

This paper develops a concept of human/robot interaction, focused more specifically on autistic children, who have greater difficulty with interaction; it offers a solution that, although simple, is efficient and has been little studied for this population. The concept is based on code deployed through the Lego NXT kit and built around the robot's interpretation of input, thereby creating this interaction in a constructive way for children with autism.

Keywords: Lego NXT, interaction, BricX, autism, ANN (artificial neural network), MLP back propagation, hidden layers

Procedia PDF Downloads 549
2648 Potential Impacts of Warming Climate on Contributions of Runoff Components from Two Catchments of Upper Indus Basin, Karakoram, Pakistan

Authors: Syed Hammad Ali, Rijan Bhakta Kayastha, Ahuti Shrestha, Iram Bano

Abstract:

The hydrology of the Upper Indus basin is not well understood due to the intricacies of its climate and geography and the scarcity of data above 5000 meters above sea level, where most of the precipitation falls in the form of snow. The main objective of this study is to measure the contributions of the different components of runoff in the Upper Indus basin. To achieve this goal, the Modified Positive Degree-Day Model (MPDDM) was used to simulate runoff and investigate its components in two catchments of the Upper Indus basin, the Hunza and Gilgit River basins. These two catchments were selected because of their different glacier coverage, contrasting area distribution at high altitudes and significant impact on the Upper Indus River flow. The components of runoff, snow-ice melt and rainfall-base flow, were identified by the model. The simulation results show that the MPDDM achieves good agreement between observed and modeled runoff for these two catchments, and that the effects of snow and ice depend mainly on the catchment characteristics and the glaciated area. For the Gilgit River basin, the largest contributor to runoff is rain-base flow, whereas a large contribution of snow-ice melt is observed in the Hunza River basin due to its large glaciated fraction. This research will not only contribute to a better understanding of the impacts of climate change on the hydrological response of the Upper Indus, but will also provide guidance for the development of hydropower potential and water resources management, and offers a possible evaluation of future water quantity and availability in these catchments.
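
The core degree-day arithmetic of a PDD-type model is melt = degree-day factor × positive degree-day sum, with separate factors for snow and ice; the sketch below uses illustrative factors and temperatures, not the study's calibration.

```python
import numpy as np

# Degree-day factors in mm w.e. per degC per day (illustrative values)
ddf_snow, ddf_ice = 5.0, 7.5

# One week of daily mean air temperatures (degC), hypothetical
t_daily = np.array([-3.2, -0.5, 1.1, 2.8, 4.0, 3.3, -1.0])

pdd = np.clip(t_daily, 0, None).sum()   # positive degree-day sum
print(f"PDD = {pdd:.1f} degC*day; "
      f"snowmelt = {ddf_snow * pdd:.1f} mm w.e.; "
      f"ice melt = {ddf_ice * pdd:.1f} mm w.e.")
```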

Keywords: future discharge projection, positive degree day, regional climate model, water resource management

Procedia PDF Downloads 337
2647 Planning for Location and Distribution of Regional Facilities Using Central Place Theory and Location-Allocation Model

Authors: Danjuma Bawa

Abstract:

This paper explores the capabilities of the Location-Allocation model in complementing the strides of existing physical planning models in the location and distribution of facilities for regional consumption. It is designed to provide a blueprint to the Nigerian government and other donor agencies, especially the federal government's Fertilizer Distribution Initiative (FDI), for the revitalization of terrorism-ravaged regions. Theoretical underpinnings of central place theory related to spatial distribution, interrelationships, and threshold prerequisites were reviewed. The study showcases how the Location-Allocation Model (L-AM), alongside Central Place Theory (CPT), was applied in a Geographic Information System (GIS) environment to map and analyze the spatial distribution of settlements, exploit their physical and economic interrelationships, and explore their hierarchical and opportunistic influences. The study was purely spatial qualitative research which largely used secondary data such as the spatial location and distribution of settlements, settlement population figures, the network of roads linking them and other landform features. These were sourced from government ministries and open-source consortia. GIS was used as a tool for processing and analyzing such spatial features within the dictum of CPT and L-AM to produce a comprehensive spatial digital plan for the equitable and judicious location and distribution of fertilizer depots in the study area in an optimal way. A population threshold was used as the yardstick for selecting suitable settlements that could stand as service centers to other hinterlands; this was accomplished using the query syntax in ArcMap™. The ArcGIS™ Network Analyst was used to conduct the location-allocation analysis, apportioning groups of settlements around such service centers within a given threshold distance. Most of the techniques and models ever used by utility planners have centered on straight-line (Euclidean) distances to settlements; such models neglect impedance cutoffs and the routing capabilities of networks. CPT and L-AM take into consideration both the influential characteristics of settlements and their routing connectivity. The study was undertaken in two terrorism-ravaged Local Government Areas of Adamawa state. Four existing depots in the study area were identified, and 20 more depots in 20 villages were proposed using suitability analysis. Of the 300 settlements mapped in the study area, about 280 were optimally grouped and allocated to the selected service centers within a 2 km impedance cutoff. This study complements the giant strides of the federal government of Nigeria by providing a blueprint for ensuring the proper distribution of these public goods in the spirit of bringing succor to the terrorism-ravaged populace. At the same time, it will help boost agricultural activities, thereby reducing food shortages and raising per capita income, as espoused by the government.
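
A toy version of the allocation step follows: settlements passing a population threshold become candidate centers, and every settlement is assigned to its nearest center within a 2 km cutoff. The study used ArcGIS network (road) distances; straight-line distance appears here only to keep the sketch self-contained, and all coordinates and populations are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
settlements = rng.uniform(0, 10_000, (300, 2))   # x, y in meters, synthetic
pop = rng.integers(200, 5000, 300)               # settlement populations

centers = settlements[pop >= 4000]               # population-threshold query

# Distance from every settlement to every candidate center (Euclidean stand-in
# for the road-network distances used in the actual analysis)
d = np.linalg.norm(settlements[:, None, :] - centers[None, :, :], axis=2)
nearest, dist = d.argmin(axis=1), d.min(axis=1)
allocated = dist <= 2000                         # 2 km impedance cutoff

print(f"{allocated.sum()} of 300 settlements allocated "
      f"to {len(centers)} service centers")
```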

Keywords: central place theory, GIS, location-allocation, network analysis, urban and regional planning, welfare economics

Procedia PDF Downloads 132
2646 GAILoc: Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight, multipath, and weather conditions, however, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme, GAILoc, for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results proved that, in comparison to traditional methods, the proposed GAILoc method can significantly improve positioning performance and reduce radio map construction costs.

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 50