Search results for: deep feed forward neural network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8788

5308 Optimal Design of Storm Water Networks Using Simulation-Optimization Technique

Authors: Dibakar Chakrabarty, Mebada Suiting

Abstract:

Rapid urbanization coupled with changes in land use pattern results in increasing peak discharge and a shortening of the catchment time of concentration. The consequence is floods, which often inundate roads and inhabited areas of cities and towns. Management of storm water resulting from rainfall has, therefore, become an important issue for municipal bodies. Proper management of storm water obviously includes adequate design of storm water drainage networks. The design of a storm water network is a costly exercise, so least cost design assumes significance, particularly when the funds available are limited. Optimal design of a storm water system is a difficult task, as it involves the design of various components, such as open or closed conduits, storage units, and pumps. In this paper, a methodology for least cost design of storm water drainage systems is proposed. The methodology consists of coupling a storm water simulator with an optimization method. The simulator used in this study is EPA's storm water management model (SWMM), which is linked with the Genetic Algorithm (GA) optimization method. The model proposed here is a mixed integer nonlinear optimization formulation, which minimizes the sectional areas of the open conduits of storm water networks while satisfactorily conveying the runoff resulting from rainfall to the network outlet. Performance evaluations of the developed model show that the proposed method can be used for cost-effective design of open conduit based storm water networks.
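
As a rough sketch of the simulation-optimization coupling described above, the fragment below wires a simple genetic algorithm to a placeholder simulator call; the run_swmm stand-in, the hydraulic demands, and the flooding penalty weight are invented for illustration and do not reflect the paper's actual SWMM linkage.

```python
import random

# Hypothetical stand-in for an EPA SWMM run: returns total flood volume
# for a candidate set of conduit cross-sectional areas (m^2).
def run_swmm(areas):
    required = [1.2, 0.8, 2.0]  # illustrative hydraulic demands per conduit
    return sum(max(0.0, need - a) for a, need in zip(areas, required))

def cost(areas, lengths, penalty=1e4):
    construction = sum(a * l for a, l in zip(areas, lengths))
    return construction + penalty * run_swmm(areas)  # penalize any flooding

def ga(lengths, pop_size=30, gens=100, p_mut=0.2):
    pop = [[random.uniform(0.1, 3.0) for _ in lengths] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: cost(ind, lengths))
        elite = pop[: pop_size // 2]                 # selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, len(lengths))  # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < p_mut:              # mutation
                child[random.randrange(len(child))] = random.uniform(0.1, 3.0)
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda ind: cost(ind, lengths))

print(ga(lengths=[100.0, 150.0, 80.0]))  # near-minimal sectional areas
```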

Keywords: genetic algorithm (GA), optimal design, simulation-optimization, storm water network, SWMM

Procedia PDF Downloads 248
5307 Ochratoxin-A in Traditional Meat Products from Croatian Households

Authors: Jelka Pleadin, Nina Kudumija, Ana Vulic, Manuela Zadravec, Tina Lesic, Mario Skrivanko, Irena Perkovic, Nada Vahcic

Abstract:

Products of animal origin, such as meat and meat products, can contribute to human mycotoxin intake, either through indirect transfer from farm animals exposed to naturally contaminated grains and feed (carry-over effects) or through direct contamination with moulds or naturally contaminated spice mixtures used in meat production. Ochratoxin A (OTA) is the mycotoxin considered to be of the utmost importance from the public health standpoint in connection with meat products. The aim of this study was to investigate the occurrence of OTA in different traditional meat products circulating on Croatian markets during 2018, produced by a large number of households situated in the eastern and northern Croatian regions using a variety of technologies. Concentrations of OTA were determined in traditional meat products (n = 70), including dry fermented sausages (Slavonian kulen, Slavonian sausage, Istrian sausage and domestic sausage; n = 28), dry-cured meat products (pancetta, pork rack and ham; n = 22) and cooked sausages (liver sausages, black pudding sausages and pate; n = 20). OTA was analyzed using a quantitative screening immunoassay method (ELISA) and confirmed for positive samples (higher than the limit of detection) by a liquid chromatography tandem mass spectrometry (LC-MS/MS) method. While no OTA-contaminated bacon samples were found, OTA levels in dry fermented sausages ranged from 0.22 to 2.17 µg/kg and in dry-cured meat products from 0.47 to 5.35 µg/kg, with 9% of samples positive in total. Besides possible primary contamination of these products arising from improper manufacturing and/or storage conditions, the observed OTA contamination could also be a consequence of secondary contamination resulting from contaminated feed given to the animals. OTA levels in cooked sausages ranged from 0.32 to 4.12 µg/kg (5% positives) and could probably be linked to contaminated raw materials (liver, kidney and spices) used in sausage production. The results showed occasional OTA contamination of traditional meat products, indicating that, to avoid such contamination, these products should be produced and processed on households under standardized and well-controlled conditions. Further investigations should be performed to identify mycotoxin-producing moulds on the surface of the products and to define preventive measures that can reduce the contamination of traditional meat products during household production and storage.

Keywords: Croatian households, ochratoxin-A, traditional cooked sausages, traditional dry-cured meat products

Procedia PDF Downloads 193
5306 Supply Chain Optimisation through Geographical Network Modeling

Authors: Cyrillus Prabandana

Abstract:

Supply chain optimisation must take multiple factors into account as considerations or constraints. These include, but are not limited to, demand forecasting, raw material fulfilment, production capacity, inventory level, facility locations, transportation means, and manpower availability. By knowing all manageable factors involved and modelling the uncertainty with pre-defined percentage factors, an integrated supply chain model can be developed to manage various business scenarios. This paper analyses the use of a geographical point of view to develop an integrated supply chain network model that optimises the distribution of finished products according to forecasted demand and available supply. The supply chain optimisation model shows that a small change in one supply chain constraint can have a large impact on other constraints, and the new information from the model should be able to support the decision-making process. The model focused on three areas, i.e. raw material fulfilment, production capacity and finished product transportation. To validate the model's suitability, it was implemented in a project aimed at optimising the concrete supply chain in a mining location. The high level of operational complexity and the involvement of multiple stakeholders in the concrete supply chain are believed to be sufficient to illustrate the larger scope. The implementation of this geographical supply chain network modeling resulted in an optimised concrete supply chain, from raw material fulfilment to finished product distribution to each customer, indicated by a lower percentage of missed concrete order fulfilments.
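
One way to picture such a geographical network model is as a minimum-cost flow over supply, depot, and customer nodes. The sketch below uses networkx with entirely hypothetical node names, capacities, and unit costs; the paper's actual model also covers raw material fulfilment and capacity constraints.

```python
import networkx as nx

# Toy geographical network: one plant, two depots, two customers.
G = nx.DiGraph()
G.add_node("plant", demand=-60)   # negative demand = supplies 60 units
G.add_node("cust_A", demand=25)
G.add_node("cust_B", demand=35)
G.add_edge("plant", "depot_N", weight=4, capacity=40)  # unit transport costs
G.add_edge("plant", "depot_S", weight=6, capacity=40)
G.add_edge("depot_N", "cust_A", weight=3, capacity=40)
G.add_edge("depot_N", "cust_B", weight=7, capacity=40)
G.add_edge("depot_S", "cust_A", weight=8, capacity=40)
G.add_edge("depot_S", "cust_B", weight=2, capacity=40)

flow = nx.min_cost_flow(G)  # cheapest feasible routing plan
print(flow)
```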

Keywords: decision making, geographical supply chain modeling, supply chain optimisation, supply chain

Procedia PDF Downloads 346
5305 Urban Open Source: Synthesis of a Citizen-Centric Framework to Design Densifying Cities

Authors: Shaurya Chauhan, Sagar Gupta

Abstract:

Prominent urbanizing centres across the globe, like Delhi, Dhaka, or Manila, have shown that development often struggles to bridge the gap between the top-down collective requirements of the city and the bottom-up individual aspirations of an ever-diversifying population. When this exclusion is intertwined with rapid urbanization and a diversifying urban demography, unplanned sprawl, poor planning, and low-density development emerge as automatic responses. In parallel, new ideas and methods of densification and public participation are being widely adopted as sustainable alternatives for the future of urban development. This research advocates a collaborative design method for future development: one that allows rapid application through its prototypical nature and an inclusive approach that mediates between the 'user' and the 'urban', purely with the use of empirical tools. Building upon the concepts and principles of 'open-sourcing' in design, the research establishes a design framework that serves current user requirements while allowing for future citizen-driven modifications. This is synthesized as a 3-tiered model: user needs, design ideology, and adaptive details. The research culminates in a context-responsive 'open source project development framework' (hereinafter referred to as OSPDF) that can be used for on-ground field applications. To bring forward specifics, the research looks at a 300-acre redevelopment in the core of a rapidly urbanizing city as a case encompassing extreme physical, demographic, and economic diversity. The suggested measures also integrate the region's cultural identity and social character with diverse citizen aspirations, using architecture and urban design tools and references from recognized literature. This framework, based on a vision, feedback, and execution loop, is used for hypothetical development at the five prevalent scales in design: master planning, urban design, architecture, tectonics, and modularity, in chronological order. At each of these scales, the possible approaches and avenues for open-sourcing are identified, validated through trial and error, and subsequently recorded. The research attempts to re-calibrate the architectural design process and make it more responsive and people-centric. Analytical tools such as Space, Event, and Movement by Bernard Tschumi and the Five-Point Mental Map by Kevin Lynch, among others, are deeply rooted in the research process. Over the five-part OSPDF, a two-part subsidiary process is also suggested after each cycle of application, for continued appraisal and refinement of the framework and urban fabric over time. The research is an exploration of the possibilities for an architect to adopt the new role of a 'mediator' in the development of contemporary urbanity.

Keywords: open source, public participation, urbanization, urban development

Procedia PDF Downloads 149
5304 A Model Based Metaheuristic for Hybrid Hierarchical Community Structure in Social Networks

Authors: Radhia Toujani, Jalel Akaichi

Abstract:

In recent years, the study of community detection in social networks has received great attention. The hierarchical structure of the network often leads to convergence to a locally optimal community structure. In this paper, we aim to avoid this local optimum with the introduced hybrid hierarchical method. To achieve this purpose, we present an objective function that incorporates a modularity measure based on structural and semantic similarity, and we use a metaheuristic, namely the bee colony algorithm, to optimize our objective function at both the divisive and agglomerative hierarchical levels. In order to assess the efficiency and accuracy of the introduced hybrid bee colony model, we perform an extensive experimental evaluation on both synthetic and real networks.
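
As a loose illustration of the optimization loop, the fragment below scores candidate partitions with Newman's structural modularity and improves them with random bee-style moves on an invented toy graph; the semantic similarity term, the full bee colony mechanics, and the divisive/agglomerative levels of the actual method are omitted.

```python
import random

def modularity(adj, labels):
    """Newman modularity Q for an undirected graph given as adjacency sets."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2   # number of edges
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    q = 0.0
    for v in adj:
        for w in adj:
            if labels[v] == labels[w]:
                a = 1.0 if w in adj[v] else 0.0
                q += a - deg[v] * deg[w] / (2 * m)
    return q / (2 * m)

def bee_search(adj, n_bees=20, iters=200):
    nodes = list(adj)
    best = {v: random.randrange(3) for v in nodes}    # random 3-community start
    for _ in range(iters):
        for _ in range(n_bees):                       # each "bee" tries one move
            cand = dict(best)
            cand[random.choice(nodes)] = random.randrange(3)
            if modularity(adj, cand) > modularity(adj, best):
                best = cand
    return best

# Invented graph with two obvious clusters: {1,2,3} and {4,5,6}.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
print(bee_search(adj))
```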

Keywords: social network, community detection, agglomerative hierarchical clustering, divisive hierarchical clustering, similarity, modularity, metaheuristic, bee colony

Procedia PDF Downloads 379
5303 Effect of Graded Level of Nano Selenium Supplementation on the Performance of Broiler Chicken

Authors: Raj Kishore Swain, Kamdev Sethy, Sumanta Kumar Mishra

Abstract:

Selenium is an essential trace element for the chicken, with a variety of biological functions covering growth, fertility, the immune system, hormone metabolism, and antioxidant defense systems. Selenium deficiency in chicken causes exudative diathesis, pancreatic dystrophy and nutritional muscle dystrophy of the gizzard, heart and skeletal muscle. Additionally, insufficient immunity, lowered production ability, decreased feathering of chickens and increased embryo mortality may occur due to selenium deficiency. Nano elemental selenium, which is bright red, highly stable, soluble and of nanometer size in the redox state of zero, has high bioavailability and low toxicity due to its greater surface area, high surface activity, high catalytic efficiency and strong adsorbing ability. To assess the effect of dietary nano-Se on performance and gene expression in Vencobb broiler birds in comparison to its inorganic form (sodium selenite), four hundred fifty day-old Vencobb broiler chicks were randomly distributed into 9 dietary treatment groups, each with two replicates of 25 chicks per replicate. The dietary treatments were: T1 (control group): basal diet; T2: basal diet with 0.3 ppm of inorganic Se; T3: basal diet with 0.01875 ppm of nano-Se; T4: basal diet with 0.0375 ppm of nano-Se; T5: basal diet with 0.075 ppm of nano-Se; T6: basal diet with 0.15 ppm of nano-Se; T7: basal diet with 0.3 ppm of nano-Se; T8: basal diet with 0.60 ppm of nano-Se; T9: basal diet with 1.20 ppm of nano-Se. Nano selenium was synthesized by mixing sodium selenite with reduced glutathione and bovine serum albumin. The experiment was carried out in two phases, a starter phase (0-3 wk) and a finisher phase (4-5 wk), in a deep litter system. Body weight at the 5th week was highest in T4, and the best feed conversion ratio at the end of the 5th week was also observed in T4. Erythrocytic catalase, glutathione peroxidase and superoxide dismutase activities were significantly (P < 0.05) higher in all the nano selenium treated groups at the 5th week. The antibody titers (log2) against Ranikhet disease vaccine immunization of 5th-week broiler birds were significantly (P < 0.05) higher in treatments T4 to T7. The selenium levels in liver, breast, kidney, brain, and gizzard increased significantly (P < 0.05) with increasing dietary nano-Se, indicating higher bioavailability of nano-Se compared to inorganic Se. Real-time polymerase chain reaction analysis showed an increase in the expression of the antioxidative gene in the T4 and T7 groups. Therefore, it is concluded that supplementation of nano-selenium at 0.0375 ppm over and above the basal level can improve body weight, antioxidant enzyme activity, Se bioavailability and antioxidative gene expression in broiler birds.

Keywords: chicken, growth, immunity, nano selenium

Procedia PDF Downloads 177
5302 Action Potential of Lateral Geniculate Neurons at Low Threshold Currents: Simulation Study

Authors: Faris Tarlochan, Siva Mahesh Tangutooru

Abstract:

The Lateral Geniculate Nucleus (LGN) is the relay center in the visual pathway, as it receives most of its input from retinal ganglion cells (RGC) and sends it to the visual cortex. Low threshold calcium currents (IT) at the membrane are the unique indicator used to characterize the firing functionality that LGN neurons gain from RGC input. The morphologies of the LGN neurons were developed according to LGN functional requirements, such as the functional mapping of RGC to LGN. In neurological disorders like glaucoma, the mapping between RGC and LGN is disconnected, and hence stimulating the LGN electrically using deep brain electrodes can restore LGN functionality. A computational model was developed to simulate LGN neurons with three predominant morphologies, each representing a different functional mapping of RGC to LGN. The firing of action potentials at the LGN neuron due to IT was characterized by varying the stimulation parameters, morphological parameters and orientation. A wide range of stimulation parameters (stimulus amplitude, duration and frequency) represents the various strengths of the electrical stimulation, together with different morphological parameters (soma size, dendrite size and structure). The orientation (0-180°) of the LGN neuron with respect to the stimulating electrode represents the angle at which extracellular deep brain stimulation of the LGN neuron is performed. A reduced dendrite structure was used in the model, following the Bush-Sejnowski algorithm, to decrease the computational time while conserving the input resistance and total surface area. The major finding is that an input potential of 0.4 V is required to produce an action potential in an LGN neuron placed at a 100 µm distance from the electrode. From this study, it can be concluded that the neuroprostheses under design would need to be capable of inducing at least 0.4 V to produce action potentials in the LGN.
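
To make the threshold finding concrete, here is a deliberately minimal leaky integrate-and-fire sketch rather than the paper's multi-compartment model; the coupling factor, membrane time constant, and firing threshold are invented so that spiking begins near a 0.4 V input potential.

```python
# Minimal leaky integrate-and-fire stand-in for an LGN relay cell.
def spikes_for(stim_amp_v, dt=1e-5, t_end=0.1, tau=10e-3,
               v_th=0.018, coupling=0.05):
    v, n_spikes = 0.0, 0
    for _ in range(int(t_end / dt)):
        i_drive = coupling * stim_amp_v  # crude electrode-to-membrane coupling
        v += dt * (i_drive - v) / tau    # leaky integration
        if v >= v_th:                    # threshold crossing -> spike
            n_spikes += 1
            v = 0.0                      # reset after the action potential
    return n_spikes

for amp in [0.1, 0.4, 1.0]:              # stimulus amplitude sweep (V)
    print(amp, spikes_for(amp))          # no spikes below ~0.4 V here
```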

Keywords: Lateral Geniculate Nucleus, visual cortex, finite element, glaucoma, neuroprostheses

Procedia PDF Downloads 278
5301 Opinion Mining and Sentiment Analysis on DEFT

Authors: Najiba Ouled Omar, Azza Harbaoui, Henda Ben Ghezala

Abstract:

Current research practices sentiment analysis with a focus on social networks. The Défi Fouille de Textes (DEFT) (Text Mining Challenge) evaluation campaign focuses on opinion mining and sentiment analysis on social networks, especially the social network Twitter. It aims to compare the systems produced by several teams from public and private research laboratories. DEFT offers participants the opportunity to work on regularly renewed themes and has proposed work on opinion mining over several editions. The purpose of this article is to scrutinize and analyze the work on opinion mining and sentiment analysis in the Twitter social network carried out within DEFT. It examines the tasks proposed by the organizers of the challenge and the methods used by the participants.

Keywords: opinion mining, sentiment analysis, emotion, polarity, annotation, OSEE, figurative language, DEFT, Twitter, Tweet

Procedia PDF Downloads 138
5300 A Study of Human Communication in an Internet Community

Authors: Andrew Laghos

Abstract:

The Internet is a big part of our everyday lives. People can now access the internet from a variety of places, including home, college, and work. Many airports, hotels, restaurants and cafeterias provide free wireless internet to their visitors. Using technologies like computers, tablets, and mobile phones, we spend a lot of our time online getting entertained, getting informed, and communicating with each other. This study deals with the latter part, namely, human communication through the Internet. People can communicate with each other using social media, social network sites (SNS), e-mail, messengers, chatrooms, and so on. By connecting with each other they form virtual communities. Regarding SNS, the types of connections that can be studied include friendships and cliques. Analyzing these connections is important to help us understand online user behavior. The method of Social Network Analysis (SNA) was applied to a case study, and the results revealed useful patterns of interactivity between the participants. The study ends with implications of the results and ideas for future research.
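
For readers unfamiliar with SNA, the toy fragment below computes a few standard measures of the kind used in such studies (degree centrality, maximal cliques, density) on an invented friendship graph with networkx; it only sketches the type of analysis performed, not the study's data.

```python
import networkx as nx

# Hypothetical friendship ties observed in an online community.
edges = [("ann", "bob"), ("bob", "cat"), ("ann", "cat"),
         ("cat", "dan"), ("dan", "eve")]
G = nx.Graph(edges)

print(nx.degree_centrality(G))   # who interacts most widely
print(list(nx.find_cliques(G)))  # maximal cliques ~ tight friend groups
print(nx.density(G))             # overall interactivity of the community
```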

Keywords: human communication, internet communities, online user behavior, psychology

Procedia PDF Downloads 497
5299 Production of Biogas

Authors: J. O. Alabi

Abstract:

Biogas is a clean burning, easily produced natural fuel that is an important source of energy for cooking and heating in rural areas and third world countries. Anaerobic bacteria inside biodigesters break down biomass to produce biogas, which is about 70% methane. Currently there is no simple way to compress and store biogas, so in order to use biogas as a source of energy, a direct feed from the biodigester to the stove or heater must be made. Any excess biogas is vented into the atmosphere, which is wasteful and can have a negative effect on the environment. We have been tasked with designing a system that can compress biogas using an off-grid power supply, making portable the biogas produced by a large-scale, shared biodigester. Our final design is a system that maximizes simplicity and safety while minimizing cost.

Keywords: biogas, biodigesters, natural fuel, bionanotechnology

Procedia PDF Downloads 364
5298 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks

Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba

Abstract:

The ability of vehicles to communicate with other vehicles (V2V), with the physical (V2I) and network (V2N) infrastructures, with pedestrians (V2P), etc., collectively known as V2X (Vehicle to Everything), will enable a broad and growing set of applications and services within the intelligent transport domain, improving road safety, alleviating traffic congestion and supporting autonomous driving. The telecommunication research and industry communities and standardization bodies (notably 3GPP) have approved, in Release 14, cellular connectivity to support V2X communication (known as LTE-V2X). The LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface with direct device-to-device (D2D) communications. For V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks, and the nature of most V2X applications, which involve human safety, make it important to protect V2X messages from attacks that can result in catastrophically wrong decisions/actions, including ones affecting road safety. Attack vectors include impersonation, modification, masquerading, replay, man-in-the-middle (MitM), and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network. However, this protocol faces technical challenges, such as high signaling overhead, lack of synchronization, handover delay and potential control plane signaling overloads, as well as privacy preservation issues, which cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways in which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, allowing security operations with minimal overhead cost, which is desirable for V2X services. The proposed architecture can ensure fast, secure and robust V2X services under the LTE network while meeting V2X security requirements.
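
As background on the message-protection requirement, the sketch below signs and verifies a broadcast message with ECDSA via the Python cryptography package; this is a generic illustration of authenticity checking with an invented payload, not the AKA protocol, the 3GPP V2X scheme, or the Bitcoin/Namecoin-based mechanism discussed in the paper.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Each vehicle holds a key pair; messages are signed before broadcast
# and verified on receipt.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

msg = b"V2X msg: pos=52.48,-1.89 speed=13.9 heading=270"  # invented payload
signature = private_key.sign(msg, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, msg, ec.ECDSA(hashes.SHA256()))
    print("message authenticated")
except InvalidSignature:
    print("reject: tampered or impersonated message")
```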

Keywords: authentication, long term evolution, security, vehicle-to-everything

Procedia PDF Downloads 167
5297 Effect of Fiber Inclusion on the Geotechnical Parameters of Clayey Soil Subjected to Freeze-Thaw Cycles

Authors: Arun Prasad, P. B. Ramudu, Deep Shikha, Deep Jyoti Singh

Abstract:

A number of studies have been conducted recently to investigate the influence of randomly oriented fibers on some engineering properties of cohesive soils. Freezing and thawing of soil adversely affect its strength, durability and permeability. Experiments were carried out to investigate the effect of the inclusion of randomly distributed polypropylene fibers on the strength, hydraulic conductivity and durability of a local soil (CL) subjected to freeze-thaw cycles. For evaluating the change in strength of the soil, a series of unconfined compression tests as well as triaxial tests were carried out on reinforced and unreinforced soil samples. All the samples were subjected to seven cycles of freezing and thawing. Freezing was carried out at a temperature of -15 to -18 °C, and thawing was carried out by keeping the samples at room temperature. The soil samples were reinforced by mixing in polypropylene fibers 12 mm long, with an aspect ratio of 240. The fiber content was varied from 0.25 to 1% by dry weight of soil. The maximum strength was found in samples with a fiber content of 0.75% for all samples prepared at optimum moisture content (OMC); when the moisture content was increased (+2% OMC) or decreased (-2% OMC), the maximum strength was observed at 0.5% fiber inclusion. The effect of fiber inclusion and freeze-thaw on the hydraulic conductivity was also studied; it increased from around 25 times to 300 times that of unreinforced soil not subjected to any freeze-thaw cycles. To study the change in durability of the soil, the mass loss after each freeze-thaw cycle was calculated, and it was found that samples reinforced with polypropylene fibers show 50-60% less loss in weight than the unreinforced soil.

Keywords: fiber reinforcement, freezing and thawing, hydraulic conductivity, unconfined compressive strength

Procedia PDF Downloads 400
5296 Overview of Risk Management in Electricity Markets Using Financial Derivatives

Authors: Aparna Viswanath

Abstract:

Electricity spot prices are highly volatile, even under optimal generation capacity scenarios, due to factors such as the non-storability of electricity, peak demand at certain periods, generator outages, fuel uncertainty for renewable energy generators, and the huge investments and time needed for generation capacity expansion. As a result, market participants are exposed to price and volume risk, which has led to the development of risk management practices. This paper provides an overview of the risk management practices of participants in electricity markets using financial derivatives.
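
To illustrate the basic hedging mechanics behind such instruments, the sketch below shows how a long forward position locks in an effective purchase price whatever the spot outcome; all prices are invented.

```python
forward_price = 48.0               # $/MWh agreed today (invented)
for spot in [35.0, 52.0, 75.0]:    # possible delivery-period spot prices
    payoff = spot - forward_price  # long forward payoff per MWh
    net_cost = spot - payoff       # spot purchase net of the hedge payoff
    print(f"spot={spot:5.1f}  payoff={payoff:6.1f}  net cost={net_cost:5.1f}")
```

The net cost equals the forward price in every scenario, which is exactly the elimination of price risk that such contracts provide.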

Keywords: financial derivatives, forward, futures, options, risk management

Procedia PDF Downloads 479
5295 Multi-Sender MAC Protocol Based on Temporal Reuse in Underwater Acoustic Networks

Authors: Dongwon Lee, Sunmyeng Kim

Abstract:

Underwater acoustic networks (UANs) have become a very active research area in recent years. Compared with terrestrial wireless networks, UANs are characterized by limited bandwidth, long propagation delay and highly dynamic channels in acoustic modems, which pose challenges to the design of a medium access control (MAC) protocol. These characteristics severely affect network performance. In this paper, we study an MS-MAC (Multi-Sender MAC) protocol in order to improve network performance. The proposed protocol exploits temporal reuse by learning the propagation delays to neighboring nodes. A source node locally calculates the transmission schedules of its neighboring nodes and itself based on the propagation delays, in order to avoid collisions. Performance evaluation is conducted using simulation and confirms that the proposed protocol significantly outperforms the previous protocol in terms of throughput.
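
The core of temporal reuse is that collisions matter only at the receiver, so senders can offset their start times by their known propagation delays. Below is a minimal sketch, with invented delays and packet duration, that packs the arrivals back-to-back at a single receiver; note that two senders may legitimately transmit at the same instant.

```python
def tx_schedule(prop_delay, pkt_time):
    """Start times so packets arrive contiguously at the receiver."""
    offset = max(prop_delay.values())   # keep all start times non-negative
    starts, next_arrival = {}, 0.0
    for node in prop_delay:             # assign consecutive arrival slots
        starts[node] = offset + next_arrival - prop_delay[node]
        next_arrival += pkt_time
    return starts

delays = {"A": 0.8, "B": 0.3, "C": 0.5}  # propagation delays in seconds
print(tx_schedule(delays, pkt_time=0.2)) # e.g. B and C may start together
```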

Keywords: acoustic channel, MAC, temporal reuse, UAN

Procedia PDF Downloads 348
5294 Detection of Aflatoxin B1 Producing Aspergillus flavus Genes from Maize Feed Using Loop-Mediated Isothermal Amplification (LAMP) Technique

Authors: Sontana Mimapan, Phattarawadee Wattanasuntorn, Phanom Saijit

Abstract:

Aflatoxin contamination of maize, one of several agricultural crops grown for livestock feeding, is still a problem throughout the world, mainly under hot and humid weather conditions like those of Thailand. In this study, Aspergillus flavus (A. flavus) isolates, the key fungus for aflatoxin production, especially aflatoxin B1 (AFB1), were obtained from naturally infected maize and identified and characterized according to colony morphology and PCR using ITS, beta-tubulin and calmodulin genes. The strains were analysed for the presence of four aflatoxigenic biosynthesis genes related to their capability to produce AFB1: Ver1, Omt1, Nor1, and aflR. Aflatoxin production was then confirmed using an immunoaffinity column technique. Loop-mediated isothermal amplification (LAMP) was applied as an innovative technique for rapid detection of the target nucleic acid. The reaction condition was optimized at 65 °C for 60 min, and calcein fluorescent reagent was added before amplification. The LAMP results showed clear differences between positive and negative reactions in end-point analysis, visible to the naked eye under daylight and UV light. In daylight, samples with AFB1-producing A. flavus genes developed a yellow to green color, while those without the genes retained an orange color. When excited with UV light, positive samples became visible by bright green fluorescence. LAMP reactions remained positive after the addition of purified target DNA down to dilutions of 10⁻⁶. The reaction products were then confirmed and visualized with 1% agarose gel electrophoresis. On this basis, 50 maize samples were collected from dairy farms and tested for the presence of the four aflatoxigenic biosynthesis genes using the LAMP technique. The results were positive in 18 samples (36%) and negative in 32 samples (64%). All of the samples were rechecked by PCR, and the results were the same as LAMP, indicating 100% specificity. Additionally, when compared with the immunoaffinity column-based aflatoxin analysis, there was a significant correlation between the LAMP results and the aflatoxin analysis (r = 0.83, P < 0.05), which suggested that positive maize samples were likely to be high-risk feed. In conclusion, the LAMP assay developed in this study provides a simple and rapid approach for detecting AFB1-producing A. flavus genes in maize and appears to be a promising tool for predicting potential aflatoxigenic risk in livestock feeding.

Keywords: Aflatoxin B1, Aspergillus flavus genes, maize, loop-mediated isothermal amplification

Procedia PDF Downloads 240
5293 Impact of Transportation on the Economic Growth of Nigeria

Authors: E. O. E. Nnadi

Abstract:

Transportation is a critical factor in the economic growth and development of any nation, region or state. A good transportation network supports every sector of the economy, such as manufacturing and trade, and encourages investors, thereby affecting overall economic prosperity. The paper evaluates the impact of transportation on the economic growth of Nigeria, using the south eastern states as a case study. The case study was chosen for its importance as the commercial and industrial nerve centre of the country. About 200 respondents from different professions, such as dealers in goods, transporters, contractors, consultants, and bankers, were selected using the systematic sampling technique in the five states of the region, and a set of questionnaires was administered to them. Descriptive statistics and the relative importance index (RII) technique were employed for the analysis of the data gathered. The findings of the analysis reveal that Nigeria has the least effective road-network-to-population ratio in Africa, at 949.91 km/person. It was concluded that the road network in the area, and in the country as a whole, should be improved to enhance the economic activities of the people.
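
The relative importance index used here is commonly computed as RII = ΣW / (A × N), where W are the respondents' ratings, A is the highest rating on the scale, and N is the number of respondents. A small sketch with invented 5-point ratings:

```python
def rii(ratings, a_max=5):
    """Relative Importance Index on an A-point scale."""
    return sum(ratings) / (a_max * len(ratings))

responses = {"road condition": [5, 4, 5, 3, 4],   # invented survey ratings
             "transport cost": [4, 4, 3, 3, 2]}
for factor in sorted(responses, key=lambda f: rii(responses[f]), reverse=True):
    print(factor, round(rii(responses[factor]), 3))  # rank factors by RII
```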

Keywords: economic growth, south-east, transportation, transportation cost, Nigeria

Procedia PDF Downloads 273
5292 Coal Mining Safety Monitoring Using WSN

Authors: Somdatta Saha

Abstract:

The main purpose was to provide an implementable design scenario for underground coal mines using wireless sensor networks (WSNs), the main reason being that, given the intricacies of the physical structure of a coal mine, only low-power WSN nodes can produce accurate surveillance and accident detection data. The work mainly concentrated on designing and simulating various alternative scenarios for a typical mine and comparing them based on the obtained results to arrive at a final design. In the era of embedded technology, ZigBee protocols are used in more and more applications. The rapid development of sensors, microcontrollers, and network technology provides a reliable technological basis for automatic real-time monitoring of coal mines. The underground system collects temperature, humidity and methane values of the coal mine through sensor nodes in the mine; it also counts the personnel inside the mine with the help of an IR sensor, and then transmits the data to an ARM-based information processing terminal.
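
A sink node receiving such readings typically just compares them against safety limits. The fragment below is a hypothetical illustration of that threshold check; the field names and limit values are assumptions, not the system's actual configuration.

```python
# Threshold check a sink node might run on each incoming reading.
LIMITS = {"methane_ppm": 10000, "temperature_c": 40, "humidity_pct": 95}

def alarms(reading):
    return [k for k, lim in LIMITS.items() if reading.get(k, 0) > lim]

packet = {"node": 7, "methane_ppm": 12500, "temperature_c": 31,
          "humidity_pct": 88, "persons_inside": 14}
exceeded = alarms(packet)
if exceeded:
    print(f"ALERT from node {packet['node']}: {exceeded}")
```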

Keywords: ARM, embedded board, wireless sensor network (Zigbee)

Procedia PDF Downloads 340
5291 A Study of Adult Lifelong Learning Consulting and Service System in Taiwan

Authors: Wan Jen Chang

Abstract:

Background: Taiwan's current adult lifelong learning services have expanded from vocational training to universal lifelong learning. However, both the professional knowledge training for learning guidance and consulting services and the provision of adult online learning consulting service systems still need to be established. Purpose: The purposes of this study are as follows: 1. Analyze the professional training mechanism for cultivating adult lifelong learning consultation and coaching; 2. Explore the feasibility of constructing a system that uses network technology to provide adult learning consultation services. Research design: This study conducts a literature analysis of counseling and coaching policy reports on lifelong learning in European countries and the United States. Two focus group discussions were conducted with 15 lifelong learning scholars, experts and practitioners as research subjects. The following two topics were discussed: 1. The current situation, needs and professional ability training mechanism of "Adult Lifelong Learning Consulting and Services"; 2. Strategies for establishing an "Adult Lifelong Learning Consulting and Service Internet System". Conclusions: 1. Based on adult lifelong learning consulting and service needs, plan a professional knowledge training and certification system. 2. Professional training for adult lifelong learning consulting and services should include the use of network technology to provide consulting service skills. 3. To establish an adult lifelong learning consultation and service system, the Ministry of Education should promulgate policies and measures at the central level and entrust local governments or private organizations to implement them. 4. The adult lifelong learning consulting and service system can combine the national qualifications framework, the private sector and NPOs to expand learning consulting service partnerships.

Keywords: adult lifelong learning, professional knowledge, consulting and service, network system

Procedia PDF Downloads 67
5290 Presenting a Job Scheduling Algorithm Based on Learning Automata in Computational Grid

Authors: Roshanak Khodabakhsh Jolfaei, Javad Akbari Torkestani

Abstract:

As cooperative environments for problem-solving, grids must develop efficient job scheduling patterns with regard to their goals, domains and structure. Since Grid environments facilitate distributed calculations, job scheduling appears as a critical problem for the management of Grid resources, one that severely influences the efficiency of the whole Grid environment. Due to characteristics such as resource dynamicity and changing network conditions in the Grid, scheduling algorithms should be adjustable and scalable as the network grows. For this purpose, this paper presents a job scheduling algorithm based on learning automata for the computational Grid, whose performance was compared with the FPSO algorithm (Fuzzy Particle Swarm Optimization algorithm) and the GJS algorithm (Grid Job Scheduling algorithm). The numerical results obtained indicated the superiority of the suggested algorithm over FPSO and GJS. In addition, the results ranked FPSO and GJS second and third, respectively, after the proposed algorithm.
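
To show the flavour of the approach, the sketch below implements a linear reward-inaction (L_RI) automaton that learns to route jobs to the fastest of several grid nodes; the node success probabilities and learning rate are invented, and the paper's actual algorithm is more elaborate.

```python
import random

def lri_schedule(speeds, jobs=500, lr=0.1):
    """L_RI automaton: reinforce a node's probability only on reward."""
    p = [1.0 / len(speeds)] * len(speeds)        # action probabilities
    for _ in range(jobs):
        i = random.choices(range(len(speeds)), weights=p)[0]
        rewarded = random.random() < speeds[i]   # fast node -> likely reward
        if rewarded:                             # p_i += lr * (1 - p_i)
            p = [pj * (1 - lr) for pj in p]
            p[i] += lr
    return p

print(lri_schedule(speeds=[0.9, 0.5, 0.2]))      # converges toward fastest node
```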

Keywords: computational grid, job scheduling, learning automata, dynamic scheduling

Procedia PDF Downloads 343
5289 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
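
The model-averaging step can be sketched as follows: three scikit-learn models are trained and their predicted class probabilities averaged before taking the argmax. The synthetic dataset stands in for the study's weather, timing, and pollutant features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for weather/timing/pollutant features.
X, y = make_classification(n_samples=600, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(n_estimators=100, random_state=0),
          MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)]

# Average the three models' class probabilities, then take the argmax.
probs = np.mean([m.fit(X_tr, y_tr).predict_proba(X_te) for m in models], axis=0)
print(f"ensemble accuracy: {np.mean(probs.argmax(axis=1) == y_te):.2f}")
```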

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 127
5288 Optimization of Machining Parameters by Using Cryogenic Media

Authors: Shafqat Wahab, Waseem Tahir, Manzoor Ahmad, Sarfraz Khan, M. Azam

Abstract:

Tool flank wear width and the surface finish of alloy steel rods were analyzed and optimized in the presence of a cryogenic medium (liquid nitrogen, LN2), using a tungsten carbide insert (CNMG 120404-WF 4215). The robust design concept of the Taguchi L9 (3⁴) method and ANOVA were applied to determine the contribution of the key cutting parameters and their optimum conditions. The analysis revealed that the cryogenic effect is more significant in reducing the tool flank wear width, while the surface finish mostly depends on the feed rate.
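
In a Taguchi analysis of wear, each L9 run is typically summarized by a smaller-the-better signal-to-noise ratio, SN = -10 log10(mean(y²)), before factor contributions are assessed with ANOVA. A sketch with invented wear readings:

```python
import numpy as np

def sn_smaller_is_better(y):
    """Smaller-the-better S/N ratio: SN = -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

runs = {"run1": [0.21, 0.19, 0.22],   # flank wear width (mm), invented
        "run2": [0.15, 0.14, 0.16],
        "run3": [0.30, 0.28, 0.33]}
for run, wear in runs.items():
    print(run, round(sn_smaller_is_better(wear), 2))  # larger SN is better
```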

Keywords: turning, cryogenic fluid, liquid nitrogen, flank wear, surface roughness, Taguchi

Procedia PDF Downloads 666
5287 Genome-Wide Expression Profiling of Cicer arietinum Heavy Metal Toxicity

Authors: B. S. Yadav, A. Mani, S. Srivastava

Abstract:

Chickpea (Cicer arietinum L.) is an annual, self-pollinating, diploid (2n = 2x = 16) pulse crop that ranks second in world legume production after common bean (Phaseolus vulgaris). ICC 4958 flowers approximately 39 days after sowing under peninsular Indian conditions, and the crop matures in less than 90 days in rainfed environments. The estimated collective yield losses due to abiotic stresses (6.4 million t) have been significantly higher than those due to biotic stresses (4.8 million t). Most legumes are known to be salt sensitive; therefore, it is becoming increasingly important to produce cultivars tolerant to high salinity, in addition to other abiotic and biotic stresses, for sustainable chickpea production. Our aim was to identify the genes involved in the defence mechanism against heavy metal toxicity in chickpea and to establish the biological network of heavy metal toxicity in chickpea. The ICC 4958 variety of chickpea was grown under normal conditions and under 150 µM concentrations of different heavy metal salts: CdCl₂, K₂Cr₂O₇, and NaAsO₂. On the 15th day, leaf samples were collected and stored in RNAlater solution, and a microarray was performed to check the differential gene expression pattern. Our studies revealed that 111 common genes involved in the defense mechanism were upregulated and 41 genes were commonly downregulated during treatment with 150 µM CdCl₂, K₂Cr₂O₇, and NaAsO₂. The biological network study shows that the differentially expressed genes are highly connected, with high betweenness and centrality.

Keywords: abiotic stress, biological network, chickpea, microarray

Procedia PDF Downloads 197
5286 Integration of the Electro-Activation Technology for Soy Meal Valorization

Authors: Natela Gerliani, Mohammed Aider

Abstract:

Nowadays, interest in using sustainable technologies for protein extraction from underutilized oilseeds is growing. A major disposal problem for the oil industry is by-products of plant food processing such as soybean meal, which is why valorization of soybean meal is important for the oil industry: it contains high-quality proteins and other valuable components. Generally, soybean meal is used in livestock and poultry feed but is rarely used in human food, even though its chemical composition can compensate for nutritional deficiencies and can be used to balance protein in human foods. Regarding the efficiency of soybean meal valorization, extraction is a key process for obtaining an enriched protein ingredient that can be incorporated into the food matrix. However, the extraction of most food components, such as proteins, from oilseed by-products implies the utilization of organic and inorganic chemicals (e.g., acids, bases, TCA-acetone) with a significant environmental impact. In the context of sustainable production, electro-activation technology seems to be a good alternative. Indeed, electro-activation requires only water, food grade salt and electricity as its main materials. Moreover, this innovative technology avoids the need for special equipment and worker safety training, as well as the transport and storage of hazardous materials. Electro-activation is a technology based on applied electrochemistry for the generation of acidic and alkaline solutions via the oxidation-reduction reactions that occur in the vicinity of the electrode/solution interfaces. It is an eco-friendly process that can be used to replace conventional acidic and alkaline extraction. In this research, electro-activation for protein extraction from soybean meal was carried out in an electro-activation reactor. This reactor consists of three compartments separated by cation and anion exchange membranes, which allow the creation of non-contacting acidic and basic solutions. Different current intensities (150 mA, 300 mA and 450 mA) and treatment durations (10 min, 30 min and 50 min) were tested. The results showed that the extracts obtained by the electro-activation method have good quality in comparison to conventional extracts. For instance, the extractability obtained with the electro-activation method was 55%, whereas with the conventional method it was only 36%. Moreover, a maximum protein content of 48% in the extract was obtained with electro-activation, compared to a maximum of 41% by conventional extraction. Hence, this environmentally sustainable electro-activation technology seems to be a promising type of protein extraction that can replace conventional extraction technology.

Keywords: by-products, eco-friendly technology, electro-activation, soybean meal

Procedia PDF Downloads 228
5285 Times2D: A Time-Frequency Method for Time Series Forecasting

Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan

Abstract:

Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduce significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as autoregressive integrated moving average (ARIMA) and exponential smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data. However, they often suffer from locality, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle to capture relationships between distant time points due to the locality of one-dimensional convolution kernels. Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrated that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
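
A minimal sketch of the two parallel 2D views described above, using an invented toy series: a spectrogram for the frequency-domain view and stacked first/second derivatives for the time-domain view. The real Times2D pipeline feeding these representations into its network is not reproduced here.

```python
import numpy as np
from scipy.signal import spectrogram

# Toy series: trend + two periodic components + noise.
t = np.arange(0, 20, 0.01)
x = 0.1 * t + np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 4 * t)
x += 0.1 * np.random.default_rng(0).normal(size=t.size)

# Frequency-domain view: 2D spectrogram capturing periodicity.
f, seg_t, Sxx = spectrogram(x, fs=100, nperseg=256)

# Time-domain view: first/second derivatives stacked as a 2D "heatmap"
# emphasizing sharp fluctuations and turning points.
d1 = np.gradient(x)
d2 = np.gradient(d1)
heatmap = np.stack([d1, d2])

print(Sxx.shape, heatmap.shape)  # two 2D inputs for a vision-style backbone
```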

Keywords: derivative patterns, spectrogram, time series forecasting, times2D, 2D representation

Procedia PDF Downloads 42
5284 Estimation of Endogenous Brain Noise from Brain Response to Flickering Visual Stimulation

Authors: Alexander N. Pisarchik, Parth Chholak

Abstract:

Intrinsic brain noise was estimated from magnetoencephalograms (MEG) recorded during the perception of flickering visual stimuli with frequencies of 6.67 and 8.57 Hz. First, we measured the mean phase difference between the flicker signal and the steady-state event-related field (SSERF) in the occipital area, where the brain response at the flicker frequencies and their harmonics appeared in the power spectrum. Then, we calculated the probability distribution of the phase fluctuations in the regions of frequency locking and computed its kurtosis. Since kurtosis is a measure of a distribution's sharpness, we suppose that inverse kurtosis is related to intrinsic brain noise. In our experiments, the kurtosis value varied among subjects from K = 3 to K = 5 for 6.67 Hz and from 2.6 to 4 for 8.57 Hz. The majority of subjects demonstrated leptokurtic distributions (K > 3), i.e., distributions whose tails approach zero more slowly than a Gaussian. In addition, we found a strong correlation between kurtosis and brain complexity measured as the correlation dimension, such that the MEGs of subjects with higher kurtosis exhibited lower complexity. The obtained results are discussed in the framework of nonlinear dynamics and complex network theories. Specifically, in a network of coupled oscillators, phase synchronization is mainly determined by two antagonistic factors: noise and the coupling strength. While noise worsens phase synchronization, coupling improves it. If we assume that each neuron and each synapse contribute to brain noise, then a larger neuronal network should have stronger noise, and therefore phase synchronization should be worse, which results in smaller kurtosis. The described method for brain noise estimation can be useful for the diagnosis of brain pathologies associated with abnormal brain noise.
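
The phase-kurtosis computation can be sketched on synthetic data as follows: extract instantaneous phases with the Hilbert transform, wrap the phase difference, and take the Pearson kurtosis (Gaussian = 3). The signal parameters below are invented.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis

fs, f_stim = 500, 6.67                     # sampling rate (Hz), flicker (Hz)
t = np.arange(0, 10, 1 / fs)
stim = np.sin(2 * np.pi * f_stim * t)      # flicker reference signal
meg = np.sin(2 * np.pi * f_stim * t + 0.4  # synthetic SSERF with phase noise
             + 0.3 * np.random.default_rng(1).normal(size=t.size))

dphi = np.angle(hilbert(meg)) - np.angle(hilbert(stim))
dphi = np.angle(np.exp(1j * dphi))         # wrap phase difference to (-pi, pi]

print(round(kurtosis(dphi, fisher=False), 2))  # Pearson kurtosis; Gaussian = 3
```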

Keywords: brain, flickering, magnetoencephalography, MEG, visual perception, perception time

Procedia PDF Downloads 148
5283 The Friendship Network Stability of Preschool Children during One Pedagogical Season

Authors: Yili Wang, Jarmo Kinos, Tuire Palonen, Tarja-Riitta Hurme

Abstract:

This longitudinal study aims to examine how five- and six-year-old children’s peer relationships are formed and fostered during one preschool year in a southwestern Finnish preschool. All 16 kindergarteners participated in the study (at dyad level N=240; i.e., 16 x 15 relationships among the children). The children were divided into four daily groups, based on the table order during the daily routines, and four intervention groups, based on the teachers’ pedagogical plan. During the intervention, one iPad was given to each group in order to stimulate interaction among peers and, thus, enable the children to form new peer relationships. In the data gathering, sociometric nomination techniques were used to investigate the nature (i.e., stability and mutuality) of the peer relationships. The data was collected five times during the year to see what kind of peer relationship changes occurred at the dyad level and the group level, i.e., in establishing and losing friendship ties among the children. Social network analyses were used to analyze the data. The results indicate that the children’s preference for gender segregation was strong compared to age preference and intervention. In all, the number of reciprocal friendship ties and the mutual absence of friendship ties increased towards the end of the year, whereas the number of unilateral friendship ties decreased. This indicates that children’s nominations narrow down; thus, the group structure becomes more crystalized. Instead of extending their friendship networks, children seek stable and mutual relationships with their peers in their middle childhood years. The intervention only had a slightly negative influence on children’s peer relationships.

Keywords: intervention study, peer relationship, preschool education, social network analysis, sociometric ratings

Procedia PDF Downloads 273
5282 Generalized Rough Sets Applied to Graphs Related to Urban Problems

Authors: Mihai Rebenciuc, Simona Mihaela Bibic

Abstract:

A branch of modern mathematics, graph theory provides instruments for optimization and for solving practical applications in various fields such as economic networks, engineering, network optimization, the geometry of social action and, generally, complex systems including contemporary urban problems (path or transport efficiency, biourbanism, etc.). This paper studies the interconnection of urban networks, which leads to the problem of simulating one digraph by another; the simulation may be univocal or, more generally, multivocal. The concepts of fragment and atom are very useful in the study of connectivity in the simulating digraph, including an alternative evaluation of k-connectivity. The rough set approach to (bi)digraphs, proposed here for the first time, contributes to a significantly improved evaluation of k-connectivity. This approach is based on generalized rough sets, whose basic facts are presented in the paper.
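
One common way to instantiate rough approximations on a digraph (our illustration under assumed definitions; the paper’s generalized construction may differ) describes each vertex by its out-neighbourhood N(v): a vertex set X then has lower approximation {v : N(v) ⊆ X} and upper approximation {v : N(v) ∩ X ≠ ∅}, and the gap between them measures how “roughly” X is seen through the network structure:

```python
def out_neighbourhood(edges, v):
    """Successors of v in the digraph given by an edge set."""
    return {b for a, b in edges if a == v}

def approximations(vertices, edges, X):
    """Lower/upper rough approximations of X via out-neighbourhoods."""
    lower = {v for v in vertices if out_neighbourhood(edges, v) <= X}
    upper = {v for v in vertices if out_neighbourhood(edges, v) & X}
    return lower, upper

V = {1, 2, 3, 4}                                # hypothetical urban nodes
E = {(1, 2), (2, 3), (3, 1), (4, 2), (4, 3)}    # hypothetical links
lower, upper = approximations(V, E, X={2})
print(lower, upper)   # {1} {1, 4}; the boundary {4} quantifies roughness
```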

Keywords: (bi)digraphs, rough set theory, systems of interacting agents, complex systems

Procedia PDF Downloads 243
5281 Fake Accounts Detection in Twitter Based on Minimum Weighted Feature Set

Authors: Ahmed ElAzab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny

Abstract:

Social networking sites such as Twitter and Facebook attract over 500 million users across the world; for those users, their social life, and even their practical life, has become intertwined with these platforms, and their interaction with social networking has affected their lives forever. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity has led to different problems, including the possibility of exposing incorrect information to users through fake accounts, which results in the spread of malicious content during live events. This situation can cause huge real-world damage to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines the minimized set of the main factors that influence the detection of fake accounts on Twitter; the determined factors are then applied using different classification techniques, the results of these techniques are compared, and the most accurate algorithm is selected according to the accuracy of its results. The study has been compared with recent research in the same area, and this comparison confirms the accuracy of the proposed study. We claim that this method can be applied continuously on the Twitter social network to automatically detect fake accounts; moreover, it can be applied to other social networking sites such as Facebook with minor changes according to the nature of the social network, as discussed in this paper.
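
The following Python sketch (an assumed pipeline using scikit-learn with synthetic stand-in data, not the authors’ feature set or code) mirrors the described workflow: reduce to a small feature subset, train several classifiers, and keep the most accurate one:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for per-account features (e.g. friends/followers
# ratio, tweet frequency, profile completeness) with fake/genuine labels.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=6,
                           random_state=0)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
}
scores = {}
for name, clf in candidates.items():
    # SelectKBest keeps only the k most informative features (the
    # "minimized feature set"), then the classifier is trained on them.
    pipe = make_pipeline(SelectKBest(f_classif, k=7), clf)
    scores[name] = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {scores[name]:.3f}")
print("selected:", max(scores, key=scores.get))
```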

Keywords: fake accounts detection, classification algorithms, twitter accounts analysis, features based techniques

Procedia PDF Downloads 416
5280 Multi-source Question Answering Framework Using Transformers for Attribute Extraction

Authors: Prashanth Pillai, Purnaprajna Mangsuli

Abstract:

Oil exploration and production companies invest considerable time and effort to extract essential well attributes (like well status, surface and target coordinates, wellbore depths, event timelines, etc.) from unstructured data sources like technical reports, which are often non-standardized, multimodal, and highly domain-specific by nature. It is also important to consider the context when extracting attribute values from reports that contain information on multiple wells/wellbores. Moreover, semantically similar information may often be depicted in different data syntax representations across multiple pages and document sources. We propose a hierarchical multi-source fact extraction workflow based on a deep learning framework to extract essential well attributes at scale. An information retrieval module based on the transformer architecture is used to rank relevant pages in a document source utilizing page image embeddings and semantic text embeddings. A question answering framework utilizing the LayoutLM transformer is then used to extract attribute-value pairs, incorporating the text semantics and layout information from the top relevant pages in a document. To better handle context while dealing with multi-well reports, we incorporate a dynamic query generation module to resolve ambiguities. The attribute information extracted from various pages and documents is standardized to a common representation using a parser module to facilitate information comparison and aggregation. Finally, we use a probabilistic approach to fuse information extracted from multiple sources into a coherent well record. The applicability of the proposed approach and its performance were studied on several real-life well technical reports.
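
A text-only Python sketch of the two-stage retrieve-then-extract idea follows (our simplification: it drops the image/layout embeddings and the LayoutLM reader; the model names and sample pages are illustrative assumptions):

```python
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

# Illustrative report pages; a real source would be parsed from PDFs.
pages = [
    "Well A-12 was spudded on 03 Jan 2019 and reached a target depth of 3450 m.",
    "The surface coordinates of well A-12 are 58.44 N, 1.92 E.",
    "Daily mud losses were reported during the 12-1/4 in. hole section.",
]
query = "What is the target depth of well A-12?"

# Stage 1: rank pages by semantic similarity to the query.
retriever = SentenceTransformer("all-MiniLM-L6-v2")
scores = util.cos_sim(retriever.encode(query), retriever.encode(pages))[0]
best_page = pages[int(scores.argmax())]

# Stage 2: extractive question answering on the top-ranked page.
reader = pipeline("question-answering", model="deepset/roberta-base-squad2")
answer = reader(question=query, context=best_page)
print(answer["answer"])   # expected: "3450 m"
```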

Keywords: natural language processing, deep learning, transformers, information retrieval

Procedia PDF Downloads 193
5279 Scheduling in a Single-Stage, Multi-Item Compatible Process Using Multiple Arc Network Model

Authors: Bokkasam Sasidhar, Ibrahim Aljasser

Abstract:

We consider the problem of finding optimal schedules for each piece of equipment in a production process that consists of a single manufacturing stage and can handle different types of products, where changing over from one product type to another incurs certain costs. The machine capacity is determined by the upper limit on the quantity that can be processed for each product in a setup. The changeover costs increase with the number of setups; hence, to minimize the costs associated with product changeovers, the planning should process similar types of products successively so that the total number of changeovers, and in turn the associated setup costs, are minimized. The problem of cost minimization is equivalent to minimizing the number of setups or, equivalently, maximizing the capacity utilization between setups, i.e., maximizing the total capacity utilization. Further, production is usually planned against customers’ orders, and different customers’ orders are generally assigned one of two priorities: “normal” or “priority”. The production planning problem in such a situation can be formulated as a Multiple Arc Network (MAN) model and solved sequentially using the algorithm for maximizing flow along a MAN and the algorithm for maximizing flow along a MAN with priority arcs. The model aims to provide an optimal production schedule with the objective of maximizing capacity utilization, so that the customer-wise delivery schedules are fulfilled while keeping the customer priorities in view. Algorithms are presented for solving the MAN formulation of production planning with customer priorities, and the application of the model is demonstrated through numerical examples.
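
As a toy illustration of the flow formulation (our simplification using networkx’s standard max-flow solver, not the paper’s MAN algorithms, and with hypothetical numbers; priority arcs are omitted), order quantities enter through source arcs, product-to-machine arcs are capped by the per-setup capacity, and the machine-to-sink arc carries the total capacity, so the maximum flow gives the utilized capacity:

```python
import networkx as nx

orders = {"P1": 60, "P2": 40, "P3": 50}   # hypothetical order quantities
setup_capacity = 50                        # max quantity processed per setup
total_capacity = 120                       # machine capacity over the horizon

G = nx.DiGraph()
for product, qty in orders.items():
    G.add_edge("source", product, capacity=qty)          # demand arc
    G.add_edge(product, "machine", capacity=setup_capacity)  # one setup each
G.add_edge("machine", "sink", capacity=total_capacity)

flow_value, flow = nx.maximum_flow(G, "source", "sink")
print("utilized capacity:", flow_value)   # 120 of 120 here
for product in orders:
    print(product, "scheduled:", flow[product]["machine"])
```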

Keywords: scheduling, maximal flow problem, multiple arc network model, optimization

Procedia PDF Downloads 402