Search results for: fractional programming
200 Contribution of Spatial Teledetection to the Geological Mapping of the Imiter Buttonhole: Application to the Mineralized Structures of the Principal Corps B3 (CPB3) of the Imiter Mine (Anti-atlas, Morocco)
Authors: Bouayachi Ali, Alikouss Saida, Baroudi Zouhir, Zerhouni Youssef, Zouhair Mohammed, El Idrissi Assia, Essalhi Mourad
Abstract:
The world-class Imiter silver deposit is located on the northern flank of the Precambrian Imiter buttonhole. The deposit is formed by epithermal veins hosted in the sandstone-pelite formations of the lower complex and in the basic conglomerates of the upper complex; these veins are controlled by a regional-scale fault cluster oriented N70°E to N90°E. The present work examines the contribution of remote sensing to the geological mapping of the Imiter buttonhole and its application to the mineralized structures of the Principal Corps B3. Mapping from satellite images is a very important tool in mineral prospecting: it allows the localization of zones of interest in order to orient field missions by helping to locate the major structures, which facilitates the interpretation, programming and orientation of the mining works. The predictive map also allows the correction of field mapping work, especially the direction and dimensions of structures such as dykes, corridors or scrapings. The use of a series of processing techniques such as SAM, PCA, MNF and unsupervised and supervised classification on a Landsat 8 satellite image of the study area allowed us to highlight the main facies of the Imiter area. To improve the exploration research, we used further processing to produce a spatial distribution of alteration mineral indices, and we applied several filters to the different bands to obtain lineament maps.
Keywords: principal corps B3, teledetection, Landsat 8, Imiter II, silver mineralization, lineaments
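As a hedged illustration only (not the authors' actual processing chain), the sketch below shows how a principal component analysis of stacked Landsat 8 bands can be computed in Python with scikit-learn; the file name and the use of rasterio for raster I/O are assumptions made for the example.

```python
# Illustrative sketch: PCA on stacked Landsat 8 bands (assumed file layout).
import numpy as np
import rasterio                      # assumed raster I/O library
from sklearn.decomposition import PCA

# Assumed single multiband GeoTIFF holding the Landsat 8 OLI bands of the scene.
with rasterio.open("landsat8_imiter_stack.tif") as src:
    stack = src.read()               # shape: (bands, rows, cols)

bands, rows, cols = stack.shape
pixels = stack.reshape(bands, -1).T  # one row per pixel, one column per band

pca = PCA(n_components=3)            # keep the first three principal components
pc = pca.fit_transform(pixels)       # shape: (rows*cols, 3)
pc_images = pc.T.reshape(3, rows, cols)

print("explained variance ratio:", pca.explained_variance_ratio_)
# pc_images can now be written back to a GeoTIFF or visualised as an RGB composite.
```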
Procedia PDF Downloads 96
199 Using the SMT Solver to Minimize the Latency and to Optimize the Number of Cores in NoC-DSP Architectures
Authors: Imen Amari, Kaouther Gasmi, Asma Rebaya, Salem Hasnaoui
Abstract:
The problem of scheduling and mapping data flow applications on multi-core architectures is notoriously difficult. This difficulty is related to the rapid evolution of telecommunication and multimedia systems, accompanied by a rapid increase in user requirements in terms of latency, execution time, consumption, energy, etc. Obtaining an optimal schedule on multi-core DSP (Digital Signal Processor) platforms is a challenging task. In this context, we present a novel technique and algorithm to find a valid schedule that optimizes the key performance metrics, particularly the latency. Our contribution is based on Satisfiability Modulo Theories (SMT) solving technologies, which are strongly driven by industrial applications and needs. This paper describes a scheduling module integrated in our proposed workflow, which is intended to be a successful approach for programming applications based on NoC-DSP platforms. The workflow automatically transforms a Simulink model into a synchronous dataflow (SDF) model. The automatic transformation, followed by SMT-solver scheduling, aims to minimize the final latency and other software/hardware metrics in terms of an optimal schedule, and to find the optimal number of cores to be used. Our proposed workflow takes as its entry point a Simulink file (.mdl or .slx) derived from embedded Matlab functions. We use an approach that is based on the synchronous and hierarchical behavior of both Simulink and SDF. Hence, running the scheduler contained in the workflow with our proposed SMT-solver algorithm refinements produces the best possible schedule in terms of latency and number of cores.
Keywords: multi-cores DSP, scheduling, SMT solver, workflow
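The paper's SMT encoding is not given in the abstract; as a minimal, hedged sketch of the idea, the snippet below uses the Z3 Python bindings (an assumption; any SMT solver with optimization support would do) to declare integer start times for a tiny hypothetical task graph, add one precedence and one resource constraint, and minimize the latency.

```python
# Minimal sketch of SMT-based latency minimization with Z3 (z3-solver package).
from z3 import Int, Optimize, Or, sat

# Hypothetical tiny SDF graph: three actors with execution times and one precedence edge.
exec_time = {"A": 3, "B": 2, "C": 4}
start = {t: Int(f"start_{t}") for t in exec_time}
latency = Int("latency")

opt = Optimize()
for t, s in start.items():
    opt.add(s >= 0)
    opt.add(latency >= s + exec_time[t])            # latency covers every task's finish time

opt.add(start["B"] >= start["A"] + exec_time["A"])  # B depends on A

# Single-core resource constraint between the independent actors A and C:
opt.add(Or(start["C"] >= start["A"] + exec_time["A"],
           start["A"] >= start["C"] + exec_time["C"]))

opt.minimize(latency)
if opt.check() == sat:
    m = opt.model()
    print({t: m[s] for t, s in start.items()}, "latency =", m[latency])
```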
Procedia PDF Downloads 288
198 The Importance of Visual Communication in Artificial Intelligence
Authors: Manjitsingh Rajput
Abstract:
Visual communication plays an important role in artificial intelligence (AI) because it enables machines to understand and interpret visual information, similar to how humans do. This abstract explores the importance of visual communication in AI and emphasizes various applications such as computer vision, object recognition, image classification and autonomous systems, going deeper into the deep learning techniques and neural networks that underpin visual understanding. In addition, the abstract discusses challenges facing visual interfaces for AI, such as data scarcity, domain optimization, and interpretability, and explores the integration of visual communication with other modalities such as natural language processing and speech recognition. Overall, this abstract highlights the critical role that visual communication plays in advancing AI capabilities and enabling machines to perceive and understand the world around them. The methodology explores the importance of visual communication in AI development and implementation, highlighting its potential to enhance the effectiveness and accessibility of AI systems, and provides a comprehensive approach to integrating visual elements into AI systems, making them more user-friendly and efficient. In conclusion, visual communication is crucial in AI systems for object recognition, facial analysis, and augmented reality, but challenges such as data quality, interpretability, and ethics must be addressed. Visual communication enhances user experience, decision-making, accessibility, and collaboration, and developers can integrate visual elements to build efficient and accessible AI systems.
Keywords: visual communication AI, computer vision, visual aid in communication, essence of visual communication
Procedia PDF Downloads 97
197 An MIPSSTWM-based Emergency Vehicle Routing Approach for Quick Response to Highway Incidents
Authors: Siliang Luan, Zhongtai Jiang
Abstract:
The risk of highway incidents is commonly recognized as a major concern for transportation authorities due to their hazardous consequences and negative influence. For emergency management decision makers, it is crucial to respond to these unpredictable events as soon as possible. In this paper, we focus on path planning for emergency vehicles, one of the most significant processes for avoiding congestion and reducing rescue time. A Mixed-Integer Linear Programming with Semi-Soft Time Windows Model (MIPSSTWM) is formulated to plan an optimal route, considering the time consumption of both the arcs and the nodes of the urban road network and the highway network, which is especially relevant in developing countries with an enormous population. Here, the arcs indicate the road segments, and the nodes include the intersections of the urban road network and the on-ramps and off-ramps of the highway network. This research attempts to develop a comprehensive and executable strategy for emergency vehicle routing in heavy traffic conditions. The proposed Cuckoo Search (CS) algorithm is designed by imitating the obligate brood-parasitic behavior of cuckoos and Lévy Flights (LF) to solve this hard combinatorial problem. Using a Chinese city as our case study, the numerical results demonstrate that the approach applied in this paper outperforms the previous method, which does not consider the nodes of the road network, for a real-world situation. Meanwhile, the accuracy and validity of the CS algorithm also show better performance than the traditional algorithm.
Keywords: emergency vehicle, path planning, CS algorithm, urban traffic management and urban planning
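For readers unfamiliar with the metaheuristic, the sketch below is a generic Cuckoo Search with Mantegna-style Lévy flights applied to a simple continuous test function; it only illustrates the mechanism, not the authors' routing-specific encoding, and all parameter values are assumptions.

```python
# Generic Cuckoo Search with Levy flights (illustrative; continuous toy problem,
# not the authors' emergency-routing formulation).
import math
import numpy as np

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(cost, dim=2, n_nests=15, pa=0.25, iters=200, lb=-5.0, ub=5.0):
    nests = np.random.uniform(lb, ub, (n_nests, dim))
    fitness = np.array([cost(x) for x in nests])
    best = nests[fitness.argmin()].copy()
    for _ in range(iters):
        for i in range(n_nests):
            # New solution via a Levy flight biased towards the current best nest.
            new = np.clip(nests[i] + 0.01 * levy_step(dim) * (nests[i] - best), lb, ub)
            j = np.random.randint(n_nests)          # random nest to compare against
            if cost(new) < fitness[j]:
                nests[j], fitness[j] = new, cost(new)
        # Abandon a fraction pa of the worst nests and rebuild them at random.
        worst = fitness.argsort()[-max(1, int(pa * n_nests)):]
        nests[worst] = np.random.uniform(lb, ub, (len(worst), dim))
        fitness[worst] = [cost(x) for x in nests[worst]]
        best = nests[fitness.argmin()].copy()
    return best, float(fitness.min())

best, val = cuckoo_search(lambda x: float(np.sum(x ** 2)))  # toy objective
print(best, val)
```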
Procedia PDF Downloads 82
196 Feasibility Study of MongoDB and Radio Frequency Identification Technology in Asset Tracking System
Authors: Mohd Noah A. Rahman, Afzaal H. Seyal, Sharul T. Tajuddin, Hartiny Md Azmi
Abstract:
Considering real-world situations, higher academic institutions; small, medium and large companies; the public and private sectors; and other organizations all experience inventory or asset shrinkage due to theft, loss or inventory tracking errors. This happens because of absent or poor security systems and measures in these organizations. Henceforth, implementing Radio Frequency Identification (RFID) technology in a manual or existing web-based system or web application can deter and eventually solve several major issues and serve better data retrieval and data access. Such a manual or existing system can also be enhanced into a mobile-based system or application, and the availability of internet connections can support better services. The involvement of these various technologies brings benefits to individuals and organizations in terms of accessibility, availability, mobility, efficiency, effectiveness, real-time information and security. This paper looks deeper into the integration of mobile devices with RFID technologies for the purpose of asset tracking and control. This is followed by the development and utilization of MongoDB as the main database to store data and its association with the RFID technology. Finally, a web-based system that can also be viewed in a mobile format is developed with the aid of Hypertext Preprocessor (PHP), MongoDB, Hypertext Markup Language 5 (HTML5), Android, JavaScript and AJAX.
Keywords: RFID, asset tracking system, MongoDB, NoSQL
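The paper builds its backend in PHP; as a hedged illustration of the data model only, the sketch below shows how RFID tag reads might be stored and queried in MongoDB using Python and pymongo. The collection layout and field names are assumptions for the example, not the authors' schema.

```python
# Illustrative RFID asset-tracking data model with MongoDB (pymongo).
from datetime import datetime, timezone
from pymongo import MongoClient, DESCENDING

client = MongoClient("mongodb://localhost:27017")   # assumed local MongoDB instance
db = client["asset_tracking"]

# Store one tag read event per document (schema is an assumption for the example).
db.tag_reads.insert_one({
    "tag_id": "E200-3412-0123",      # hypothetical EPC tag identifier
    "asset_name": "Projector-Lab-3",
    "reader_id": "GATE-01",
    "location": "Block A, Level 2",
    "ts": datetime.now(timezone.utc),
})

# Last known location of an asset: the newest read for that tag.
last_read = db.tag_reads.find_one({"tag_id": "E200-3412-0123"},
                                  sort=[("ts", DESCENDING)])
print(last_read["location"], last_read["ts"])

# An index on (tag_id, ts) keeps this query fast as the collection grows.
db.tag_reads.create_index([("tag_id", 1), ("ts", DESCENDING)])
```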
Procedia PDF Downloads 307
195 Pavement Failures and Their Maintenance
Authors: Maulik L. Sisodia, Tirth K. Raval, Aarsh S. Mistry
Abstract:
This paper summarizes ongoing research on the defects in both flexible and rigid pavements and on their maintenance. Various defects have been identified since the existence of both flexible and rigid pavements. Flexible pavement failure is defined in terms of decreasing serviceability caused by the development of cracks, ruts, potholes, etc.; a flexible pavement structure can be destroyed in a single season due to water penetration. Defects in flexible pavements are a problem of multiple dimensions: the phenomenal growth of vehicular traffic (in terms of the number and axle loading of commercial vehicles), the rapid expansion of the road network, the non-availability of suitable technology, materials, equipment and skilled labor, and poor funds allocation have all added complexity to the problem. In rigid pavements, different types of distress lead to failures such as joint spalling, faulting, shrinkage cracking, punch-outs and corner breaks. Applying corrections to the existing surface will enhance the life of the maintenance works as well as that of the strengthening layer. Maintenance of a road network involves a variety of operations, i.e., identification of deficiencies; planning, programming and scheduling for actual implementation in the field; and monitoring. The essential objective should be to keep the road surface and appurtenances in good condition and to extend the life of the road assets to their design life. The paper describes lessons learnt from pavement failures and problems experienced during the last few years on a number of projects in India. Broadly, the activities include identification of defects and their possible causes, determination of appropriate remedial measures, implementation of these measures in the field, and monitoring of the results.
Keywords: flexible pavements, rigid pavements, defects, maintenance
Procedia PDF Downloads 174
194 Production and Distribution Network Planning Optimization: A Case Study of a Large Cement Company
Authors: Lokendra Kumar Devangan, Ajay Mishra
Abstract:
This paper describes the implementation of a large-scale SAS/OR model with significant pre-processing, scenario analysis, and post-processing work done using SAS. A large cement manufacturer, with ten geographically distributed manufacturing plants for two variants of cement, around 400 warehouses serving as transshipment points, and several thousand distributor locations generating demand, needed to optimize this multi-echelon, multi-modal transport supply chain separately for planning and allocation purposes. For monthly planning as well as daily allocation, the demand is deterministic. Rail and road networks connect any two points in this supply chain, creating tens of thousands of such connections. Constraints include the plants' production capacities, transportation capacities, and rail wagon batch sizes. Each demand point has a minimum and a maximum for shipments received. Price varies at demand locations due to local factors. A large mixed-integer programming model built using proc OPTMODEL decides production at plants, demand fulfilled at each location, and the shipment route to demand locations so as to maximize the profit contribution. Using base SAS, we did significant pre-processing of data and created inputs for the optimization. Using outputs generated by OPTMODEL and other processing completed in base SAS, we generated several reports that went into their enterprise system and created tables for easy consumption of the optimization results by operations.
Keywords: production planning, mixed integer optimization, network model, network optimization
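The authors' model is written in SAS proc OPTMODEL and is not reproduced here; as a hedged, much smaller illustration of the same structure (plant capacities, demand windows, route choice, profit maximization), the sketch below uses the open-source PuLP library, with all data values invented for the example.

```python
# Tiny illustrative production-distribution model in PuLP (not the authors' SAS model).
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpStatus, value

plants = {"P1": 900, "P2": 600}                      # production capacity (tons), assumed
demand = {"D1": (200, 500), "D2": (300, 700)}        # (min, max) shipment per demand point
price  = {"D1": 110.0, "D2": 95.0}                   # selling price per ton, assumed
cost   = {("P1", "D1"): 20.0, ("P1", "D2"): 35.0,    # transport cost per ton per route
          ("P2", "D1"): 30.0, ("P2", "D2"): 18.0}
prod_cost = {"P1": 55.0, "P2": 60.0}

m = LpProblem("cement_network", LpMaximize)
ship = {(p, d): LpVariable(f"ship_{p}_{d}", lowBound=0) for p in plants for d in demand}

# Profit contribution = revenue - production cost - transport cost.
m += lpSum(ship[p, d] * (price[d] - prod_cost[p] - cost[p, d]) for p in plants for d in demand)

for p, cap in plants.items():                         # plant capacity
    m += lpSum(ship[p, d] for d in demand) <= cap
for d, (lo, hi) in demand.items():                    # demand window at each location
    m += lpSum(ship[p, d] for p in plants) >= lo
    m += lpSum(ship[p, d] for p in plants) <= hi

m.solve()
print(LpStatus[m.status], {k: value(v) for k, v in ship.items()}, "profit =", value(m.objective))
```

The real model adds integer rail-wagon batch variables and thousands of routes; the sketch keeps only the continuous core to show the shape of the formulation.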
Procedia PDF Downloads 71
193 Design and Integration of a Renewable Energy Based Polygeneration System with Desalination for an Industrial Plant
Authors: Lucero Luciano, Cesar Celis, Jose Ramos
Abstract:
Polygeneration improves energy efficiency and reduces both energy consumption and pollutant emissions compared to conventional generation technologies. A polygeneration system is a variation of a cogeneration one, in which more than two outputs, i.e., heat, power, cooling, water, energy or fuels, are accounted for. In particular, polygeneration systems integrating solar energy and water desalination represent promising technologies for energy production and water supply. They are therefore interesting options for coastal regions with a high solar potential, such as those located in southern Peru and northern Chile. Notice that most of the Peruvian and Chilean mining industry operations, which are intensive in electricity and water consumption, are located in these particular regions. Accordingly, this work focuses on the design and integration of a polygeneration system producing industrial heating, cooling, electrical power and water for an industrial plant. The design procedure followed in this work involves mixed-integer linear programming (MILP) modeling, operational planning and dynamic operating conditions. The technical and economic feasibility of integrating renewable energy technologies (photovoltaic and solar thermal, PV+CPS), thermal energy storage, power and thermal exchange, absorption chillers, cogeneration heat engines and desalination technologies is particularly assessed. The polygeneration system integration carried out seeks to minimize the system total annual cost subject to CO2 emission restrictions. Particular economic aspects accounted for include investment, maintenance and operating costs.
Keywords: desalination, design and integration, polygeneration systems, renewable energy
Procedia PDF Downloads 126
192 Optimization Method of the Number of Berths at Bus Rapid Transit Stations Based on Passenger Flow Demand
Authors: Wei Kunkun, Cao Wanyang, Xu Yujie, Qiao Yuzhi, Liu Yingning
Abstract:
The reasonable design of bus parking spaces can improve the traffic capacity of the station and reduce traffic congestion. In order to reasonably determine the number of berths at BRT (Bus Rapid Transit) stops, this work is based on actual bus rapid transit station observation data, scheduling data, and passenger flow data, and optimizes the number of station berths from the perspective of balancing supply and demand at the site. Combined with the classical capacity calculation model, this paper first analyzes the important factors affecting the traffic capacity of BRT stops by using SPSS PRO and MATLAB programming software, namely the distribution of BRT stops and the distribution of BRT stop (dwell) time. Secondly, the berth-number calculation method of the classic Highway Capacity Manual (HCM) model is optimized based on the actual passenger demand of the station, and a method applicable to the actual number of station berths is proposed. Taking Gangding Station of the Zhongshan Avenue Bus Rapid Transit Corridor in Guangzhou as an example, based on the calculation method proposed in this paper, the number of berths at sub-station 1, sub-station 2 and sub-station 3 is 2, which reduces the road space of the station by 33.3% compared with the previous 3 berths at each sub-station and returns the space to social vehicles. Therefore, under the condition of ensuring the passenger flow demand of BRT stations, the road space of the station is reduced and returned to social vehicles, the traffic capacity of social vehicles is improved, and the traffic capacity and efficiency of the BRT corridor system are improved as a whole.
Keywords: urban transportation, bus rapid transit station, HCM model, capacity, number of berths
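For orientation only, the sketch below shows an HCM/TCQSM-style loading-area (berth) capacity check and a simple berth-count estimate. The formula is quoted from memory and the numerical inputs are invented, so treat both as assumptions rather than the paper's calibrated method; it also ignores the reduced effectiveness of additional berths that the manuals account for.

```python
# Hedged sketch of a berth capacity check (verify the formula against the HCM/TCQSM).
import math

def loading_area_capacity(t_c, t_d, cv, z, g_over_c=1.0):
    """Buses per hour one loading area (berth) can serve.
    t_c: clearance time (s), t_d: mean dwell time (s),
    cv: coefficient of variation of dwell times,
    z: standard normal value for the design failure rate (e.g. 1.28 for about 10%)."""
    return 3600.0 * g_over_c / (t_c + t_d * g_over_c + z * cv * t_d)

def berths_needed(design_bus_volume, t_c=10.0, t_d=30.0, cv=0.6, z=1.28):
    per_berth = loading_area_capacity(t_c, t_d, cv, z)
    return math.ceil(design_bus_volume / per_berth)

# Hypothetical numbers only: 150 BRT buses per hour, 30 s mean dwell time.
print(berths_needed(150))
```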
Procedia PDF Downloads 95
191 Contraceptive Uptake among Women in Low Socio-Economic Areas in Kenya: Quantitative Analysis of Secondary Data
Authors: J. Waita, S. Wamuhu, J. Makoyo, M. Rachel, T. Ngangari, W. Christine, M. Zipporah
Abstract:
Contraceptive use is one of the key global strategies to alleviate maternal mortality. Global efforts in advocating for contraceptive uptake and service provision have led to improved contraceptive prevalence. In Kenya, the maternal mortality rate has remained a challenge despite efforts by government and non-governmental organizations. Objective: To describe the uptake of contraceptives among women in Tunza Clinics, Kenya. Design and Methods: PS Kenya, through the health care marketing fund, is implementing a family planning program among its 350 Tunza fractional franchise facilities. Through private partnership, privately owned facilities in low socio-economic areas are recruited and trained on contraceptive technology updates. The providers are supported through facilitative supervision via a mobile-based application, the Health Network Quality Improvement System (HNQIS), and through interpersonal communication by 150 community-based volunteers. The data analyzed in this paper were collected between January and July 2017 to show the uptake of modern contraceptives among women in the Tunza franchise, the method mix, and the distribution across age brackets. Further analysis compares two different service delivery strategies: outreach and walk-ins. Supportive supervision HNQIS scores were also analyzed. Results: During the time period, a total of 132,121 family planning clients were attended to in 350 facilities. The average age of clients was 29.6 years, and the average number of clients attended to per month was 18,874. Of the clients attended to in the Tunza facilities, 73.7% (n=132,121) were aged above 25 years, while 22.1% were aged 20-24 years and 4.2% were aged 15-19 years. Regarding the contraceptive method mix, intrauterine device (IUD) insertions contributed 7.5%, implant insertions 15.3%, pills 11.2%, and injections 62.7%, while condoms and emergency pills accounted for 2.7% and 0.6%, respectively. Analysis of the service delivery strategy indicated that more than 79% of the clients were walk-ins, while 21% were attended to during outreaches. Uptake of long-term contraceptive methods during outreaches accounted for 73% of clients, while short-term modern methods accounted for 27%. HNQIS assessment scores indicated that 51% of the facilities scored over 90%, 25% scored 80-89%, while 21% scored below 80%. Conclusion: The preference for short-term methods by women is possibly associated with cost, as they are cheaper and easier to administer. When the cost of IUDs and implants is made affordable during outreaches, uptake is observed to increase. Making IUDs and implants affordable to women is a key strategy for increasing contraceptive prevalence and hence averting maternal mortality.
Keywords: contraceptives, contraceptive uptake, low socio economic, supportive supervision
Procedia PDF Downloads 169
190 Loading and Unloading Scheduling Problem in a Multiple-Multiple Logistics Network: Modelling and Solving
Authors: Yasin Tadayonrad
Abstract:
Most supply chain networks have many nodes, starting from the suppliers' side up to the customers' side, in which each node sends/receives raw materials/products to/from other nodes. One of the major concerns in this kind of supply chain network is finding the best schedule for loading/unloading the shipments through the whole network, such that all the constraints in the source and destination nodes are met and all the shipments are delivered on time. One of the main constraints in this problem is the loading/unloading capacity of each source/destination node at each time slot (e.g., per week/day/hour). Because of the different characteristics of different products/groups of products, the capacity of each node might differ for each product group. In most supply chain networks (especially in the fast-moving consumer goods industry), different planners/planning teams work separately in different nodes to determine the loading/unloading timeslots in source/destination nodes to send/receive the shipments. In this paper, a mathematical model is proposed to find the best timeslots for loading/unloading the shipments, minimizing the overall delays subject to the loading/unloading capacity of each node, the required delivery date of each shipment (considering the lead times), and the working days of each node. This model was implemented in Python and solved using Python-MIP on a sample data set. Finally, the idea of a heuristic algorithm is proposed as a way of improving the solution method, which helps to apply the model to larger data sets in real business cases, including more nodes and shipments.
Keywords: supply chain management, transportation, multiple-multiple network, timeslots management, mathematical modeling, mixed integer programming
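The full formulation is not given in the abstract; the sketch below is a minimal Python-MIP example of the core idea, assigning each shipment to a loading timeslot at its source node so that per-slot capacity is respected and lateness beyond the due slot is minimized. All data values and variable names are assumptions made for the example.

```python
# Minimal illustrative timeslot-assignment model with Python-MIP (pip install mip).
from mip import Model, xsum, minimize, BINARY

shipments = {"S1": 3, "S2": 4, "S3": 2}        # shipment -> due timeslot (assumed)
slots = range(1, 6)                             # planning horizon: timeslots 1..5
cap_per_slot = 2                                # loading capacity of the source node per slot

m = Model("loading_slots")
x = {(s, t): m.add_var(var_type=BINARY, name=f"x_{s}_{t}") for s in shipments for t in slots}

# Each shipment gets exactly one loading slot.
for s in shipments:
    m += xsum(x[s, t] for t in slots) == 1

# Node capacity per timeslot.
for t in slots:
    m += xsum(x[s, t] for s in shipments) <= cap_per_slot

# Minimize total lateness (number of slots loaded after the due slot).
m.objective = minimize(xsum(max(0, t - due) * x[s, t] for s, due in shipments.items() for t in slots))

m.optimize()
chosen = {s: t for (s, t), var in x.items() if var.x >= 0.99}
print(chosen)
```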
Procedia PDF Downloads 92
189 Analysis Study According to Some Physical and Mechanical Variables for Wrist Joint Injury
Authors: Nabeel Abdulkadhim Athab
Abstract:
The purpose of this research is to conduct a comparative study, using programmed analysis, of some physical and mechanical variables of wrist joint injury. Through this research it is possible to distinguish the amount of variation in the work of the joint after the sample underwent a rehabilitation program intended to improve the effectiveness of the joint and naturally restore it. The researcher hypothesized that there are statistically significant differences between the pre- and post-test results of the members of the research sample as a result of subjecting the sample to the rehabilitation program, which developed the activity of the muscles acting on the wrist joint and thus produced the observed differences between the pre- and post-tests. The researcher used the descriptive method. The research sample included six players with wrist joint injuries, with an average age of 21.68 years (standard deviation 1.13) and an average height of 178 cm (standard deviation 2.08); the sample was homogeneous. The data were collected and entered into a statistical processing program to reach the most important conclusions and recommendations, the most important being: 1. The sample's adherence to the rehabilitation program changed the studied process variables, reflecting the altered activity and effectiveness of the wrist joint in injured players. 2. The programmed analysis provided high accuracy in the measurement of the research variables, which made it possible to discern differences in motor ability between healthy and injured wrist joints. The recommendations include: 1. The use of computer systems in scientific research to obtain accurate research results. 2. Programming rehabilitation exercises according to an expert system for possible use by patients without direct reference to a specialist.
Keywords: analysis of joint wrist injury, physical and mechanical variables, wrist joint, wrist injury
Procedia PDF Downloads 431
188 Estimation of Particle Number and Mass Doses Inhaled in a Busy Street in Lublin, Poland
Authors: Bernard Polednik, Adam Piotrowicz, Lukasz Guz, Marzenna Dudzinska
Abstract:
Transportation is considered to be responsible for the increased exposure of road users, i.e., drivers, car passengers, and pedestrians, as well as inhabitants of houses located near roads, to pollutants emitted from vehicles. Accurate estimates are, however, difficult, as exposure depends on many factors such as traffic intensity or type of fuel as well as the topography and the built-up area around the individual routes. The season and weather conditions are also of importance. In the case of inhabitants of houses located near roads, their exposure depends on the distance from the road, window tightness and other factors that decrease pollutant infiltration. This work reports the variations of particle concentrations along a selected road in Lublin, Poland. Their impact on the exposure of road users as well as of inhabitants of houses located near the road is also presented. Mobile and fixed-site measurements were carried out in peak (around 8 a.m. and 4 p.m.) and off-peak (12 a.m., 4 a.m., and 12 p.m.) traffic times in all 4 seasons. Fixed-site measurements were performed at 12 measurement points along the route. The number and mass concentrations of particles were determined with the use of a P-Trak model 8525, an OPS 3330, a DustTrak DRX model 8533 (TSI Inc., USA) and a Grimm Aerosol Spectrometer 1.109 with Nano Sizer 1.321 (Grimm Aerosol, Germany). The obtained results indicated that the highest concentrations of traffic-related pollution were measured near 4-way traffic intersections (TIs) during peak hours in the autumn and winter. The highest average number concentration of ultrafine particles (PN0.1) and mass concentration of fine particles (PM2.5) in fixed-site measurements were obtained in the autumn and amounted to 23.6 ± 9.2×10³ pt/cm³ and 135.1 ± 11.3 µg/m³, respectively. The highest average number concentration of submicrometer particles (PN1) was measured in the winter and amounted to 68 ± 26.8×10³ pt/cm³. The estimated doses of particles deposited in the lungs within an hour near 4-way TIs in peak hours in the summer amounted to 4.3 ± 3.3×10⁹ pt/h (PN0.1) and 2.9 ± 1.4 µg/h (PM2.5) for commuters, and 3.9 ± 1.1×10⁹ pt/h (PN0.1) and 2.5 ± 0.4 µg/h (PM2.5) for pedestrians. While estimating the doses inhaled by the inhabitants of premises located near the road, one should take into account the different fractional penetration of particles from outdoors to indoors. Such doses assessed for the autumn and winter are up to twice as high as the doses inhaled by commuters and pedestrians in the summer. In the winter, traffic-related ultrafine particles account for over 70% of all ultrafine particles deposited in the pedestrians' lungs. The share of traffic-related PM10 particles was estimated at approximately 33.5%. In conclusion, the results of the particle concentration measurements along a road in Lublin indicated that the concentration is mainly affected by the traffic intensity and weather conditions. Further detailed research should focus on how the season and the meteorological conditions affect the concentration levels of traffic-related pollutants and the exposure of commuters, pedestrians and the inhabitants of houses located near traffic routes.
Keywords: air quality, deposition dose, health effects, vehicle emissions
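As a hedged illustration of how such hourly doses are typically estimated (deposited dose is roughly concentration multiplied by inhalation rate, deposition fraction and exposure time), the sketch below runs the calculation with assumed parameter values; the abstract does not state the exact model or coefficients the authors used.

```python
# Illustrative inhaled/deposited dose estimate (assumed parameter values).
def deposited_dose(concentration, inhalation_rate_m3_per_h, deposition_fraction=1.0, hours=1.0):
    """concentration in pt/m3 or ug/m3; returns pt or ug deposited over 'hours'."""
    return concentration * inhalation_rate_m3_per_h * deposition_fraction * hours

# Example with assumed values: PN0.1 of 23.6e3 pt/cm3 = 23.6e3 * 1e6 pt/m3,
# light-activity inhalation rate of about 1.5 m3/h, deposition fraction of 0.5 for ultrafines.
pn01_per_m3 = 23.6e3 * 1e6
print(f"{deposited_dose(pn01_per_m3, 1.5, 0.5):.2e} particles per hour")

# PM2.5 mass dose: 135.1 ug/m3, same breathing rate, assumed deposition fraction 0.3.
print(f"{deposited_dose(135.1, 1.5, 0.3):.1f} ug per hour")
```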
Procedia PDF Downloads 95
187 Green Supply Chain Network Optimization with Internet of Things
Authors: Sema Kayapinar, Ismail Karaoglan, Turan Paksoy, Hadi Gokcen
Abstract:
Green Supply Chain Management is gaining growing interest among researchers and supply chain managers. The concept of Green Supply Chain Management is to integrate environmental thinking into Supply Chain Management. It is a systematic concept emphasizing environmental problems such as the reduction of greenhouse gas emissions, energy efficiency, the recycling of end-of-life products, and the generation of solid and hazardous waste. This study presents a green supply chain network model integrating Internet of Things applications. The Internet of Things provides precise and accurate information on end-of-life products through sensors and system devices. The forward direction consists of suppliers, plants, distribution centres, and sales and collection centres, while the reverse flow includes the sales and collection centres, the disassembly centre, and the recycling and disposal centres. The sales and collection centres sell the new products transhipped from the factory via the distribution centres and also receive end-of-life products according to their value level. We describe green logistics activities by presenting specific examples, including the recycling of returned products and the reduction of CO2 gas emissions. The different transportation choices between echelons are illustrated according to their CO2 gas emissions. The problem is formulated as a mixed-integer linear programming model to solve the green supply chain problems that emerge from environmental awareness and responsibilities. The model is solved using the GAMS package. Numerical examples are presented to illustrate the efficiency of the proposed model.
Keywords: green supply chain optimization, internet of things, greenhouse gas emission, recycling
Procedia PDF Downloads 329
186 Deep Reinforcement Learning-Based Computation Offloading for 5G Vehicle-Aware Multi-Access Edge Computing Network
Authors: Ziying Wu, Danfeng Yan
Abstract:
Multi-Access Edge Computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than to the remote cloud server, to meet the requirements of 5G low-latency and high-reliability application scenarios. Meanwhile, with the development of IoV (Internet of Vehicles) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional internet business, these computation tasks have higher processing priority and lower delay requirements. In this paper, we design a 5G-based Vehicle-Aware Multi-Access Edge Computing Network (VAMECN) and propose a joint optimization problem of minimizing the total system cost. In view of this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influence of multiple factors such as concurrent computation tasks, the distribution of system computing resources, and network communication bandwidth. The mixed-integer nonlinear programming problem is described as a Markov Decision Process. Experiments show that our proposed algorithm can effectively reduce task processing delay and equipment energy consumption, optimize the computation offloading and resource allocation schemes, and improve system resource utilization, compared with other computation offloading policies.
Keywords: multi-access edge computing, computation offloading, 5th generation, vehicle-aware, deep reinforcement learning, deep Q-network
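The JCOTM algorithm itself is not specified in the abstract; as a hedged, generic illustration of the deep Q-network machinery it builds on, the sketch below shows a minimal DQN agent in PyTorch. The state and action sizes and all hyperparameters are arbitrary placeholders; in an offloading setting the state would encode task sizes, bandwidth and server loads, and the discrete actions would be offloading decisions.

```python
# Minimal generic DQN components in PyTorch (illustrative; not the paper's JCOTM algorithm).
import random
from collections import deque
import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM, N_ACTIONS, GAMMA = 8, 4, 0.99   # assumed sizes, e.g. actions = offload targets

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))
    def forward(self, x):
        return self.net(x)

policy, target = QNet(), QNet()
target.load_state_dict(policy.state_dict())
opt = optim.Adam(policy.parameters(), lr=1e-3)
replay = deque(maxlen=10000)   # transitions (s, a, r, s2, done) appended as tensors elsewhere

def select_action(state, eps):
    if random.random() < eps:                       # epsilon-greedy exploration
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(policy(state.unsqueeze(0)).argmax(dim=1))

def train_step(batch_size=64):
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = map(torch.stack, zip(*batch))
    q = policy(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():                           # bootstrap target from the target network
        y = r + GAMMA * target(s2).max(dim=1).values * (1 - done)
    loss = nn.functional.smooth_l1_loss(q, y)
    opt.zero_grad(); loss.backward(); opt.step()
```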
Procedia PDF Downloads 120
185 Development and Evaluation of New Complementary Food from Maize, Soya Bean and Moringa for Young Children
Authors: Berhan Fikru
Abstract:
The objective of this study was to develop a new complementary food from maize, soybean and moringa for young children. The complementary foods were formulated with linear programming (LP Nutri-Survey software), with Faffa (corn soya blend) used as the control. Analyses were made of the formulated blends and compared with the control and the recommended daily intake (RDI). Three complementary foods were composed of maize, soya bean, moringa and sugar in ratios of 65:20:15:0, 55:25:15:5 and 65:20:10:5 for blends 1, 2 and 3, respectively. The blends were formulated based on the protein, energy, mineral (iron, zinc and calcium) and vitamin (vitamins A and C) content of the foods. The overall results indicated that the nutrient content of Faffa (control) was 16.32% protein, 422.31 kcal energy, 64.47 mg calcium, 3.8 mg iron, 1.87 mg zinc, 0.19 mg vitamin A and 1.19 mg vitamin C; blend 1 had 17.16% protein, 429.84 kcal energy, 330.40 mg calcium, 6.19 mg iron, 1.62 mg zinc, 6.33 mg vitamin A and 4.05 mg vitamin C; blend 2 had 20.26% protein, 418.79 kcal energy, 417.44 mg calcium, 9.26 mg iron, 2.16 mg zinc, 8.43 mg vitamin A and 4.19 mg vitamin C; whereas blend 3 exhibited 16.44% protein, 417.42 kcal energy, 242.4 mg calcium, 7.09 mg iron, 2.22 mg zinc, 3.69 mg vitamin A and 4.72 mg vitamin C. The differences between all means were statistically significant (P < 0.05). Sensory evaluation showed that the Faffa control and blend 3 were preferred by semi-trained panelists. Blend 3 was better in terms of its mineral and vitamin content than the Faffa corn soya blend, was comparable with the WFP proprietary products CSB+ and CSB++, and fulfills the WHO recommendations for protein, energy and calcium. The suggested formulation with moringa powder can therefore be used as a complementary food to improve nutritional status and also help solve problems associated with protein-energy and micronutrient malnutrition in young children in developing countries, particularly in Ethiopia.
Keywords: corn soya blend, proximate composition, micronutrient, mineral chelating agents, complementary foods
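The blends were formulated with the Nutri-Survey LP tool; as a hedged illustration of the underlying linear program (choose ingredient proportions that meet nutrient targets at minimum cost), the sketch below uses scipy.optimize.linprog with invented nutrient, cost and bound values, not the study's actual food-composition data.

```python
# Illustrative blend-formulation LP with SciPy (all coefficients invented for the example).
import numpy as np
from scipy.optimize import linprog

# Ingredients: maize, soybean, moringa, sugar. Per-kg protein (g) and energy (kcal), assumed.
protein = np.array([90.0, 360.0, 270.0, 0.0])
energy  = np.array([3650.0, 4460.0, 2050.0, 4000.0])
cost    = np.array([0.5, 1.2, 3.0, 0.8])          # assumed cost per kg

# Minimize cost subject to: proportions sum to 1, protein >= 150 g/kg, energy >= 3800 kcal/kg.
# linprog uses A_ub @ x <= b_ub, so ">=" constraints are negated.
A_ub = np.array([-protein, -energy])
b_ub = np.array([-150.0, -3800.0])
A_eq = np.ones((1, 4))
b_eq = np.array([1.0])
bounds = [(0.0, 0.7), (0.0, 0.4), (0.05, 0.2), (0.0, 0.05)]   # assumed ingredient limits

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.status, dict(zip(["maize", "soybean", "moringa", "sugar"], np.round(res.x, 3))))
```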
Procedia PDF Downloads 298
184 ADP Approach to Evaluate the Blood Supply Network of Ontario
Authors: Usama Abdulwahab, Mohammed Wahab
Abstract:
This paper presents the application of uncapacitated facility location problems (UFLP) and 1-median problems to support decision making in blood supply chain networks. A plethora of factors make blood supply chain networks a complex yet vital problem for the regional blood bank: rapidly increasing demand, the criticality of the product, strict storage and handling requirements, and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs, and clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility; in this model, the costs are the allocation cost, transportation costs, and inventory costs. In order to address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. The Euclidean distance data for some Ontario cities (demand nodes) are used to test the developed algorithm. SITATION software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve this model. Computational experiments confirm the efficiency of the proposed approach. Compared to existing modeling and solution methods, the median algorithm approach not only provides a more general modeling framework but also leads to efficient solution times in general.
Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem
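As a hedged illustration of the 1-median/p-median idea used here (pick p facility sites minimizing the total demand-weighted distance to the nearest open site), the sketch below enumerates candidate subsets by brute force on invented coordinates and demands; realistic instances would rely on Lagrangian relaxation or branch and bound, as the paper does.

```python
# Brute-force p-median on toy data (illustrative; coordinates and demands are invented).
from itertools import combinations
from math import hypot

cities = {"A": (0, 0), "B": (4, 0), "C": (4, 3), "D": (0, 5), "E": (2, 2)}
demand = {"A": 10, "B": 7, "C": 12, "D": 5, "E": 9}

def total_cost(open_sites):
    # Each demand node is served by its nearest open facility (Euclidean distance).
    return sum(d * min(hypot(cities[c][0] - cities[f][0], cities[c][1] - cities[f][1])
                       for f in open_sites)
               for c, d in demand.items())

def p_median(p):
    return min(combinations(cities, p), key=total_cost)

best = p_median(2)
print(best, round(total_cost(best), 2))
```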
Procedia PDF Downloads 508
183 Evaluation of Model-Based Code Generation for Embedded Systems – Mature Approach for Development in Evolution
Authors: Nikolay P. Brayanov, Anna V. Stoynova
Abstract:
The model-based development approach is gaining more support and acceptance. Its higher abstraction level brings a simplification of the systems' description that allows domain experts to do their best without particular knowledge of programming. The different levels of simulation support rapid prototyping, verification and validation of the product even before it exists physically. Nowadays, the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, which brings extra automation to the expensive device certification process, especially software qualification. Some companies using it report cost savings and quality improvements, but others claim no major changes or even cost increases. This publication demonstrates the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using The MathWorks, Inc. tools. The model, created with Simulink, Stateflow and Matlab, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, when applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of the generated embedded code with that of manually developed code. The measurements show that, in general, the code generated by the automatic approach is not worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for our future work.
Keywords: embedded code generation, embedded C code quality, embedded systems, model-based development
Procedia PDF Downloads 244
182 Carbon Based Classification of Aquaporin Proteins: A New Proposal
Authors: Parul Johri, Mala Trivedi
Abstract:
Major intrinsic proteins (MIPs) are actively involved in the passive transport of small polar molecules across the membranes of almost all living organisms. MIPs that specifically transport water molecules are named aquaporins (AQPs). The permeability of membranes is actively controlled by the regulation of the amount of different MIPs present, but also, in some cases, by phosphorylation and dephosphorylation of the channel. Based on sequence similarity, MIPs have been classified into many categories. All of these proteins are made up of the 20 amino acids; the only difference lies in their arrangement. In turn, all 20 amino acids are made up of five basic elements, namely carbon, hydrogen, oxygen, sulphur and nitrogen. These elements are responsible for giving the amino acids the properties of hydrophilicity/hydrophobicity, which play an important role in protein interactions. The hydrophobic amino acids characteristically have a greater number of carbon atoms, as carbon is the main element contributing to hydrophobic interactions in proteins. It is observed that the carbon level of proteins differs between species. In the present work, we have taken a sample set of 150 aquaporin proteins from the UniProt database, and a dynamic programming code was written to calculate the carbon percentage for each sequence. This carbon percentage was further used to barcode the aquaporins of animals and plants. The proteins taken from Oryza sativa, Zea mays and Arabidopsis thaliana tended to have a carbon percentage of 31.8 to 35, whereas sequences taken from Mus musculus, Saccharomyces cerevisiae, Homo sapiens, Bos taurus, and Rattus norvegicus tended to have a carbon percentage of 31 to 33.7. This clearly demarcates the carbon range of the aquaporin proteins of plant and animal origin. Hence, atom-level analysis of protein sequences can provide better results than residue-level comparison.
Keywords: aquaporins, carbon, dynamic programming, MIPs
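The authors' dynamic programming code is not shown; as a hedged illustration of the calculation itself, the sketch below computes the percentage of carbon atoms among all atoms of a peptide from standard residue compositions (amino acid minus one water molecule). Whether the paper counts atoms in exactly this way is an assumption; the residue formulas used are standard textbook values and terminal groups are ignored.

```python
# Illustrative carbon-percentage calculation for a protein sequence.
# (carbon_atoms, total_atoms) per residue, i.e. amino acid minus one water molecule.
RESIDUE_ATOMS = {
    "G": (2, 7),  "A": (3, 10), "S": (3, 11), "P": (5, 14), "V": (5, 16),
    "T": (4, 14), "C": (3, 11), "L": (6, 19), "I": (6, 19), "N": (4, 14),
    "D": (4, 13), "Q": (5, 17), "K": (6, 21), "E": (5, 16), "M": (5, 17),
    "H": (6, 17), "F": (9, 20), "R": (6, 23), "Y": (9, 21), "W": (11, 24),
}

def carbon_percentage(seq):
    seq = seq.upper()
    carbon = sum(RESIDUE_ATOMS[aa][0] for aa in seq if aa in RESIDUE_ATOMS)
    total = sum(RESIDUE_ATOMS[aa][1] for aa in seq if aa in RESIDUE_ATOMS)
    return 100.0 * carbon / total

# Toy fragment only; a real analysis would parse FASTA records downloaded from UniProt.
print(round(carbon_percentage("MVKLAFGHSTWYRPQDENCI"), 2))
```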
Procedia PDF Downloads 371
181 TessPy – Spatial Tessellation Made Easy
Authors: Jonas Hamann, Siavash Saki, Tobias Hagen
Abstract:
Discretization of urban areas is a crucial aspect of many spatial analyses. The process of discretizing space into subspaces without overlaps and gaps is called tessellation. It helps in understanding spatial structure and provides a framework for analyzing geospatial data. Tessellation methods can be divided into two groups: regular tessellations and irregular tessellations. While regular tessellation methods, like square grids or hexagon grids, are suitable for addressing pure geometry problems, they cannot take the unique characteristics of different subareas into account. Irregular tessellation methods, however, allow the borders between subareas to be defined more realistically based on urban features like the road network or Points of Interest (POI). Even though Python is one of the most used programming languages for spatial analysis, there is currently no library that combines different tessellation methods and enables users and researchers to compare different techniques. To close this gap, we propose TessPy, an open-source Python package which combines all the above-mentioned tessellation methods and makes them easily accessible to everyone. The core functions of TessPy implement the five different tessellation methods: squares, hexagons, adaptive squares, Voronoi polygons, and city blocks. For the regular methods, users can set the resolution of the tessellation, which defines the fineness of the discretization and the desired number of tiles. Irregular tessellation methods allow users to define which spatial data to consider (e.g., amenity, building, office) and how fine the tessellation should be. The spatial data used is open source and provided by OpenStreetMap; this data can be easily extracted and used for further analyses. Besides the methodology of the different techniques, the state of the art, including examples and future work, will be discussed. All dependencies can be installed using conda or pip; however, the former is recommended.
Keywords: geospatial data science, geospatial data analysis, tessellations, urban studies
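For orientation, a minimal usage sketch follows; it reflects the package's documented pattern as recalled here (a Tessellation object built from a city name, with one method per tessellation type), so the exact method names, signatures and parameters should be checked against the TessPy documentation rather than taken from this sketch.

```python
# Minimal TessPy usage sketch (method names per the abstract; verify against the docs).
from tesspy import Tessellation

city = Tessellation("Frankfurt am Main")     # area of interest resolved via OpenStreetMap

squares = city.squares(resolution=15)        # regular square grid; resolution sets tile size
hexagons = city.hexagons(resolution=8)       # regular hexagon grid

# The irregular methods named in the abstract (adaptive squares, Voronoi polygons,
# city blocks) are driven by OSM features such as "amenity" or "building"; their
# exact parameters are documented in the TessPy reference and are not shown here.
print(len(squares), len(hexagons))           # both results are GeoDataFrames of tiles
```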
Procedia PDF Downloads 128
180 Solving a Micromouse Maze Using an Ant-Inspired Algorithm
Authors: Rolando Barradas, Salviano Soares, António Valente, José Alberto Lencastre, Paulo Oliveira
Abstract:
This article reviews Ant Colony Optimization, a nature-inspired algorithm, and its implementation in the Scratch/m-Block programming environment. Ant Colony Optimization belongs to the family of swarm intelligence algorithms, a subset of biologically inspired algorithms. The starting problem is a maze in which one needs to find a path to the center and return to the starting position, similar to an ant looking for a path to a food source and returning to its nest. Starting with the implementation of a simple wall-follower simulator, the proposed solution uses a dynamic graphical interface that allows young students to observe the ants' movement while the algorithm optimizes the routes to the maze's center. Interface usability, data structures, and the conversion of algorithmic language to Scratch syntax were some of the details addressed during this implementation. This gives young students an easier way to understand the computational concepts of sequences, loops, parallelism, data, events, and conditionals, as they are used throughout the implemented algorithms. Future work includes simulations with real contest mazes and two different pheromone update methods, and a comparison with the optimized results of the winners of each edition of the contest. It will also include the creation of a Digital Twin relating the virtual simulator to a real micromouse in a full-size maze. The first test results show that the algorithm found the same optimized solutions that were found by the winners of each edition of the Micromouse contest, making this a good solution for maze pathfinding.
Keywords: nature inspired algorithms, scratch, micromouse, problem-solving, computational thinking
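The paper's implementation is in Scratch/m-Block; as a hedged Python illustration of the same mechanism, the sketch below runs a basic Ant Colony Optimization for a shortest path on a tiny graph standing in for the maze, with assumed pheromone and heuristic parameters.

```python
# Generic Ant Colony Optimization for a shortest path on a small graph
# (illustrative; the paper's version runs in Scratch/m-Block on a real maze).
import random

# Toy maze abstracted as a weighted graph; "S" is the start cell, "C" the center.
graph = {"S": {"A": 1, "B": 2}, "A": {"C": 4, "B": 1}, "B": {"C": 2}, "C": {}}
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 1.0      # assumed pheromone/heuristic weights
pher = {(u, v): 1.0 for u in graph for v in graph[u]}

def build_path():
    node, path, cost = "S", ["S"], 0.0
    while node != "C":
        moves = [v for v in graph[node] if v not in path]
        if not moves:
            return None, float("inf")         # dead end: ant abandons this tour
        weights = [pher[node, v] ** ALPHA * (1.0 / graph[node][v]) ** BETA for v in moves]
        nxt = random.choices(moves, weights=weights)[0]
        cost += graph[node][nxt]
        path.append(nxt)
        node = nxt
    return path, cost

best_path, best_cost = None, float("inf")
for _ in range(100):                          # iterations of the colony
    tours = [build_path() for _ in range(10)] # 10 ants per iteration
    for k in pher:                            # pheromone evaporation
        pher[k] *= (1 - RHO)
    for path, cost in tours:
        if path is None:
            continue
        for u, v in zip(path, path[1:]):      # deposit proportional to solution quality
            pher[u, v] += Q / cost
        if cost < best_cost:
            best_path, best_cost = path, cost

print(best_path, best_cost)
```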
Procedia PDF Downloads 126
179 Storage of Organic Carbon in Chemical Fractions in Acid Soil as Influenced by Different Liming
Authors: Ieva Jokubauskaite, Alvyra Slepetiene, Danute Karcauskiene, Inga Liaudanskiene, Kristina Amaleviciute
Abstract:
Soil organic carbon (SOC) is a key indicator of soil quality and ecological stability; therefore, carbon accumulation in stable forms not only supports and increases the organic matter content in the soil but also has a positive effect on the quality of the soil and the whole ecosystem. Soil liming is one of the most common ways to improve carbon sequestration in the soil. Determination of the optimum intensity and combinations of liming, in order to ensure optimal quantitative and qualitative carbon parameters, is one of the most important tasks of this work. The field experiments were carried out at the Vezaiciai Branch of the Lithuanian Research Centre for Agriculture and Forestry (LRCAF) during the 2011–2013 period. The effect of liming at different intensities (at a rate of 0.5 every 7 years and 2.0 every 3–4 years) was investigated in the topsoil of an acid moraine loam Bathygleyic Dystric Glossic Retisol. Chemical analyses were carried out at the Chemical Research Laboratory of the Institute of Agriculture, LRCAF. Soil samples for chemical analyses were taken from the topsoil after harvesting. SOC was determined by the Tyurin method modified by Nikitin, measured with a Cary 50 spectrometer (Varian) at 590 nm wavelength using glucose standards. SOC fractional composition was determined by the Ponomareva and Plotnikova version of the classical Tyurin method. Dissolved organic carbon (DOC) was analyzed using a SKALAR ion chromatograph in a water extract at a soil-to-water ratio of 1:5. Spectral properties (E4/E6 ratio) of humic acids were determined by measuring the absorbance of humic and fulvic acid solutions at 465 and 665 nm. Our study showed a statistically significant negative effect of periodic liming (at the 0.5 and 2.0 liming rates) on the SOC content in the soil. The SOC content was 1.45% in the unlimed treatment, while in the treatment periodically limed at the 2.0 rate every 3–4 years it was approximately 0.18 percentage points lower. It was revealed that liming significantly decreased the DOC concentration in the soil; the lowest DOC concentration (0.156 g kg-1) was established in the most intensively limed treatment (2.0 liming rate every 3–4 years). Soil liming increased the content of all humic acid fractions and of the fulvic acid fraction bound with calcium in the topsoil, resulting in the accumulation of valuable humic acids. Due to the applied liming, the HR/FR ratio, indicating the quality of humus, increased to 1.08 compared with 0.81 in the unlimed soil. Intensive soil liming promoted the formation of humic acids in which groups of carboxylic and phenolic compounds predominated. These humic acids are characterized by a higher degree of condensation of aromatic compounds and in this way determine the intensive organic matter humification processes in the soil. The results of this research provide clear information on the characteristics of SOC change, which could be very useful for guiding climate policy and sustainable soil management.
Keywords: acid soil, carbon sequestration, long-term liming, soil organic carbon
Procedia PDF Downloads 230
178 Hiveopolis - Honey Harvester System
Authors: Erol Bayraktarov, Asya Ilgun, Thomas Schickl, Alexandre Campo, Nicolis Stamatios
Abstract:
Traditional means of harvesting honey are often stressful for honeybees. Each time honey is collected, a portion of the colony can die. In consequence, the colonies' resilience to environmental stressors decreases, and this ultimately contributes to the global problem of honeybee colony losses. As part of the project HIVEOPOLIS, we design and build a different kind of beehive, incorporating technology to reduce the negative impacts of beekeeping procedures, including honey harvesting. A first step in maintaining more sustainable honey harvesting practices is to design honey storage frames that can automate the honey collection procedures. This way, beekeepers save time, money, and labor by not having to open the hive and remove frames, and the honeybees' nest stays undisturbed. This system shows promising features, e.g., high reliability, which could be a key advantage compared to current honey harvesting technologies. Our original concept of fractional honey harvesting has been to encourage the removal of honey only from "safe" locations and at levels that would leave the bees enough high-nutritional-value honey. In this abstract, we describe the current state of our honey harvester, its technology and areas to improve. The honey harvester works by separating the honeycomb cells away from the comb foundation; the movement and the elastic nature of honey support this functionality. The honey sticks to the foundation because of the surface tension forces amplified by the geometry. In the future, by monitoring the weight and therefore the capped honey cells on our honey harvester frames, we will be able to remove honey as soon as the weight-measuring system reports that the comb is ready for harvesting. Higher-viscosity or crystallized honey causes challenges in temperate locations when a smooth flow of honey is required. We use resistive heaters to soften the propolis and wax and to unglue the moving parts during extraction. These heaters can also melt the honey slightly to the needed flow state. Precise control of these heaters allows us to operate the device for several purposes. We use 'Nitinol' springs that are activated by heat as an actuation method. Unlike conventional stepper or servo motors, which we also evaluated throughout development, the springs and heaters take up less space and reduce the overall system complexity. Honeybee acceptance was unknown until we actually inserted a device inside a hive. We not only observed bees walking on the artificial comb but also building wax, filling gaps with propolis and storing honey. This also shows that bees don't mind living in spaces and hives built from 3D-printed materials. We do not yet have data to prove that the plastic materials do not affect the chemical composition of the honey. We succeeded in automatically extracting stored honey from the device, demonstrating a useful extraction flow and overall effective operation.
Keywords: honey harvesting, honeybee, hiveopolis, nitinol
Procedia PDF Downloads 109
177 Disability Representation in Children’s Programs: A Critical Analysis of Nickelodeon’s Avatar
Authors: Jasmin Glock
Abstract:
Media plays a significant role in shaping and influencing people's perception of various themes, including disability. Although recent examples indicate progressive attitudes in society, programs across genres continue to portray disability in a negative and stereotypical way. Such a one-sided or stereotypical portrayal of disabled people can further reinforce their marginalized position by turning them into the "other". The common trope of the blind or visually impaired woman, for example, marks the character as particularly vulnerable. These stereotypes are easily absorbed and left unquestioned, especially by younger audiences. As a result, the presentation of disability as problematic or painful can instill a subconscious fear of disability in viewers at a very young age. The question therefore arises: how can disability be portrayed to children in a more positive way? This paper focuses on the portrayal of physical disability in children's programming. Using disabled characters from Nickelodeon's Avatar: The Last Airbender and Avatar: The Legend of Korra, the paper will show that the chosen animated characters have the potential to challenge and subvert disability-based bias and to contribute to the normalization of disability on screen. Analyzing the blind protagonist Toph Beifong, the recurring support character and wheelchair user Teo, and the villain Ming Hua, who has prosthetic limbs, this paper aims to highlight that these disabled characters are far more than mere stereotyped tokens. Instead, they are crucial to the outcome of the story; they are strong and confident while still being allowed to express their insecurities in certain situations. The paper also focuses on how these characters can make disability issues relatable to disabled and non-disabled young audiences alike and how they can thereby contribute to the reduction of prejudice. Finally, they serve as an example of what inclusive, nuanced, and even empowering disability representation in animated television series can look like.
Keywords: children, disability, representation, television
Procedia PDF Downloads 209
176 The Sub-Optimality of the Electricity Subsidy on Tube Wells in Balochistan (Pakistan): An Analysis Based on Socio-Cultural and Policy Distortions
Authors: Rameesha Javaid
Abstract:
Agriculture is the backbone of the economy of the province of Balochistan, which is known as the 'fruit basket' of Pakistan. Its climate zones, comprising highlands and plateaus dependent on rain water, are more suited to the production of deciduous fruit. The vagaries of weather, and more so the persistent droughts, prompted the government to announce flat monthly electricity rates irrespective of the size of the farm, the quantum of water used and the category of crop group. That has, no doubt, resulted in increased cropping intensity, more production and employment, but it has enormously burdened the official exchequer, which picks up the residual bills in certain percentages shared amongst the federal and provincial governments and the local electricity company. This study tests the desirability of continuing the subsidy in the present mode. Optimization of the social welfare of farmers has been the focus of the study, with emphasis on the contribution of positive externalities and on the distortions caused in terms of negative externalities. By using an optimization technique with due allowance for distortions, it has been established that the subsidy calls for limiting policy distortions, as they cause sub-optimal utilization of the tube well subsidy, and for improved policy programming. A sensitivity analysis with changed rankings of the variables contributing to social welfare does not significantly change the result. The net findings and policy recommendations are therefore to significantly reduce the subsidy size, correct and curtail policy distortions, and target the subsidy grant more towards small farmers, so as to generate more welfare by saving a sizeable amount of the subsidy for investment in the wellbeing of farmers in rural Balochistan.
Keywords: distortion, policy distortion, socio-cultural distortion, social welfare, subsidy
Procedia PDF Downloads 292
175 Leveraging Multimodal Neuroimaging Techniques to Address Compensatory and Disintegration Patterns in Neurodegenerative Disorders in vivo: Evidence from Cortico-Cerebellar Connections in Multiple Sclerosis
Authors: Efstratios Karavasilis, Foteini Christidi, Georgios Velonakis, Agapi Plousi, Kalliopi Platoni, Nikolaos Kelekis, Ioannis Evdokimidis, Efstathios Efstathopoulos
Abstract:
Introduction: Advanced structural and functional neuroimaging techniques contribute to the study of anatomical and functional brain connectivity and its role in the pathophysiology and symptom heterogeneity of several neurodegenerative disorders, including multiple sclerosis (MS). Aim: In the present study, we applied multiparametric neuroimaging techniques to investigate the structural and functional cortico-cerebellar changes in MS patients. Material: We included 51 MS patients (28 with clinically isolated syndrome [CIS], 31 with relapsing-remitting MS [RRMS]) and 51 age- and gender-matched healthy controls (HC) who underwent MRI on a 3.0T scanner. Methodology: The acquisition protocol included high-resolution 3D T1-weighted, diffusion-weighted and echo-planar imaging sequences for the analysis of volumetric, tractography and resting-state functional data, respectively. We performed between-group comparisons (CIS, RRMS, HC) using the CAT12 and CONN16 MATLAB toolboxes for the analysis of volumetric (cerebellar gray matter density) and functional (cortico-cerebellar resting-state functional connectivity) data, respectively. The Brainance suite was used for the analysis of tractography data (cortico-cerebellar white matter integrity; fractional anisotropy [FA]; axial and radial diffusivity [AD; RD]) and the reconstruction of the cerebellar tracts. Results: Patients with CIS did not show significant gray matter (GM) density differences compared with HC; however, they showed decreased FA and increased diffusivity measures in cortico-cerebellar tracts, and increased cortico-cerebellar functional connectivity. Patients with RRMS showed decreased GM density in cerebellar regions, decreased FA and increased diffusivity measures in cortico-cerebellar WM tracts, as well as a pattern of increased and mostly decreased functional cortico-cerebellar connectivity compared to HC. The comparison between CIS and RRMS patients revealed significant GM density differences, reduced FA and increased diffusivity measures in WM cortico-cerebellar tracts, and increased/decreased functional connectivity. The identification of decreased WM integrity and increased functional cortico-cerebellar connectivity without GM changes in CIS, and the pattern of decreased GM density, decreased WM integrity and mostly decreased functional connectivity in RRMS patients, emphasize the role of compensatory mechanisms in early disease stages and the disintegration of structural and functional networks with disease progression. Conclusions: Our study highlights the added value of multimodal neuroimaging techniques for the in vivo investigation of cortico-cerebellar brain changes in neurodegenerative disorders. A future opportunity to leverage multimodal neuroimaging data remains their integration into recently applied machine learning approaches to more accurately classify and predict patients' disease course.
Keywords: advanced neuroimaging techniques, cerebellum, MRI, multiple sclerosis
Procedia PDF Downloads 141174 The Suitability of Agile Practices in Healthcare Industry with Regard to Healthcare Regulations
Authors: Mahmood Alsaadi, Alexei Lisitsa
Abstract:
Nowadays, medical devices rely completely on software, whether as standalone software or as embedded software; therefore, organizations that develop medical device software can benefit from adopting agile practices. Using agile practices in healthcare software development would bring benefits such as producing a high-quality product at low cost and in a short period. However, medical device software development companies face challenges in adopting agile practices, due to the gaps that exist between agile practices and the requirements of healthcare regulations, such as documentation, traceability, and formality. This paper investigates the adoption rate of agile practices in medical device software development, and extracts and outlines the requirements of healthcare regulations such as the Food and Drug Administration (FDA), the Health Insurance Portability and Accountability Act (HIPAA), and the Medical Device Directive (MDD) that affect the software development life cycle directly or indirectly. Moreover, it evaluates the suitability of agile practices for the healthcare industry by analyzing the most popular agile methods, namely eXtreme Programming (XP), Scrum, and Feature-Driven Development (FDD), from the healthcare industry's point of view and in comparison with the requirements of healthcare regulations. Finally, the authors propose an agile mixture model that combines practices from different agile methods. The results show that the adoption rate of agile practices in healthcare industries is still low, and that agile practices need to be enhanced with regard to the requirements of healthcare regulations before they can be used in healthcare software development organizations. The proposed agile mixture model may therefore help minimize the gaps between healthcare regulations and agile practices and increase the adoption rate in the healthcare industry. As this paper is part of an ongoing project, an evaluation of the agile mixture model will be conducted in the near future. Keywords: adoption of agile, agile gaps, agile mixture model, agile practices, healthcare regulations
Procedia PDF Downloads 236173 The Next Generation’s Learning Ability, Memory, as Well as Cognitive Skills Is under the Influence of Paternal Physical Activity (An Intergenerational and Trans-Generational Effect): A Systematic Review and Meta-Analysis
Authors: Parvin Goli, Amirhosein Kefayat, Rezvan Goli
Abstract:
Background: It is well established that parents can influence their offspring's neurodevelopment. Paternal environment and lifestyle have been shown to benefit the progeny's fitness and may affect their metabolic mechanisms; however, the effects of paternal exercise on the offspring's brain have not been explored in detail. Objective: This study aims to review the impact of paternal physical exercise on memory and learning, neuroplasticity, and DNA methylation levels in the offspring's hippocampus. Study design: In this systematic review and meta-analysis, an electronic literature search was conducted in databases including PubMed, Scopus, and Web of Science. Eligible studies were those with an experimental design that included an exercise intervention arm and assessed any type of memory function, learning ability, or brain plasticity as the outcome measure. Standardized mean differences (SMD) and 95% confidence intervals (CI) were computed as effect sizes. Results: The systematic review revealed the important role of environmental enrichment in the behavioral development of the next generation. Offspring of exercised fathers displayed higher levels of memory ability and lower levels of brain-derived neurotrophic factor. A significant effect of paternal exercise on hippocampal volume was also reported in the few available studies. Conclusion: These results suggest an intergenerational effect of paternal physical activity on cognition, which may be associated with hippocampal epigenetic programming in the offspring; however, the biological mechanisms of this modulation remain to be determined. Keywords: hippocampal plasticity, learning ability, memory, parental exercise
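As a point of reference for the effect-size pooling mentioned above, the sketch below shows one common way to compute a standardized mean difference (Hedges' g) with a normal-theory 95% CI from two group summaries. The group means, SDs and sample sizes are invented for illustration and are not data from the reviewed studies.

```python
# Minimal sketch: standardized mean difference (Hedges' g) with a 95% CI
# from two group summaries. Numbers below are invented examples only.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    # Pooled standard deviation and Cohen's d
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    # Small-sample correction factor J turns d into Hedges' g
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    g = j * d
    # Approximate standard error and normal-theory 95% confidence interval
    se = math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Example: a memory score in offspring of exercised vs. sedentary fathers.
g, ci = hedges_g(m1=12.4, sd1=2.1, n1=10, m2=10.8, sd2=2.3, n2=10)
print(f"Hedges' g = {g:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```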
Procedia PDF Downloads 211172 Engineering Topology of Photonic Systems for Sustainable Molecular Structure: Autopoiesis Systems
Authors: Moustafa Osman Mohammed
Abstract:
This paper introduces topological order in described social systems, starting with the original concept of autopoiesis by biologists and scientists, including the modification of general systems based on socialized medicine. Topological order is important in describing physical systems, in exploiting optical systems and in improving photonic devices. States of topological order have interesting properties of topological degeneracy and fractional statistics that reveal the entanglement origin of topological order. Topological ideas in photonics parallel exciting developments in solid-state materials that are insulating in the bulk yet conduct electricity on their surface without dissipation or back-scattering, even in the presence of large impurities. A specific type of autopoiesis system is interrelated with the main categories among existing groups of ecological phenomena at the interface of the social and medical sciences. The hypothesis, nevertheless, involves a nonlinear interaction with the natural environment, an 'interactional cycle' for exchanging photon energy with molecules without changes in topology. The engineering topology of a biosensor is based on the excitation boundary of surface electromagnetic waves in photonic band gap multilayer films. The device operation is similar to that of surface plasmon biosensors, in which a photonic band gap film replaces the metal film as the medium in which surface electromagnetic waves are excited. The use of a photonic band gap film offers a sharper surface wave resonance, leading to the potential of greatly enhanced sensitivity. The properties of the photonic band gap material can therefore be engineered to operate the sensor at any wavelength and to support a surface wave resonance at wavelengths ranging up to 470 nm, a range not generally accessible with surface plasmon sensing. Lastly, photonic band gap films are mechanically robust and offer new substrates for surface chemistry, both to understand molecular design and structure and to create sensing chip surfaces with different concentrations of DNA sequences in solution, so that the surface mode resonance can be observed and tracked under the influence of processes taking place in the spectroscopic environment. These processes have led to the development of several advanced analytical technologies that are automated, real-time, reliable, reproducible, and cost-effective, resulting in faster and more accurate monitoring and detection of biomolecules based on refractive index sensing, antibody-antigen reactions, and DNA or protein binding. Ultimately, molecular frictional properties are adjusted to each other in order to form the unique spatial structure and dynamics of biological molecules, providing an environment in which changes due to the pathogenic architecture of cell clusters can be investigated. Keywords: autopoiesis, photonics systems, quantum topology, molecular structure, biosensing
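The band gap that underlies the sharper surface wave resonance described above can be illustrated with a standard transfer-matrix calculation of the reflectance of a periodic dielectric multilayer. The sketch below is a generic textbook example, not the authors' design: the refractive indices, layer thicknesses and number of periods are assumed values chosen only to show a band of near-unity reflectance around the design wavelength.

```python
# Illustrative transfer-matrix calculation (normal incidence) of the reflectance
# of a periodic dielectric multilayer, showing a photonic band gap as a plateau
# of near-unity reflectance. All indices, thicknesses and the number of periods
# are assumed example values.
import numpy as np

def stack_reflectance(wavelength_nm, n_layers, d_layers, n_in=1.0, n_sub=1.52):
    """Reflectance of a lossless multilayer on a substrate (characteristic-matrix method)."""
    m = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wavelength_nm          # phase thickness of the layer
        layer = np.array([[np.cos(delta), -1j * np.sin(delta) / n],
                          [-1j * n * np.sin(delta), np.cos(delta)]])
        m = m @ layer
    num = n_in * m[0, 0] + n_in * n_sub * m[0, 1] - m[1, 0] - n_sub * m[1, 1]
    den = n_in * m[0, 0] + n_in * n_sub * m[0, 1] + m[1, 0] + n_sub * m[1, 1]
    return abs(num / den) ** 2

# Quarter-wave stack designed for 600 nm: 10 high/low-index pairs.
n_hi, n_lo, lam0, periods = 2.3, 1.45, 600.0, 10
n_layers = [n_hi, n_lo] * periods
d_layers = [lam0 / (4 * n_hi), lam0 / (4 * n_lo)] * periods

for lam in (470, 550, 600, 650, 750):
    print(f"{lam} nm: R = {stack_reflectance(lam, n_layers, d_layers):.3f}")
```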
Procedia PDF Downloads 94171 Hybridization of Mathematical Transforms for Robust Video Watermarking Technique
Authors: Harpal Singh, Sakshi Batra
Abstract:
The widespread and easy access to multimedia content, and the possibility of making numerous copies without significant loss of fidelity, have raised the need for digital rights management. This problem can be effectively addressed by digital watermarking technology: the concept of embedding some data or special pattern (the watermark) in the multimedia content, so that this information can later prove ownership in case of a dispute, trace the marked document's dissemination, identify a misappropriating person, or simply inform the user about the rights-holder. The primary motive of digital watermarking is to embed the data imperceptibly and robustly in the host information. A large number of watermarking techniques have been developed to embed copyright marks or data in digital images, video, audio and other multimedia objects. With the development of digital video-based innovations, the copyright dilemma for the multimedia industry increases. Video watermarking has been proposed in recent years to address the issue of illicit copying and distribution of videos; it is the process of embedding copyright information in video bit streams. In practice, video watermarking schemes have to address serious challenges compared with image watermarking schemes, such as real-time requirements in video broadcasting, the large volume of inherently redundant data between frames, and the imbalance between motion and motionless regions, and they are particularly vulnerable to attacks such as frame swapping, statistical analysis, rotation, noise, median filtering and cropping. In this paper, an effective, robust and imperceptible video watermarking algorithm is proposed based on the hybridization of powerful mathematical transforms: the Fractional Fourier Transform (FrFT), the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) using a redundant wavelet. The scheme utilizes the different transforms to embed watermarks on different layers of a hybrid system: the video frames are partitioned into layers (RGB), and the watermark is embedded in two forms in the video frames, using SVD partitioning of the watermark and DWT sub-band decomposition of the host video, to facilitate copyright protection as well as reliability. The FrFT orders are used as the encryption key, which makes the watermarking method more robust against various attacks. The fidelity of the scheme is enhanced by introducing key generation and a wavelet-based key-embedding watermarking scheme. The same key is therefore required for both watermark embedding and extraction, and it must be shared between the owner and the verifier via a secure channel. The performance is demonstrated using quantitative metrics, namely Peak Signal-to-Noise Ratio, Structural Similarity Index and correlation values, and several attacks are applied to demonstrate robustness. Experimental results show that the proposed scheme can withstand a variety of video processing attacks while maintaining imperceptibility. Keywords: discrete wavelet transform, robustness, video watermarking, watermark
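To make the embedding idea concrete, here is a minimal single-frame sketch of the generic DWT+SVD part of such a scheme. It omits the FrFT encryption stage and the RGB layering, and the wavelet choice, scaling factor and frame sizes are assumptions for illustration, not the parameters of the proposed algorithm.

```python
# Minimal sketch of DWT + SVD watermark embedding in one grayscale frame.
# This illustrates only the generic technique; the FrFT encryption stage,
# RGB layering and the paper's actual parameters are omitted/assumed.
import numpy as np
import pywt

def embed_watermark(frame, watermark, alpha=0.05, wavelet="haar"):
    # 1-level DWT of the host frame; embed in the low-frequency LL sub-band.
    ll, (lh, hl, hh) = pywt.dwt2(frame, wavelet)
    u, s, vt = np.linalg.svd(ll, full_matrices=False)
    _, sw, _ = np.linalg.svd(watermark, full_matrices=False)
    s_marked = s + alpha * sw                       # additive singular-value embedding
    ll_marked = (u * s_marked) @ vt                 # rebuild the LL band with modified S
    marked = pywt.idwt2((ll_marked, (lh, hl, hh)), wavelet)
    return marked, (u, s, vt)                       # side information needed for extraction

def psnr(original, marked):
    mse = np.mean((original.astype(float) - marked.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(256, 256)).astype(float)      # stand-in video frame
watermark = rng.integers(0, 256, size=(128, 128)).astype(float)  # same size as the LL band
marked, _ = embed_watermark(frame, watermark)
print(f"PSNR of watermarked frame: {psnr(frame, marked):.1f} dB")
```

Extraction in this kind of scheme reverses the steps: decompose the (possibly attacked) frame, take the SVD of its LL band, recover the watermark singular values as (S_attacked - S)/alpha, and reassemble the watermark with the stored singular vectors before comparing it with the original by correlation.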
Procedia PDF Downloads 225