Search results for: software defined networking (SDN)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7673

5993 In situ Real-Time Multivariate Analysis of Methanolysis Monitoring of Sunflower Oil Using FTIR

Authors: Pascal Mwenge, Tumisang Seodigeng

Abstract:

The combination of world population growth and the third industrial revolution has led to a high demand for fuels. On the other hand, the decline of global fossil fuel deposits and the environmental air pollution caused by these fuels have compounded the challenges the world faces due to its need for energy. Therefore, new forms of environmentally friendly and renewable fuels such as biodiesel are needed. The primary analytical techniques for methanolysis yield monitoring have been chromatography and spectroscopy; these methods are reliable but demanding, costly and do not provide real-time monitoring. In this work, the in situ monitoring of biodiesel from sunflower oil using FTIR (Fourier Transform Infrared) has been studied; the study was performed using an EasyMax Mettler Toledo reactor equipped with a DiComp (diamond) probe. The quantitative monitoring of methanolysis was performed by building a quantitative model with multivariate calibration using the iC Quant module from iC IR 7.0 software. Fifteen samples of known concentrations, taken in duplicate, were used for model calibration and cross-validation; the data were pre-processed using mean centering and variance scaling, square-root spectrum math and solvent subtraction. These pre-processing methods improved the performance indexes (RMSEC, RMSECV, RMSEP and cumulative R²) from 7.98 to 0.0096, 11.2 to 3.41, 6.32 to 2.72 and 0.9416 to 0.9999, respectively. The R² values of 1 (training), 0.9918 (test) and 0.9946 (cross-validation) indicated the fitness of the model built. The model was tested against a univariate model; small discrepancies were observed at low concentrations due to unmodelled intermediates, but the two agreed closely at concentrations above 18%. The software eliminated the complexity of the Partial Least Squares (PLS) chemometrics. It was concluded that the model obtained could be used to monitor the methanolysis of sunflower oil at industrial and laboratory scale.
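
As an illustration of the kind of multivariate calibration described above, the following minimal Python sketch fits a PLS regression on mean-centred synthetic spectra and reports calibration and cross-validation errors. The data, band position and component count are hypothetical assumptions, and the proprietary iC Quant workflow is not reproduced here.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic example: 15 calibration "spectra" (rows) with known concentrations.
rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 15, 200
concentrations = np.linspace(2, 30, n_samples)            # hypothetical % methyl ester
pure_band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 80) / 6.0) ** 2)
spectra = np.outer(concentrations, pure_band) + 0.01 * rng.standard_normal((n_samples, n_wavenumbers))

# Pre-processing: mean centering of the spectra.
X = spectra - spectra.mean(axis=0)
y = concentrations

# Fit the PLS model and estimate calibration / cross-validation errors.
pls = PLSRegression(n_components=3)
pls.fit(X, y)
y_cal = pls.predict(X).ravel()
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()

rmsec = np.sqrt(np.mean((y - y_cal) ** 2))   # calibration error (RMSEC)
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))   # cross-validation error (RMSECV)
print(f"RMSEC = {rmsec:.3f}, RMSECV = {rmsecv:.3f}")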

Keywords: biodiesel, calibration, chemometrics, methanolysis, multivariate analysis, transesterification, FTIR

Procedia PDF Downloads 139
5992 The Impact of Artificial Intelligence on Digital Crime

Authors: Á. L. Bendes

Abstract:

By the end of the second decade of the 21st century, artificial intelligence (AI) has become an unavoidable part of everyday life and has necessarily aroused the interest of researchers in almost every field of science. This is no different in the case of jurisprudence, whose task is not only to create its own theoretical paradigm related to AI; the need to create legal frameworks suitable for the future application of the law is of similar importance. The prognosis according to which AI can reshape the practical application of law and, ultimately, the entire legal life is also of considerable importance. Perhaps the biggest impact of artificial intelligence, however, is on digital crime. In the past, criminal law was basically created to sanction the criminal acts of a person, so the application of its concepts with their original content to AI-related violations is not expected to be sufficient in the future. Taking this into account, it is necessary to rethink the basic elements of criminal law, such as the act and factuality; in connection with criminality barriers and criminal sanctions as well, several new aspects have appeared that challenge both the criminal law researcher and the legislator. It is recommended that technological changes be continuously monitored in the field of criminal law, since both the legal and scientific frameworks will have to be re-created in time to correctly assess the events related to them, which may require a criminal law response. Artificial intelligence has completely reformed the world of digital crime. New crimes have appeared which the legal systems of many countries do not regulate, or do not regulate adequately. It is considered important to investigate and sanction these digital crimes. The primary goal is prevention, for which a comprehensive picture of the intertwining of artificial intelligence and digital crimes is needed. The goal is to explore these problems, present them, and create comprehensive proposals that support legal certainty.

Keywords: artificial intelligence, chat forums, defamation, international criminal cooperation, social networking, virtual sites

Procedia PDF Downloads 70
5991 Geospatial Technologies in Support of Civic Engagement and Cultural Heritage: Lessons Learned from Three Participatory Planning Workshops for Involving Local Communities in the Development of Sustainable Tourism Practices in Latiano, Brindisi

Authors: Mark Opmeer

Abstract:

The fruitful relationship between cultural heritage and digital technology is evident. Due to the development of user-friendly software, an increasing number of heritage scholars use ICT for their research activities. As a result, the implementation of information technology for heritage planning has become a research objective in itself. During the last decades, we have witnessed a growing debate and literature about the importance of computer technologies for the field of cultural heritage and ecotourism. Indeed, implementing digital technology in support of these domains can be very fruitful for one’s research practice. However, due to the rapid development of new software, scholars may find it challenging to use these innovations in an appropriate way. As such, this contribution seeks to explore the interplay between geospatial technologies (geo-ICT), civic engagement, cultural heritage and tourism. In this article, we discuss our findings on the use of geo-ICT in support of civic participation, cultural heritage and sustainable tourism development in the southern Italian district of Brindisi. In the city of Latiano, three workshops were organized that involved local members of the community to distinguish and discuss interesting points of interest (POIs) which represent the cultural significance and identity of the area. During the first workshop, a so-called mappa della comunità was created on a touch table with collaborative mapping software, which allowed the participants to highlight potential destinations for tourist purposes. Furthermore, two heritage-based itineraries along a selection of identified POIs were created to make the region attractive for recreational visitors and tourists. These heritage-based itineraries reflect the community’s ideas about the cultural identity of the region. Both trails were subsequently implemented in a dedicated mobile application (app) and evaluated using a mixed-method approach with the members of the community during the second workshop. In the final workshop, the findings of the collaboration, the heritage trails and the app were evaluated with all participants. Based on our conclusions, we argue that geospatial technologies have a significant potential for involving local communities in heritage planning and tourism development. The participants of the workshops found it increasingly engaging to share their ideas and knowledge using the digital map of the touch table. Secondly, the use of a mobile application as an instrument to test the heritage-based itineraries in the field was broadly considered fun and beneficial for enhancing community awareness of and participation in local heritage. The app furthermore stimulated the community’s awareness of the added value of geospatial technologies for sustainable tourism development in the area. We conclude this article with a number of recommendations in order to provide a best practice for organizing heritage workshops with similar objectives.

Keywords: civic engagement, geospatial technologies, tourism development, cultural heritage

Procedia PDF Downloads 279
5990 Modelling of Damage as Hinges in Segmented Tunnels

Authors: Gelacio Juárez-Luna, Daniel Enrique González-Ramírez, Enrique Tenorio-Montero

Abstract:

Frame elements coupled with spring elements are used for modelling the development of hinges in segmented tunnels; the spring elements model the rotational, transversal and axial failure. These spring elements are equipped with constitutive models to include the moment, shear force and axial force independently. These constitutive models are formulated based on damage mechanics and experimental tests reported in the literature. The mesh of the segmented tunnels was discretized in the GiD software, and the nonlinear analyses were carried out in the finite element software ANSYS. These analyses provide the capacity curve of the primary and secondary lining of a segmented tunnel. Two numerical examples of segmented tunnels show the capability of the spring elements to release energy by the development of hinges. The first example is a segmental concrete lining discretized with frame elements and loaded until hinges occurred in the lining. The second example is a tunnel with primary and secondary lining, discretized with a double-ring frame model. The outer ring simulates the segmental concrete lining and the inner ring simulates the secondary cast-in-place concrete lining. Spring elements also modelled the joints between the segments in the circumferential direction and the ring joints, which connect parallel adjacent rings. The computed load vs. displacement curves are congruent with numerical and experimental results reported in the literature. It is shown that modelling a tunnel with primary and secondary lining with frame elements and springs provides reasonable results and saves computational cost compared with 2D or 3D models equipped with smeared crack models.
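
The abstract does not give the constitutive equations, so the short Python sketch below only illustrates the general idea of a damage-based rotational spring (hinge): the moment-rotation response softens once a damage variable starts to evolve. All parameter values are hypothetical.

import numpy as np

def rotational_spring_moment(theta, k0=5.0e4, m_y=150.0, theta_u=0.05):
    # Damage-based moment-rotation law for a hinge spring (illustrative only).
    # k0      initial rotational stiffness [kN*m/rad]
    # m_y     moment at which damage starts [kN*m]
    # theta_u rotation at which the hinge is fully damaged [rad]
    theta_y = m_y / k0                      # rotation at damage onset
    theta = abs(theta)
    if theta <= theta_y:
        d = 0.0                             # elastic branch, no damage
    elif theta >= theta_u:
        d = 1.0                             # fully damaged hinge
    else:
        # Linear softening branch expressed through a damage variable d in [0, 1]
        m_soft = m_y * (theta_u - theta) / (theta_u - theta_y)
        d = 1.0 - m_soft / (k0 * theta)
    return (1.0 - d) * k0 * theta           # effective (damaged) moment

for th in np.linspace(0.0, 0.06, 7):
    print(f"theta = {th:.3f} rad -> M = {rotational_spring_moment(th):8.1f} kN*m")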

Keywords: damage, hinges, lining, tunnel

Procedia PDF Downloads 379
5989 Study on the Integration Schemes and Performance Comparisons of Different Integrated Solar Combined Cycle-Direct Steam Generation Systems

Authors: Liqiang Duan, Ma Jingkai, Lv Zhipeng, Haifan Cai

Abstract:

The integrated solar combined cycle (ISCC) system has a series of advantages such as increasing the system power generation, reducing the cost of solar power generation, and lowering pollutant and CO2 emissions. In this paper, parabolic trough collectors with direct steam generation (DSG) technology are considered to replace the heat load of heating surfaces in the heat recovery steam generator (HRSG) of a conventional natural gas combined cycle (NGCC) system containing a PG9351FA gas turbine and a triple-pressure HRSG with reheat. The detailed model of the NGCC system is built in the ASPEN PLUS software, and the parabolic trough collectors with DSG technology are modeled in the EBSILON software. ISCC-DSG systems with the replacement of single, two, three and four heating surfaces are studied in this paper. Results show that: (1) The ISCC-DSG systems with the replacement heat load of HPB, HPB+LPE, HPE2+HPB+HPS, and HPE1+HPE2+HPB+HPS are the best integration schemes when single, two, three and four stages of heating surfaces are partly replaced by the parabolic trough solar energy collectors with DSG technology. (2) Both the changes of feed water flow and the heat load of the heating surfaces in ISCC-DSG systems with the replacement of multi-stage heating surfaces are smaller than those in ISCC-DSG systems with the replacement of a single heating surface. (3) ISCC-DSG systems with the replacement of HPB+LPE heating surfaces can increase the solar power output significantly. (4) The ISCC-DSG system with the replacement of the HPB heating surface has the highest solar-thermal-to-electricity efficiency (47.45%) and solar radiation energy-to-electricity efficiency (30.37%), as well as the highest exergy efficiency of the solar field (33.61%).

Keywords: HRSG, integration scheme, parabolic trough collectors with DSG technology, solar power generation

Procedia PDF Downloads 247
5988 Analyzing Water Waves in Underground Pumped Storage Reservoirs: A Combined 3D Numerical and Experimental Approach

Authors: Elena Pummer, Holger Schuettrumpf

Abstract:

To date, underground pumped storage plants, an outstanding alternative to classical pumped storage plants, do not exist. They are needed to ensure the required balance between production and demand of energy. As short- to medium-term storage, pumped storage plants have been used economically over a long period of time, but their expansion is locally limited. The reasons are in particular the required topography and the extensive human land use. Through the use of underground reservoirs instead of surface lakes, expansion options could be increased. While fulfilling the same functions, several hydrodynamic processes result from the specific design of the underground reservoirs and must be accounted for in the planning process of such systems. A combined 3D numerical and experimental approach leads to currently unknown results about the occurring wave types and their behavior in dependence of different design and operating criteria. For the 3D numerical simulations, OpenFOAM was used and combined with an experimental approach in the laboratory of the Institute of Hydraulic Engineering and Water Resources Management at RWTH Aachen University, Germany. Using the finite-volume method and an explicit time discretization, a RANS simulation (k-ε) was run. Convergence analyses for different time discretizations, different meshes, etc., and clear comparisons between both approaches lead to the result that the numerical and experimental models can be combined and used as a hybrid model. Undular bores, partly with secondary waves, and breaking bores occurred in the underground reservoir. Different water levels and discharges change the global effects, defined as the time-dependent average of the water level, as well as the local processes, defined as the single, local hydrodynamic processes (water waves). Design criteria, like branches, directional changes, changes in cross-section or bottom slope, as well as changes in roughness, have a great effect on the local processes, while the global effects remain unaffected. Design calculations for underground pumped storage plants were developed on the basis of existing formulae and the results of the hybrid approach. Using the design calculations, reservoir heights as well as oscillation periods can be determined, leading to knowledge of the construction and operation possibilities of the plants. Consequently, future plants can be hydraulically optimized by applying the design calculations to the local boundary conditions.

Keywords: energy storage, experimental approach, hybrid approach, undular and breaking bores, 3D numerical approach

Procedia PDF Downloads 206
5987 Study of Mobile Game Addiction Using Electroencephalography Data Analysis

Authors: Arsalan Ansari, Muhammad Dawood Idrees, Maria Hafeez

Abstract:

Use of mobile phones has been increasing considerably over the past decade. Currently, it is one of the main sources of communication and information. Initially, mobile phones were limited to calls and messages, but with the advent of new technology, smartphones are being used for many other purposes, including video games. Despite positive outcomes, addiction to video games on mobile phones has become a leading cause of psychological and physiological problems among many people. Several researchers have examined different aspects of behavioral addiction with the use of different scales. The objective of this study is to examine any distinction between mobile game addicted and non-addicted players with the use of electroencephalography (EEG), based upon psycho-physiological indicators. The mobile players were asked to play a mobile game, and EEG signals were recorded by BIOPAC equipment with AcqKnowledge as data acquisition software. Electrodes were placed following the 10-20 system. EEG was recorded at a sampling rate of 200 samples/sec (12,000 samples/min). EEG recordings were obtained from the frontal (Fp1, Fp2), parietal (P3, P4), and occipital (O1, O2) lobes of the brain. The frontal lobe is associated with behavioral control, personality, and emotions. The parietal lobe is involved in perception, understanding logic, and arithmetic. The occipital lobe plays a role in visual tasks. For this study, a 60-second time window was chosen for analysis. Preliminary analysis of the signals was carried out with the AcqKnowledge software of BIOPAC Systems. From the survey based on the CGS manual study 2010, it was concluded that five participants out of fifteen were in the addicted category. This was used as prior information to group the addicted and non-addicted by physiological analysis. Statistical analysis showed that, by applying a clustering analysis technique, the authors were able to categorize the addicted and non-addicted players, specifically on the theta frequency range of the occipital area.
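
A minimal Python sketch of the analysis idea described above: estimate theta-band (4-8 Hz) power at the occipital channels with Welch's method and cluster players into two groups. The signals below are synthetic placeholders, not the BIOPAC recordings, and the feature choice is an assumption for illustration.

import numpy as np
from scipy.signal import welch
from sklearn.cluster import KMeans

FS = 200          # sampling rate used in the study [samples/s]
WINDOW = 60 * FS  # 60-second analysis window

def theta_bandpower(signal, fs=FS, band=(4.0, 8.0)):
    # Average power spectral density in the theta band (Welch estimate).
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.mean(psd[mask])

# Synthetic stand-in for occipital (O1/O2) recordings of 15 players (rows).
rng = np.random.default_rng(1)
recordings = rng.standard_normal((15, WINDOW))

features = np.array([[theta_bandpower(rec)] for rec in recordings])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("cluster assignment per player:", labels)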

Keywords: mobile game, addiction, psycho-physiology, EEG analysis

Procedia PDF Downloads 155
5986 Slope Stability Analysis and Evaluation of Road Cut Slope in Case of Goro to Abagada Road, Adama

Authors: Ezedin Geta Seid

Abstract:

Slope failures are among the common geo-environmental natural hazards in the hilly and mountainous terrain of the world, causing damage to human life and destruction of infrastructure. In Ethiopia, the demand for the construction of infrastructure, especially highways and railways, has increased to connect the developmental centers. However, the failure of roadside slopes formed in difficult geographical locations is a major obstacle to this development. As a result, a comprehensive site-specific investigation of destabilizing agents and a suitable selection of slope profiles are needed during design. Hence, this study emphasized the stability analysis and performance evaluation of slope profiles (single slope, multi-slope, and benched slope). The analysis was conducted for static and dynamic loading conditions using the limit equilibrium method (Slide software) and the finite element method (PLAXIS software). The analysis results in selected critical sections show that the slope is marginally stable, with FS varying from 1.2 to 1.5 in static conditions, and unstable, with FS below 1, in dynamic conditions. From the comparison of analysis methods, the finite element method provides more valuable information about the failure surface of a slope than limit equilibrium analysis. Performance evaluation of geometric profiles shows that geometric modification provides better and more economical slope stability. Benching provides significant stability for cut slopes (i.e., the use of 2 m and 3 m benches improves the factor of safety by 7.5% and 12%, respectively, over a single slope profile). The method is more effective on steep slopes. Similarly, the use of a multi-slope profile improves the stability of the slope in stratified soil with varied strength. The performance is more significant when it is used in combination with benches. The study also recommends drainage control and slope reinforcement as remedial measures for cut slopes.

Keywords: slope failure, slope profile, bench slope, multi slope

Procedia PDF Downloads 6
5985 Fundamentals of Mobile Application Architecture

Authors: Mounir Filali

Abstract:

Companies use many innovative ways to reach their customers to stay ahead of the competition. Along with the growing demand for innovative business solutions is the demand for new technology. The most noticeable area of demand for business innovations is the mobile application industry. Recently, companies have recognized the growing need to integrate proprietary mobile applications into their suite of services; they have realized that developing mobile apps gives them a competitive edge. As a result, many have begun to rapidly develop mobile apps to stay ahead of the competition. Mobile application development helps companies meet the needs of their customers. Mobile apps also help businesses take advantage of every potential opportunity to generate leads that convert into sales. With the recent rise in demand for business-related mobile apps, there has been a similar rise in the range of mobile app solutions being offered. Today, companies can use the traditional route of the software development team to build their own mobile applications. However, there are also many platform-ready "low-code and no-code" mobile apps available to choose from. These mobile app development options offer more streamlined business processes, helping companies be more responsive to their customers without having to be coding experts. Companies must have a basic understanding of mobile app architecture to attract and maintain the interest of mobile app users. Mobile application architecture refers to the building or structural systems and design elements that make up a mobile application. It also includes the technologies, processes, and components used during application development. The underlying foundation of all applications consists of the elements of the mobile application architecture; developing a good mobile app architecture requires proper planning and strategic design. The technology framework or platform on the back end and the user-facing side of a mobile application are part of the mobile architecture of the application. Software programmers loosely refer to this set of mobile architecture systems and processes as the "technology stack."

Keywords: mobile applications, development, architecture, technology

Procedia PDF Downloads 98
5984 Improving the Quality of Transport Management Services with Fuzzy Signatures

Authors: Csaba I. Hencz, István Á. Harmati

Abstract:

Nowadays the significance of road transport is gradually increasing. All transport companies are working in the same external environment, where the speed of transport is defined by traffic rules. The main objective is to accelerate the speed of service, and this depends only on the individual abilities of the managing members. These operational control units make decisions quickly (in a typically experiential and/or intuitive way). For this reason, support for these decisions is an important task. Our goal is to create a decision support model based on fuzzy signatures that can assist the work of operational management automatically. If the model parameters are set properly, the management of transport could be more economical and efficient.
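
The abstract does not specify the fuzzy signature structure, so the Python sketch below only illustrates the general mechanism: membership values in the leaves of a signature tree are aggregated upwards (here with weighted means) into a single decision score. The node names, weights and values are hypothetical.

# Illustrative fuzzy-signature aggregation for a transport-management decision.
# Each node is either a leaf membership value in [0, 1] or a pair
# (children, weights) aggregated by a weighted mean.

def aggregate(node):
    # Recursively aggregate a fuzzy signature node to a single value.
    if isinstance(node, (int, float)):
        return float(node)                       # leaf membership value
    children, weights = node
    values = [aggregate(child) for child in children]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Hypothetical signature: overall urgency of re-routing a shipment.
signature = (
    [
        ([0.8, 0.6], [0.7, 0.3]),   # traffic sub-tree: congestion, road works
        ([0.4, 0.9], [0.5, 0.5]),   # customer sub-tree: delay tolerance, priority
        0.2,                        # driver fatigue (leaf)
    ],
    [0.5, 0.3, 0.2],                # weights of the three sub-trees
)

print(f"decision score = {aggregate(signature):.3f}")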

Keywords: freight transport, decision support, information handling, fuzzy methods

Procedia PDF Downloads 250
5983 A Method for Quantitative Assessment of the Dependencies between Input Signals and Output Indicators in Production Systems

Authors: Maciej Zaręba, Sławomir Lasota

Abstract:

Knowing the degree of dependency between the sets of input signals and selected sets of indicators that measure a production system's effectiveness is of great importance in industry. This paper introduces the SELM method, which enables the selection of the sets of input signals that most affect a selected subset of indicators measuring the effectiveness of a production system. For a defined set of output indicators, the method quantifies the impact of the input signals gathered by the continuous monitoring of the production system.

Keywords: manufacturing operation management, signal relationship, continuous monitoring, production systems

Procedia PDF Downloads 108
5982 Cross-Layer Design of Event-Triggered Adaptive OFDMA Resource Allocation Protocols with Application to Vehicle Clusters

Authors: Shaban Guma, Naim Bajcinca

Abstract:

We propose an event-triggered algorithm for the solution of a distributed optimization problem by means of the projected subgradient method. Thereby, we invoke an OFDMA resource allocation scheme by applying an event-triggered sensitivity analysis at the access point. The optimal resource assignment of the subcarriers to the involved wireless nodes is carried out by considering the sensitivity analysis of the overall objective function as defined by the control of vehicle clusters with respect to the information exchange between the nodes.
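
As a rough illustration of the two ingredients named above, the Python sketch below runs projected subgradient steps on a toy resource-sharing problem and only lets a node re-transmit its gradient when it has changed by more than a threshold (the event trigger). The objective, threshold and step size are invented for the example; the actual OFDMA subcarrier assignment and vehicle-cluster objective are not reproduced.

import numpy as np

def project_to_simplex(x, total=1.0):
    # Euclidean projection onto {x >= 0, sum(x) = total}.
    u = np.sort(x)[::-1]
    css = np.cumsum(u) - total
    rho = np.nonzero(u - css / (np.arange(len(x)) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(x - theta, 0.0)

# Hypothetical per-node costs: node i wants a share close to its demand d_i.
demands = np.array([0.1, 0.3, 0.2, 0.4, 0.5])
weights = np.array([1.0, 2.0, 1.0, 3.0, 1.5])

def subgradient(x):
    return 2.0 * weights * (x - demands)

x = np.full(len(demands), 1.0 / len(demands))   # start from equal shares
last_sent = subgradient(x)                      # last gradient broadcast by the nodes
threshold, step = 0.05, 0.05

for k in range(200):
    g = subgradient(x)
    # Event trigger: a node re-transmits only if its gradient changed enough.
    send = np.abs(g - last_sent) > threshold
    last_sent = np.where(send, g, last_sent)
    x = project_to_simplex(x - step * last_sent)

print("allocated shares:", np.round(x, 3))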

Keywords: consensus, cross-layer, distributed, event-triggered, multi-vehicle, protocol, resource, OFDMA, wireless

Procedia PDF Downloads 322
5981 Methodology to Affirm Driver Engagement in Dynamic Driving Task (DDT) for a Level 2 ADAS Feature

Authors: Praneeth Puvvula

Abstract:

Autonomy has become increasingly common in modern cars. There are 5 levels of autonomy as defined by SAE. This paper focuses on an SAE Level 2 feature which, by definition, is able to control the vehicle longitudinally and laterally at the same time. The system keeps the vehicle centred within the lane by detecting the lane boundaries while maintaining the vehicle speed. As with the features from SAE Level 1 to Level 3, the primary responsibility for the dynamic driving task lies with the driver. This requires monitoring techniques to ensure the driver is always engaged even while the feature is active. This paper focuses on these techniques, which would help the safe usage of the feature and provide appropriate warnings to the driver.

Keywords: autonomous driving, safety, ADAS, automotive technology

Procedia PDF Downloads 77
5980 An Overview of Technology Availability to Support Remote Decentralized Clinical Trials

Authors: Simone Huber, Bianca Schnalzer, Baptiste Alcalde, Sten Hanke, Lampros Mpaltadoros, Thanos G. Stavropoulos, Spiros Nikolopoulos, Ioannis Kompatsiaris, Lina Pérez-Breva, Vallivana Rodrigo-Casares, Jaime Fons-Martínez, Jeroen de Bruin

Abstract:

Developing new medicine and health solutions and improving patient health currently rely on the successful execution of clinical trials, which generate relevant safety and efficacy data. For their success, recruitment and retention of participants are some of the most challenging aspects of protocol adherence. Main barriers include: i) lack of awareness of clinical trials; ii) long distance from the clinical site; iii) the burden on participants, including the duration and number of clinical visits; and iv) high dropout rates. Most of these aspects could be addressed with a new paradigm, namely the Remote Decentralized Clinical Trials (RDCTs). Furthermore, the COVID-19 pandemic has highlighted additional advantages and challenges for RDCTs in practice, allowing participants to join trials from home and not depend on site visits, etc. Nevertheless, RDCTs should follow the process and the quality assurance of conventional clinical trials, which involve several processes. For each part of the trial, the Building Blocks, existing software and technologies were assessed through a systematic search. The technology needed to perform RDCTs is widely available and validated but is still segmented and developed in silos, as different software solutions address different parts of the trial and at various levels. The current paper analyzes the availability of technology to perform RDCTs, identifying gaps and providing an overview of the Basic Building Blocks and functionalities that need to be covered to support the described processes.

Keywords: architectures and frameworks for health informatics systems, clinical trials, information and communications technology, remote decentralized clinical trials, technology availability

Procedia PDF Downloads 208
5979 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method

Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry

Abstract:

The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area, and thus it plays an important role in numerous specifications such as durability, comfort, crash, etc. During the development of new vehicle projects at Renault, durability validation is always the main focus, while deployment of comfort comes later in the project. Therefore, design choices sometimes have to be reconsidered because of the natural incompatibility between these two specifications. Besides, robustness is also an important point of concern, as it is related to manufacturing costs as well as to the performance after the ageing of components like shock absorbers. In this paper, an approach is proposed aiming to realize a multi-objective optimization between chassis endurance and comfort while taking random factors into consideration. The adaptive-sparse polynomial chaos expansion (PCE) method with Chebyshev polynomial series has been applied to predict the uncertainty intervals of a system's responses according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is realized to build the response surfaces, which statistically represent a black-box system. Secondly, within several iterations, an optimum set is proposed and validated, which will form a Pareto front. At the same time, the robustness of each response, serving as an additional objective, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy is carried out to determine the parameters' tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter-car model has been tested as an example by applying the road excitations from actual road measurements for both endurance and comfort calculations. One indicator based on Basquin's law is defined to compare the global chassis durability of different parameter settings. Another indicator related to comfort is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness has finally been obtained, and the reference tests prove a good robustness prediction of the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.
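
A compact Python sketch of the core idea, assuming a one-dimensional toy response in place of the quarter-car model: a Chebyshev polynomial surrogate is fitted on a design of experiments and then used to estimate the response interval over an uncertain-but-bounded parameter. Parameter names and bounds are hypothetical.

import numpy as np
from numpy.polynomial import chebyshev as C

# Toy "black-box" response (stands in for a durability or comfort indicator).
def response(damping):
    return 1.0 / (0.2 + (damping - 0.6) ** 2) + 0.1 * damping

# Uncertain-but-bounded parameter, mapped to the Chebyshev interval [-1, 1].
lo, hi = 0.3, 0.9                       # shock-absorber damping bounds (hypothetical)
design = np.linspace(lo, hi, 9)         # initial design of experiments
xi = 2.0 * (design - lo) / (hi - lo) - 1.0

# Fit a degree-5 Chebyshev expansion to the sampled responses (the surrogate).
coeffs = C.chebfit(xi, response(design), deg=5)

# Use the cheap surrogate to bound the response over the whole interval.
xi_dense = np.linspace(-1.0, 1.0, 1001)
surrogate = C.chebval(xi_dense, coeffs)
print(f"estimated response interval: [{surrogate.min():.3f}, {surrogate.max():.3f}]")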

Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design

Procedia PDF Downloads 146
5978 Scope of Heavy Oil as a Fuel of the Future

Authors: Kiran P. Chadayamuri, Saransh Bagdi

Abstract:

The increasing imbalance between energy supply and demand has made nations and companies involved in the energy sector boost their research and find suitable solutions. With the high rates at which conventional oil and gas resources are depleting, efficient exploration and exploitation of heavy oil could just be the answer. Heavy oil may be defined as crude oil having an API gravity of less than 20°. Heavy oils are highly viscous, have low hydrogen-to-carbon ratios and are known to produce high carbon residues. They have high contents of asphaltenes, heavy metals, sulphur and nitrogen. Due to these properties, the extraction, transportation and refining of heavy oil have their share of challenges. Lack of suitable technology has hindered its production in the past, but now things are going in a more positive direction. The aim of this paper is to study the various advantages of heavy oil, the associated limitations and its feasibility as a fuel of the future.
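
For reference, API gravity relates to specific gravity through the standard formula API = 141.5/SG - 131.5; the short Python check below shows that the 20° API threshold quoted above corresponds to a specific gravity of about 0.934.

# API gravity = 141.5 / SG - 131.5  (SG: specific gravity at 60 °F)
def specific_gravity_from_api(api):
    return 141.5 / (api + 131.5)

print(f"SG at 20 API: {specific_gravity_from_api(20):.3f}")   # ~0.934 -> heavy oil threshold
print(f"SG at 10 API: {specific_gravity_from_api(10):.3f}")   # ~1.000 -> extra-heavy oil/bitumen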

Keywords: energy, heavy oil, fuel, future

Procedia PDF Downloads 278
5977 Parallel 2-Opt Local Search on GPU

Authors: Wen-Bao Qiao, Jean-Charles Créput

Abstract:

To accelerate the solution of large-scale traveling salesman problems (TSP), a parallel 2-opt local search algorithm with a simple implementation based on the Graphics Processing Unit (GPU) is presented and tested in this paper. The parallel scheme is based on a data decomposition technique that dynamically assigns K processors along the integral tour to treat K edges' 2-opt local optimizations simultaneously on independent sub-tours, where K can be user-defined or a function of the input size N. We implement this algorithm with a doubly linked list on the GPU. The implementation only requires O(N) memory. We compare this parallel 2-opt local optimization against a sequential exhaustive 2-opt search along the integral tour on TSP instances from TSPLIB with more than 10,000 cities.
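
A small Python sketch (sequential, CPU-only) of the data-decomposition idea described above: the tour is split into K independent segments, a 2-opt improvement is applied inside each segment, and the segments are reassembled. It only illustrates the scheme on a synthetic instance; it is not the GPU doubly-linked-list implementation, and TSPLIB data are not used.

import numpy as np

def dist(cities, a, b):
    return float(np.hypot(*(cities[a] - cities[b])))

def improve_segment(seg, cities):
    # Exhaustive 2-opt inside one sub-tour; its end points stay fixed so the
    # connections to the neighbouring sub-tours remain valid.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(seg) - 2):
            for j in range(i + 1, len(seg) - 1):
                # Edges (i-1, i) and (j, j+1) are replaced by (i-1, j) and (i, j+1).
                before = dist(cities, seg[i - 1], seg[i]) + dist(cities, seg[j], seg[j + 1])
                after = dist(cities, seg[i - 1], seg[j]) + dist(cities, seg[i], seg[j + 1])
                if after < before - 1e-12:
                    seg[i:j + 1] = reversed(seg[i:j + 1])
                    improved = True
    return seg

def parallel_style_2opt(tour, cities, k_segments=4):
    # Split the tour into K sub-tours and optimise each independently
    # (on a GPU every sub-tour would be handled by its own processor).
    bounds = np.linspace(0, len(tour), k_segments + 1, dtype=int)
    return [c for lo, hi in zip(bounds[:-1], bounds[1:])
            for c in improve_segment(list(tour[lo:hi]), cities)]

rng = np.random.default_rng(42)
cities = rng.random((200, 2))                  # synthetic instance
tour = parallel_style_2opt(list(range(len(cities))), cities, k_segments=8)
length = sum(dist(cities, tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))
print("tour length after one decomposed pass:", round(length, 3))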

Keywords: parallel 2-opt, double links, large scale TSP, GPU

Procedia PDF Downloads 612
5976 Online Information Seeking: A Review of the Literature in the Health Domain

Authors: Sharifah Sumayyah Engku Alwi, Masrah Azrifah Azmi Murad

Abstract:

The development of information technology and the Internet has been transforming the healthcare industry. The internet is continuously accessed to seek health information, and there are a variety of sources, including search engines, health websites, and social networking sites. Providing more and better information on health may empower individuals; however, ensuring high-quality and trusted health information could pose a challenge. Moreover, there is an ever-increasing amount of information available, but it is not necessarily accurate and up to date. Thus, this paper aims to provide an insight into the models and frameworks related to consumers' online health information seeking. It begins by exploring the definitions of information behavior and information seeking to provide a better understanding of the concept of information seeking. In this study, critical factors such as performance expectancy, effort expectancy, and social influence will be studied in relation to the value of seeking health information. It also aims to analyze the effects of age, gender, and health status as moderators on the factors that influence online health information seeking, i.e. trust and information quality. A preliminary survey will be carried out among health professionals to clarify the research problems which exist in the real world, at the same time producing a conceptual framework. A final survey will be distributed in five states of Malaysia to solicit feedback on the framework. Data will be analyzed using the SPSS and SmartPLS 3.0 analysis tools. It is hoped that at the end of this study, a novel framework that can improve online health information seeking will be developed. Finally, this paper concludes with some suggestions on the models and frameworks that could improve online health information seeking.

Keywords: information behavior, information seeking, online health information, technology acceptance model, the theory of planned behavior, UTAUT

Procedia PDF Downloads 263
5975 Care as a Situated Universal: Defining Care as a Practical Phenomenology Study

Authors: Amanda Aliende da Matta

Abstract:

This communication presents an aspect of phenomenon selection in an applied hermeneutic phenomenology study on care and vulnerability: the need to consider it as a situated universal. For that, we will first present the study and its methodology. Secondly, we will expose the need to understand phenomena as situation-defined, incorporating feminist thought. In an informatics class for 14 year olds, we explained the exercise: students have to make a 5 slide presentation about a topic of their choice. A does it on streetwear, B on Cristiano Ronaldo, C on Marvel, but J did it on Down Syndrome. Introducing it to the class, J explains the physical and cognitive differences caused by trisomy; when asked to explain it further, he says: "they are angels, teacher," and shows us a poster on his cellphone that says: if you laugh at a different child he will laugh with you because his innocence outweighs your ignorance. The anecdote shows, better than any theoretical explanation, something that some vulnerable people have; something beautiful and special but difficult to define. Let's call this something caring. The research has the main objective of accounting for the experience of caregiving in vulnerability, and it will be carried out with Applied Hermeneutic Phenomenology (AHP). The method's objective is to investigate the lived human experience in its pre-reflexive dimension to know its meaning structures. Contrary to other research methods, AHP does not produce theory about a specific context but seeks the meaning of the lived experience, in its characteristic of human experience. However, it is necessary that we understand care as defined in a concrete situation. We cannot start the research with an a priori definitive concept of care, or we would fall into the mistake of closing ourselves to only what we already know, as explained by Levinas. We incorporate, then, the notion of situated universals. Loyal to phenomenology, the definition of the phenomenon should start with an investigation of the word's etymology: the word cura, in its etymological root, means care. And care comes from the Latin word cogitātus/cōgĭto, which means "to pursue something in mind" and "to consider thoroughly." The verb cōgĭto, meanwhile, is composed of co- (altogether) and agitare (to deal with or think committedly about something, to concern oneself with) / ăgĭto (to set in motion, to move). Care, therefore, has in its origin a meditation on something, a concern about something, a verb that has a sense of action and movement. To care is to act out of concern for something/someone. This etymology, though, is not the final definition of the phenomenon, but only its skeleton. It needs to be embodied in the concrete situation to become a possible lived experience. And that means that the lived experience descriptions (LEDs) should be selected by taking into consideration how and if care was engendered in that concrete experience. Defining the phenomenon has to take into consideration situated knowledge.

Keywords: applied hermeneutic phenomenology, care ethics, hermeneutics, phenomenology, situated universalism

Procedia PDF Downloads 80
5974 MARISTEM: A COST Action Focused on Stem Cells of Aquatic Invertebrates

Authors: Arzu Karahan, Loriano Ballarin, Baruch Rinkevich

Abstract:

Marine invertebrates, the highly diverse phyla of multicellular organisms, exhibit phenomena that are either not found or highly restricted in the vertebrates. These include phenomena like budding, fission, fusion of ramets, and high regeneration power, such as the ability to create whole new organisms from tiny parental fragments, many of which are controlled by totipotent, pluripotent, and multipotent stem cells. Thus, there is very much that can be learned from these organisms on the practical and evolutionary levels, echoing Darwin's words, “It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change”. The ‘stem cell’ notion highlights a cell that has the ability to continuously divide and differentiate into various progenitors and daughter cells. In vertebrates, adult stem cells are rare cells defined as lineage-restricted (multipotent at best), with tissue- or organ-specific activities, that are located in defined niches and further regulate the machinery of homeostasis, repair, and regeneration. They are usually categorized by their morphology, tissue of origin, plasticity, and potency. The above description does not always hold when comparing the vertebrates with marine invertebrates' stem cells, which display wider ranges of plasticity and diversity at the taxonomic and the cellular levels. While marine/aquatic invertebrate stem cells (MISC) have recently raised more scientific interest, the know-how still lags behind the attention they deserve. MISC are not only highly potent but, in many cases, abundant (e.g., one third of the entire animal's cells); they do not reside in permanent niches and participate in delayed-ageing and whole-body regeneration phenomena, the knowledge of which can be clinically relevant. Moreover, they have massive hidden potential for the discovery of new bioactive molecules that can be used for human health (antitumor, antimicrobial) and biotechnology. The MARISTEM COST action (Stem Cells of Marine/Aquatic Invertebrates: From Basic Research to Innovative Applications) aims to connect the fragmented European MISC community. Under this scientific umbrella, the action conceptualizes the idea of adult stem cells that do not share many properties with the vertebrates' stem cells, organizes meetings, summer schools, and workshops, stimulates young researchers, supplies technical and advisory support via short-term scientific studies, and builds new bridges between the MISC community and biomedical disciplines.

Keywords: aquatic/marine invertebrates, adult stem cell, regeneration, cell cultures, bioactive molecules

Procedia PDF Downloads 160
5973 Relationship between Legacy of Islamic Hadith and Biodiversity

Authors: Mohsen Nouraei, Maryam Amouei

Abstract:

Islamic studies draw on both the Quran and the Hadith. Hadith is defined as the set of reports that narrate the words and behaviors of infallible persons such as the holy Prophet (pbuh) or the Infallible Imams (as). The issue of biodiversity, which is one of the most important environmental aspects, is considered in the field of Hadith. The present paper investigates biodiversity on the basis of descriptive-analytical methods with a library-documentary approach. The household of the Prophet (as) have referred to biodiversity, including the diversity of animals, plants, climate, etc. In addition, they have also emphasized the human duty to preserve diversity and cause no damage. It should be noted that they have expressed the rights of animals and plants with respect to their correct use by humans, so that humans can apply these rights in the conservation of diversity and of their generations.

Keywords: biodiversity, conservation of biodiversity, degradation of biodiversity, extinction of biodiversity

Procedia PDF Downloads 440
5972 Determining the Information Technologies Usage and Learning Preferences of Construction

Authors: Naci Büyükkaracığan, Yıldırım Akyol

Abstract:

Information technology is the technology that provides the transmission of information elsewhere regardless of time, location and distance. Today, information technology is bringing about groundbreaking changes in all areas of our daily lives. Information can quickly reach millions of people with the help of information technology. In this study, the effects of information technology on students' education and their learning preferences were demonstrated using data obtained from questionnaires administered to students of the 2015-2016 academic year at the Selcuk University Kadınhanı Faik İçil Vocational School Construction Department. The data were obtained via a questionnaire consisting of 30 questions prepared by the researchers. The SPSS 21.00 package programme was used for the statistical analysis of the data. Chi-square tests, the Mann-Whitney U test, and the Kruskal-Wallis and Kolmogorov-Smirnov tests were used in the data analysis, together with descriptive statistics. In the study, conducted with the participation of 61 students, 93.4% of the students were found to own their own information and communication device (computer, smartphone, etc.) and, at the same rate, to have access to the internet; 45.90% of the students own a computer of their own. The main reasons for the students' use of the internet were social networking sites (85.24%) and following news sites (13.11%). All students stated that they use information technology in the preparation of assignments and projects. When the students' preferred sources for acquiring scientific knowledge in the profession were evaluated, the internet was seen to be their preferred source. Daily use of information technology by male students was shown to be statistically significantly lower than that of female students. Of the students, 72.13% stated that they were eager to learn construction package programs, and 91.80% agreed that information technology is an indispensable element in their professional advancement.

Keywords: information technologies, computer, construction, internet, learning systems

Procedia PDF Downloads 289
5971 Valorization of Surveillance Data and Assessment of the Sensitivity of a Surveillance System for an Infectious Disease Using a Capture-Recapture Model

Authors: Jean-Philippe Amat, Timothée Vergne, Aymeric Hans, Bénédicte Ferry, Pascal Hendrikx, Jackie Tapprest, Barbara Dufour, Agnès Leblond

Abstract:

The surveillance of infectious diseases is necessary to describe their occurrence and to help the planning, implementation and evaluation of risk mitigation activities. However, the exact number of detected cases may remain unknown when surveillance is based on serological tests, because identifying seroconversion may be difficult. Moreover, incomplete detection of cases or outbreaks is a recurrent issue in the field of disease surveillance. This study addresses these two issues. Using a viral animal disease as an example (equine viral arteritis), the goals were to establish suitable rules for identifying seroconversion in order to estimate the number of cases and outbreaks detected by a surveillance system in France between 2006 and 2013, and to assess the sensitivity of this system by estimating the total number of outbreaks that occurred during this period (including unreported outbreaks) using a capture-recapture model. Data from horses which exhibited at least one positive result in serology using the viral neutralization test between 2006 and 2013 were used for the analysis (n=1,645). The data consisted of the annual antibody titers and the location of the subjects (towns). A consensus among multidisciplinary experts (specialists in the disease and its laboratory diagnosis, epidemiologists) was reached to consider seroconversion as a change in antibody titer from negative to at least 32, or as a three-fold or greater increase. The number of seroconversions was counted for each town and modeled using a unilist zero-truncated binomial (ZTB) capture-recapture model with the R software. The binomial denominator was the number of horses tested in each infected town. Using the defined rules, 239 cases located in 177 towns (outbreaks) were identified from 2006 to 2013. Subsequently, the sensitivity of the surveillance system was estimated as the ratio of the number of detected outbreaks to the total number of outbreaks that occurred (including unreported outbreaks) estimated using the ZTB model. The total number of outbreaks was estimated at 215 (95% credible interval CrI95%: 195-249) and the surveillance sensitivity at 82% (CrI95%: 71-91). The rules proposed for identifying seroconversion may serve future research. Such rules, adjusted to the local environment, could conceivably be applied in other countries with surveillance programs dedicated to this disease. More generally, defining ad hoc algorithms for interpreting the antibody titer could be useful for other human and animal diseases and zoonoses when there is a lack of accurate information in the literature about the serological response in naturally infected subjects. This study shows how capture-recapture methods may help to estimate the sensitivity of an imperfect surveillance system and to valorize surveillance data. The sensitivity of the surveillance system for equine viral arteritis is relatively high and supports its relevance for preventing the disease from spreading.
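
A simplified frequentist Python sketch of the reasoning behind such a sensitivity estimate: a zero-truncated binomial likelihood is fitted to per-town counts of detected seroconversions, the detection probability of each observed town is derived, and a Horvitz-Thompson-type correction estimates the total number of outbreaks. The data below are synthetic, and the paper's actual Bayesian model is not reproduced.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

# Synthetic data: for each detected outbreak (town), the number of horses tested
# and the number of observed seroconversions (always >= 1 by construction).
n_tested = np.array([12, 8, 20, 5, 15, 9, 30, 7, 11, 6])
y_obs = np.array([2, 1, 4, 1, 3, 1, 5, 1, 2, 1])

def neg_log_likelihood(p):
    # Zero-truncated binomial log-likelihood for a common seroconversion rate p.
    log_pmf = binom.logpmf(y_obs, n_tested, p)
    log_trunc = np.log1p(-(1.0 - p) ** n_tested)     # P(at least one detection)
    return -np.sum(log_pmf - log_trunc)

p_hat = minimize_scalar(neg_log_likelihood, bounds=(1e-4, 0.999), method="bounded").x

# Horvitz-Thompson-type estimate of the total number of outbreaks,
# including towns where the infection was never detected.
detection_prob = 1.0 - (1.0 - p_hat) ** n_tested
total_outbreaks = np.sum(1.0 / detection_prob)
sensitivity = len(y_obs) / total_outbreaks
print(f"p_hat = {p_hat:.3f}, estimated outbreaks = {total_outbreaks:.1f}, "
      f"sensitivity = {sensitivity:.2f}")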

Keywords: Bayesian inference, capture-recapture, epidemiology, equine viral arteritis, infectious disease, seroconversion, surveillance

Procedia PDF Downloads 287
5970 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, at odds with the time constraints provided for the quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. Based on this support, a test engineer solely has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is ‘Subset Accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still at 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is thereby independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
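
A compact scikit-learn sketch of the described idea: test step specifications (free text) are vectorised and a multi-label classifier predicts which automation components implement each step, evaluated with subset accuracy. The specifications, component names and classifier choice are invented for the example, not taken from the paper.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import accuracy_score   # equals subset accuracy on multi-label data

# Historical data: test step specification -> automation components (hypothetical names).
steps = [
    "switch ignition on and wait for bus wakeup",
    "set vehicle speed to 50 kph on the test bench",
    "switch ignition off",
    "set vehicle speed to 80 kph and check warning lamp",
    "check that warning lamp is off",
    "wait for bus wakeup after ignition on",
]
components = [
    {"IgnitionControl", "BusMonitor"},
    {"SpeedController"},
    {"IgnitionControl"},
    {"SpeedController", "LampCheck"},
    {"LampCheck"},
    {"IgnitionControl", "BusMonitor"},
]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(components)

model = make_pipeline(TfidfVectorizer(),
                      OneVsRestClassifier(LogisticRegression(max_iter=1000)))
model.fit(steps, Y)

# Predict the implementation of an unseen specification and report subset accuracy
# on the (tiny) training set itself, just to show the evaluation call.
new_step = ["set vehicle speed to 30 kph"]
print("predicted components:", mlb.inverse_transform(model.predict(new_step)))
print("subset accuracy on training data:", accuracy_score(Y, model.predict(steps)))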

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 119
5969 Design and Implementation of 3kVA Grid-Tied Transformerless Power Inverter for Solar Photovoltaic Application

Authors: Daniel O. Johnson, Abiodun A. Ogunseye, Aaron Aransiola, Majors Samuel

Abstract:

The power inverter is a very important device in renewable energy use, particularly for solar photovoltaic power applications, because it is the effective interface between the DC power generator and the load or the grid. Transformerless inverters are increasingly preferred to power converters with a galvanic isolation transformer and may eventually supplant them. The transformerless inverter offers the advantages of improved DC-to-AC conversion and power delivery efficiency, and reduced system cost, weight and complexity. This work presents a thorough analysis of the design and prototyping of a 3 kVA grid-tied transformerless inverter. The inverter employs an electronic switching method with minimised heat generation in the system and operates based on the principle of pulse-width modulation (PWM). The design is such that it can take two inputs, one from PV arrays and the other from battery energy storage (BES), and it addresses the safety challenge of leakage current. The inverter system was designed around a microcontroller system and modeled with Proteus® software for simulation and testing of the viability of the designed inverter circuit. The firmware governing the operation of the grid-tied inverter is written in the C language and was developed using the mikroC software by MikroElektronika® for writing the sine wave signal code for synchronization to the grid. The simulation results show that the designed inverter circuit performs excellently with very high efficiency, a good-quality sinusoidal output waveform, negligible harmonics, and very stable performance under input voltage variation from 36 VDC to 60 VDC. The prototype confirmed the simulated results and was successfully synchronized with the utility supply. Comprehensive analyses of the circuit design and the prototype, and an explanation of the overall performance, will be presented.
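
A short Python sketch of the sinusoidal PWM principle mentioned above: a sine reference is compared with a triangular carrier to produce the switching pattern, or, equivalently, a duty-cycle lookup table for the microcontroller. The carrier frequency, resolution and modulation index are hypothetical; this is not the firmware used in the prototype.

import numpy as np

F_GRID = 50          # fundamental frequency [Hz]
F_CARRIER = 5000     # triangular carrier frequency [Hz] (hypothetical)
M_INDEX = 0.9        # modulation index

# Duty-cycle lookup table: one entry per carrier period over a full 50 Hz cycle.
samples = F_CARRIER // F_GRID
angle = 2 * np.pi * np.arange(samples) / samples
duty = 0.5 * (1.0 + M_INDEX * np.sin(angle))               # 0..1 duty cycle per PWM period

# Equivalent comparator view: sine reference vs. triangular carrier.
t = np.linspace(0, 1 / F_GRID, 20000, endpoint=False)
reference = M_INDEX * np.sin(2 * np.pi * F_GRID * t)
carrier = 2 * np.abs(2 * ((t * F_CARRIER) % 1) - 1) - 1    # -1..1 triangle wave
gate_high = reference > carrier                            # switching signal of one leg

print(f"{samples} duty-cycle entries, first five: {np.round(duty[:5], 3)}")
print(f"average switch on-time over the cycle: {gate_high.mean():.3f}")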

Keywords: grid-tied inverter, leakage current, photovoltaic system, power electronic, transformerless inverter

Procedia PDF Downloads 281
5968 Radioactivity Assessment of Sediments in Negombo Lagoon Sri Lanka

Authors: H. M. N. L. Handagiripathira

Abstract:

The distributions of naturally occurring and anthropogenic radioactive materials were determined in surface sediments taken at 27 different locations along the bank of Negombo Lagoon in Sri Lanka. Hydrographic parameters of the lagoon water and grain size analyses of the sediment samples were also carried out for this study. The conductivity of the adjacent water varied from 13.6 mS/cm near the southern end to 55.4 mS/cm near the northern end of the lagoon, and salinity levels likewise varied from 7.2 psu to 32.1 psu. The average pH of the water was 7.6, and the average water temperature was 28.7 °C. The grain size analysis gave the mass fractions of the samples at the sampling locations as sand (60.9%), fine sand (30.6%) and fine silt+clay (1.3%). The surface sediment samples, of 1 kg wet weight each from the upper 5-10 cm layer, were oven dried at 105 °C for 24 hours to a constant weight, homogenized and sieved through a 2 mm sieve (IAEA technical series no. 295). The radioactivity concentrations were determined using the gamma spectrometry technique. An ultra-low background broad energy high-purity Ge detector, BEGe (Model BE5030, Canberra), was used for the radioactivity measurements, with Canberra Industries' Laboratory Sourceless Calibration Software (LabSOCS) mathematical efficiency calibration approach and the Geometry Composer software. The mean activity concentrations were found to be 24 ± 4, 67 ± 9, 181 ± 10, 59 ± 8, 3.5 ± 0.4 and 0.47 ± 0.08 Bq/kg for 238U, 232Th, 40K, 210Pb, 235U and 137Cs, respectively. The mean absorbed dose rate in air, radium equivalent activity, external hazard index, annual gonadal dose equivalent and annual effective dose equivalent were 60.8 nGy/h, 137.3 Bq/kg, 0.4, 425.3 µSv/year and 74.6 µSv/year, respectively. The results of this study will provide baseline information on the natural and artificial radioactive isotopes, environmental pollution and the associated radiological risk.
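
A small Python check using commonly applied (UNSCEAR-style) coefficients for these indices; treat the coefficients as assumptions rather than the paper's exact formulas. With the mean activity concentrations quoted above it reproduces the reported values closely, which also indicates that the gonadal and effective dose equivalents are on the order of microsieverts per year.

# Mean activity concentrations from the abstract [Bq/kg]
A_Ra, A_Th, A_K = 24.0, 67.0, 181.0    # 238U series (as 226Ra), 232Th, 40K

# Widely used coefficients (UNSCEAR-style); assumptions for this illustration.
dose_rate = 0.462 * A_Ra + 0.604 * A_Th + 0.0417 * A_K     # absorbed dose rate in air [nGy/h]
ra_eq = A_Ra + 1.43 * A_Th + 0.077 * A_K                   # radium equivalent activity [Bq/kg]
h_ex = A_Ra / 370.0 + A_Th / 259.0 + A_K / 4810.0          # external hazard index [-]
agde = 3.09 * A_Ra + 4.18 * A_Th + 0.314 * A_K             # annual gonadal dose equivalent [uSv/y]
aede = dose_rate * 8760 * 0.2 * 0.7 / 1000.0               # annual effective dose, outdoor [uSv/y]

print(f"D = {dose_rate:.1f} nGy/h, Raeq = {ra_eq:.1f} Bq/kg, Hex = {h_ex:.2f}")
print(f"AGDE = {agde:.1f} uSv/y, AEDE = {aede:.1f} uSv/y")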

Keywords: gamma spectrometry, lagoon, radioactivity, sediments

Procedia PDF Downloads 130
5967 Effects of Neem (Azadirachta indica A. Juss) Kernel Inclusion in Broiler Diet on Growth Performance, Organ Weight and Gut Morphometry

Authors: Olatundun Bukola Ezekiel, Adejumo Olusoji

Abstract:

A feeding trial was conducted with 100 two-week-old broiler chickens to evaluate the influence of including neem kernel in broiler diets at 0, 2.5, 5, 7.5 and 10% (used to replace an equal quantity of maize) on their performance, organ weight and gut morphometry. The birds were randomly allotted to five dietary treatments, each treatment having four replicates consisting of five broilers, in a completely randomized design. The diets were formulated to be iso-nitrogenous (23% CP). Weekly feed intake and changes in body weight were calculated, and feed efficiency was determined. At the end of the 28-day feeding trial, four broilers per treatment were selected and sacrificed for carcass evaluation. The results were subjected to statistical analysis using the analysis of variance procedures of the Statistical Analysis Software. The treatment means were presented with group standard errors of the means and, where significant, were compared using the Duncan multiple range test of the same software. The results showed that broilers fed diets with 2.5% neem kernel inclusion had growth performance statistically comparable to those fed the control diet. Birds on the 5, 7.5 and 10% neem kernel diets showed a significant (P<0.05) increase in the relative weight of the liver. The absolute weight of the spleen also increased significantly (P<0.05) in birds on the 10% neem kernel diet. Diets with more than 5% neem kernel gave a significant (P<0.05) increase in the relative weight of the kidney. The length of the small intestine increased significantly in birds fed the 7.5 and 10% neem kernel diets. Significant differences (P<0.05) did not occur in the lengths of the large intestine and the right and left caeca. It is recommended that neem kernel can be included at up to 2.5% in broiler chicken diets without any deleterious effects on the performance and physiological status of the birds.

Keywords: broiler chicken, growth performance, gut morphometry, neem kernel, organ weight

Procedia PDF Downloads 753
5966 Keynote Talk: The Role of Internet of Things in the Smart Cities Power System

Authors: Abdul-Rahman Al-Ali

Abstract:

As the number of mobile devices grows exponentially, it is estimated that about 50 billion devices will be connected to the Internet by the year 2020, corresponding to an average of eight connected devices per person worldwide by the end of this decade. These 50 billion devices are not only mobile phones and data-browsing gadgets but also machine-to-machine and man-to-machine devices. With such growing numbers of devices, the Internet of Things (IoT) has recently become one of the key emerging technologies. Within smart grid technologies, smart home appliances, Intelligent Electronic Devices (IEDs) and Distributed Energy Resources (DERs) are major IoT objects that can be addressed using IPv6. These objects are collectively called the smart grid Internet of Things (SG-IoT). The SG-IoT generates big data that requires high-speed computing infrastructure, widespread computer networks, big data storage, software, and platform services. A utility company's control and data centers cannot handle such a large number of devices, such high-speed processing, and such massive data storage. Building large data center infrastructure takes a long time and requires widespread communication networks and huge capital investment. Maintaining and upgrading control and data center infrastructure and communication networks, as well as updating and renewing software licenses, adds further cost. This can be overcome by utilizing emerging computing paradigms such as cloud computing, which can serve as a smart grid enabler and replace legacy utility data centers. The talk will highlight the role of IoT and cloud computing services and their development models within smart grid technologies.
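The talk describes SG-IoT objects streaming data to cloud-hosted platforms. Purely as an illustrative sketch, and not a method prescribed by the talk, the snippet below shows a smart-meter-style device publishing periodic readings over MQTT to a hypothetical cloud broker; the broker address, topic and payload fields are invented for the example.

```python
# Minimal illustration: an SG-IoT device pushing readings to a cloud broker.
# Broker address, topic and payload layout are hypothetical assumptions.
import json
import time
import random
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # hypothetical cloud-hosted MQTT broker
TOPIC = "sg-iot/meter/feeder-12/readings"

# paho-mqtt 1.x constructor; version 2.x additionally requires a
# CallbackAPIVersion argument.
client = mqtt.Client(client_id="smart-meter-feeder-12")
client.connect(BROKER, 1883)
client.loop_start()

for _ in range(3):  # a few sample readings; a real device would loop forever
    reading = {
        "timestamp": time.time(),
        "voltage_v": round(random.gauss(230, 2), 1),
        "power_kw": round(random.uniform(0.5, 5.0), 2),
    }
    client.publish(TOPIC, json.dumps(reading), qos=1)
    time.sleep(1)

client.loop_stop()
client.disconnect()
```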

Keywords: intelligent electronic devices (IED), distributed energy resources (DER), internet, smart home appliances

Procedia PDF Downloads 313
5965 Radiographic Evaluation of Odontogenic Keratocyst: A 14 Years Retrospective Study

Authors: Nor Hidayah Reduwan, Jira Chindasombatjaroen, Suchaya Pornprasersuk-Damrongsri, Sopee Pomsawat

Abstract:

INTRODUCTION: The odontogenic keratocyst (OKC) remains a controversial pathologic entity under the scrutiny of researchers and maxillofacial surgeons alike. The high recurrence rate and relatively aggressive nature of this lesion demand a meticulous analysis of its radiographic characteristics to support an accurate diagnosis. OBJECTIVE: This study aims to determine the radiographic characteristics of odontogenic keratocysts (OKCs) using conventional radiographs and cone beam computed tomography (CBCT) images. MATERIALS AND METHODS: Patients histopathologically diagnosed with OKC from 2003 to 2016 by the Oral and Maxillofacial Pathology Department were retrospectively reviewed. Radiographs of these cases were retrieved from the archives of the Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Mahidol University. The location, shape, border, cortication, locularity, relationship of the lesion to any embedded tooth, displacement of adjacent teeth, root resorption and bony expansion of each lesion were assessed. RESULTS: Radiographs of 91 patients (44 males, 47 females) with a mean age of 31 years (range 10 to 84 years) were analyzed. Among all patients, 5 were syndromic; hence, a total of 103 OKCs were studied. The most common location was the ramus of the mandible (32%), followed by the posterior maxilla (29%). Most cases presented as a well-defined unilocular radiolucency with a smooth and corticated border. The lesion was associated with an embedded tooth in 48 cases (47%); eighty-five percent of the embedded teeth were impacted third molars. Thirty-seven percent of the embedded teeth were entirely encapsulated within the lesion. The lesion attached to the embedded tooth at the cementoenamel junction (CEJ) in 40% of cases and extended to part of the root in 23%. Tooth displacement and root resorption were found in 29% and 6% of cases, respectively. Bony expansion in the bucco-lingual dimension was seen in 63% of cases. CONCLUSION: OKCs were predominant in the posterior region of the mandible, with radiographic features of a well-defined, unilocular radiolucency with a smooth and corticated margin. The lesions may relate to an embedded tooth by surrounding the entire tooth, attaching at the CEJ level or extending to part of the root. Bony expansion could be found, but tooth displacement and root resorption were uncommon. These features might help in formulating the differential diagnosis.

Keywords: cone beam computed tomography, imaging dentistry, odontogenic keratocyst, radiographic features

Procedia PDF Downloads 121
5964 Mobile App Architecture in 2023: Build Your Own Mobile App

Authors: Mounir Filali

Abstract:

Companies use many innovative ways to reach their customers and stay ahead of the competition, and along with the growing demand for innovative business solutions comes a demand for new technology. The most noticeable area of demand for business innovation is the mobile application industry. Recently, companies have recognized the growing need to integrate proprietary mobile applications into their suite of services; they have realized that developing mobile apps gives them a competitive edge, and many have begun to develop mobile apps rapidly to stay ahead of the competition. Mobile application development helps companies meet the needs of their customers, and mobile apps also help businesses take advantage of every potential opportunity to generate leads that convert into sales. Download growth statistics show that, with the recent rise in demand for business-related mobile apps, there has been a similar rise in the range of mobile app solutions on offer. Today, companies can use the traditional route of a software development team to build their own mobile applications, but there are also many platform-ready "low-code and no-code" options available to choose from. These mobile app development options offer more streamlined business processes and help companies be more responsive to their customers without having to be coding experts. Companies must have a basic understanding of mobile app architecture to attract and maintain the interest of mobile app users. Mobile application architecture refers to the building blocks, structural systems and design elements that make up a mobile application, together with the technologies, processes, and components used during application development. The underlying foundation of every application consists of the elements of its architecture, and developing a good mobile app architecture requires proper planning and strategic design. The technology framework or platform on both the back end and the user-facing side of a mobile application is part of its architecture; in application development, software programmers loosely refer to this set of systems and processes as the "technology stack".
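The closing definition, of architecture as the structural layers plus the technology stack, is commonly pictured as a presentation/business/data layering. The sketch below is a minimal, language-agnostic illustration of that separation written in Python; the class and method names are hypothetical, and a production mobile app would of course use a mobile stack such as Kotlin, Swift or Flutter.

```python
# Illustrative sketch only: the conventional three-layer separation that the
# term "mobile application architecture" usually implies.

class DataLayer:
    """Data layer: local cache and remote API access."""
    def __init__(self):
        self._cache = {}

    def fetch_orders(self, user_id):
        # A real app would call a REST API and fall back to the cache offline.
        return self._cache.get(user_id, [])

class BusinessLayer:
    """Business layer: rules and workflows, independent of any UI toolkit."""
    def __init__(self, data):
        self.data = data

    def open_orders(self, user_id):
        return [o for o in self.data.fetch_orders(user_id) if o.get("open")]

class PresentationLayer:
    """Presentation layer: renders whatever the business layer returns."""
    def __init__(self, business):
        self.business = business

    def render(self, user_id):
        orders = self.business.open_orders(user_id)
        print(f"{len(orders)} open order(s) for user {user_id}")

app = PresentationLayer(BusinessLayer(DataLayer()))
app.render("user-42")
```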

Keywords: mobile applications, development, architecture, technology

Procedia PDF Downloads 88