Search results for: centralized server
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 476

56 KPI and Tool for the Evaluation of Competency in Warehouse Management for Furniture Business

Authors: Kritchakhris Na-Wattanaprasert

Abstract:

The objective of this research is to design and develop a prototype key performance indicator (KPI) system suitable for warehouse management, based on a case study and user requirements. The prototype was developed for the warehouse of a furniture business, following these steps: identifying the scope of the research and studying related papers; gathering the necessary data and user requirements; developing key performance indicators based on the Balanced Scorecard; designing the program and database for the key performance indicators; coding the program and setting up the database relationships; and finally testing and debugging each module. The Balanced Scorecard (BSC) was used to select and group the key performance indicators. Microsoft SQL Server 2010 was used to create the system database, and Microsoft Visual C# 2010 was chosen as the graphical user interface development tool. The system consists of six main menus: login, main data, financial perspective, customer perspective, internal perspective, and learning and growth perspective. Each menu contains key performance indicator forms, and each form contains a data import section, a data input section, a data search and edit section, and a report section. The system generates five main reports: the KPI detail report, KPI summary report, KPI graph report, benchmarking summary report, and benchmarking graph report; the user selects the report conditions and time period. Testing of the developed system shows that it is a useful way of judging the extent to which warehouse objectives have been achieved, and it encourages warehouse functions to proceed more efficiently. The system can be adjusted appropriately to be useful for other industries.
To increase the usefulness of the key performance indicator system, the recommendations for further development are as follows: the warehouse should periodically review the target values and set more suitable targets as conditions fluctuate in the future, and it should likewise periodically review the key performance indicators themselves and replace them with more suitable ones in order to increase competitiveness and take advantage of new opportunities.
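The per-perspective grouping and target comparison that the KPI forms and summary reports perform can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' C#/SQL Server implementation; the KPI names, target values, and the `lower_is_better` flag are invented examples.

```python
# Hypothetical sketch: grouping warehouse KPIs by Balanced Scorecard
# perspective and comparing actuals against targets, as the described
# system does per menu/form. All names and numbers are invented.

KPIS = {
    "order_fill_rate":    {"perspective": "customer",  "target": 0.95, "actual": 0.92},
    "inventory_accuracy": {"perspective": "internal",  "target": 0.98, "actual": 0.99},
    "cost_per_order":     {"perspective": "financial", "target": 5.0,  "actual": 5.4,
                           "lower_is_better": True},
    "training_hours":     {"perspective": "learning",  "target": 8.0,  "actual": 9.0},
}

def kpi_status(kpi):
    """Return 'met' or 'missed' depending on the target direction."""
    if kpi.get("lower_is_better"):
        return "met" if kpi["actual"] <= kpi["target"] else "missed"
    return "met" if kpi["actual"] >= kpi["target"] else "missed"

def summary_by_perspective(kpis):
    """Build the kind of per-perspective summary a KPI summary report shows."""
    report = {}
    for name, kpi in kpis.items():
        report.setdefault(kpi["perspective"], []).append((name, kpi_status(kpi)))
    return report
```

A benchmarking report would apply the same comparison against peer values rather than internal targets.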

Keywords: key performance indicator, warehouse management, warehouse operation, logistics management

Procedia PDF Downloads 431
55 Evaluating the Potential of a Fast Growing Indian Marine Cyanobacterium by Reconstructing and Analysis of a Genome Scale Metabolic Model

Authors: Ruchi Pathania, Ahmad Ahmad, Shireesh Srivastava

Abstract:

Cyanobacteria are promising microbes that can capture atmospheric CO₂ and light and convert them into valuable industrial bio-products such as biofuels and biodegradable plastics. Among their most attractive traits are fast autotrophic growth, year-round cultivation on non-arable land, high photosynthetic activity, high biomass and productivity, and ease of genetic manipulation. Cyanobacteria store carbon in the form of glycogen, which can be hydrolyzed to release glucose and fermented to form bioethanol or other valuable products. Marine cyanobacterial species are especially attractive for countries with scarce freshwater. We recently identified a native marine cyanobacterium, Synechococcus sp. BDU 130192, which has a good growth rate and a high level of polyglucan accumulation compared to Synechococcus PCC 7002. In this study, we first sequenced the whole genome, and the sequences were annotated using the RAST server. A genome-scale metabolic model (GSMM), a computational representation of the metabolic reactions and metabolites of the target strain, was then reconstructed using the COBRA toolbox. GSMMs are analyzed through Flux Balance Analysis (FBA), which uses external nutrient uptake rates to estimate steady-state intracellular and extracellular reaction fluxes, including maximization of cell growth. The model, which we have named iSyn942, includes 942 reactions and 913 metabolites, comprising 831 metabolic, 78 transport, and 33 exchange reactions. The phylogenetic tree obtained by BLAST search revealed that the strain is a close relative of Synechococcus PCC 7002. FBA was applied to the model iSyn942 to predict the theoretical yields (mol product produced/mol CO₂ consumed) of native and non-native products such as acetone and butanol under phototrophic conditions by applying metabolic engineering strategies.
The reported strain can be a viable candidate for biotechnological applications, and the model will be helpful to researchers interested in understanding its metabolism as well as in designing metabolic engineering strategies for enhanced production of various bioproducts.
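The core idea of FBA, maximizing an objective flux subject to steady-state mass balance and uptake bounds, can be illustrated on a toy network small enough to solve by hand. This is a didactic sketch only; a real model like iSyn942 has hundreds of reactions and is solved with a linear programming solver such as the one in the COBRA toolbox. The network and numbers below are invented.

```python
# Minimal illustration of the flux balance analysis (FBA) idea.
#
# Toy network (one internal metabolite C, steady state required):
#   v_uptake:  CO2_ext -> C         (bounded above by the uptake rate)
#   v_growth:  2 C     -> biomass   (objective flux to maximize)
#
# The steady-state balance on C forces v_uptake = 2 * v_growth, so the
# optimum simply pushes uptake to its bound and growth to half of it.

def fba_toy(uptake_bound):
    """Maximize the growth flux subject to the mass balance above."""
    v_uptake = uptake_bound             # objective increases with uptake
    v_growth = v_uptake / 2.0           # steady-state balance on C
    yield_per_co2 = v_growth / v_uptake # mol biomass per mol CO2, as in the paper's yield metric
    return v_growth, yield_per_co2
```

With more reactions, the same balance constraints form a stoichiometric matrix and the maximization becomes a linear program, which is exactly what FBA solves.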

Keywords: cyanobacteria, flux balance analysis, genome scale metabolic model, metabolic engineering

Procedia PDF Downloads 158
54 Securing Communities to Bring Sustainable Development, Building Peace and Community Safety: The Ethiopian Community Policing in Amhara National Regional State of Ethiopia

Authors: Demelash Kassaye

Abstract:

The Ethiopia case study reveals a unique model of community policing that has developed from a particular political context in which there is a history of violent political transition, a political structure characterized by ethnic federalism and a political ideology that straddles liberal capitalism and democracy on the one hand, and state-led development and centralized control on the other. The police see community policing as a way to reduce crime. Communities speak about community policing as an opportunity to take on policing responsibilities themselves. Both of these objectives are brought together in an overarching rhetoric of community policing as a way of ‘mobilizing for development’ – whereby the community cooperate with the police to reduce crime, which otherwise inhibits development progress. Community policing in Amhara has primarily involved the placement of Community Police Officers at the kebele level across the State. In addition, a number of structures have also been established in the community, including Advisory Councils, Conflict Resolving Committees, family police and the use of shoe shiners and other trade associations as police informants. In addition to these newly created structures, community policing also draws upon pre-existing customary actors, such as militia and elders. Conflict Resolving Committees, Community Police Officers and elders were reported as the most common first ports of call when community members experience a crime. The analysis highlights that the model of community policing in Amhara increased communities’ access to policing services, although this is not always attended by increased access to justice. Community members also indicate that public perceptions of the police have improved since the introduction of community policing, in part due to individual Community Police Officers who have, with limited resources, innovated some impressive strategies to improve safety in their neighborhoods.
However, more broadly, community policing has provided the state with more effective surveillance of the population – a potentially oppressive function in the current political context. Ultimately, community policing in Amhara is anything but straightforward. It has been a process of attempting to demonstrate the benefits of newfound (and controversial) ‘democracy’ following years of dictatorship, drawing on generations of customary dispute resolution, providing both improved access to security for communities and an enhanced surveillance capacity for the state. For external actors looking to engage in community policing, this case study reveals the importance of close analysis in assessing potential merits, risks and entry points of programming. Factors found to be central in shaping the nature of community policing in the Amhara case include the structure of the political system, state-society relations, cultures of dispute resolution and political ideology.

Keywords: community policing, community, militias, Ethiopia

Procedia PDF Downloads 131
53 A Method to Evaluate and Compare Web Information Extractors

Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman

Abstract:

Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data gathered from the Web. Sometimes these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components typically configured by means of rules tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information is exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes them difficult to compare empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents.
b) It provides a method that relies on statistically sound tests to support the conclusions drawn; the previous work does not provide clear guidelines or recommend statistically sound tests, but rather a survey that collects many features to take into account as well as related work; c) We provide a novel method to compute the performance measures of unsupervised proposals; otherwise, these would require the intervention of a user to compute them using the annotations on the evaluation sets and the information extracted. Our contributions will help researchers in this area make sure that they have advanced the state of the art not only conceptually, but from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers.
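The evaluation core the paper argues for can be sketched as per-document extraction quality plus a paired statistical test over two extractors. The sketch below is a hedged illustration under our own assumptions (sets of extracted records, a simple two-sided sign test); it is not the paper's actual method, and the extractor outputs in the test are invented.

```python
# Sketch: precision/recall/F1 for one document, then a paired sign test
# over per-document F1 scores of two competing extractors.
from math import comb

def prf1(extracted, gold):
    """Precision, recall, F1 of an extracted record set vs. the gold set."""
    tp = len(extracted & gold)
    p = tp / len(extracted) if extracted else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def sign_test(f1_a, f1_b):
    """Two-sided sign test p-value on paired per-document F1 scores.

    Ties are dropped; under H0 each extractor wins a non-tied pair with
    probability 1/2, so the win count is Binomial(n, 0.5).
    """
    wins_a = sum(a > b for a, b in zip(f1_a, f1_b))
    n = sum(a != b for a, b in zip(f1_a, f1_b))
    k = min(wins_a, n - wins_a)
    return 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
```

A rank-based test such as Wilcoxon signed-rank would be a more powerful alternative with the same paired structure.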

Keywords: web information extractors, information extraction evaluation method, Google Scholar, web

Procedia PDF Downloads 248
52 Integrating Wearable-Textiles Sensors and IoT for Continuous Electromyography Monitoring

Authors: Bulcha Belay Etana, Benny Malengier, Debelo Oljira, Janarthanan Krishnamoorthy, Lieva Vanlangenhove

Abstract:

Electromyography (EMG) is a technique used to measure the electrical activity of muscles. EMG can be used to assess muscle function in a variety of settings, including clinical, research, and sports medicine. The aim of this study was to develop a wearable textile sensor for EMG monitoring. The sensor was designed to be soft, stretchable, and washable, making it suitable for long-term use. The sensor was fabricated using a conductive thread material that was embroidered onto a fabric substrate. The sensor was then connected to a microcontroller unit (MCU) and a Wi-Fi-enabled module. The MCU was programmed to acquire the EMG signal and transmit it wirelessly to the Wi-Fi-enabled module. The Wi-Fi-enabled module then sent the signal to a server, where it could be accessed by a computer or smartphone. The sensor was able to successfully acquire and transmit EMG signals from a variety of muscles. The signal quality was comparable to that of commercial EMG sensors. The development of this sensor has the potential to improve the way EMG is used in a variety of settings. The sensor is soft, stretchable, and washable, making it suitable for long-term use. This makes it ideal for use in clinical settings, where patients may need to wear the sensor for extended periods of time. The sensor is also small and lightweight, making it ideal for use in sports medicine and research settings. The data for this study was collected from a group of healthy volunteers. The volunteers were asked to perform a series of muscle contractions while the EMG signal was recorded. The data was then analyzed to assess the performance of the sensor. The EMG signals were analyzed using a variety of methods, including time-domain analysis and frequency-domain analysis. The time-domain analysis was used to extract features such as the root mean square (RMS) and average rectified value (ARV). The frequency-domain analysis was used to extract features such as the power spectrum. 
The question addressed by this study was whether a wearable textile sensor could be developed that is soft, stretchable, and washable and that can successfully acquire and transmit EMG signals. The results of this study demonstrate that a wearable textile sensor can be developed that meets the requirements of being soft, stretchable, washable, and capable of acquiring and transmitting EMG signals. This sensor has the potential to improve the way EMG is used in a variety of settings.
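The time-domain and frequency-domain features named above (RMS, ARV, power spectrum) are standard and can be sketched directly. This is an illustrative stdlib-only version, not the authors' analysis code; a real pipeline would operate on recorded EMG frames and use an FFT for efficiency.

```python
# Sketch of the EMG features described in the abstract.
import cmath
import math

def rms(x):
    """Root mean square of a signal frame."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def arv(x):
    """Average rectified value of a signal frame."""
    return sum(abs(v) for v in x) / len(x)

def power_spectrum(x):
    """Power at each DFT bin (naive O(n^2) DFT, fine for short frames)."""
    n = len(x)
    spectrum = []
    for k in range(n):
        coeff = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        spectrum.append(abs(coeff) ** 2 / n)
    return spectrum
```

On the MCU side, RMS and ARV are cheap enough to compute per frame before transmission, while spectral features are more naturally computed server-side.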

Keywords: EMG, electrode position, smart wearable, textile sensor, IoT, IoT-integrated textile sensor

Procedia PDF Downloads 75
51 Beyond the Flipped Classroom: A Tool to Promote Autonomy, Cooperation, Differentiation and the Pleasure of Learning

Authors: Gabriel Michel

Abstract:

The aim of our research is to find solutions for adapting university teaching to today's students and companies. To achieve this, we have tried to change the posture and behavior of those involved in the learning situation by promoting other skills. There is a gap between students' expectations and ways of working and university teaching. At the same time, the business world needs employees who are competent and proficient in technology, but who are also imaginative, flexible, able to communicate, to learn on their own, and to work in groups. These skills are rarely developed as a goal at university. The flipped classroom has been one solution, thanks to digital tools such as Moodle, but the model behind these tools is still centered on teachers and classic learning scenarios: it makes course materials available without really involving students and encouraging them to cooperate. It is against this backdrop that we conducted action research to explore the possibility of changing the way we learn (rather than teach) by changing the posture of both the classic student and the teacher. We hypothesized that a tool we developed would encourage autonomy, the possibility of progressing at one's own pace, collaboration, and learning using all available resources (other students, course materials, resources on the web, and the teacher/facilitator). Experimentation with this tool was carried out with around thirty German and French first-year students at the Université de Lorraine in Metz (France). The projected changes in the groups' learning situations were as follows: - use the flipped classroom approach, but with a few traditional presentations by the teacher (materials having been put on a server) and extensive collective case solving, - engage students in their learning by inviting them to set themselves a primary objective from the outset, e.g.
“Assimilating 90% of the course”, and secondary objectives (like a to-do list) such as “create a new case study for Tuesday”, - encourage students to take control of their learning (knowing at all times where they stand and how far they still have to go), - develop cooperation: the tool should encourage group work, the search for common solutions and the exchange of the best solutions with other groups. Those who have advanced much faster than the others, or who already have expertise in a subject, can become tutors for the others. A student can also present a case study he or she has developed, for example, or share materials found on the web or produced by the group, as well as evaluating the productions of others, - etc… A questionnaire and analysis of assessment results showed that the test group made considerable progress compared with a similar control group. These results confirmed our hypotheses. Obviously, this tool is only effective if the organization of teaching is adapted and if teachers are willing to change the way they work.

Keywords: pedagogy, cooperation, university, learning environment

Procedia PDF Downloads 22
50 The Impact of Speech Style on the Production of Spanish Vowels by Spanish-English Bilinguals and Spanish Monolinguals

Authors: Vivian Franco

Abstract:

There has been a great deal of research on the vowel production of second language learners of Spanish, on vowel variation across Spanish dialects, and, more recently, on Spanish heritage speakers' vowel production as a function of speech style. However, little research has examined Spanish heritage speakers' vowel production in regard to task modality while including its own comparison groups of monolinguals and late bilinguals. Thus, the present study investigates the influence of speech style on Spanish heritage speakers' vowel production by comparing Spanish-English early and late bilinguals and Spanish monolinguals. The study was guided by the following research question: How do early bilinguals (heritage speakers) differ from or resemble advanced L2 speakers of Spanish (late bilinguals) and Spanish monolinguals in their vowel quality (acoustic distribution) and quantity (duration) based on speech style? The participants were a total of 11 speakers of Spanish: 7 early Spanish-English bilinguals with similar linguistic backgrounds (simultaneous bilinguals of the second generation), 2 advanced L2 speakers of Spanish, and 2 Spanish monolinguals from Mexico. The study consisted of two tasks. The first adopted a semi-spontaneous style, soliciting a narration of life experiences and a description of a favorite movie in order to collect spontaneous speech. The second task was a reading activity in which the participants read two paragraphs of the Mexican literary essay 'La nuez'; this task aimed to obtain a more controlled speech style. From this study, it can be concluded that the early bilinguals and monolinguals show a smaller formant vowel space overall compared to the late bilinguals in both speech styles.
In terms of formant values by stress, the early bilinguals and the late bilinguals resembled each other in the semi-spontaneous speech style, as their unstressed vowel space overlapped with that of the stressed vowels, unlike the monolinguals, who displayed a slightly reduced unstressed vowel space. For the controlled data, the early bilinguals were similar to the monolinguals, as their stressed and unstressed vowel spaces overlapped, in contrast to the late bilinguals, who showed a clearer reduction of the unstressed vowel space. In regard to duration, the monolinguals revealed longer vowel durations overall. However, findings on duration by stress showed that the early bilinguals and the monolinguals remained stable, with shorter durations for unstressed vowels in the semi-spontaneous data and longer durations in the controlled data, whereas the late bilinguals displayed the opposite pattern. These findings have implications for research on Spanish heritage speakers' and L2 Spanish vowels, as it has frequently been argued that Spanish bilinguals differ from Spanish monolinguals in their vowel reduction and centralized vowel space influenced by English. However, some Spanish varieties are themselves characterized by vowel reduction, especially in certain phonetic contexts, so that some vowels show more weakening than others. Consequently, it would not be conclusive to attribute an English influence to the Spanish of these bilinguals.
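The "formant vowel space" compared above is commonly quantified as the area of the polygon spanned by each vowel's mean (F1, F2) values, which can be computed with the shoelace formula. The sketch below is a generic illustration of that measure, not the study's analysis script, and the formant values in the test are placeholders rather than the study's data.

```python
# Area of the acoustic vowel space: the polygon whose vertices are the
# mean (F1, F2) of each vowel category, listed in perimeter order.

def vowel_space_area(formants):
    """Shoelace formula over (F1, F2) vertices; returns area in Hz^2."""
    n = len(formants)
    area = 0.0
    for i in range(n):
        x1, y1 = formants[i]
        x2, y2 = formants[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0
```

A smaller area for one group's stressed or unstressed vowels is what the comparisons above describe as a "reduced" or "smaller" vowel space.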

Keywords: Spanish-English bilinguals, Spanish monolinguals, spontaneous and controlled speech, vowel production

Procedia PDF Downloads 129
49 Multi-Agent Searching Adaptation Using Levy Flight and Inferential Reasoning

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

In this paper, we describe how to achieve knowledge understanding and prediction (Situation Awareness, SA) for multiple agents conducting a search activity, using Bayesian inferential reasoning and learning. A Bayesian belief network was used to monitor the agents' knowledge of their environment, and cases were recorded for network training using the expectation-maximisation or gradient descent algorithm. The trained network is then used for decision making and environmental situation prediction. Forest fire search by multiple UAVs was the use case: UAVs are tasked to explore a forest and find a fire so that the fire wardens can take urgent action. The paper focuses on two problems: (i) an effective path planning strategy for the agents and (ii) knowledge understanding and prediction (SA). The path planning strategy, inspired by animal foraging and using the Lévy distribution augmented with Bayesian reasoning, is fully described in this paper. Results prove that the Lévy flight strategy performs better than previous fixed-pattern approaches (e.g., parallel sweeps) in terms of energy and time utilisation. We also introduce a waypoint assessment strategy called k-previous waypoint assessment, which improves the performance of the ordinary Lévy flight by saving the agents' resources and mission time through redundant-search avoidance. The agents (UAVs) report their mission knowledge to a central server for interpretation and prediction purposes. Bayesian reasoning and learning were used for SA, and the results prove their effectiveness across different environment scenarios in terms of prediction and effective knowledge representation. The prediction accuracy was measured using learning error rate, logarithmic loss, and Brier score, and the results prove that knowledge from a few agent missions can be used for prediction within the same or a different environment. Finally, we describe a situation-based knowledge visualization and prediction technique for heterogeneous multi-UAV missions.
While this paper proves the linkage of Bayesian reasoning and learning with SA and an effective search strategy, future work will focus on simplifying the architecture.
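The Lévy flight strategy amounts to drawing step lengths from a heavy-tailed power-law distribution, so most steps are short (local search) with occasional long relocations. The sketch below shows a generic inverse-transform sampler for such steps; it is an illustrative stand-in for the paper's strategy, and the exponent and minimum step length are assumed values, not the authors' parameters.

```python
# Sketch of a Levy-flight step generator for a searching agent.
import math
import random

def levy_step(mu=1.5, l_min=1.0):
    """One step: length drawn from P(l) ~ l^-mu for l >= l_min (1 < mu <= 3),
    via inverse-transform sampling; direction uniform in [0, 2*pi)."""
    u = random.random()
    length = l_min * (1.0 - u) ** (-1.0 / (mu - 1.0))
    theta = random.uniform(0.0, 2.0 * math.pi)
    return length * math.cos(theta), length * math.sin(theta)

def levy_walk(n_steps, seed=0):
    """Waypoints visited by one agent performing a Levy walk from (0, 0)."""
    random.seed(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        dx, dy = levy_step()
        x, y = x + dx, y + dy
        path.append((x, y))
    return path
```

A fixed-pattern baseline such as parallel sweeps would instead emit constant-length steps along predetermined lanes, which is what the comparison in the abstract contrasts against.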

Keywords: Lévy flight, distributed constraint optimization problem, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence

Procedia PDF Downloads 144
48 Viability Analysis of a Centralized Hydrogen Generation Plant for Use in Oil Refining Industry

Authors: C. Fúnez Guerra, B. Nieto Calderón, M. Jaén Caparrós, L. Reyes-Bozo, A. Godoy-Faúndez, E. Vyhmeister

Abstract:

The global energy system is experiencing a change of scenery. Unstable energy markets, an increasing focus on climate change, and the drive for sustainable development are forcing businesses to pursue new solutions in order to ensure future economic growth. This has led to interest in using hydrogen as an energy carrier in transportation and industrial applications. As an energy carrier, hydrogen is accessible and holds a high gravimetric energy density. Abundant in hydrocarbons, hydrogen can play an important role in the shift towards low-emission fossil value chains. By combining hydrogen production by natural gas reforming with carbon capture and storage, overall CO2 emissions are significantly reduced. In addition, the flexibility of hydrogen as an energy store makes it applicable as a stabilizer in the renewable energy mix. Recent developments in hydrogen fuel cells are also raising expectations for a hydrogen-powered transportation sector. Hydrogen value chains exist to a large extent in industry today. Global hydrogen consumption was approximately 50 million tonnes (7.2 EJ) in 2013, with refineries, ammonia and methanol production, and metal processing as the main consumers. Natural gas reforming produced 48% of this hydrogen, but without carbon capture and storage (CCS); the total emissions from this production reached 500 million tonnes of CO2, hence alternative production methods with lower emissions will be necessary in future value chains. Hydrogen from electrolysis has been used for a wide range of industrial chemical reactions for many years. Possibly the earliest use was for the production of ammonia-based fertilisers by Norsk Hydro, with a test reactor set up in Notodden, Norway, in 1927. This application also claims one of the world’s largest electrolyser installations, at Sable Chemicals in Zimbabwe, whose array of 28 electrolysers consumes 80 MW, producing around 21,000 Nm3/h of hydrogen.
These electrolysers can compete if cheap sources of electricity are available and natural gas for steam reforming is relatively expensive. Because the electrolysis of water produces oxygen as a by-product, a system of Autothermal Reforming (ATR) utilizing this oxygen has been analyzed. Replacing the air separation unit with electrolysers produces the required amount of oxygen for the ATR as well as additional hydrogen. The aim of this paper is to evaluate the technical and economic potential of large-scale hydrogen production for the oil refining industry. A sensitivity analysis of parameters such as investment costs, plant operating hours, electricity price, and the sale prices of hydrogen and oxygen is performed.
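The shape of such a sensitivity analysis for the electrolysis route can be sketched as a levelised cost that is linear in electricity price. The figures below (specific electricity consumption, non-electricity cost per kilogram) are illustrative assumptions for the sketch, not values from the paper.

```python
# Back-of-the-envelope sensitivity sketch: levelised hydrogen cost vs.
# electricity price for an electrolyser plant. All constants are assumed.

KWH_PER_KG_H2 = 55.0      # assumed electrolyser consumption, kWh per kg H2
CAPEX_OPEX_PER_KG = 1.2   # assumed non-electricity cost (capex + O&M), $ per kg H2

def h2_cost_per_kg(electricity_price_per_kwh):
    """Levelised cost of hydrogen in $/kg: electricity plus fixed costs."""
    return KWH_PER_KG_H2 * electricity_price_per_kwh + CAPEX_OPEX_PER_KG

def sensitivity(prices):
    """Cost table over a range of electricity prices, rounded to cents."""
    return {p: round(h2_cost_per_kg(p), 2) for p in prices}
```

Credits for by-product oxygen sales, one of the parameters the paper varies, would enter as a negative term in the fixed-cost component.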

Keywords: autothermal reforming, electrolyser, hydrogen, natural gas, steam methane reforming

Procedia PDF Downloads 211
47 A Framework of Virtualized Software Controller for Smart Manufacturing

Authors: Pin Xiu Chen, Shang Liang Chen

Abstract:

A virtualized software controller is developed in this research to replace traditional hardware control units. This virtualized software controller transfers motion interpolation calculations from the motion control units of end devices to edge computing platforms, thereby reducing the end devices' computational load and hardware requirements and making maintenance and updates easier. The study also applies the concept of microservices, dividing the control system into several small functional modules that are then deployed on a cloud data server. This reduces the interdependency among modules and enhances the overall system's flexibility and scalability. Finally, with containerization technology, the system can be deployed and started in a matter of seconds, which is more efficient than traditional virtual machine deployment methods. Furthermore, this virtualized software controller communicates with end control devices via wireless networks, making the placement of production equipment or the redesign of processes more flexible and no longer limited by physical wiring. To handle the large data flow and maintain low-latency transmission, this study integrates 5G technology, fully utilizing its high speed, wide bandwidth, and low latency to achieve rapid and stable remote machine control. An experimental setup is designed to verify the feasibility and test the performance of this framework. This study designs a smart manufacturing site with a 5G communication architecture, serving as a field for experimental data collection and performance testing. The smart manufacturing site includes one robotic arm, three Computer Numerical Control machine tools, several Input/Output ports, and an edge computing architecture. All machinery information is uploaded to edge computing servers and cloud servers via 5G communication and the Internet of Things framework.
After analysis and computation, this information is converted into motion control commands, which are transmitted back to the relevant machinery through 5G communication for motion control. The communication time intervals at each stage are measured using the C++ chrono library, recording the time difference for each command transmission. The relevant test results will be organized and presented in the full text.
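The per-stage timing pattern described above, wrapping each command transmission with a high-resolution monotonic clock, looks roughly like the following. This is a hypothetical Python stand-in for the authors' C++ chrono measurements; the command format and the `send_motion_command` placeholder are invented for illustration.

```python
# Sketch: measuring the round-trip latency of one command transmission
# with a monotonic high-resolution clock, analogous to C++ chrono.
import time

def timed_call(fn, *args):
    """Return (result, elapsed_microseconds) for one command round trip."""
    t0 = time.perf_counter_ns()
    result = fn(*args)
    elapsed_us = (time.perf_counter_ns() - t0) / 1000.0
    return result, elapsed_us

def send_motion_command(cmd):
    """Placeholder for the 5G transmit/acknowledge round trip."""
    return f"ack:{cmd}"
```

Collecting these per-stage intervals across many commands is what yields the latency distribution the experiments report.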

Keywords: 5G, MEC, microservices, virtualized software controller, smart manufacturing

Procedia PDF Downloads 82
46 CRM Cloud Computing: An Efficient and Cost Effective Tool to Improve Customer Interactions

Authors: Gaurangi Saxena, Ravindra Saxena

Abstract:

Lately, cloud computing has been used to attain corporate goals more effectively and efficiently at lower cost. This new computing paradigm has emerged as a powerful tool for the optimal utilization of resources, helping companies gain competitiveness through cost reduction and achieve business goals with greater flexibility. Realizing the importance of this technique, most of the well-known companies in the computer industry, such as Microsoft, IBM, Google, and Apple, are spending millions of dollars researching cloud computing and investigating the possibility of producing interface hardware for cloud computing systems. It is believed that, with the right middleware, a cloud computing system could execute all the programs a normal computer runs: potentially everything from the simplest generic word processing software to highly specialized programs customized for a specific company could work successfully on a cloud computing system. A cloud is a pool of virtualized computer resources. Clouds are not limited to grid environments; they also support interactive user-facing applications such as web applications and three-tier architectures. Cloud computing is not a fundamentally new paradigm: it draws on existing technologies and approaches, such as utility computing, software-as-a-service, distributed computing, and centralized data centers. Some companies rent physical space to store servers and databases because they do not have it available on site; cloud computing gives these companies the option of storing data on someone else's hardware, removing the need for physical space on the front end. Prominent service providers like Amazon, Google, Sun, IBM, Oracle, and Salesforce are extending computing infrastructures and platforms as a core for providing top-level services for computation, storage, databases, and applications. Application services may include email, office applications, finance, video, audio, and data processing.
By using a cloud computing system, a company can improve its customer relationship management. A CRM cloud computing system can deliver a sales team a blend of unique functionalities to improve agent/customer interactions. This paper first defines cloud computing as a tool for running business activities more effectively and efficiently at a lower cost, and then distinguishes cloud computing from grid computing. Based on an exhaustive literature review, the authors discuss the application of cloud computing in different disciplines of management, especially in the field of marketing, with special reference to the use of cloud computing in CRM. The study concludes that a CRM cloud computing platform helps a company track data such as orders, discounts, references, competitors, and much more. By using CRM cloud computing, companies can improve their customer interactions and, by serving customers more efficiently at a lower cost, gain competitive advantage.

Keywords: cloud computing, competitive advantage, customer relationship management, grid computing

Procedia PDF Downloads 312
45 Digimesh Wireless Sensor Network-Based Real-Time Monitoring of ECG Signal

Authors: Sahraoui Halima, Dahani Ameur, Tigrine Abedelkader

Abstract:

DigiMesh technology represents a pioneering advancement in wireless networking, offering cost-effective and energy-efficient capabilities. Its inherent simplicity and adaptability facilitate the seamless transfer of data between network nodes, extending range and ensuring robust connectivity through autonomous self-healing mechanisms. In light of these advantages, this study introduces a medical platform built on DigiMesh wireless network technology, characterized by low power consumption, immunity to interference, and user-friendly operation. The primary application of this platform is real-time, long-distance monitoring of electrocardiogram (ECG) signals, with the added capacity for simultaneous monitoring of ECG signals from multiple patients. The experimental setup comprises key components such as a Raspberry Pi, an e-Health Sensor Shield, and XBee DigiMesh modules. The platform is composed of multiple ECG acquisition devices, labeled Sensor Node 1 and Sensor Node 2, with a Raspberry Pi serving as the central hub (sink node). Two communication approaches are proposed: single-hop and multi-hop. In the single-hop approach, ECG signals are transmitted directly from a sensor node to the sink node through the XBee3 DigiMesh RF module, establishing peer-to-peer connections. This approach was tested in the first experiment to assess the feasibility of deploying wireless sensor networks (WSN). In the multi-hop approach, two sensor nodes communicate with the server (sink node) in a star configuration. This setup was tested in the second experiment. The primary objective of this research is to evaluate the performance of both the single-hop and multi-hop approaches in diverse scenarios, including open areas and obstructed environments. Experimental results indicate the DigiMesh network's effectiveness in single-hop mode, with reliable communication over distances of approximately 300 meters in open areas. 
In the multi-hop configuration, the network demonstrated robust performance across approximately three floors, even in the presence of obstacles, without the need for additional router devices. This study offers valuable insights into the capabilities of DigiMesh wireless technology for real-time ECG monitoring in healthcare applications, demonstrating its potential for use in diverse medical scenarios.
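The sink-node side of the platform described above can be sketched in a few lines of Python. This is a minimal illustration, not the platform's actual implementation: the frame layout (node id, sequence number, sample in 0.01 mV units) and all names are assumptions made for the example.

```python
# Hypothetical sketch of the sink node (Raspberry Pi) demultiplexing ECG
# samples arriving from several DigiMesh sensor nodes over the serial link.
import struct
from collections import defaultdict

FRAME_FMT = ">HIh"  # node_id (uint16), seq (uint32), sample (int16, 0.01 mV units)

def pack_frame(node_id, seq, sample_mv):
    """Encode one ECG sample as it might travel over the XBee serial link."""
    return struct.pack(FRAME_FMT, node_id, seq, int(round(sample_mv * 100)))

def unpack_frame(frame):
    """Decode a frame back into (node_id, seq, sample in mV)."""
    node_id, seq, raw = struct.unpack(FRAME_FMT, frame)
    return node_id, seq, raw / 100.0

class SinkNode:
    """Collects samples per sensor node, keyed by the node id in each frame."""
    def __init__(self):
        self.streams = defaultdict(list)

    def receive(self, frame):
        node_id, seq, mv = unpack_frame(frame)
        self.streams[node_id].append((seq, mv))

sink = SinkNode()
sink.receive(pack_frame(1, 0, 1.25))   # from Sensor Node 1
sink.receive(pack_frame(2, 0, -0.40))  # from Sensor Node 2
sink.receive(pack_frame(1, 1, 1.10))
```

Whether a frame arrives over one hop or several is transparent at this layer; DigiMesh handles the routing, and the sink only sees the originating node id.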

Keywords: DigiMesh protocol, ECG signal, real-time monitoring, medical platform

Procedia PDF Downloads 79
44 TARF: Web Toolkit for Annotating RNA-Related Genomic Features

Authors: Jialin Ma, Jia Meng

Abstract:

Genomic features, i.e., genome-based coordinates, are commonly used to represent biological features such as genes, RNA transcripts and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based to transcript-based coordinates, and visualization methods capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time-consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed TARF, a dedicated web app whose name stands for web Toolkit for Annotating RNA-related genomic Features. The TARF web tool is intended to provide a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded features in BED format and specified a built-in transcript database, or uploaded a customized gene database in GTF format, the tool fulfills its three main functions. First, it adds annotations on gene and RNA transcript components. For every feature provided by the user, the overlap with RNA transcript components is identified, and the information is combined in one table that is available for copying and download. Summary statistics on ambiguous assignments are also provided. Second, the tool provides a convenient method for visualizing the features at the single gene/transcript level. 
For the selected gene, the tool shows the features together with the gene model in a genome-based view, and also maps the features to transcript-based coordinates and shows their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated using the Guitar R/Bioconductor package. The distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with three different types of genomic features, related to chromatin H3K4me3, RNA N6-methyladenosine (m6A) and RNA 5-methylcytosine (m5C), obtained from ChIP-Seq, MeRIP-Seq and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A and m5C are enriched near transcription start sites, stop codons and 5'UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features, and should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications.
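The core coordinate conversion that TARF automates can be sketched as follows. Exons are modeled as half-open [start, end) intervals on the plus strand; the function names are ours, not TARF's API.

```python
# Sketch of genome-to-transcript coordinate mapping for a spliced transcript.
def genomic_to_transcript(genomic_pos, exons):
    """Map a genomic coordinate to a transcript coordinate, or None if the
    position falls in an intron or outside the transcript."""
    offset = 0  # transcript bases accumulated over previous exons
    for start, end in exons:
        if start <= genomic_pos < end:
            return offset + (genomic_pos - start)
        offset += end - start
    return None

def annotate_component(transcript_pos, cds_start, cds_end):
    """Label a transcript coordinate as 5'UTR, CDS or 3'UTR."""
    if transcript_pos < cds_start:
        return "5'UTR"
    if transcript_pos < cds_end:
        return "CDS"
    return "3'UTR"
```

With the transcript coordinate in hand, distances to landmarks such as the transcription start site or stop codon follow by simple subtraction, which is what enrichment analyses like the one above rely on.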

Keywords: RNA-related genomic features, annotation, visualization, web server

Procedia PDF Downloads 208
43 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach

Authors: Kanika Gupta, Ashok Kumar

Abstract:

Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms show high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem both in healthcare and in other industries involving microorganisms. The massive amount of information, both stated and hidden, in the biofilm literature is growing exponentially; it is therefore not possible for researchers and practitioners to manually extract and relate information from the different written resources. The current work therefore proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34,306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature, as the literature is unstructured, i.e., free text. We therefore adopted an unsupervised approach, in which no annotated training data are necessary, and developed a system that classifies the text with respect to growth and development, drug effects, radiation effects, and the classification and physiology of biofilms. For this, a two-step structure was used: the first step extracts keywords from the biofilm literature using a metathesaurus and standard natural language processing tools such as RapidMiner v5.3, and the second step discovers relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR v1.0.11. We applied the unsupervised approach (the machine learning task of inferring a function that describes hidden structure in unlabeled data) to the extracted datasets to develop classifiers, using WinPython 64-bit v3.5.4.0Qt5 and RStudio v0.99.467, which automatically classify the text into the categories mentioned above. 
The developed classifiers were tested on a large data set of biofilm literature, which showed that the proposed unsupervised approach is promising and well suited for semi-automatic labeling of the extracted relations. The entire information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be valuable for researchers dealing with biofilm research, making their searches easy and efficient, as the keywords and genes can be directly mapped to the documents used for database development.
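The category-assignment step described above can be illustrated with a simplified, dictionary-based sketch: documents are assigned to a biofilm topic by keyword overlap. The keyword sets here are invented for the example; the study derived its vocabulary from a metathesaurus and the RapidMiner/pubmed.mineR pipelines.

```python
# Toy keyword-dictionary labeling of biofilm abstracts (illustrative only).
CATEGORY_KEYWORDS = {
    "growth and development": {"growth", "development", "formation", "maturation"},
    "drug effects": {"antibiotic", "drug", "resistance", "treatment"},
    "radiation effects": {"radiation", "uv", "irradiation"},
    "physiology": {"metabolism", "physiology", "quorum"},
}

def label_document(text):
    """Assign the category whose keyword set best overlaps the document."""
    tokens = set(text.lower().split())
    scores = {cat: len(tokens & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"
```

A real unsupervised pipeline would learn the vocabulary from the corpus rather than hard-code it, but the final labeling step reduces to the same kind of vocabulary-to-document matching.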

Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database

Procedia PDF Downloads 170
42 A New Model to Perform Preliminary Evaluations of Complex Systems for the Production of Energy for Buildings: Case Study

Authors: Roberto de Lieto Vollaro, Emanuele de Lieto Vollaro, Gianluca Coltrinari

Abstract:

The building sector is responsible, in many industrialized countries, for about 40% of total energy requirements, so it seems necessary to devote some effort to this area in order to achieve a significant reduction in energy consumption and greenhouse gas emissions. The paper presents a study aimed at providing a design methodology able to identify the best configuration of the building/plant system from a technical, economic and environmental point of view. Normally, the classical approach involves an analysis of the building's energy loads under steady-state conditions, followed by the selection of measures aimed at improving energy performance, based on the previous experience of the architects and engineers in the design team. Instead, the proposed approach uses a sequence of two well-known, scientifically validated calculation methods (TRNSYS and RETScreen) that allow quite a detailed feasibility analysis. To assess the validity of the calculation model, an existing historical building in Central Italy, which will be the object of restoration and preservative redevelopment, was selected as a case study. The building consists of a basement and three floors, with a total floor area of about 3,000 square meters. The first step was the determination of the heating and cooling energy loads of the building in a dynamic regime by means of TRNSYS, which makes it possible to simulate the real energy needs of the building as a function of its use. Traditional methodologies, based as they are on steady-state conditions, cannot faithfully reproduce the effects of varying climatic conditions or of the inertial properties of the structure. With TRNSYS it is possible to obtain quite accurate and reliable results, which allow effective building-HVAC system combinations to be identified. 
The second step consisted of using the output data obtained with TRNSYS as input to the RETScreen calculation model, which makes it possible to compare different system configurations from the energy, environmental and financial points of view, with an analysis of investment, operation and maintenance costs, thus allowing the economic benefit of possible interventions to be determined. The classical methodology often leads to the choice of conventional plant systems, while RETScreen provides a financial-economic assessment of innovative, low-environmental-impact energy systems. Computational analysis can help in the design phase, particularly in the case of complex structures with centralized plant systems, by comparing the data returned by the RETScreen calculation model for different design options. For example, the analysis performed on the building taken as a case study found that the most suitable plant solution, taking into account technical, economic and environmental aspects, is one based on a CCHP system (combined cooling, heating, and power) using an internal combustion engine.
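The kind of financial comparison RETScreen performs between plant configurations can be sketched with two back-of-the-envelope indicators. The figures in the example are invented; a real analysis would also include O&M costs, fuel escalation, emission credits, and so on.

```python
# Simple payback and net present value, the two most common screening
# indicators when comparing a conventional plant against, e.g., a CCHP system.
def simple_payback(extra_capex, annual_savings):
    """Years needed for annual savings to repay the extra investment."""
    return extra_capex / annual_savings

def npv(discount_rate, capex, annual_savings, years):
    """Net present value of an intervention over its lifetime."""
    return -capex + sum(
        annual_savings / (1.0 + discount_rate) ** t for t in range(1, years + 1)
    )
```

A configuration whose NPV stays positive at the chosen discount rate, or whose payback falls well inside the plant lifetime, is a candidate for the detailed TRNSYS/RETScreen analysis described above.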

Keywords: energy, system, building, cooling, electrical

Procedia PDF Downloads 573
41 Cricket Injury Surveillance by Mobile Application Technology on Smartphones

Authors: Najeebullah Soomro, Habib Noorbhai, Mariam Soomro, Ross Sanders

Abstract:

The demands on cricketers are increasing, with more matches being played in a shorter period of time and with greater intensity. A ten-year report on injury incidence for Australian elite cricketers between the 2000 and 2011 seasons revealed an injury incidence rate of 17.4% [1]. In the 2009-10 season, 24% of Australian fast bowlers missed matches through injury [1]. Injury rates are even higher in junior cricketers, with an injury incidence of 25%, or 2.9 injuries per 100 player hours, reported [2]. Traditionally, injury surveillance has relied on the use of paper-based forms or complex computer software [3,4]. This makes injury reporting laborious for the staff involved. The purpose of this presentation is to describe a smartphone-based mobile application as a means of improving injury surveillance in cricket. Methods: The researchers developed the CricPredict mobile app for the Android platform, the world's most widely used smartphone platform. It uses the Qt SDK (Software Development Kit) as the IDE (Integrated Development Environment). C++ was used as the programming language with the Qt framework, which provides cross-platform capabilities that will allow the app to be ported to other operating systems (iOS, Mac, Windows) in the future. The wireframes (graphical user interface) were developed using Justinmind Prototyper Pro Edition (Ver. 6.1.0). CricPredict enables injury and training status to be recorded conveniently and immediately. When an injury is reported, automated follow-up questions cover the site of injury, nature of injury, mechanism of injury, initial treatment, referral and action taken after injury. Direct communication with the player then enables assessment of severity and diagnosis. CricPredict also allows the coach to maintain and track each player's attendance at matches and training sessions. Workload data can also be recorded by either the player or the coach, by recording the number of balls bowled or played in a day. 
This is helpful in calculating injury rates and time lost due to injuries. All data are stored on a secure, password-protected data server. Outcomes and Significance: Use of CricPredict offers a simple, user-friendly tool for the coaching or medical staff associated with teams to predict, record and report injuries. This system will assist teams in capturing injury data with ease, thus allowing a better understanding of the injuries associated with cricket and potentially optimizing the performance of these cricketers.
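The incidence figures quoted above (e.g. 2.9 injuries per 100 player hours) follow the standard exposure-based definition, which an app like this computes from the recorded workload data. The helpers below are a sketch, not part of the CricPredict code base.

```python
# Exposure-based injury rate and time-loss rate, as commonly reported
# in cricket injury surveillance studies.
def injury_incidence(n_injuries, player_hours, per=100.0):
    """Injuries per `per` player hours of exposure."""
    return per * n_injuries / player_hours

def time_loss_rate(days_missed, days_scheduled):
    """Fraction of scheduled match/training days lost to injury."""
    return days_missed / days_scheduled
```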

Keywords: injury, cricket, surveillance, smartphones, mobile

Procedia PDF Downloads 459
40 Identification of Potent and Selective SIRT7 Anti-Cancer Inhibitor via Structure-Based Virtual Screening and Molecular Dynamics Simulation

Authors: Md. Fazlul Karim, Ashik Sharfaraz, Aysha Ferdoushi

Abstract:

Background: Computational medicinal chemistry approaches are used for designing and identifying new drug-like molecules, predicting properties and pharmacological activities, and optimizing lead compounds in drug development. SIRT7, a nicotinamide adenine dinucleotide (NAD+)-dependent deacylase that regulates aging, is an emerging target for cancer therapy, with mounting evidence that SIRT7 downregulation plays important roles in reversing cancer phenotypes and suppressing tumor growth. Activation or altered expression of SIRT7 is associated with the progression and invasion of various cancers, including liver, breast, gastric, prostate, and non-small cell lung cancer. Objectives: The goal of this work was to identify potent and selective bioactive candidate inhibitors of SIRT7 by in silico screening of small-molecule compounds obtained from Nigella sativa (N. sativa). Methods: The SIRT7 structure was retrieved from the Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB), and its active site was identified using CASTp and metaPocket. Molecular docking simulation was performed with the PyRx 0.8 virtual screening software. Drug-likeness properties were tested using SwissADME and pkCSM. In silico toxicity was evaluated with Osiris Property Explorer. Bioactivity was predicted with the Molinspiration software. Antitumor activity was screened with Prediction of Activity Spectra for Substances (PASS) using the Way2Drug web server. Molecular dynamics (MD) simulation was carried out with the Desmond v3.6 package. Results: A total of 159 bioactive compounds from N. sativa were screened against the SIRT7 enzyme. Five bioactive compounds, chrysin (CID: 5281607), pinocembrin (CID: 68071), nigellidine (CID: 136828302), nigellicine (CID: 11402337), and epicatechin (CID: 72276), were identified as potent SIRT7 anti-cancer candidates after evaluating docking scores and applying Lipinski's Rule of Five. 
Finally, MD simulation identified chrysin as the top SIRT7 anti-cancer candidate molecule. Conclusion: Chrysin, which shows a potential inhibitory effect against SIRT7, can act as a possible anti-cancer drug candidate. This inhibitor warrants further evaluation of its pharmacokinetic and pharmacodynamic properties, both in vitro and in vivo.
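The Lipinski Rule of Five filter applied above is a simple descriptor check and can be sketched directly. The descriptor values for chrysin below are approximate literature values used purely for illustration, not figures reported by this study.

```python
# Lipinski Rule of Five: a candidate is drug-like if it violates at most
# one of the four descriptor limits.
def passes_lipinski(mol_weight, logp, h_donors, h_acceptors, max_violations=1):
    """Return True if the molecule violates at most `max_violations` rules."""
    violations = sum([
        mol_weight > 500,   # molecular weight <= 500 Da
        logp > 5,           # octanol-water partition coefficient <= 5
        h_donors > 5,       # hydrogen bond donors <= 5
        h_acceptors > 10,   # hydrogen bond acceptors <= 10
    ])
    return violations <= max_violations

# Approximate descriptors for chrysin (illustrative values)
chrysin = {"mol_weight": 254.24, "logp": 2.9, "h_donors": 2, "h_acceptors": 4}
```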

Keywords: SIRT7, antitumor, molecular docking, molecular dynamics simulation

Procedia PDF Downloads 79
39 The Role of Non-Governmental Organizations in Promoting Humanitarian Development: A Case Study in Saudi Arabia

Authors: Muamar Salameh, Rania Sinno

Abstract:

Non-governmental organizations (NGOs) in Saudi Arabia play a vital role in promoting humanitarian development. Although this paper will emphasize this role and provide a specific case study on the role of the Prince Mohammad Bin Fahd Foundation for Humanitarian Development (PMFHD), many organizations do not provide transparent information about their accomplishments. This study will answer the main research question regarding the role NGOs play in promoting humanitarian development. The most recent law regulating associations and foundations in Saudi Arabia was issued in December 2015 and went into effect in March 2016; any new association or foundation must follow these regulations. Although the registration, implementation, and workflow of the organizations still need major improvement and development, the currently registered organizations have several notable achievements. Most of these organizations adopt a centralized administration approach, which in many cases hinders progress and may be an obstacle to reaching a larger population of beneficiaries. A large portion of the existing organizations are charities, some of which have some form of government affiliation. The laws and regulations limit the registration of new organizations: violations of Islamic Sharia, contradiction of public order, breaches of national unity, or foreign affiliation prohibit an organization from registering. The lack of transparency in the operations and inner workings of NGOs in Saudi Arabia is apparent to the public. The regulations, however, require full transparency with the governing ministry. This transparency should also be available to the public, and in particular to the target populations eligible to benefit from the NGOs' services. In this study, we provide an extensive review of all laws, regulations, policies and procedures related to NGOs in the Eastern Province of Saudi Arabia. 
This review will include some examples of current NGOs, their services and their target populations. The study will identify the main accomplishments of reputable NGOs that have had a positive impact on Saudi communities. The results will concentrate on actions, services and accomplishments that provide sustainable assistance in promoting humanitarian development and advancing the living conditions of target populations in the Saudi community. In particular, we concentrate on a case study of PMFHD, one of the largest foundations in the Eastern Province of Saudi Arabia. The authors have access to the data related to this foundation and to its administration, enabling them to gather, analyze and draw conclusions from this material. The study also analyzes whether the practices, budgets, services and annual accomplishments of the foundation have fulfilled its humanitarian role while meeting governmental requirements, in the light of the new laws. The findings of the study show that great accomplishments have been achieved in advancing and promoting humanitarian development in Saudi and international communities. Several examples will be included from several NGOs, with specific examples from PMFHD.

Keywords: development, foundation, humanitarian, non-governmental organization, Saudi Arabia

Procedia PDF Downloads 296
38 Risks beyond Cyber in IoT Infrastructure and Services

Authors: Mattias Bergstrom

Abstract:

Significance of the Study: This research provides new insights into the risks associated with digitally embedded infrastructure. In this research, we analyze each risk and its potential mitigation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper conveys valuable information for future research aimed at creating more stable, secure, and efficient autonomous systems. To learn about and understand the risks, a large IoT system was envisioned, and risks related to hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open-source IoT hardware setup. The list below shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures: critical systems relying on high-rate, high-quality data are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data: sensors break, and when they do, they don't always go silent; they can keep running while delivering garbage data, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection: erroneous sensor data can be pumped into a system by malicious actors with the intent of creating disruptive noise in critical systems. (4) Data gravity: the weight of the collected data affects data mobility. (5) Cost inhibitors: running services that need huge centralized computing resources is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service: one of the simplest attacks, in which an attacker overloads the system with bogus requests so that valid requests disappear in the noise. 
Malware: malware can be anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware: a kind of malware, but so different in its implementation that it is worth its own mention; the goal of this software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing: by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data echo noise loop. After testing multiple potential solutions, we found that the most promising solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be mitigated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for future autonomous IoT deployments, as it provides separation from the open Internet while remaining accessible via blockchain keys.
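The peer-validation idea described above can be illustrated with a toy majority-agreement filter: each device's reading is accepted only if a strict majority of peers report a value within an agreement tolerance. Function and parameter names are invented for this sketch; a real deployment would anchor the votes on a blockchain rather than compute them in one place.

```python
# Toy peer-to-peer consensus filter for sensor readings. A broken sensor
# that keeps emitting garbage (passive risk 2) or injected bad hardware
# (passive risk 3) loses the majority vote and is flagged.
def consensus_filter(readings, tolerance):
    """readings: device_id -> reported value. Returns device_id -> accepted?"""
    n = len(readings)
    accepted = {}
    for device, value in readings.items():
        agreeing = sum(1 for v in readings.values() if abs(v - value) <= tolerance)
        accepted[device] = agreeing > n / 2  # strict majority, including itself
    return accepted
```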

Keywords: IoT, security, infrastructure, SCADA, blockchain, AI

Procedia PDF Downloads 107
37 Association of Nuclear-Mitochondrial Epistasis with BMI in Type 1 Diabetes Mellitus Patients

Authors: Agnieszka H. Ludwig-Slomczynska, Michal T. Seweryn, Przemyslaw Kapusta, Ewelina Pitera, Katarzyna Cyganek, Urszula Mantaj, Lucja Dobrucka, Ewa Wender-Ozegowska, Maciej T. Malecki, Pawel Wolkow

Abstract:

Obesity results from an imbalance between energy intake and expenditure. Genome-wide association study (GWAS) analyses have led to the discovery of only about 100 variants influencing body mass index (BMI), which explain only a small portion of its genetic variability. Analysis of gene epistasis offers a chance to discover another part. Since it has been shown that interaction and communication between the nuclear and mitochondrial genomes are indispensable for normal cell function, we looked for epistatic interactions between the two genomes to find their correlation with BMI. Methods: The analysis was performed on 366 T1DM patients using the Illumina Infinium OmniExpressExome-8 chip, followed by imputation on the Michigan Imputation Server. Only genes that influence mitochondrial functioning (listed in Human MitoCarta 2.0) were included in the analysis: variants of nuclear origin (MAF > 5%) in 1140 genes and 42 mitochondrial variants (MAF > 1%). Gene expression analysis was performed on GTEx data. Association analysis between genetic variants and BMI was performed using linear mixed models as implemented in the 'GENESIS' package in R. Analysis of the association between mRNA expression and BMI was performed using linear models and standard significance tests in R. Results: Among the variants involved in epistasis between the mitochondria and the nucleus, we identified one in the mitochondrial transcription factor TFB2M (rs6701836). It interacted with mitochondrial variants localized to MT-RNR1 (p=0.0004, MAF=15%), MT-ND2 (p=0.07, MAF=5%) and MT-ND4 (p=0.01, MAF=1.1%). Analysis of the interaction between the nuclear variant rs6701836 (nuc) and rs3021088, localized to the mitochondrial gene MT-ND2 (mito), showed that the combination of the two led to a BMI decrease (p=0.024). Neither variant on its own correlates with higher BMI [p(nuc)=0.856, p(mito)=0.116]. Although rs6701836 is intronic, it influences gene expression in the thyroid (p=0.000037). 
rs3021088 is a missense variant that leads to an alanine-to-threonine substitution in the MT-ND2 gene, which belongs to complex I of the electron transport chain. The analysis of the influence of genetic variants on gene expression confirmed the trend described above: the interaction of the two genes leads to a BMI decrease (p=0.0308), while each of the mRNAs on its own is associated with higher BMI (p(mito)=0.0244 and p(nuc)=0.0269). Conclusions: Our results show that nuclear-mitochondrial epistasis can influence BMI in T1DM patients. The correlation between transcription factor expression and mitochondrial genetic variants will be the subject of further analysis.
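The epistasis analysis above amounts to testing the product (interaction) term of a linear model. A minimal sketch of such a model is shown below; the coefficients are invented purely to mimic the reported pattern (neither variant alone lowers BMI, but their combination does), and the study itself fit linear mixed models with the GENESIS package in R.

```python
# Linear model with a nuclear x mitochondrial interaction (epistasis) term.
def predict_bmi(beta, nuc_dosage, mito_allele):
    """beta = (intercept, nuclear effect, mitochondrial effect, interaction).

    nuc_dosage: 0/1/2 copies of the nuclear variant (additive coding)
    mito_allele: 0/1 for the haploid mitochondrial variant
    """
    b0, b_nuc, b_mito, b_interaction = beta
    return (b0 + b_nuc * nuc_dosage + b_mito * mito_allele
            + b_interaction * nuc_dosage * mito_allele)

# Invented coefficients mimicking the reported pattern: a negative
# interaction term drives BMI down only when both variants are present.
beta = (25.0, 0.1, 0.2, -1.5)
```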

Keywords: body mass index, epistasis, mitochondria, type 1 diabetes

Procedia PDF Downloads 175
36 Specification of Requirements to Ensure Proper Implementation of Security Policies in Cloud-Based Multi-Tenant Systems

Authors: Rebecca Zahra, Joseph G. Vella, Ernest Cachia

Abstract:

The notion of cloud computing is rapidly gaining ground in the IT industry and is appealing mostly because it makes computing more adaptable and expedient whilst diminishing the total cost of ownership. This paper focuses on the software-as-a-service (SaaS) architecture of cloud computing, which is used for the outsourcing of databases and their associated business processes. One approach to offering SaaS is to base the system's architecture on multi-tenancy. Multi-tenancy allows multiple tenants (users) to make use of the same single application instance. Their requests and configurations may then differ according to specific requirements met through tenant customisation of the software. Despite the known advantages, companies still feel uneasy about opting for multi-tenancy, with data security being a principal concern. The fact that multiple tenants, possibly competitors, would have their data located on the same server process and share the same database tables heightens the fear of unauthorised access. Security is a vital aspect which needs to be considered by application developers, database administrators, data owners and end users. This is further complicated in cloud-based multi-tenant systems, where boundaries must be established between tenants and additional access control models must be in place to prevent unauthorised cross-tenant access to data. Moreover, when altering the database state, transactions need to adhere strictly to the tenant's known business processes. This paper argues that security in cloud databases should not be treated as an isolated issue; rather, it should be included in the initial phases of database design and monitored continuously throughout the whole development process. This paper aims to identify a number of the most common security risks and threats, specifically in the area of multi-tenant cloud systems. Issues and bottlenecks relating to security risks in cloud databases are surveyed. 
Some techniques which might be utilised to overcome them are then listed and evaluated. After a description and evaluation of the main security threats, this paper produces a list of software requirements to ensure that proper security policies are implemented by a software development team when designing and implementing a multi-tenant based SaaS. This would then assist the cloud service providers to define, implement, and manage security policies as per tenant customisation requirements whilst assuring security for the customers’ data.
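One concrete technique for the tenant boundaries discussed above is to enforce the tenant filter at the data-access layer, so no query can reach another tenant's rows even in shared tables. The in-memory sketch below illustrates the idea; the class and method names are invented, and a real SaaS would implement this with row-level security or mandatory query rewriting in the database.

```python
# Minimal sketch of tenant isolation enforced at the data-access layer.
class TenantScopedStore:
    """Forces every read and write through the caller's tenant_id, so one
    tenant can never see another tenant's rows, even in shared tables."""

    def __init__(self):
        self._rows = []  # shared "table": list of (tenant_id, record)

    def insert(self, tenant_id, record):
        self._rows.append((tenant_id, dict(record)))

    def query(self, tenant_id, predicate=lambda r: True):
        # The tenant filter is applied unconditionally, before any
        # user-supplied predicate, so cross-tenant access is impossible here.
        return [r for t, r in self._rows if t == tenant_id and predicate(r)]

store = TenantScopedStore()
store.insert("tenant_a", {"order": 1})
store.insert("tenant_b", {"order": 2})
```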

Keywords: cloud computing, data management, multi-tenancy, requirements, security

Procedia PDF Downloads 156
35 Design of a Human-in-the-Loop Aircraft Taxiing Optimisation System Using Autonomous Tow Trucks

Authors: Stefano Zaninotto, Geoffrey Farrugia, Johan Debattista, Jason Gauci

Abstract:

The need to reduce fuel consumption and noise during taxi operations in airports, against a background of constantly increasing air traffic, has resulted in an effort by the aerospace industry to move towards electric taxiing. In fact, this is one of the problems currently being addressed by SESAR JU, and two main solutions are being proposed. With the first solution, electric motors are installed in the main (or nose) landing gear of the aircraft. With the second solution, manned or unmanned electric tow trucks are used to tow aircraft from the gate to the runway (or vice versa). The presence of the tow trucks results in an increase in vehicle traffic inside the airport. Therefore, it is important to design the system in such a way that the workload of Air Traffic Control (ATC) is not increased and the system assists ATC in managing all ground operations. The aim of this work is to develop an electric taxiing system, based on the use of autonomous tow trucks, which optimizes aircraft ground operations while keeping ATC in the loop. This system consists of two components: an optimization tool and a Graphical User Interface (GUI). The optimization tool is responsible for determining the optimal path for arriving and departing aircraft; allocating a tow truck to each taxiing aircraft; detecting conflicts between aircraft and/or tow trucks; and proposing solutions to resolve any conflicts. There are two main optimization strategies proposed in the literature. With centralized optimization, a central authority coordinates and makes the decisions for all ground movements in order to find a global optimum. With the second strategy, called decentralized optimization or a multi-agent system, the decision authority is distributed among several agents. These agents could be the aircraft, the tow trucks, and taxiway or runway intersections. 
This approach finds local optima; however, it scales better with the number of ground movements and is more robust to external disturbances (such as taxi delays or unscheduled events). The strategy proposed in this work is a hybrid system combining aspects of these two approaches. The GUI will provide information on the movement and status of each aircraft and tow truck, and alert ATC about any impending conflicts. It will also enable ATC to give taxi clearances and to modify the routes proposed by the system. The complete system will be tested via computer simulation of various taxi scenarios at multiple airports, including Malta International Airport, a major international airport, and a fictitious airport. These tests will involve actual Air Traffic Controllers in order to evaluate the GUI and assess the impact of the system on ATC workload and situation awareness. It is expected that the proposed system will increase the efficiency of taxi operations while reducing their environmental impact. Furthermore, it is envisaged that the system will facilitate various controller tasks and improve ATC situation awareness.
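The optimization tool's first two tasks, route computation and tow truck allocation, can be illustrated with a minimal sketch. The following Python code is a hedged illustration only: the graph format, function names, and the greedy nearest-truck rule are assumptions for the example, not the hybrid strategy developed in the paper.

```python
import heapq

def shortest_taxi_route(graph, start, goal):
    """Dijkstra over a taxiway graph: {node: [(neighbor, length_m), ...]}."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    visited = set()
    while queue:
        d, node = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            # Reconstruct the path from goal back to start.
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for nbr, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(queue, (nd, nbr))
    return float("inf"), []

def allocate_tow_truck(trucks, graph, aircraft_node):
    """Assign the idle tow truck with the shortest deadhead distance."""
    best = None
    for truck_id, truck_node in trucks.items():
        d, _ = shortest_taxi_route(graph, truck_node, aircraft_node)
        if best is None or d < best[1]:
            best = (truck_id, d)
    return best
```

In a real system, the allocation step would also account for conflicts and truck schedules rather than deadhead distance alone.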

Keywords: air traffic control, electric taxiing, autonomous tow trucks, graphical user interface, ground operations, multi-agent, route optimization

Procedia PDF Downloads 129
34 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to two issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to address resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding addresses communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendors' local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. 
Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving both the incompatibility of runtime dependencies and the security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
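The embedding example above (pairing a1 with b1 and a2 with b2 on shared machines) can be sketched as a small plan generator. This is an illustrative sketch, not the paper's formal method; the machine-naming scheme and function name are assumptions:

```python
from itertools import zip_longest

def embed_pairs(service_a, service_b):
    """Pair instances of two communicating microservices so each pair
    shares a machine and talks over localhost (no load balancer needed).
    Returns {machine: [instances]}; a surplus instance gets its own machine."""
    plan = {}
    for i, (a, b) in enumerate(zip_longest(service_a, service_b), start=1):
        machine = f"m{i}"
        plan[machine] = [x for x in (a, b) if x is not None]
    return plan
```

A real planner would additionally check runtime-dependency compatibility and the scalability behavior of each service before co-locating containers.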

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 203
33 State, Public Policies, and Rights: Public Expenditure and Social and Welfare Policies in America, as Opposed to Argentina

Authors: Mauro Cristeche

Abstract:

This paper approaches the intervention of the American state in the social arena and the modeling of the rights system from the Argentinian experience, by observing the characteristics of its federal budgetary system, the evolution of social public spending and welfare programs in recent years, labor and poverty statistics, and the changes in the labor market structure. The analysis seeks to combine different methodologies and sources: in-depth interviews with specialists, analysis of theoretical and mass-media material, and statistical sources. Among the results, it can be noted that the tendency towards state interventionism (what has been called ‘nationalization of social life’) is quite evident in the United States and manifests itself in multiple forms. The bibliography consulted and the experts interviewed point to this increase in state presence in historical terms (beyond short-term setbacks): increases in public spending, fiscal pressure, public employment, protective and control mechanisms, the extension of welfare policies to the poor sectors, etc. In fact, despite the significant differences between the two countries, the United States and Argentina show common patterns of behavior in terms of the aforementioned phenomena. On the other hand, the dissimilarities are also important. Some of them are determined by each country's own political history. The influence of political parties on the economic model seems more decisive in the United States than in Argentina, where the tendency to state interventionism is more stable. The centrality of health spending is evident in the United States, while in Argentina the discussion is more concentrated on the social security system and public education. The biggest problem of the labor market in the United States is deskilling as a consequence of technological development, while in Argentina it is a result of the market's weakness. Another big difference is the huge American public spending on defense. 
The more federal character of the American state is also a point of differential analysis against the centralized Argentine state. American public employment (around 10%) is comparatively much lower than the Argentinian (around 18%). The social statistics show differences, but inequality and poverty have been growing as a trend in recent decades in both countries. According to public rates, poverty stands at 14% in the United States and 33% in Argentina. American public spending is important (welfare spending and total public spending represent around 12% and 34% of GDP, respectively), but somewhat lower than the Latin American or European averages. In both cases, the tendency towards underemployment and skill-related unemployment does not assume serious proportions. Probably one of the most important aspects of the analysis is that private initiative and public intervention are much more intertwined in the United States, which makes state intervention more ‘fuzzy’, while in Argentina the difference is clearer. Finally, the power of capital accumulation in the United States, and more specifically of its industrial and services sectors, which continue to be the engine of the economy, marks a great difference with Argentina, whose economy rests on its agro-industrial strength and its public sector.

Keywords: state intervention, welfare policies, labor market, system of rights, United States of America

Procedia PDF Downloads 131
32 Traditional Rainwater Harvesting Systems: A Sustainable Solution for Non-Urban Populations in the Mediterranean

Authors: S. Fares, K. Mellakh, A. Hmouri

Abstract:

The StorMer project aims to set up a network of researchers to study traditional hydraulic rainwater harvesting systems in the Mediterranean basin, a region suffering from the major impacts of climate change and limited natural water resources. The arid and semi-arid Mediterranean basin has a long history of pioneering water management practices. The region has developed various ancient traditional water management systems, such as cisterns and qanats, to sustainably manage water resources under historical conditions of scarcity. The StorMer project therefore brings together Spain, France, Italy, Greece, Jordan, and Morocco to explore traditional rainwater harvesting practices and systems in the Mediterranean region and to develop accurate modeling to simulate the performance and sustainability of these technologies under present-day climatic conditions. The ultimate goal of this project is to revive and valorize these practices in the context of contemporary challenges. The project was intended to establish a Mediterranean network to serve as a basis for a more ambitious project, whose ultimate objective is to analyze traditional hydraulic systems and create a prototype hydraulic ecosystem using a coupled environmental approach and traditional and ancient know-how, with the aim of reinterpreting them in the light of current techniques. The combination of ‘traditional’ and ‘modern’ knowledge and techniques is expected to lead to proposals for innovative hydraulic systems. The pandemic initially slowed our progress, but fieldwork in Morocco and Saudi Arabia was eventually carried out, restarting the project. With the participation of colleagues from chronologically distant fields (archaeology, sociology), we are now prepared to share our observations and propose the next steps. This interdisciplinary approach should give us a global vision of the project's objectives and challenges. 
A diachronic approach is needed to tackle the question of the long-term adaptation of societies in a Mediterranean context that has experienced several periods of water stress. The next stage of the StorMer project is the implementation of pilots in non-urbanized regions. These pilots will test the implementation of traditional systems and will be maintained and evaluated in terms of effectiveness, cost and acceptance. Based on these experiences, larger projects will be proposed and could provide information for regional water management policies. One of the most important lessons learned from this project is the highly social nature of managing traditional rainwater harvesting systems. Unlike modern, centralized water infrastructures, these systems often require the involvement of communities, which assume ownership and responsibility for them. This kind of community engagement leads to greater maintenance and, therefore, sustainability of the systems. Knowledge of the socio-cultural characteristics of these communities means that the systems can be adapted to the needs of each location, ensuring greater acceptance and efficiency.

Keywords: oasis, rainfall harvesting, arid regions, Mediterranean

Procedia PDF Downloads 40
31 Airon Project: IoT-Based Agriculture System for the Optimization of Irrigation Water Consumption

Authors: África Vicario, Fernando J. Álvarez, Felipe Parralejo, Fernando Aranda

Abstract:

The irrigation systems of traditional agriculture, such as gravity-fed irrigation, waste a great deal of water because, generally, there is no control over the amount of water supplied in relation to the water needed. The AIRON Project tries to solve this problem by implementing an IoT-based system to instrument the irrigation plots with sensors, so that the state of the crops and the amount of water used for irrigation can be known remotely. The IoT system consists of a sensor network that measures the humidity of the soil, the weather conditions (temperature, relative humidity, wind, and solar radiation), and the irrigation water flow. The communication between this network and a central gateway is carried out by means of a long-range wireless technology chosen according to the characteristics of the irrigation plot. The main objective of the AIRON project is to deploy an IoT sensor network in two different plots of the irrigation community of Aranjuez, in the Spanish region of Madrid. The first plot is 2 km away from the central gateway, so LoRa has been used as the base communication technology. The difficulty with this plot is the absence of mains electric power, so devices with energy-saving modes have had to be used to maximize the lifetime of the external batteries. An ESP32 SoC board with a LoRa module is employed in this case to gather data from the sensor network and send them to a gateway consisting of a Raspberry Pi with a LoRa HAT. The second plot is located 18 km away from the gateway, a range that hampers the use of LoRa technology. In order to establish reliable communication in this case, the Long-Term Evolution (LTE) standard is used, which makes it possible to reach much greater distances by using the cellular network. As mains electric power is available in this plot, a Raspberry Pi has been used instead of the ESP32 board to collect sensor data. 
All data received from the two plots are stored on a proprietary server located at the irrigation management company's headquarters. The analysis of these data by means of machine learning algorithms that are currently under development should allow a short-term prediction of the irrigation water demand that would significantly reduce the waste of this increasingly valuable natural resource. The major finding of this work is the real possibility of deploying a remote sensing system for irrigated plots by using Commercial-Off-The-Shelf (COTS) devices, easily scalable and adaptable to design requirements such as the distance to the control center or the availability of mains electrical power at the site.
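As a hedged illustration of the kind of firmware-side logic such a deployment involves, the following Python sketch packs one sensor reading into a compact binary payload suitable for a LoRa frame (tens of bytes at the slowest data rates). The field layout, scaling factors, and function names are assumptions for the example, not the AIRON project's actual format:

```python
import struct

# Hypothetical compact payload for one reading: plot id, soil moisture (%),
# air temperature (°C), relative humidity (%), irrigation flow (L/min).
FORMAT = "<BhhHH"  # 9 bytes: uint8, two int16, two uint16, little-endian

def pack_reading(plot_id, moisture_pct, temp_c, rh_pct, flow_lpm):
    """Scale floats to integers so the payload stays small."""
    return struct.pack(FORMAT,
                       plot_id,
                       int(round(moisture_pct * 100)),  # centi-percent
                       int(round(temp_c * 100)),        # centi-degrees
                       int(round(rh_pct * 10)),         # deci-percent
                       int(round(flow_lpm * 10)))       # deci-L/min

def unpack_reading(payload):
    """Inverse of pack_reading, as run on the gateway side."""
    plot_id, m, t, rh, f = struct.unpack(FORMAT, payload)
    return {"plot": plot_id, "moisture": m / 100, "temp": t / 100,
            "rh": rh / 10, "flow": f / 10}
```

Keeping the payload this small also shortens air time, which matters both for battery life on the mains-free plot and for duty-cycle limits on the LoRa band.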

Keywords: internet of things, irrigation water control, LoRa, LTE, smart farming

Procedia PDF Downloads 84
30 Price Control: A Comprehensive Step to Control Corruption in the Society

Authors: Muhammad Zia Ullah Baig, Atiq Uz Zama

Abstract:

The motivation of the project is to help both the governing body and the common citizen in daily life: to monitor consumer product rates, to control the household budget with the help of a single SMS (message) or e-mail facility, and to manage the governing body through a task management system. The system will also be capable of detecting irregularities committed by the department concerned while handling the complaints generated by customers, and of providing solutions to overcome these problems. We are building a system that can manage the price control process of any country, and we would be proud to offer this system free of cost to the Indian Government as well. The system is able to manage and control the government's price control departments all over the country. Price control departments operate in different cities under the City District Government, so the system runs in each city with a distinct SMS code, and a decentralized database ensures the non-functional requirements of the system (scalability, reliability, availability, security, safety). A customer requests the official government price list using his or her city's SMS code (the price lists of all cities are also available on the website and the application), and the server returns the price list by SMS. If a product is not being sold according to the price list, the customer files a complaint by SMS or through the website or smartphone application; the complaint is registered in the complaint database and forwarded to the inspection department, which sends the customer a message once the complaint is taken up. The inspection department physically checks sellers who do not follow the price list. However, the major issue for the system is corruption: an inspection officer may take a bribe and mark the complaint as resolved (falsely), in which case customers will stop using the system. 
A major challenge for the system is therefore to distinguish fake from genuine complaints and to fight corruption within the department. To counter corruption, our strategy is to rank complaints: if the same type of complaint is generated repeatedly, it receives a high rank and the higher authority is also notified about it. The higher authority of the department then reviews the complaint and its history, the officer who resolved similar complaints in the past, and the actions taken against them; these data support the decision-making process. If a complaint was marked resolved because the officer took a bribe, the higher authority will take action against that officer. When the price of any good is decided, market and farmer representatives are also present, and the price is set by mutual understanding of both parties; the system facilitates this decision-making process. The system shows the price history of any good, the inflation rate, the available supply, the demand, and the gap between supply and demand; these data also support the decision-making process.
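The complaint-ranking strategy described above can be sketched in a few lines: repeated complaints about the same product in the same city rise to the top and trigger escalation to the higher authority. The data fields and the escalation threshold are illustrative assumptions, not part of the deployed system:

```python
from collections import Counter

ESCALATION_THRESHOLD = 3  # assumed number of repeats before escalation

def rank_complaints(complaints):
    """Rank complaints by how often the same (city_code, product) pair is
    reported; repeated complaints are harder to fake and rise to the top."""
    counts = Counter((c["city_code"], c["product"]) for c in complaints)
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    escalated = [key for key, n in ranked if n >= ESCALATION_THRESHOLD]
    return ranked, escalated
```

The escalated list is what the higher authority would review together with each complaint's resolution history.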

Keywords: price control, goods, government, inspection, department, customer, employees

Procedia PDF Downloads 411
29 Sustainability in Space: Implementation of Circular Economy and Material Efficiency Strategies in Space Missions

Authors: Hamda M. Al-Ali

Abstract:

The ultimate aim of space exploration has centered on the possibility of life on other planets in the solar system. This aim is driven by the detrimental effects that climate change could potentially have on human survival on Earth in the future, which pushes humans to search for feasible solutions to increase environmental and economic sustainability on Earth and to evaluate and explore the viability of human survival on other planets such as Mars. To do that, frequent space missions are required to meet these ambitious goals. This means that reliable and affordable access to space is needed, which could be largely achieved through the use of reusable spacecraft. Therefore, materials and resources must be used wisely to meet the increasing demand. Space missions are currently extremely expensive to operate. However, reusing materials, and hence spacecraft, can potentially reduce overall mission costs as well as the negative impact on both space and Earth environments. This is because reusing materials leads to less waste generated per mission, and therefore fewer landfill sites are required. Reusing materials reduces resource consumption, material production, and the need to process new and replacement spacecraft and launch vehicle parts. Consequently, this will ease and facilitate human access to outer space, as it will reduce the demand for scarce resources and boost material efficiency in the space industry. Material efficiency expresses the extent to which resources are consumed in the production cycle and how far the waste produced by the industrial process is minimized. The strategies proposed in this paper to boost material efficiency in the space sector are the introduction of key performance indicators able to measure material efficiency, as well as the introduction of clearly defined policies and legislation that can be easily implemented within the general practices of the space industry. 
Another strategy to improve material efficiency is to amplify energy and resource efficiency through the reuse of materials. The circularity of various spacecraft materials, such as Kevlar, steel, and aluminum alloys, could be maximized by reusing them directly or after coating them with another layer of material to act as a protective coat. This research paper aims to investigate and discuss how to improve material efficiency in space missions using circular economy concepts, so that space and Earth become more economically and environmentally sustainable. The circular economy is a transition from a make-use-waste linear model to a closed-loop socio-economic model, which is regenerative and restorative in nature. The implementation of a circular economy will reduce waste and pollution by maximizing material efficiency, ensuring that businesses can thrive sustainably. Further research into the extent to which reusable launch vehicles reduce space mission costs has also been discussed, along with the environmental and economic implications they could have for the space sector and the environment. This has been examined through research and an in-depth literature review of published reports, books, scientific articles, and journals. Keywords such as material efficiency, circular economy, reusable launch vehicles, and spacecraft materials were used to search for relevant literature.
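As a hedged illustration of what a material efficiency KPI of the kind proposed above might look like, the following sketch computes a reuse fraction and waste per mission across a set of missions. The field names and formulas are assumptions for the example, not the indicators defined in the paper:

```python
def material_efficiency_kpis(missions):
    """missions: list of dicts with launch_mass_kg, reused_mass_kg, waste_kg.
    Returns two illustrative KPIs: the overall fraction of launched mass
    recovered for reuse, and the average waste generated per mission."""
    total = sum(m["launch_mass_kg"] for m in missions)
    reused = sum(m["reused_mass_kg"] for m in missions)
    waste = sum(m["waste_kg"] for m in missions)
    return {
        "reuse_fraction": reused / total,
        "waste_per_mission_kg": waste / len(missions),
    }
```

Tracking such figures over time would show whether reuse strategies are actually moving the sector toward the closed-loop model described above.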

Keywords: circular economy, key performance indicator, material efficiency, reusable launch vehicles, spacecraft materials

Procedia PDF Downloads 125
28 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center

Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael

Abstract:

Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. 
By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.
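As a hedged illustration, traditional and AI-impact KPIs of the kind discussed above can be computed as follows. PUE (Power Usage Effectiveness) is a standard industry metric; the AI-savings formulas and parameter names are assumptions for the example, not the paper's proposed indicators:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy over IT energy.
    1.0 is the ideal; typical data centers sit well above it."""
    return total_facility_kwh / it_equipment_kwh

def ai_driven_savings(baseline_kwh, optimized_kwh, grid_kgco2_per_kwh):
    """Illustrative AI-impact KPIs: energy saved by the optimizer versus a
    pre-AI baseline, and the carbon emissions avoided as a result."""
    saved = baseline_kwh - optimized_kwh
    return {
        "energy_saved_kwh": saved,
        "savings_pct": 100.0 * saved / baseline_kwh,
        "co2_avoided_kg": saved * grid_kgco2_per_kwh,
    }
```

A renewable-energy transition would additionally lower the grid emission factor itself, compounding the CO2 term above.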

Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency

Procedia PDF Downloads 33
27 Architectural Wind Data Maps Using an Array of Wireless Connected Anemometers

Authors: D. Serero, L. Couton, J. D. Parisse, R. Leroy

Abstract:

In urban planning, an increasing number of cities require wind analysis to verify the comfort of public spaces and the areas around buildings. These studies are carried out using computational fluid dynamics (CFD) simulation. However, this technique is often based on wind information taken from meteorological stations located several kilometers from the site of analysis. The approximate input data on the project surroundings produce imprecise results for this type of analysis; they can only be used to obtain the general behavior of wind in a zone, not to evaluate precise wind speeds. This paper presents another approach to this problem, based on collecting wind data and generating an urban wind cartography using connected ultrasonic anemometers. These are wireless devices that send immediate wind data to a remote server. Assembled in an array, these devices generate geo-localized data on wind, such as speed, temperature, and pressure, and allow us to compare wind behavior on a specific site or building. These Netatmo-type anemometers communicate by Wi-Fi with a central unit, which shares the data acquired by a wide variety of devices, such as wind speed, indoor and outdoor temperature, rainfall, and sunshine. Besides its precision, this method extracts geo-localized data on any type of site, which can be fed back into the architectural design of a building or a public place. Furthermore, this method allows a precise calibration of a virtual wind tunnel using numerical aeraulic simulations (such as the STAR-CCM+ software) and, from there, the development of a complete volumetric model of wind behavior over a roof area or an entire city block. The paper showcases the connected ultrasonic anemometers, which were installed for an 18-month survey on four study sites in the Grand Paris region. This case study focuses on Paris as an urban environment with multiple historical layers, whose diversity of typologies and buildings makes it possible to consider different ways of capturing wind energy. 
The objective of this approach is to categorize the different types of wind in urban areas. In particular, the identification of the minimum and maximum wind spectrum helps define the choice and performance of the wind energy capturing devices that could be installed there. Relevant parameters include the location on the roof of a building, the type of wind, the height of the device relative to the roof levels, and the potential nuisances generated. The method makes it possible to identify the characteristics of wind turbines in order to maximize their performance on an urban site with turbulent wind.
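The aggregation of geo-localized anemometer readings into a per-site wind spectrum, of the kind used above to characterize minimum and maximum wind, can be sketched as follows; the data layout and function name are assumptions for the example, not the survey's actual pipeline:

```python
from statistics import mean

def wind_spectrum(readings):
    """Aggregate anemometer readings into per-site minimum, maximum, and
    mean wind speeds, the figures used to size harvesting devices.
    readings: [{"site": ..., "speed_ms": ...}, ...]"""
    by_site = {}
    for r in readings:
        by_site.setdefault(r["site"], []).append(r["speed_ms"])
    return {site: {"min": min(v), "max": max(v), "mean": mean(v)}
            for site, v in by_site.items()}
```

Over an 18-month survey, the same aggregation run per month would also expose the seasonal variation needed to calibrate the virtual wind tunnel.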

Keywords: computer fluid dynamic simulation in urban environment, wind energy harvesting devices, net-zero energy building, urban wind behavior simulation, advanced building skin design methodology

Procedia PDF Downloads 101