Search results for: computer generated holograms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5587

877 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea

Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal

Abstract:

Tsunamis, landslides, ground shaking leading to liquefaction, infrastructure collapse, and conflagration induced by tectonism are common earthquake hazards experienced worldwide. Apart from human casualties, damage to built-up infrastructure such as roads, bridges, buildings, and other property is a collateral consequence. Appropriate planning, based on proper evaluation and assessment of the potential level of earthquake hazard at a site, must precede development in order to safeguard people's welfare, infrastructure, and other property. The resulting information can serve as a tool to help minimize earthquake risk and to foster appropriate construction design and the formulation of building codes for a particular site. Different disciplines adopt different approaches to assessing and monitoring earthquake hazard throughout the world. In the present study, the potential of GIS and Remote Sensing was utilized to evaluate and assess the earthquake hazards of the study region. Subsurface geology and geomorphology were the common factors assessed and integrated within a GIS environment, coupled with seismicity data layers such as Peak Ground Acceleration (PGA), historical earthquake magnitude, and earthquake depth, to prepare liquefaction potential zones (LPZ) culminating in an earthquake hazard zonation of the study sites. Liquefaction can eventuate in the aftermath of severe ground shaking where site soil conditions, geology, and geomorphology are amenable; these site conditions, which constitute the wave propagation media, were assessed to identify the potential zones. The precept is that during any earthquake event a seismic wave is generated and propagates from the earthquake focus to the surface. As it propagates, it passes through geological and geomorphological features and specific soils, which, according to their strength, stiffness, and moisture content, aggravate or attenuate the strength of the wave propagating to the surface. Accordingly, the resulting intensity of shaking may or may not culminate in the collapse of built-up infrastructure. For the earthquake hazard zonation, the overall assessment was carried out by integrating the seismicity data layers with the LPZ. Multi-Criteria Evaluation (MCE) with Saaty's Analytical Hierarchy Process (AHP) was adopted for this study: a GIS technique that integrates several factors (thematic layers) that can potentially contribute to earthquake-triggered liquefaction. The factors are weighted and ranked in order of their contribution to earthquake-induced liquefaction, and the weightings and rankings assigned to each factor are normalized with the AHP technique, as sketched in the example below. The spatial analysis tools in ArcGIS 10, i.e., the raster calculator, reclassification, and overlay analysis, were mainly employed in the study. The final outputs of the LPZ and earthquake hazard zones were reclassified into 'Very High', 'High', 'Moderate', 'Low', and 'Very Low' to indicate the levels of hazard within the study region.
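
As a rough illustration of the AHP weighting step described above, the following minimal Python sketch derives normalized weights from a Saaty pairwise comparison matrix and checks the consistency ratio before a weighted raster overlay. The comparison values, factor ordering, and stand-in rasters are hypothetical placeholders, not the judgments actually used in the study.

```python
import numpy as np

# Hypothetical Saaty pairwise comparison matrix for five thematic layers:
# [geology, geomorphology, PGA, magnitude, depth]
A = np.array([
    [1,   2,   1/3, 3,   4],
    [1/2, 1,   1/4, 2,   3],
    [3,   4,   1,   5,   6],
    [1/3, 1/2, 1/5, 1,   2],
    [1/4, 1/3, 1/6, 1/2, 1],
])

# The principal eigenvector of A gives the normalized AHP weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()

# Consistency check: CI = (lambda_max - n)/(n - 1); CR = CI/RI, CR < 0.1 acceptable.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
CR = CI / 1.12          # 1.12 is Saaty's random index for n = 5
print("weights:", np.round(w, 3), " CR:", round(CR, 3))

# Weighted overlay, mimicking the raster calculator: each layer is a
# reclassified raster ranked 1 (very low) to 5 (very high).
layers = np.random.randint(1, 6, size=(5, 4, 4))   # stand-in 4x4 rasters
hazard_index = np.tensordot(w, layers, axes=1)      # weighted sum per cell
```

The continuous hazard index would then itself be reclassified into the five zones, 'Very Low' through 'Very High'.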

Keywords: hazard micro-zonation, liquefaction, multi-criteria evaluation, tectonism

Procedia PDF Downloads 264
876 Best Practices to Enhance Patient Security and Confidentiality When Using E-Health in South Africa

Authors: Lethola Tshikose, Munyaradzi Katurura

Abstract:

Information and Communication Technology (ICT) plays a critical role in improving daily healthcare processes. South African healthcare organizations have adopted information systems to integrate their patient records, which has made things much easier because patient information is now accessible at any time. The primary purpose of this research study was to investigate the best practices that can be applied to enhance patient security and confidentiality when using e-health systems in South Africa. Security and confidentiality are critical in healthcare organizations, as they ensure the safety of electronic health records (EHRs). The research study used an inductive research approach that included a thorough literature review; therefore, no data was collected. The paper's scope included patient data and the possible security threats associated with healthcare systems. According to the study, South African healthcare organizations have encountered various patient data security and confidentiality issues. The study also revealed that, when it comes to handling patient data, health professionals sometimes make mistakes: some may not be computer literate, which posed issues and caused data to be tampered with. The paper recommends that healthcare organizations ensure that security measures are adequately supported and promoted by their IT departments, so that adequate resources are allocated to keeping patient data secure and confidential. Healthcare organizations must correctly apply the standards set up by IT specialists to solve patient data security and confidentiality issues, and must make sure that their organizational structures are adaptable enough to improve security and confidentiality.

Keywords: E-health, EHR, security, confidentiality, healthcare

Procedia PDF Downloads 51
875 Approaches to Ethical Hacking: A Conceptual Framework for Research

Authors: Lauren Provost

Abstract:

The digital world remains increasingly vulnerable, making the development of effective cybersecurity approaches ever more critical to supporting the success of the digital economy and national security. Although approaches to cybersecurity have shifted and improved in the last decade with new models, especially with cloud computing and mobility, a record number of high-severity vulnerabilities was recorded in 2020 in the National Vulnerability Database (NVD) maintained by the National Institute of Standards and Technology (NIST). This is due, in part, to the increasing complexity of cyber ecosystems. Security must be approached with a more comprehensive, multi-tool strategy that addresses the complexity of cyber ecosystems, including the human factor. Ethical hacking has emerged as such an approach: a more effective, multi-strategy, comprehensive approach to cybersecurity's most pressing needs, especially understanding the human factor. Research on ethical hacking, however, is limited in scope. The two main objectives of this work are to (1) provide highlights of case studies in ethical hacking and (2) provide a conceptual framework for research in ethical hacking that embraces and addresses both technical and nontechnical security measures. Recommendations include an improved conceptual framework for research centered on ethical hacking that addresses the many factors and attributes of significant attacks that threaten computer security, and a more robust, integrative, multi-layered framework embracing the complexity of cybersecurity ecosystems.

Keywords: ethical hacking, literature review, penetration testing, social engineering

Procedia PDF Downloads 212
874 Virtual Prototyping of LED Chip Scale Packaging Using Computational Fluid Dynamics and Finite Element Method

Authors: R. C. Law, Shirley Kang, T. Y. Hin, M. Z. Abdullah

Abstract:

LED technology has evolved aggressively in recent years, from the incandescent bulb of earlier days to packages as small as the chip scale package, and it will continue to stay bright in the future. As such, there is tremendous pressure to stay competitive in the market by optimizing products to the next level of performance and reliability within the shortest time to market. This changes the conventional way of product design and development to virtual prototyping by means of Computer Aided Engineering (CAE), comprising the deployment of the Finite Element Method (FEM) and Computational Fluid Dynamics (CFD). FEM accelerates the investigation for early detection of failures such as cracks, improves the thermal performance of the system, and enhances solder joint reliability. CFD helps to simulate the flow pattern of the molding material as a function of temperature and molding parameter settings to evaluate failures such as voids and displacement. This paper briefly discusses the procedures and applications of FEM in thermal stress and solder joint reliability analysis and of CFD in compression molding of LED CSP. The integration of virtual prototyping into product development has greatly reduced the time to market, and many successful achievements, with a minimized number of evaluation iterations across materials, process settings, and package architecture variants, have been materialized with this approach.
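
To make the driver behind the thermal-stress FEM analysis concrete, here is a back-of-envelope sketch of the coefficient-of-thermal-expansion (CTE) mismatch stress, σ ≈ E·Δα·ΔT, that builds up between a solder joint and its substrate over a temperature swing. The material constants are generic textbook figures, not the package data used in this work; a full FEM model refines this estimate with geometry, plasticity, and creep.

```python
# Back-of-envelope CTE-mismatch stress: sigma ~ E * |alpha1 - alpha2| * dT.
# All values are illustrative approximations, not the paper's data.
E_solder = 50e9          # Young's modulus of a SAC solder, Pa (approx.)
alpha_solder = 22e-6     # CTE of the solder, 1/K (approx.)
alpha_substrate = 7e-6   # CTE of a ceramic LED substrate, 1/K (approx.)
delta_T = 100.0          # reflow-to-ambient temperature swing, K

sigma = E_solder * abs(alpha_solder - alpha_substrate) * delta_T
print(f"thermal mismatch stress ~ {sigma / 1e6:.0f} MPa")   # ~75 MPa
```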

Keywords: LED, chip scale packaging (CSP), computational fluid dynamics (CFD), virtual prototyping

Procedia PDF Downloads 284
873 Educational Leadership and Artificial Intelligence

Authors: Sultan Ghaleb Aldaihani

Abstract:

1. The Increasingly Complex Environment of Educational Leadership:
- The environment in which educational leadership takes place is becoming increasingly complex due to factors like globalization and rapid technological change.
- This is creating a "leadership gap" where the complexity of the environment outpaces the ability of leaders to effectively respond.
- Educational leadership involves guiding teachers and the broader school system towards improved student learning and achievement.
2. Implications of Artificial Intelligence (AI) in Educational Leadership:
- AI has great potential to enhance education, such as through intelligent tutoring systems and automating routine tasks to free up teachers.
- AI can also have significant implications for educational leadership by providing better information and data-driven decision-making capabilities.
- Computer-adaptive testing can provide detailed, individualized data on student learning that leaders can use for instructional decisions and accountability.
3. Enhancing Decision-Making Processes:
- Statistical models and data mining techniques can help identify at-risk students earlier, allowing for targeted interventions.
- Probability-based models can diagnose students likely to drop out, enabling proactive support.
- These data-driven approaches can make resource allocation and decision-making more effective.
4. Improving Efficiency and Productivity:
- AI systems can automate tasks and change processes to improve the efficiency of educational leadership and administration.
- Integrating AI can free up leaders to focus more on their role's human, interactive elements.

Keywords: education, leadership, technology, artificial intelligence

Procedia PDF Downloads 31
872 Stock Prediction and Portfolio Optimization Thesis

Authors: Deniz Peksen

Abstract:

This thesis aims to predict the trend movement of stock closing prices and to maximize a portfolio by utilizing the predictions. In this context, the study aims to define a stock portfolio strategy from models created using Logistic Regression, Gradient Boosting, and Random Forest. Recently, predicting the trend of stock prices has gained a significant role in making buy and sell decisions and generating returns with investment strategies formed by machine-learning-based decisions. There are plenty of studies in the literature on the prediction of stock prices in capital markets using machine learning methods, but most of them focus on closing prices instead of the direction of the price trend. Our study differs from the literature in terms of target definition: ours is a classification problem focused on the market trend over the next 20 trading days. To predict the trend direction, fourteen years of data were used for training, the following three years for validation, and the last three years for testing: training data lie between 2002-06-18 and 2016-12-30, validation data between 2017-01-02 and 2019-12-31, and testing data between 2020-01-02 and 2022-03-17. We determined the Hold Stock Portfolio, the Best Stock Portfolio, and the USD-TRY exchange rate as benchmarks to outperform, and compared the return of our machine-learning-based portfolio on the test data against them. We assessed model performance with the help of ROC-AUC scores and lift charts, and used Logistic Regression, Gradient Boosting, and Random Forest with a grid search approach to fine-tune hyperparameters (a minimal sketch of this pipeline follows below). As a result of the empirical study, the existence of uptrends and downtrends in five stocks could not be predicted by the models. When these predictions were used to define buy and sell decisions for a model-based portfolio, the portfolio failed on the test dataset: model-based buy and sell decisions generated a stock portfolio strategy whose returns could not outperform non-model portfolio strategies. We found that any effort to predict a trend formulated on the stock price is a challenge, and we obtained the same result that the Random Walk Theory claims, namely that stock prices and price changes are unpredictable. Our model iterations failed on the test dataset: although we built several good models on the validation dataset, they did not generalize. We also discovered that the complex models did not provide any advantage or additional performance compared with Logistic Regression; more complexity did not lead to better performance, and using a complex model is not the answer to the stock prediction problem. Our approach of predicting the trend instead of the price converted the problem into classification; however, this labeling approach did not solve the stock prediction problem or refute the Random Walk Theory for stock prices.
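
A minimal sketch of the kind of pipeline described, assuming synthetic price data and illustrative features in place of the thesis's actual feature set and data source: a 20-day-ahead trend label, a chronological split, and grid-searched Logistic Regression, Random Forest, and Gradient Boosting models scored by ROC-AUC.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for one stock's daily closes; real data would come
# from the exchange feed used in the thesis.
rng = np.random.default_rng(0)
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 4000))))

# Label: does the close rise over the next 20 trading days?
horizon = 20
y = (close.shift(-horizon) > close).astype(int)
X = pd.DataFrame({
    "ret_1d": close.pct_change(),
    "ret_20d": close.pct_change(horizon),
    "vol_20d": close.pct_change().rolling(horizon).std(),
}).dropna()
y = y.loc[X.index][:-horizon]   # drop rows whose label looks past the data
X = X[:-horizon]

# Chronological train/validation/test split, mirroring the 14/3/3-year scheme.
n = len(X)
tr, va = int(n * 0.7), int(n * 0.85)
X_tr, y_tr = X[:tr], y[:tr]
X_te, y_te = X[va:], y[va:]

models = {
    "logit": (LogisticRegression(max_iter=1000), {"C": [0.1, 1, 10]}),
    "rf": (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300]}),
    "gb": (GradientBoostingClassifier(random_state=0), {"learning_rate": [0.05, 0.1]}),
}
for name, (est, grid) in models.items():
    gs = GridSearchCV(est, grid, cv=TimeSeriesSplit(n_splits=3), scoring="roc_auc")
    gs.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, gs.predict_proba(X_te)[:, 1])
    print(f"{name}: test ROC-AUC = {auc:.3f}")  # ~0.5 echoes the random-walk result
```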

Keywords: stock prediction, portfolio optimization, data science, machine learning

Procedia PDF Downloads 77
871 Analysis of Some Solutions to Protect the Tombolo of Giens

Authors: Yves Lacroix, Van Van Than, Didier Léandri, Pierre Liardet

Abstract:

The western Tombolo of the Giens peninsula in southern France, known as Almanarre beach, is subject to coastal erosion. We are using computer simulation in order to propose solutions to stop this erosion. Our aim was first to determine the main factors of this erosion, and then to successfully apply a coupled hydro-sedimentological numerical model based on observations and measurements performed on the site for decades. We gathered all available information and data about waves, winds, currents, tides, bathymetry, the coastline, and sediments concerning the site. These were divided into two sets: one devoted to calibrating a numerical model using the Mike 21 software, the other serving as a reference in order to numerically compare the present situation with what it could be if different types of underwater constructions were implemented. This paper presents the first part of the study: selecting and merging different sources into a coherent database, identifying the main erosion factors, and calibrating the coupled model against the selected reference period. Our results show a calibration of the numerical model with good fitting coefficients. They also show that winter south-western storm events conjugated with depressive weather conditions constitute a major factor of erosion, mainly due to wave impact in the northern part of the Almanarre beach, while the combined impact of current and wind is shown to be negligible.

Keywords: Almanarre beach, coastal erosion, hydro-sedimentological, numerical model

Procedia PDF Downloads 318
860 Sentiment Analysis of Tourist Online Reviews Concerning Lisbon Cultural Patrimony, as a Contribution to the City Attractiveness Evaluation

Authors: Joao Ferreira Do Rosario, Maria De Lurdes Calisto, Ana Teresa Machado, Nuno Gustavo, Rui Gonçalves

Abstract:

The tourism sector is increasingly important to the economic performance of countries and a relevant theme in academic research, which increases the importance of understanding how and why tourists evaluate tourism locations. The city of Lisbon is currently a tourist destination of excellence in the European and worldwide panorama, registering significant growth in the economic weight of its tourist activities in the Gross Value Added of the region. Although there is research on the feedback of those who visit tourist sites, and different methodologies for studying tourist sites have been applied, this research seeks to be innovative in its objective of obtaining insights into the competitiveness, in terms of attractiveness, of the city of Lisbon as a tourist destination, based on the feedback of tourists on the Facebook pages of the most visited museums and monuments of Lisbon, an interpretation that is relevant to the development of tourist attraction strategies. The intangible dimension of the tourism offer, due to its unique condition of simultaneous production and consumption, makes eWOM particularly relevant: the testimony of consumers is a decisive factor in the decision-making and buying process in tourism. Online social networks are among the platforms most used by tourists to evaluate the attractive points of a tourism destination (e.g., cultural and historical heritage), and this user-generated feedback provides relevant information about customer-tourists. This information relates to the tourist experience and represents the true voice of the customer; furthermore, this voice, perceived by others as genuine in contrast to marketing messages, may have a powerful word-of-mouth influence on other potential tourists. The relevance of online review sharing, however, is complex, considering social media users' different profiles, the different sources of information available, and the reputation associated with each source. In light of these trends, our research focuses on tourists' feedback on the Facebook pages of the most visited museums and monuments of Lisbon that contribute to its attractiveness as a tourism destination. Sentiment analysis is the methodology selected for this research, using publicly available information in the online context, which was deemed an appropriate non-participatory observation method; a minimal sketch of such an analysis is given below. Data will be collected from the Facebook pages of two museums (Museu dos Coches and Museu de Arte Antiga) and three monuments (Mosteiro dos Jerónimos, Torre de Belém, and Panteão Nacional) over a period of one year. The research results will help in the evaluation of the considered places by tourists and their contribution to the city's attractiveness, and will present insights helpful for management decisions regarding these museums and monuments. The results of this study will also contribute to better knowledge of the tourism sector, namely the identification of attributes in the evaluation and choice of the city of Lisbon as a tourist destination. Further research will evaluate Lisbon's attraction points for tourists in categories beyond museums and monuments, evaluate tourist feedback from other sources such as TripAdvisor, and apply the same methodology to other cities and regions.
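
A minimal sketch of such a sentiment analysis, assuming English-language comments and the off-the-shelf VADER analyzer; the venue names are from the study, but the comments and the choice of analyzer are illustrative only.

```python
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Hypothetical visitor comments; the study would collect these from the
# Facebook pages of the five museums and monuments over one year.
reviews = {
    "Mosteiro dos Jerónimos": [
        "Absolutely stunning architecture, worth the queue!",
        "Too crowded and the ticket line was chaotic.",
    ],
    "Torre de Belém": ["Beautiful at sunset, a must-see."],
}

analyzer = SentimentIntensityAnalyzer()
for place, comments in reviews.items():
    scores = [analyzer.polarity_scores(c)["compound"] for c in comments]
    print(f"{place}: mean compound sentiment = {sum(scores) / len(scores):+.2f}")

# Note: VADER is English-only; Portuguese comments would need translation
# or a multilingual sentiment model before aggregation.
```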

Keywords: Lisbon tourism, opinion mining, sentiment analysis, tourism location attractiveness evaluation

Procedia PDF Downloads 233
869 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing

Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan

Abstract:

This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions whose parameters capture the dependence between the variable of interest and the continuous latent state variable (the regime), and it has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in capturing the stylized facts known for stock returns, namely volatility clustering, the leverage effect, skewness, kurtosis, and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are extracted from a matrix of world indices by principal component analysis (PCA); a minimal sketch of this extraction is given below. The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them by order of relative importance: it computes a historical cross-asset covariance matrix and identifies the principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and the standard Black-Scholes model. We show that our model outperforms the MN-GARCHX benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model, by capturing the stylized facts known for index returns, namely volatility clustering, the leverage effect, skewness, kurtosis, and regime dependence.
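
A minimal sketch of the PCA factor extraction, assuming a synthetic return matrix in place of the paper's actual panel of international indices:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for a T x N matrix of daily returns on world indices.
rng = np.random.default_rng(1)
T, N = 1500, 12
common = rng.normal(0, 0.01, (T, 3)) @ rng.normal(0, 1, (3, N))  # 3 latent drivers
returns = common + rng.normal(0, 0.005, (T, N))                  # plus noise

# Standardize, then keep the first three principal components as the
# exogenous factors of the GARCHX variance equation.
Z = (returns - returns.mean(0)) / returns.std(0)
factors = PCA(n_components=3).fit_transform(Z)

# PCA scores are mutually uncorrelated by construction.
print("factor correlation matrix ~ I:\n", np.round(np.corrcoef(factors.T), 2))
# Each factor series X_t would then enter the GARCHX conditional variance,
# e.g. h_t = omega + alpha*eps_{t-1}^2 + beta*h_{t-1} + gamma' X_{t-1}.
```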

Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium

Procedia PDF Downloads 296
868 The Relations between Language Diversity and Similarity and Adults' Collaborative Creative Problem Solving

Authors: Z. M. T. Lim, W. Q. Yow

Abstract:

Diversity in individual problem-solving approaches, culture, and nationality has been shown to have positive effects on collaborative creative processes in organizational and scholastic settings. For example, diverse graduate and organizational teams consisting of members with both structured and unstructured problem-solving styles were found to have more creative ideas on a collaborative idea generation task than teams that comprised solely members with either structured or unstructured problem-solving styles. However, being different may not always provide benefits to the collaborative creative process. In particular, speaking different languages may hinder mutual engagement through impaired communication and thus collaboration; instead, sharing similar languages may have facilitative effects on mutual engagement in collaborative tasks. However, no studies have explored the relations between language diversity and adults' collaborative creative problem solving. Sixty-four Singaporean English-speaking bilingual undergraduates were paired into similar- or dissimilar-language pairs based on the second language they spoke (e.g., for similar-language pairs, both participants spoke English-Mandarin; for dissimilar-language pairs, one participant spoke English-Mandarin and the other spoke English-Korean). Each participant completed Raven's Progressive Matrices individually. Next, they worked in pairs to complete a collaborative divergent thinking task where they used mind-mapping techniques to brainstorm ideas on a given problem together (e.g., how to keep insects out of the house). Lastly, the pairs worked on a collaborative insight problem-solving task (the Triangle of Coins puzzle) where they needed to flip a triangle of ten coins around by moving only three coins. Pairs who had prior knowledge of the Triangle of Coins puzzle were asked to complete an equivalent matchstick task instead, where they needed to make seven squares by moving only two matchsticks in a given array of matchsticks. Results showed that, after controlling for intelligence, similar-language pairs completed the collaborative insight problem-solving task faster than dissimilar-language pairs. Intelligence also moderated these relations: among adults of lower intelligence, similar-language pairs solved the insight problem-solving task faster than dissimilar-language pairs, while these differences in speed were not found in adults of higher intelligence. No differences were found in the number of ideas generated in the collaborative divergent thinking task between similar-language and dissimilar-language pairs. In conclusion, sharing similar languages seems to enrich collaborative creative processes, and these effects were especially pertinent to pairs of lower intelligence. This provides guidelines for the formation of groups based on shared languages in collaborative creative processes. However, the positive effects of shared languages appear to be limited to the insight problem-solving task and not the divergent thinking task. This could be due to the facilitative effects of other factors of diversity found in previous literature: background diversity, for example, may have a larger facilitative effect on the divergent thinking task than on the insight problem-solving task, due to the varied experiences individuals bring to the task.
Overall, this study contributes to the understanding of the effects of language diversity on collaborative creative processes and qualifies the generally positive effects that diversity has on these processes.

Keywords: bilingualism, diversity, creativity, collaboration

Procedia PDF Downloads 312
867 Comparison of Two Online Intervention Protocols on Reducing Habitual Upper Body Postures: A Randomized Trial

Authors: Razieh Karimian, Kim Burton, Mohammad Mehdi Naghizadeh, Maryam Karimian

Abstract:

Introduction: Habitual upper body postures are associated with online learning during the COVID-19 pandemic. This study explored whether adding an exercise routine to an ergonomic advice intervention improves these postures. Methods: In this randomized trial, 42 male adolescent students with a forward head posture were randomly divided into two equal groups, one allocated to ergonomic advice alone and the other to ergonomic advice plus an exercise routine. The angles of forward head, shoulder, and back postures were measured with a photogrammetric profile technique before and after the 8-week intervention period. Findings: During home quarantine, 76% of the students used their mobile phones, while 35% used a table-chair-computer setup for online learning. Significant reductions of the forward head, shoulder, and back angles were found in both groups (P < 0.001), but the effect was significantly greater in the exercise group (P < 0.001): forward head, shoulder, and back angles were reduced by some 9, 6, and 5 degrees respectively, compared with 4 degrees for the forward head angle and 2 degrees for the shoulder and back angles with ergonomic advice alone. Conclusion: The exercise routine produced a greater improvement in habitual upper body postures than ergonomic advice alone, a finding that may extend beyond online learning at home.

Keywords: randomized trial, online learning, adolescent, posture, exercise, ergonomic advice

Procedia PDF Downloads 62
866 Environmental Management Accounting Practices and Policies within the Higher Education Sector: An Exploratory Study of the University of KwaZulu Natal

Authors: Kiran Baldavoo, Mishelle Doorasamy

Abstract:

Universities have a role to play in the preservation of the environment, and this study evaluated the environmental management accounting (EMA) processes at the University of KwaZulu-Natal (UKZN). UKZN, a South African university, generates the same direct and indirect environmental impacts as the higher education sector worldwide. This is significant within the South African context, which is constantly plagued by the need to manage the already scarce resources of water and energy, evident in the water and energy restrictions imposed over recent years. The study's aim is to increase awareness of a structured approach to environmental management in order to achieve the strategic environmental goals of the university. The research studied the experiences of key managers within UKZN, with the purpose of exploring the potential factors that influence the decision to adopt and apply EMA within the higher education sector. The study comprised two objectives, namely understanding the current state of accounting practices for managing major environmental costs and identifying factors influencing EMA adoption within the university. The study adopted a case study approach comprising semi-structured interviews of key personnel involved in management accounting, environmental management, and academic schools within the university. Content analysis was performed on the transcribed interview data. A theoretical framework derived from the literature, with contingency and institutional theory as its basis, was adopted to guide data collection and focus the study. The findings of the first objective revealed a distinct lack of EMA utilization within the university: there was no distinct policy on EMA, resulting in minimal environmental cost information being brought to the attention of senior management. The university embraced the principles of environmental sustainability; however, efforts to improve internal environmental accountability, primarily from an accounting perspective, were absent. The findings of the second objective revealed that five key barriers contributed to the lack of EMA utilization within the university: attitudinal, informational, institutional, and technological barriers, and a lack of financial incentives. The results and findings of this study support the use and application of EMA within the higher education sector; participants concurred that EMA was underutilized and, if implemented, would realize significant benefits for both the university and the environment. Environmental management accounting is widely acknowledged as a key management tool that can facilitate improved financial and environmental performance via the concept of enhanced environmental accountability. Historically, research has concentrated primarily on the manufacturing industry, as it generates the greatest proportion of environmental impacts; however, service industries are also an integral component of environmental management, as they contribute significant direct and indirect environmental impacts. Educational institutions such as universities form part of the service sector and directly impact the environment through the consumption of paper, energy, and water, the solid waste generated, and the associated demands.

Keywords: environmental management accounting, environmental impacts, higher education, Southern Africa

Procedia PDF Downloads 118
865 Single Cell and Spatial Transcriptomics: A Beginner's Viewpoint from the Conceptual Pipeline

Authors: Leo Nnamdi Ozurumba-Dwight

Abstract:

Messenger ribonucleic acid (mRNA) molecules encode proteins. These protein-encoding mRNA molecules, which collectively constitute the transcriptome, when analyzed by RNA sequencing (RNA-seq), unveil the nature of gene expression in the sample. The obtained gene expression profiles provide clues to cellular traits and their dynamics, which can be studied in relation to function and responses. RNA-seq is a practical concept in genomics, as it enables the detection and quantitative analysis of mRNA molecules. Single-cell and spatial transcriptomics both present avenues for the exposition of the genomic characteristics of single cells and pooled cells in disease conditions such as cancer, autoimmune diseases, and hematopoietic diseases, among others, from investigated biological tissue samples. Single-cell transcriptomics helps conduct a direct assessment of each building unit of a tissue (the cell) during diagnosis and molecular gene expression studies. A typical technique to achieve this is single-cell RNA sequencing (scRNA-seq), which enables high-throughput gene expression studies. However, this technique generates expression data for many cells that lack the cells' positional coordinates within the tissue. As science develops, the use of complementary pre-established tissue reference maps, built with molecular and bioinformatics techniques, has innovatively sprung forth and is now used to resolve this setback and produce both levels of data in one shot of scRNA-seq analysis (a minimal sketch of a typical scRNA-seq workflow is given below). This is an emerging conceptual approach to methodology for integrative and progressively dependable transcriptomics analysis. It can support in-situ analysis for a better understanding of tissue functional organization, unveil new biomarkers for early-stage detection of diseases and therapeutic targets in drug development, and exposit the nature of cell-to-cell interactions; these are vital genomic signatures and characterizations for clinical applications. Over the past decades, RNA-seq has generated a wide array of information that is igniting bespoke breakthroughs and innovations in biomedicine. On the other side, spatial transcriptomics is tissue-level based and is utilized to study biological specimens having heterogeneous features. It exposits the gross identity of investigated mammalian tissues, which can then be used to study cell differentiation, track cell-line trajectory patterns and behavior, and examine regulatory homeostasis in disease states. It also requires referenced positional analysis of the genomic signatures assayed from the single cells in the tissue sample. Given these two approaches to RNA transcriptomics in varying quantities of cells, with avenues for appropriate resolution, both have made the study of gene expression from mRNA molecules interesting, progressive, and developmental, helping to tackle health challenges head-on.
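
As a pointer for beginners, the sketch below shows what a minimal scRNA-seq analysis looks like in practice using the scanpy library and one of its public demo datasets; the specific thresholds and the Leiden clustering step (which needs the leidenalg package) are common defaults, not prescriptions from this paper.

```python
# pip install scanpy leidenalg
import scanpy as sc

# A public ~2,700-cell demo dataset; a real study would start from its
# own cell-by-gene count matrix.
adata = sc.datasets.pbmc3k()

sc.pp.filter_cells(adata, min_genes=200)      # basic quality control
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)  # depth normalization
sc.pp.log1p(adata)

sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]
sc.pp.pca(adata)
sc.pp.neighbors(adata)                        # k-NN graph in PCA space
sc.tl.leiden(adata)                           # cluster cells into putative types
sc.tl.umap(adata)                             # 2-D embedding for visualization
sc.pl.umap(adata, color="leiden")
```

Spatial transcriptomics adds tissue coordinates on top of exactly this kind of expression matrix, which is what allows the positional mapping discussed above.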

Keywords: transcriptomics, RNA sequencing, single cell, spatial, gene expression

Procedia PDF Downloads 118
864 Empirical Study of Innovative Development of Shenzhen Creative Industries Based on Triple Helix Theory

Authors: Yi Wang, Greg Hearn, Terry Flew

Abstract:

In order to understand how cultural innovation occurs, this paper explores the interaction between universities, creative industries, and government in the creative economy of Shenzhen, China, using the Triple Helix framework. During the past two decades, the Triple Helix has been recognized as a new theory of innovation to inform and guide policy-making in national and regional development. Universities and governments around the world, especially in developing countries, have taken actions to strengthen connections with creative industries to develop regional economies. To date, research based on the Triple Helix model has focused primarily on science and technology collaborations, largely ignoring other fields. Hence, there is an opportunity to better understand how the Triple Helix framework might apply to the creative industries and what knowledge might be gleaned from such an undertaking. Since the late 1990s, the concept of 'creative industries' has been introduced into policy and academic discourse. The development of creative industries policy by city agencies has improved city wealth creation and economic capital; it claims to generate a 'new economy' of enterprise dynamics and activities for urban renewal through the arts and digital media, via knowledge transfer in knowledge-based economies. Creative industries also involve commercial inputs to the creative economy that dynamically reshape the city into an innovative culture. In particular, this paper concentrates on creative spaces (incubators, digital tech parks, maker spaces, art hubs) where academia, industry, and government interact. China has sought to enhance the brand of its manufacturing industry through cultural policy, aiming to shift the image of 'Made in China' to 'Created in China' and to give Chinese brands more international competitiveness in a global economy. Shenzhen is a notable example in China of an international knowledge-based city following this path. In 2009, the Shenzhen Municipal Government proposed the city slogan 'Build a Leading Cultural City' to signal its strong will to develop Shenzhen's cultural capacity and creativity; the vision is for Shenzhen to become a cultural innovation center, a regional cultural center, and an international cultural city. However, there has been a lack of attention to Triple Helix interactions in the creative industries in China. In particular, limited knowledge exists about how interactions in co-located creative spaces within Triple Helix networks influence city-based innovation; that is, the roles of the participating institutions need to be better understood. Thus, this paper discusses the interplay between universities, creative industries, and government in Shenzhen. Secondary analysis and documentary analysis are used as methods to practically ground and illustrate the theoretical framework. Furthermore, this paper explores how creative spaces are being used to implement the Triple Helix in the creative industries, in particular the new combination of resources generated from consolidation and interaction among the institutions. This study thus provides an innovative lens through which to understand the components, relationships, and functions that exist within creative spaces by applying the Triple Helix framework to the creative industries.

Keywords: cultural policy, creative industries, creative city, triple helix

Procedia PDF Downloads 203
863 Information Retrieval from Internet Using Hand Gestures

Authors: Aniket S. Joshi, Aditya R. Mane, Arjun Tukaram

Abstract:

In the 21st century, in the era of the e-world, people are continuously updated with daily information such as weather conditions, news, stock market updates, new projects, cricket scores, and other sports. In busy situations, they want this information with minimal use of the keyboard and minimal time. Today, in order to get such information, users have to repeat the same mouse and keyboard actions, which costs time and causes inconvenience. In India, due to rural backgrounds, many people are also not very familiar with the use of computers and the Internet. Moreover, in small clinics, small offices, hotels, and airports, there should be a system that retrieves daily information with minimal keyboard and mouse actions. We plan to design an application-based project that can easily retrieve information with minimal keyboard and mouse actions and make the task more convenient and easier. This is possible with an image processing application that captures real-time hand gestures, matches them in the system, and retrieves the corresponding information. Once a function is selected with a hand gesture, the system reports the action information to the user. In this project, we use real-time hand gesture movements to select the required option, which is stored on the screen in the form of RSS feeds; the gesture selects the required option and the corresponding information is displayed. A real-time hand gesture makes the application handier and easier to use (a minimal sketch of the skin-color detection step is given below).
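
A minimal sketch of the skin-color detection step, using OpenCV's HSV color model and largest-blob selection; the HSV bounds are common starting values that need tuning per camera and lighting, and the gesture-to-RSS mapping itself is only indicated in a comment.

```python
import cv2
import numpy as np

# Rough skin-tone bounds in HSV; tune these per camera and lighting.
LOWER = np.array([0, 40, 60], dtype=np.uint8)
UPPER = np.array([25, 180, 255], dtype=np.uint8)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)                    # skin-colored pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            np.ones((5, 5), np.uint8))       # remove speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)            # largest blob = hand
        x, y, w, h = cv2.boundingRect(hand)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # Tracking this box's centroid across frames yields the gesture
        # trajectory that a recognizer would map to an RSS-feed selection.
    cv2.imshow("hand", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```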

Keywords: hand detection, hand tracking, hand gesture recognition, HSV color model, Blob detection

Procedia PDF Downloads 283
862 Analyzing the Investment Decision and Financing Method of the French Small and Medium-Sized Enterprises

Authors: Eliane Abdo, Olivier Colot

Abstract:

SMEs are always considered a national priority due to their contribution to job creation, innovation, and growth. Once the start-up phase is crossed with encouraging results, a company enters the growth phase; in order to improve its competitiveness and maintain and increase its market share, the company finds it necessary, even obligatory, to develop its tangible and intangible investments. SMEs are generally closed companies with a specific and critical financial situation, limited resources, and difficulty accessing capital markets; their shareholders live in a constant conflict between their independence and their need to increase capital, which leads to the entry of new shareholders. Capital structure has always been considered the core of research in corporate finance; moreover, the financial crisis and its repercussions on credit availability, especially for SMEs, have made SME financing a hot topic. On the other hand, financial theories do not provide answers to capital structure questions; they offer tools and modes of financing that are more accessible to larger companies. Yet SMEs' capital structure cannot be independent of their governance structure. Classic financial theory supposes independence between the investment decision and the financing decision: investment determines the volume of funding, but not the split between internal and external funds. In this context, we find it interesting to test the hypothesis that SMEs respond positively to the financial theories applied to large firms and to check whether they are constrained by the conventional solutions used by large companies. This research therefore focuses on the analysis of the resource structure of SMEs in parallel with their investment structure, in order to highlight a link between their asset and liability structures. We grounded our conceptual model in two main theoretical frameworks, the pecking order theory and the trade-off theory, taking the characteristics of SMEs into consideration. Our data were generated from the DIANE database. Five hypotheses were tested via a panel regression to understand the type of dependence between the financing methods of 3,244 French SMEs and the development of their investment over a period of 10 years (2007-2016); a minimal sketch of such a regression is given below. The results show dependence between equity and internal financing in the case of intangible investment development; moreover, this type of business is constrained in its access to financial debt, since the guarantees provided are not sufficient to meet the banks' requirements. For tangible investment development, however, SMEs rely sequentially on internal financing, bank borrowing, and new share issuance or hybrid financing, which is consistent with the pecking order theory. We therefore conclude that unlisted SMEs incur more financial debt to finance their tangible investments than their intangible ones, yet they always prefer internal financing as a first choice. This seems to be confirmed by the finding that the profitability of the company is negatively related to the increase in financial debt. Thus, the predictions of the pecking order theory seem the most plausible: SMEs primarily rely on self-financing and then go into debt as a priority to finance their financial deficit.
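
A minimal sketch of one such panel regression, assuming hypothetical file and column names in place of the DIANE variables; the linearmodels package supplies the fixed-effects estimator.

```python
# pip install linearmodels
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical columns standing in for DIANE variables: one row per
# firm-year over 2007-2016.
df = pd.read_csv("sme_panel.csv").set_index(["firm", "year"])

# Does asset tangibility drive financial debt, controlling for
# profitability and size, with firm fixed effects?
model = PanelOLS.from_formula(
    "debt_ratio ~ 1 + tangibility + profitability + log_assets + EntityEffects",
    data=df,
)
res = model.fit(cov_type="clustered", cluster_entity=True)
print(res.summary)
# Under the pecking order theory one would expect profitability to load
# negatively on debt_ratio, as the abstract reports.
```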

Keywords: capital structure, investments, life cycle, pecking order theory, trade off theory

Procedia PDF Downloads 108
861 Biomimetic Dinitrosyl Iron Complexes: A Synthetic, Structural, and Spectroscopic Study

Authors: Lijuan Li

Abstract:

Nitric oxide (NO) has become a fascinating entity in biological chemistry over the past few years. It is a gaseous, lipophilic radical molecule that plays important roles in several physiological and pathophysiological processes in mammals, including activating the immune response, serving as a neurotransmitter, regulating the cardiovascular system, and acting as an endothelium-derived relaxing factor. NO functions in eukaryotes both as a signal molecule at nanomolar concentrations and as a cytotoxic agent at micromolar concentrations; the latter arises from the ability of NO to react readily with a variety of cellular targets, leading to thiol S-nitrosation, amino acid N-nitrosation, and nitrosative DNA damage. Nitric oxide can readily bind to metals to give metal-nitrosyl (M-NO) complexes. Some of these species are known to play roles in biological NO storage and transport, and these complexes have different biological, photochemical, or spectroscopic properties due to distinctive structural features. These discoveries have spawned great interest in the development of transition metal complexes containing NO, particularly iron complexes, which are central to the role of nitric oxide in the body. Spectroscopic evidence would appear to implicate species of the "Fe(NO)2+" type in a variety of processes ranging from polymerization and carcinogenesis to nitric oxide stores. Our research focuses on the isolation and structural study of non-heme iron nitrosyls that mimic biologically active compounds and can potentially be used in anticancer drug therapy. We have shown that reactions between Fe(NO)2(CO)2 and a series of imidazoles generate new non-heme iron nitrosyls of the form Fe(NO)2(L)2 [L = imidazole, 1-methylimidazole, 4-methylimidazole, benzimidazole, 5,6-dimethylbenzimidazole, and L-histidine], and a tetrameric cluster of the form [Fe(NO)2(L)]4 (L = Im, 4-MeIm, BzIm, and Me2BzIm), resulting from the interaction of Fe(NO)2 with a series of substituted imidazoles, was prepared. Recently, a series of sulfur-bridged iron dinitrosyl complexes with the general formula [Fe(µ-RS)(NO)2]2 (R = n-Pr, t-Bu, 6-methyl-2-pyridyl, and 4,6-dimethyl-2-pyrimidyl) was synthesized by the reaction of Fe(NO)2(CO)2 with thiols or thiolates. Their structures and properties were studied by IR, UV-vis, 1H-NMR, and EPR spectroscopy, electrochemistry, X-ray diffraction analysis, and DFT calculations. The IR spectra of these complexes display one weak and two strong NO stretching frequencies (νNO) in solution, but only two strong νNO in the solid state; DFT calculations suggest that two spatial isomers of these complexes, differing by 3 kcal in energy, coexist in solution. The paramagnetic complexes [Fe2(µ-RS)2(NO)4]− have also been investigated by EPR spectroscopy. Interestingly, the EPR spectra of the complexes exhibit an isotropic signal at g = 1.998-2.004 without hyperfine splitting. These observations are consistent with the results of the calculations, which reveal that the unpaired electron delocalizes predominantly over the two sulfur and two iron atoms. The difference in g values between the reduced form of the iron-sulfur clusters and typical monomeric dinitrosyl iron complexes is explained, for the first time, by the difference in unpaired electron distribution between the two types of complexes, which provides the theoretical basis for the use of the g value as a spectroscopic tool to differentiate these biologically active complexes.

Keywords: dinitrosyl iron complex, metal nitrosyl, non-heme iron, nitric oxide

Procedia PDF Downloads 301
860 The Post-Hegemony of Post-Capitalism: Towards a Political Theory of Open Cooperativism

Authors: Vangelis Papadimitropoulos

Abstract:

The paper is part of the research project 'Techno-Social Innovation in the Collaborative Economy', funded by the Hellenic Foundation for Research and Innovation for the years 2022-2024. The research project examines the normative and empirical conditions of grassroots technologically driven innovation, potentially enabling the transition towards a commons-oriented post-capitalist economy. The project carries out a conceptually led and empirically grounded multi-case study of the digital commons, open-source technologies, platform cooperatives, open cooperatives, and Decentralized Autonomous Organizations (DAOs) on the blockchain. The methodological scope of the research is interdisciplinary inasmuch as it comprises political theory, economics, sustainability science, and computer science, among others. The research draws specifically on Michel Bauwens and Vasilis Kostakis' model of open cooperativism between the commons, ethical market entities, and a partner state. Bauwens and Kostakis advocate a commons-based, counter-hegemonic post-capitalist transition beyond and against neoliberalism. The research further employs Laclau and Mouffe's discourse theory of hegemony to introduce a post-hegemonic conceptualization of the model of open cooperativism. The paper thus aims to outline the theoretical contribution of the research project to contemporary political theory debates on post-capitalism and the collaborative economy.

Keywords: open cooperativism, techno-social innovation, post-hegemony, post-capitalism

Procedia PDF Downloads 62
859 Low Cost LiDAR-GNSS-UAV Technology Development for PT Garam’s Three Dimensional Stockpile Modeling Needs

Authors: Mohkammad Nur Cahyadi, Imam Wahyu Farid, Ronny Mardianto, Agung Budi Cahyono, Eko Yuli Handoko, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

Unmanned aerial vehicle (UAV) technology offers cost efficiency and fast data retrieval. Technologies such as UAV, GNSS, and LiDAR are here combined into a single system so that each covers the others' deficiencies. This integrated system aims to increase the accuracy of calculating the volume of the land stockpiles of PT Garam (a salt company). UAV applications are used to obtain geometric data and capture textures that characterize the structure of objects. This study uses the Taror 650 Iron Man drone with four propellers, which can fly for 15 minutes. The LiDAR data can be classified according to the number of image acquisitions processed in the software, utilizing photogrammetry and Structure-from-Motion point cloud principles. LiDAR enables data acquisition for the creation of point clouds, three-dimensional models, digital surface models, contours, and orthomosaics with high accuracy, but it has the drawback that its coordinate positions have only a local reference. Therefore, the researchers use GNSS, LiDAR, and drone multi-sensor technology to map the salt stockpiles on open land and in warehouses, a survey carried out by PT Garam twice a year, where the previous process used terrestrial methods and manual calculations with sacks. LiDAR alone must be combined with the UAV to overcome data acquisition limitations, because it only passes along the right and left sides of the object, especially when applied to a salt stockpile. The UAV is flown to provide wide coverage, assisted by the integration of the 200-gram LiDAR system, so that the flying angle can be optimal during the flight. Using LiDAR for low-cost mapping surveys makes it easier for surveyors and academics to obtain fairly accurate data at a more economical price: as a survey tool, LiDAR is available at a low price, around 999 USD, and can produce detailed data, and to minimize operational costs further, surveyors can use low-cost LiDAR, GNSS, and UAV at a price of around 638 USD. The data generated by this sensor take the form of a three-dimensional visualization of an object's shape. This study combines low-cost GPS measurements with low-cost LiDAR, processed using free software. The low-cost GPS generates latitude and longitude coordinates, yielding X, Y, and Z values that help georeference the detected object. The research also produces LiDAR data that detect objects, including the height of the entire environment at the location; the data obtained are calibrated with pitch, roll, and yaw to get the vertical height of the existing contours (a minimal sketch of the resulting volume computation is given below). The study conducted an experimental process on the roof of a building with a radius of approximately 30 meters.
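
A minimal sketch of the final volume computation, assuming a synthetic cone-shaped pile in place of the real georeferenced cloud: the points are gridded into a digital surface model and (height - base) is integrated over the cells.

```python
import numpy as np

# Synthetic stand-in for the merged LiDAR/UAV point cloud (x, y, z),
# already georeferenced with the low-cost GNSS coordinates.
rng = np.random.default_rng(2)
xy = rng.uniform(0, 30, (20000, 2))                     # 30 m x 30 m site
z = np.maximum(0, 8 - 0.02 * ((xy - 15) ** 2).sum(1))   # cone-like pile
points = np.column_stack([xy, z])

# Grid the cloud into a DSM and integrate (z - base) over the cells.
cell = 0.5                                              # cell size, m
ix = (points[:, 0] // cell).astype(int)
iy = (points[:, 1] // cell).astype(int)
dsm = np.zeros((ix.max() + 1, iy.max() + 1))
np.maximum.at(dsm, (ix, iy), points[:, 2])              # top surface per cell

base = 0.0                                              # surveyed floor level
volume = np.sum(np.maximum(dsm - base, 0)) * cell * cell
print(f"stockpile volume ~ {volume:,.0f} m^3")
```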

Keywords: LiDAR, unmanned aerial vehicle, low-cost GNSS, contour

Procedia PDF Downloads 87
858 Effect of Fill Material Density under Structures on Ground Motion Characteristics Due to Earthquake

Authors: Ahmed T. Farid, Khaled Z. Soliman

Abstract:

Due to limited areas and the excessive cost of land for projects, backfilling has become necessary. Backfilling is also done to level uneven depths or to raise the levels of a construction site, especially near the sea. Therefore, the backfill materials used under the foundations of structures should be investigated regarding their effect on ground motion characteristics, especially in regions subjected to earthquakes. In this research, a 60-meter thickness of sandy fill material above a fixed 240 meters of natural clayey soil underlain by rock formation was used to predict the modified ground motion characteristics at the foundation level. The effects of three different compaction states of the fill material on the recorded earthquake are compared in terms of peak ground acceleration, time history, and spectral acceleration values. The three densities of compacted fill material used in the study were very loose, medium dense, and very dense sand deposits, respectively. The SHAKE computer program was used to perform this study, and strong earthquake records with a Peak Ground Acceleration (PGA) of 0.35 g were used in the analysis. It was found that higher compaction of the fill material thickness has a significant effect on the earthquake ground motion properties at the surface layer of the fill material, near the foundation level (a minimal single-layer amplification sketch is given below). It is recommended to consider the fill material characteristics in the design of foundations subjected to seismic motions. Future studies should analyze different fill and natural soil deposits under different seismic conditions.
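
A minimal sketch of why the fill density matters, using the classic textbook transfer function for a single uniform damped soil layer over rigid rock, |F| ≈ 1/√(cos²(kH) + (ξkH)²) with k = 2πf/Vs; the shear-wave velocities standing in for the three compaction states are assumed values, not the study's measurements.

```python
import numpy as np

def amplification(f, H=60.0, vs=250.0, xi=0.05):
    """Single uniform layer over rigid rock: |F| = 1/sqrt(cos^2(kH) + (xi*kH)^2)."""
    kH = 2 * np.pi * f * H / vs
    return 1.0 / np.sqrt(np.cos(kH) ** 2 + (xi * kH) ** 2)

f = np.linspace(0.1, 10, 500)            # frequency band of interest, Hz
for vs, label in [(120, "very loose"), (250, "medium dense"), (400, "very dense")]:
    A = amplification(f, H=60.0, vs=vs)  # 60 m fill, as in the study
    f0 = vs / (4 * 60.0)                 # fundamental site frequency, Vs/4H
    print(f"{label:12s}: f0 = {f0:.2f} Hz, peak amplification ~ {A.max():.1f}")
# Stiffer (denser) fill raises the fundamental frequency and shifts which
# parts of the input motion are amplified at foundation level.
```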

Keywords: acceleration, backfill, earthquake, soil, PGA

Procedia PDF Downloads 376
857 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms

Authors: Abdul Rehman, Bo Liu

Abstract:

Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses, which contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques for reducing secondary flow loss. In this paper, non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, the design of experiments, and the optimization. All flow simulations were conducted using steady RANS with Spalart-Allmaras as the turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves, with each cut, having multiple control points, constructed along the virtual streamlines in the blade channel. For the design of experiments, each sample was generated from values chosen automatically for the control points defined during parameterization. The optimization was carried out with two algorithms, a stochastic algorithm and a gradient-based algorithm. For the stochastic case, a genetic algorithm based on an artificial neural network was used as the optimization method in order to reach the global optimum, with the evaluation of successive design iterations performed by the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall; this method is efficient and consumes less time, as it uses derivative information of the objective function (a minimal sketch of such a gradient-based loop is given below). The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant, with performance quantified by a multi-objective function. Besides these two classes of optimization methods, there were four optimization cases: the hub only, the shroud only, the simultaneous combination of hub and shroud, and, as the fourth case, the shroud endwall optimized using the already optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub increased. The shroud optimization resulted in an increase in efficiency with reduced total pressure loss and entropy. The simultaneous combination of hub and shroud did not match the results achieved in the individual hub and shroud cases, possibly because there were too many control variables. The fourth case showed the best result, because the optimized hub was used as the initial geometry for optimizing the shroud: efficiency increased more than in the individual optimization cases, at a mass flow rate equal to that of the baseline turbine design. Finally, the results of the artificial neural network and the conjugate gradient method were compared.
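
A minimal sketch of the gradient-based route, with a cheap analytic surrogate standing in for the full RANS evaluation of efficiency (which is far too expensive to embed here); the control-point parameterization and the objective are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_efficiency(c):
    # Surrogate objective: in the paper this would be -isentropic efficiency
    # returned by the CFD solver for endwall control-point amplitudes c.
    return 0.5 * np.sum((c - 0.3) ** 2) + 0.05 * np.sum(np.sin(5 * c) ** 2)

c0 = np.zeros(8)   # flat (axisymmetric) endwall as the starting design

# Conjugate gradient: few evaluations, but needs gradients (finite-differenced
# here; an adjoint solver would supply them in a real CFD loop).
res = minimize(neg_efficiency, c0, method="CG")
print("optimal control points:", np.round(res.x, 3))
print("objective value:", round(res.fun, 4))

# The stochastic route (genetic algorithm guided by a neural-network
# surrogate in the paper) samples many more designs, at higher cost but
# with a better chance of escaping local optima.
```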

Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization

Procedia PDF Downloads 221
856 Sustainability in Space: Implementation of Circular Economy and Material Efficiency Strategies in Space Missions

Authors: Hamda M. Al-Ali

Abstract:

The ultimate aim of space exploration has centered on the possibility of life on other planets in the solar system. This aim is driven by the detrimental effects that climate change could have on human survival on Earth in the future, which drives humans to search for feasible solutions to increase environmental and economic sustainability on Earth and to evaluate the feasibility of human survival on other planets such as Mars. To do that, frequent space missions are required to meet these ambitious goals. This means that reliable and affordable access to space is required, which could be largely achieved through the use of reusable spacecraft. Therefore, materials and resources must be used wisely to meet the increasing demand. Space missions are currently extremely expensive to operate. However, reusing materials, and hence spacecraft, can potentially reduce overall mission costs as well as the negative impact on both the space and Earth environments, because reusing materials leads to less waste generated per mission and therefore fewer landfill sites. Reusing materials reduces resource consumption, material production, and the need to process new and replacement spacecraft and launch vehicle parts. Consequently, this will ease and facilitate human access to outer space, as it will reduce the demand for scarce resources and boost material efficiency in the space industry. Material efficiency expresses the extent to which resources are consumed in the production cycle and how far the waste produced by the industrial process is minimized. The strategies proposed in this paper to boost material efficiency in the space sector are the introduction of key performance indicators able to measure material efficiency, together with clearly defined policies and legislation that can be easily implemented within general practice in the space industry. Another strategy to improve material efficiency is to amplify energy and resource efficiency through reuse. The circularity of various spacecraft materials such as Kevlar, steel, and aluminum alloys could be maximized by reusing them directly or after galvanizing them with another layer of material to act as a protective coat. This research paper aims to investigate and discuss how to improve material efficiency in space missions in light of circular economy concepts, so that space and Earth become more economically and environmentally sustainable. The circular economy is a transition from a make-use-waste linear model to a closed-loop socio-economic model that is regenerative and restorative in nature. The implementation of a circular economy will reduce waste and pollution by maximizing material efficiency, ensuring that businesses can thrive sustainably. The extent to which reusable launch vehicles reduce space mission costs is also discussed, along with the environmental and economic implications for the space sector and the environment. This has been examined through research and an in-depth literature review of published reports, books, scientific articles, and journals; keywords such as material efficiency, circular economy, reusable launch vehicles, and spacecraft materials were used to search for relevant literature.

Keywords: circular economy, key performance indicator, material efficiency, reusable launch vehicles, spacecraft materials

Procedia PDF Downloads 122
855 Systematic Review of Dietary Fiber Characteristics Relevant to Appetite and Energy Intake Outcomes in Clinical Intervention Trials of Healthy Humans

Authors: K. S. Poutanen, P. Dussort, A. Erkner, S. Fiszman, K. Karnik, M. Kristensen, C. F. M. Marsaux, S. Miquel-Kergoat, S. Pentikäinen, P. Putz, R. E. Steinert, J. Slavin, D. J. Mela

Abstract:

Dietary fiber (DF) intake has been associated with lower body weight or less weight gain, effects generally attributed to putative effects of DF on appetite. Many intervention studies have tested the effect of DFs on appetite-related measures, with inconsistent results. However, DF encompasses a wide category of compounds with diverse chemical and physical characteristics, and correspondingly diverse effects in human digestion; inconsistent results between DF consumption and appetite are therefore not surprising. The specific contribution of compounds with varying physico-chemical properties to appetite control, and the mediating mechanisms, are not well characterized. This systematic review aimed to assess the influence of specific DF characteristics, including viscosity, gel-forming capacity, fermentability, and molecular weight, on appetite-related outcomes in healthy humans. The Medline and FSTA databases were searched for controlled human intervention trials testing the effects of well-characterized DFs on subjective satiety/appetite or energy intake outcomes. Studies were included only if they reported: 1) fiber name and origin, and 2) data on viscosity, gelling properties, fermentability, or molecular weight of the DF materials tested. The search generated 3001 unique records, 322 of which were selected for further consideration after title and abstract screening. Of these, 149 were excluded due to insufficient fiber characterization and 124 for other reasons (not an original article, not a randomized controlled trial, or no appetite-related outcome), leaving 49 papers meeting all the inclusion criteria, most of which reported results from acute testing (<1 day). The eligible 49 papers described 90 comparisons of DFs in foods, beverages, or supplements. The DF-containing material of interest was efficacious for at least one appetite-related outcome in 51 of the 90 comparisons. Gel-forming DF sources were the most consistently efficacious, but there were no clear associations between viscosity, molecular weight, or fermentability and appetite-related outcomes. A considerable number of papers had to be excluded from the review due to shortcomings in fiber characterization. To build understanding of the impact of DF on satiety/appetite, there should be clear hypotheses about the mechanisms behind the proposed beneficial effect of a DF material on appetite, and sufficient data about the DF properties relevant to the hypothesized mechanisms to justify clinical testing. The hypothesized mechanisms should also guide the decision about the relevant duration of exposure in studies, i.e. whether the effects are expected to occur within an acute time frame (related to stomach emptying, digestion rate, etc.) or to develop from sustained exposure (gut-fermentation-mediated mechanisms). More consistent measurement methods and reporting of fiber specifications and characterization are needed to establish reliable structure-function relationships between DF and health outcomes.

Keywords: appetite, dietary fiber, physico-chemical properties, satiety

Procedia PDF Downloads 229
854 Application of GPRS in Water Quality Monitoring System

Authors: V. Ayishwarya Bharathi, S. M. Hasker, J. Indhu, M. Mohamed Azarudeen, G. Gowthami, R. Vinoth Rajan, N. Vijayarangan

Abstract:

Identification of water quality conditions in a river system based on limited observations is an essential task for meeting the goals of environmental management. The traditional method of water quality testing is to collect samples manually and send them to a laboratory for analysis; however, this can no longer meet the demands of water quality monitoring today. A set of automatic measurement and reporting systems for water quality has therefore been developed. In this project, water quality parameters collected by a multi-parameter water quality probe are transmitted to a data processing and monitoring center through the mobile GPRS wireless communication network. The multi-parameter sensor is placed directly above the water level. The monitoring center consists of a GPRS module and a microcontroller which monitor the data, and the collected data can be monitored at any instant of time. At the pollution control board, the water quality sensor data are monitored on a computer using Visual Basic software. The system collects, transmits, and processes water quality parameters automatically, so production efficiency and economic benefit are greatly improved. GPRS technology performs well even in complex environments where water quality would otherwise go unmonitored, and it is specifically suited to collection points, automatically transmitting data from field water-analysis equipment for transmission and monitoring.
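
As a rough illustration of the acquisition-and-uplink loop described above, the sketch below assumes a probe that emits comma-separated readings over RS232 and a GPRS modem driven by SIMCom-style AT commands (AT+CSTT, AT+CIICR, AT+CIPSTART, AT+CIPSEND); the device paths, APN, server address, and sampling interval are all placeholders:

```python
import time
import serial  # pyserial

PROBE_PORT = "/dev/ttyUSB0"        # placeholder: RS232 multi-parameter probe
MODEM_PORT = "/dev/ttyUSB1"        # placeholder: GPRS modem
SERVER = "monitoring.example.org"  # placeholder monitoring center

def at(modem, cmd, wait=1.0):
    """Send one AT command and return the modem's raw reply."""
    modem.write((cmd + "\r\n").encode())
    time.sleep(wait)
    return modem.read_all().decode(errors="replace")

probe = serial.Serial(PROBE_PORT, 9600, timeout=2)
modem = serial.Serial(MODEM_PORT, 115200, timeout=2)

at(modem, 'AT+CSTT="internet"')    # attach with the carrier APN (placeholder)
at(modem, "AT+CIICR", wait=3)      # bring up the GPRS context
at(modem, f'AT+CIPSTART="TCP","{SERVER}",5000', wait=5)

while True:
    line = probe.readline().decode(errors="replace").strip()
    if line:                        # e.g. "pH=7.2,DO=6.1,turb=3.4"
        at(modem, f"AT+CIPSEND={len(line)}")
        modem.write(line.encode())  # payload follows the modem's send prompt
    time.sleep(60)                  # one sample per minute
```

On the monitoring-center side, the Visual Basic application would simply read the incoming TCP stream and log each comma-separated record.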

Keywords: multiparameter sensor, GPRS, visual basic software, RS232

Procedia PDF Downloads 405
853 An Analytical Approach to Assess and Compare the Vulnerability Risk of Operating Systems

Authors: Pubudu K. Hitigala Kaluarachchilage, Champike Attanayake, Sasith Rajasooriya, Chris P. Tsokos

Abstract:

Operating system (OS) security is a key component of computer security. Assessing and improving an OS's strength to resist vulnerabilities and attacks is a mandatory requirement, given the rate at which new vulnerabilities are discovered and attacks occur. The frequency and the number of different kinds of vulnerabilities found in an OS can be considered an index of its information security level. In the present study, five widely used OSs are assessed for their discovered vulnerabilities and the risk associated with each: Microsoft Windows (Windows 7, Windows 8, and Windows 10), Apple's Mac, and Linux. Each discovered and reported vulnerability has an exploitability score assigned as part of its CVSS score in the National Vulnerability Database. In this study, the risk from vulnerabilities in each of the five operating systems is compared using risk indexes developed on a Markov model to evaluate the risk of each vulnerability. The statistical methodology and the underlying mathematical approach are described. Initially, parametric procedures were conducted and measured; however, violations of some statistical assumptions were observed, so the need for non-parametric approaches was recognized. A total of 6838 recorded vulnerabilities were considered in the analysis. According to the risk associated with all the vulnerabilities considered, a statistically significant difference was found among the average risk levels of some operating systems, indicating that, by this method and given its assumptions and limitations, some operating systems have been more exposed to risk than others. Relevant test results revealing a statistically significant difference in the risk levels of different OSs are presented.
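
The abstract does not spell out the Markov construction, so the following Python sketch is only one plausible reading, not the authors' model: vulnerability life-cycle states (not discovered, discovered, exploited, patched) with the CVSS exploitability subscore scaling the transition into the exploited state, and the stationary probability of that state, weighted by impact, taken as the per-vulnerability risk index. All transition probabilities are illustrative:

```python
import numpy as np

def risk_index(exploitability, impact):
    """One plausible Markov-style risk index (illustrative assumption, not
    the paper's exact model). States: not discovered, discovered,
    exploited, patched. Exploitability (0-10) scales exploitation odds."""
    p = exploitability / 10.0
    P = np.array([
        [0.90, 0.10, 0.00,            0.00],             # not discovered
        [0.00, 0.60, 0.30 * p + 0.05, 0.35 - 0.30 * p],  # discovered
        [0.00, 0.00, 0.70,            0.30],             # exploited
        [0.00, 0.05, 0.00,            0.95],             # patched (can regress)
    ])
    # stationary distribution: left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi /= pi.sum()
    return pi[2] * impact            # P(exploited) weighted by impact

# Illustrative CVSS (exploitability, impact) pairs for two vulnerabilities
print(risk_index(8.6, 6.4))   # easy to exploit, moderate impact
print(risk_index(2.2, 10.0))  # hard to exploit, severe impact
```

Indexes of this kind can then be averaged per OS and compared with non-parametric tests, as the study describes.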

Keywords: cybersecurity, Markov chain, non-parametric analysis, vulnerability, operating system

Procedia PDF Downloads 180
852 Finding the Longest Common Subsequence in Normal DNA and Disease Affected Human DNA Using Self Organizing Map

Authors: G. Tamilpavai, C. Vishnuppriya

Abstract:

Bioinformatics is an active research area that combines biological data with computer science research. The longest common subsequence (LCSS) is one of the major challenges in various bioinformatics applications. The computation of the LCSS plays a vital role in biomedicine and is an essential task in DNA sequence analysis in genetics, underpinning a wide range of disease-diagnosis steps. The objective of the proposed system is to find the longest common subsequences present in normal and disease-affected human DNA sequences using a Self Organizing Map (SOM) and LCSS. The human DNA sequences are collected from the National Center for Biotechnology Information (NCBI) database. Initially, each human DNA sequence is separated into k-mers using a k-mer separation rule. Mean and median values are calculated for each separated k-mer, and these values are fed as input to the Self Organizing Map for clustering. The obtained clusters are then given to the Longest Common Subsequence (LCSS) algorithm, which finds the common subsequences present in each cluster. It returns n(n-1)/2 subsequences for each cluster, where n is the number of k-mers in that cluster. Experimental outcomes of the proposed system produce the possible longest common subsequences of normal and disease-affected DNA data; thus the proposed system will be a useful aid for finding disease-causing sequences. Finally, a performance analysis is carried out for different DNA sequences, and the obtained values show that the retrieval of the LCSS is done in a shorter time than in the existing system.
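
The two algorithmic building blocks named above, k-mer separation with mean/median features for the SOM and the classical dynamic-programming LCSS, can be sketched briefly in Python; the numeric coding of bases and the non-overlapping split are assumptions, since the abstract does not fix them:

```python
from itertools import combinations

CODE = {"A": 1, "C": 2, "G": 3, "T": 4}   # assumed numeric coding of bases

def kmers(seq, k=4):
    """Split a DNA string into consecutive non-overlapping k-mers."""
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, k)]

def features(kmer):
    """Mean and median of the coded bases, the two values fed to the SOM."""
    vals = sorted(CODE[b] for b in kmer)
    n = len(vals)
    median = vals[n // 2] if n % 2 else (vals[n // 2 - 1] + vals[n // 2]) / 2
    return sum(vals) / n, median

def lcs(a, b):
    """Classical dynamic-programming longest common subsequence."""
    dp = [[""] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + ca if ca == cb
                                else max(dp[i][j + 1], dp[i + 1][j], key=len))
    return dp[-1][-1]

print([features(m) for m in kmers("ACGTTGCAACGTTGAC")])  # SOM input vectors

cluster = ["ACGTACGT", "AGGTACGA", "ACCTACGT"]  # toy cluster of k-mers
for a, b in combinations(cluster, 2):           # n(n-1)/2 pairs, as in the text
    print(a, b, "->", lcs(a, b))
```

Each pair in a cluster contributes one subsequence, which is where the n(n-1)/2 count per cluster comes from.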

Keywords: clustering, k-mers, longest common subsequence, SOM

Procedia PDF Downloads 263
851 Sustainability of Vernacular Architecture in Zegalli Houses in Northern Iran with Emphasis on Their Seismic Behavior

Authors: Mona Zaryoun, Mahmood Hosseini, Seyed Mohammad Hassan Khalkhali, Haniyeh Okhovat

Abstract:

Zegalli houses in Guilan province, northern Iran, are a type of vernacular house whose foundations, skeletons, and walls are all made of wood. These were the only houses that survived the major Manjil-Rudbar earthquake of 1990, with a magnitude of 7.2. This fact led some researchers to study the type of foundation used in these houses, which benefits from rocking-wise behavior; the relatively light weight of the houses has also helped them withstand seismic excitations. In this paper, a brief description of Zegalli houses and their architectural features is first presented, with emphasis on their foundations. Next, the foundation of one of these houses is modeled as a sample using a computer program developed in the MATLAB environment, and, using the horizontal and vertical accelerograms of a set of selected site-compatible earthquakes, a series of time history analyses (THA) are carried out to investigate the behavior of this type of house under earthquakes. Based on the numerical results of the THA, it can be said that even with no sliding at the foundation timbers, the rocking that occurs at various levels of the foundation alone significantly reduces the seismic response of the house, which results in stability under earthquakes with a peak ground acceleration of around 0.35 g. Therefore, Zegalli houses can be considered sustainable Iranian vernacular architecture, and it is recommended that the use of these houses and their architectural and structural merits be reconsidered by architects as well as civil and structural engineers.
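
The MATLAB program used by the authors is not described in detail, so, as a stand-in, the following Python sketch shows the kind of time history analysis involved: an average-acceleration Newmark integration of a single-degree-of-freedom idealization under a synthetic accelerogram scaled to a 0.35 g PGA. The mass and stiffness are placeholders, and a faithful rocking analysis would add the foundation uplift model on top of this:

```python
import numpy as np

def newmark_sdof(ag, dt, m, k, zeta=0.05, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark time history of an SDOF system
    m*u'' + c*u' + k*u = -m*ag(t). Returns the displacement history."""
    c = 2 * zeta * np.sqrt(k * m)
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (-m * ag[0] - c * v[0] - k * u[0]) / m
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    for i in range(n - 1):
        dp = (-m * (ag[i + 1] - ag[i])
              + m * (v[i] / (beta * dt) + a[i] / (2 * beta))
              + c * (gamma * v[i] / beta
                     + dt * a[i] * (gamma / (2 * beta) - 1)))
        du = dp / k_eff
        dv = (gamma * du / (beta * dt) - gamma * v[i] / beta
              + dt * a[i] * (1 - gamma / (2 * beta)))
        u[i + 1], v[i + 1] = u[i] + du, v[i] + dv
        a[i + 1] = (-m * ag[i + 1] - c * v[i + 1] - k * u[i + 1]) / m
    return u

dt = 0.01
t = np.arange(0, 20, dt)
ag = 0.35 * 9.81 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.15 * t)  # synthetic
u = newmark_sdof(ag, dt, m=2.0e4, k=1.2e6)   # placeholder house mass/stiffness
print(f"peak displacement: {np.abs(u).max() * 1000:.1f} mm")
```

In the rocking case, the restoring term would be replaced by a nonlinear moment-rotation law that allows uplift at the timber foundation levels, which is precisely the mechanism the abstract credits for reducing the seismic response.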

Keywords: MATLAB software, rocking behavior, time history analysis, Zegalli houses

Procedia PDF Downloads 284
850 Sustainable Urbanism: Model for Social Equity through Sustainable Development

Authors: Ruchira Das

Abstract:

The major metropolises of India are the result of the colonial manifestation of production, consumption, and sustenance. These cities grew, survived, and sustained themselves on the whims of colonial power and administrative agendas; they were symbols of power, authority, and administration. Around them, some colonial towns remained as small towns within close vicinity of the major metropolises and functioned as self-sufficient units until peripheral development occurred under the tremendous pressure on the metropolises. After independence, the huge expansion of the judiciary and administration systems produced city-oriented employment; a large number of people began residing within the city or within commutable distance of it, which accelerated the expansion of the cities. Since then, budgetary and planning expenditure has brought a new pace to economic activities, and investment in the industrial and agricultural sectors generated employment opportunities that led further towards urbanization. After two decades of budgetary and planning economic activity in India, a new era of metropolitan expansion began: four major metropolises started expanding rapidly towards their suburbs, the concept of the large metropolitan area developed, and cities became the nuclei of suburbs and rural areas. In most cases, such expansion was not favorable to the relationship between a city and its hinterland, owing to the absence of a vision of compact sustainable development. The search for solutions needs to weigh the choices between rural- and urban-based development initiatives, and policymakers need to focus on the areas that will give the greatest impact, so that the benefits of development initiatives spread significantly to all. There is an assumption that development integrates economic, social, and environmental considerations with equal weighting; the traditional, narrower, and almost exclusive focus on economic criteria as the determinant of the level of development is thus re-described and expanded. The social and environmental aspects are as important as the economic aspect in achieving sustainable development. The arrangement of public and semi-public facilities for citizens is highly relevant to development, and it is the responsibility of the administration to provide for the basic requirements of its inhabitants. Development should be both industrial and agricultural, to maintain a balance between the city and its hinterland. Policy should therefore be formulated to shift the emphasis away from economic growth towards sustainable human development. Policymakers should aim at creating environments in which people's capabilities can be enhanced by effective, dynamic, and adaptable policy. Poverty cannot be eradicated simply by increasing income; improving the condition of the people requires an expansion of basic human capabilities. In the current scenario, the suburbs and rural areas are considered an environmental burden on the metropolises, so a new way of living has to be encouraged in suburban and rural areas. We tend to segregate agriculture from the city and city life, which leads to overconsumption; this urbanism model instead attempts to let the two coexist, creating an interesting overlapping of production and consumption networks towards sustainable rurbanism.

Keywords: socio–economic progress, sustainability, social equity, urbanism

Procedia PDF Downloads 304
849 NEOM Coast from Intertidal to Sabkha Systems: A Geological Overview

Authors: Mohamed Abouelresh, Subhajit Kumar, Lamidi Babalola, Septriandi Chan, Ali Al Musabeh A., Thadickal V. Joydas, Bruno Pulido

Abstract:

NEOM has a relatively long coastline on the Red Sea and the Gulf of Aqaba, about 300 kilometres in total, in addition to many naturally formed bays along the Red Sea coast. These coasts undoubtedly provide an excellent opportunity for tourism and other activities; however, they also host a wide range of salinity-dependent ecosystems that need to be protected. The main objective of the study was to identify the coastal features, including tidal flats and salt flats, along the NEOM coast. A base map of the study area, generated from satellite images, contained the main landform features and, in particular, the boundaries of the inland and coastal sabkhas. A field survey was conducted to map and characterize the intertidal and sabkha landforms. The coastal and inner coastal areas of NEOM are mainly covered by Quaternary sediments, which include gravel sheets, terraces, raised reef limestone, evaporite successions, eolian dunes, and undifferentiated sand/gravel deposits (alluvium, alluvial outwash, wind-blown beach sand). Different landforms characterize the NEOM coast, including rocky coast, tidal zone, and sabkha. Sabkha areas range from a few to tens of square kilometers. Coastal sabkha extends across the shoreline of NEOM, specifically in the Gayal and Sharma areas, while continental sabkha exists only at Gayal Town. The inland sabkha at Gayal is mainly composed of a thin (15-25 cm) evaporite crust of dark brown, cavernous, rugged, pitted, colloidal salty sand with salt-tolerant vegetation. The inland sabkha is considered a groundwater-driven sedimentary system, as indicated by syndepositional intra-sediment capillary evaporites, which precipitate in both marine and continental salt flats. The Gayal coastal sabkha is made up of tidal inlets, tidal creeks, and lagoons, followed in a landward direction by well-developed sabkha layers. The surface sediments of the coastal sabkha are composed of unlithified calcareous, gypsiferous, coarse to medium sands and silt with bioclastic fragments, underlain by several organic-rich layers. The coastal flat grades landward into widespread, flat, vegetated sabkhas dissected by tributaries of the fluvial system, which debouches into the Red Sea. The coast from Gayal to Magna through Ras El-Sheikh Humaid is continuously subjected to tidal flows, which create an intertidal depositional system. The intertidal flats at NEOM are extensive, nearly horizontal landforms forming a very dynamic system in which several physical, chemical, geomorphological, and biological processes act simultaneously. The current work provides a field-based identification of the coastal sabkha and intertidal sites at NEOM; however, the mutual interaction between tidal flows and sabkha development, particularly at Gayal, still needs to be understood through comprehensive field and laboratory analysis.

Keywords: coast, intertidal, deposition, sabkha

Procedia PDF Downloads 76
848 The Internet of Things: A Survey of Authentication Mechanisms and Protocols for the Shifting Paradigm of Communicating Entities

Authors: Nazli Hardy

Abstract:

A multidisciplinary application of computer science and interactive database-driven web applications, the Internet of Things (IoT) represents a digital ecosystem with pervasive technological, social, and economic impact on the human population. It is a long-term technology, and its development is built around the connection of everyday objects to the Internet. It is estimated that by 2020, with billions of people connected to the Internet, the number of connected devices will exceed 50 billion; IoT thus represents a paradigm shift in our current interconnected ecosystem, a communication shift that will unavoidably affect people, businesses, consumers, clients, and employees. By nature, in order to provide a cohesive and integrated service, connected devices need to collect, aggregate, store, mine, and process personal and personalized data on individuals and corporations in a variety of contexts and environments. A significant factor in this paradigm shift is the necessity for secure and appropriate transmission, processing, and storage of the data. Thus, while the benefits of the applications appear boundless, these same opportunities are bounded by concerns such as trust, privacy, security, loss of control, and related issues. This poster and presentation look at multi-factor authentication (MFA) mechanisms, which need to evolve from the login-password tuple to an Identity and Access Management (IAM) model and on to the more cohesive Identity Relationship Management (IRM) standard. They also compare and contrast messaging protocols that are appropriate for the IoT ecosystem.
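
The abstract does not name the messaging protocols it compares; MQTT is the most common lightweight publish/subscribe choice in IoT surveys of this kind, so a minimal client sketch is shown below (paho-mqtt 1.x style API, with a placeholder broker and credentials, and TLS plus a per-device secret standing in for the richer IAM/IRM-issued identities discussed):

```python
import ssl
import paho.mqtt.client as mqtt  # paho-mqtt 1.x style API

# Placeholder broker and credentials; a real deployment would tie these
# to an IAM/IRM-issued device identity rather than a static password.
BROKER = "broker.example.org"

client = mqtt.Client(client_id="probe-001")
client.tls_set(cert_reqs=ssl.CERT_REQUIRED)       # authenticate the broker
client.username_pw_set("probe-001", "per-device-secret")
client.connect(BROKER, 8883, keepalive=60)
client.loop_start()

# QoS 1 gives at-least-once delivery, a common IoT trade-off between
# MQTT's lighter QoS 0 and the heavier exactly-once QoS 2
client.publish("site/field3/soil-moisture", payload="0.31", qos=1)
```

Protocol comparisons of the kind the survey describes typically weigh this publish/subscribe model against request/response alternatives on footprint, delivery guarantees, and how naturally each accommodates device-identity management.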

Keywords: Internet of Things (IoT), authentication, protocols, survey

Procedia PDF Downloads 296