Search results for: multi-layer neural networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3864

1614 Machine Learning Techniques for Estimating Ground Motion Parameters

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques, such as Artificial Neural Networks, Random Forests, and Support Vector Machines, as statistical methods in ground motion prediction. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing these variabilities as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitude 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. This database was selected because of the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. The accuracy of the models in predicting intensity measures, their generalization capability for future data, and their usability are discussed in the evaluation process. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data is available.
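
As a hedged illustration of the comparison described above (not the authors’ code or data), the following sketch fits a linear model and a Random Forest to synthetic magnitude, distance, and site features and compares their test error; all feature values and coefficients are invented for the example:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 4000
mag = rng.uniform(3.0, 5.8, n)        # moment magnitude
dist = rng.uniform(4.0, 500.0, n)     # hypocentral distance, km
vs30 = rng.uniform(200.0, 800.0, n)   # site stiffness proxy, m/s

# Synthetic ln(PGA): magnitude scaling, geometric spreading, site term, noise
ln_pga = 1.2 * mag - 1.5 * np.log(dist) - 0.3 * np.log(vs30 / 400.0) \
         + rng.normal(0.0, 0.4, n)

X = np.column_stack([mag, np.log(dist), np.log(vs30)])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, random_state=0)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200,
                                                            random_state=0))]:
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: test RMSE = {rmse:.3f}")
```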

Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine

Procedia PDF Downloads 121
1613 The Use of Network Theory in Heritage Cities

Authors: J. L. Oliver, T. Agryzkov, L. Tortosa, J. Vicent, J. Santacruz

Abstract:

This paper aims to demonstrate how Network Theory can be applied to a very interesting and complex urban situation: the parts of a city that may have some patrimonial value but, because of their lack of relevant architectural elements, are not considered historic in a conventional sense. In this paper, we use the suburb of La Villaflora in the city of Quito, Ecuador as our case study. We first propose a system of indicators as a tool to characterize and quantify the historic value of a geographic area. Then, we apply these indicators to the suburb of La Villaflora and use Network Theory to understand and propose actions.
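
As a hedged aside (not the authors’ indicator system), betweenness centrality over a toy street graph illustrates the kind of network-theoretic measure such a study can apply to an urban area:

```python
import networkx as nx

# Toy street network: nodes are intersections of a small grid "suburb"
G = nx.grid_2d_graph(5, 5)
centrality = nx.betweenness_centrality(G)

# Intersections that carry the most shortest-path traffic
top = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:3]
for node, score in top:
    print(f"intersection {node}: betweenness = {score:.3f}")
```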

Keywords: graphs, mathematics, networks, urban studies

Procedia PDF Downloads 369
1612 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers

Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala

Abstract:

The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into their preferences. Rather than presenting plain information, classifying different aspects of browsing such as Bookmarks, History, and the Download Manager into useful categories would improve and enhance the user’s experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources; they have security constraints and may miss contextual data during classification. On-device classification solves many such problems, but the challenge is to achieve classification accuracy under resource constraints. On-device classification can be much more useful for personalization, reducing dependency on cloud connectivity, and better privacy/security. This approach provides more relevant results compared to current standalone solutions because it uses content rendered by the browser, which is customized by the content provider based on the user’s profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, this solution extracts DOM tree data from the browser’s rendering engine. This DOM data is dynamic, contextual, and secure data that can’t be replicated. This proposal extracts different features of the webpage, which are run through an algorithm to classify the page into multiple categories. A Naive Bayes based engine is chosen in this solution for its inherent advantages in using limited resources compared to other classification algorithms like Support Vector Machines, Neural Networks, etc. Naive Bayes classification requires a small memory footprint and little computation, suitable for a smartphone environment. This solution has a feature to partition the model into multiple chunks, which in turn facilitates lower memory usage than loading a complete model. Classification of webpages done through the integrated engine is faster, more relevant, and more energy efficient than other standalone on-device solutions. This classification engine has been tested on Samsung Z3 Tizen hardware. The engine is integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. This cleaned dataset has 227.5K webpages, which are divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution has resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with the standalone solution. This solution considered 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the above-mentioned 8 categories. This engine can be further extended for suggesting dynamic tags and using the classification for different use cases to enhance the browsing experience.
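
A minimal sketch of the classification core, assuming page text has already been extracted from the DOM (the Tizen/Chromium integration and model partitioning are not reproduced here):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy stand-ins for text extracted from rendered webpages
pages = [
    "final score match league goal",         # sports
    "homework exam lecture university",      # education
    "discount cart checkout free shipping",  # shopping
    "breaking headline election report",     # news
]
labels = ["sports", "education", "shopping", "news"]

# Bag-of-words features feeding a multinomial Naive Bayes classifier
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(pages, labels)
print(clf.predict(["live match goal highlights"]))  # -> ['sports']
```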

Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification

Procedia PDF Downloads 163
1611 Evaluating the Social Learning Processes Involved in Developing Community-Informed Wildfire Risk Reduction Strategies in the Prince Albert Forest Management Area

Authors: Carly Madge, Melanie Zurba, Ryan Bullock

Abstract:

The Boreal Forest has experienced some of the most drastic climate change-induced temperature rises in Canada, with average winter temperatures increasing by 3°C since 1948. One of the main concerns of the province of Saskatchewan, and particularly wildfire managers, is the increased risk of wildfires due to climate change. With these concerns in mind, Sakaw Askiy Management Inc., a forestry corporation located in Prince Albert, Saskatchewan with operations in the Boreal Forest biome, is developing wildfire risk reduction strategies that are supported by the shareholders of the corporation as well as the stakeholders of the Prince Albert Forest Management Area (which includes citizens, hunters, trappers, cottage owners, and outfitters). In the past, wildfire management strategies implemented through harvesting have been received with skepticism by some community members of Prince Albert. Engagement of the stakeholders of the Prince Albert Forest Management Area through the development of the wildfire risk reduction strategies aims to reduce this skepticism and rebuild some of the trust that has been lost between industry and community. This research project works within the framework of social learning, which is defined as the learning that occurs when individuals come together to form a group with the purpose of understanding environmental challenges and determining appropriate responses to them. The project evaluates the social learning processes that occur through the development of the risk reduction strategies and how the learning has allowed Sakaw to work towards implementing the strategies into its forest harvesting plans. The incorporation of wildfire risk reduction strategies works to increase the adaptive capacity of Sakaw, which in this case refers to the ability to adjust to climate change, moderate potential damages, take advantage of opportunities, and cope with consequences. Through semi-structured interviews and wildfire workshop meetings, shareholders and stakeholders shared their knowledge of wildfire, their main wildfire concerns, and changes they would like to see made in the Prince Albert Forest Management Area. Interviews and topics discussed in the workshops were inductively coded for themes related to learning, adaptive capacity, areas of concern, and preferred methods of wildfire risk reduction. Analysis determined that some of the learning that has occurred resulted from social interactions and the development of networks oriented towards wildfire and wildfire risk reduction strategies. Participants have learned new knowledge and skills regarding wildfire risk reduction. The formation of wildfire networks increases access to information on wildfire and the social capital (trust and strengthened relations) of wildfire personnel. Both factors can be attributed to increases in adaptive capacity. Interview results were shared with the General Manager of Sakaw, and the areas of concern and preferred strategies of wildfire risk reduction will be considered and accounted for in the implementation of new harvesting plans. This research also augments the growing conceptual and empirical evidence of the important role of learning and networks in regional wildfire risk management efforts.

Keywords: adaptive capacity, community-engagement, social learning, wildfire risk reduction

Procedia PDF Downloads 145
1610 Learning to Translate by Learning to Communicate to an Entailment Classifier

Authors: Szymon Rutkowski, Tomasz Korbak

Abstract:

We present a reinforcement-learning-based method of training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, its learning procedure lacks psychological plausibility: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language’s 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and premise, which are then passed to the classifier. The translator is rewarded for the classifier’s performance on determining entailment between sentences translated by the translator into the classifier’s native language. The translator’s performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation were proposed before, there are a number of improvements we introduce. Whereas prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality. It has been shown that models trained to recognize textual entailment produce high-quality general-purpose sentence embeddings transferrable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as its analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) to solve the problem of the translator’s policy optimization and found that our attempts yield some promising improvements over previous approaches to reinforcement-learning-based zero-shot machine translation.
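
A toy sketch of the communication game (not the authors’ system): a one-step 'translator' policy is trained with REINFORCE against a frozen stub that stands in for the pretrained entailment classifier; the real NMT model and SNLI/XNLI-trained classifier are not reproduced:

```python
import torch

torch.manual_seed(0)
vocab, n_actions = 8, 8
policy = torch.nn.Linear(vocab, n_actions)  # stand-in one-step translator
optimizer = torch.optim.Adam(policy.parameters(), lr=0.1)

def classifier_reward(source_id, action):
    # Stub for the pretrained classifier: reward 1 when the "translation"
    # preserves the source token's meaning, else 0.
    return 1.0 if action == source_id else 0.0

for step in range(200):
    src = torch.randint(vocab, (1,)).item()
    x = torch.nn.functional.one_hot(torch.tensor(src), vocab).float()
    dist = torch.distributions.Categorical(logits=policy(x))
    action = dist.sample()
    reward = classifier_reward(src, action.item())
    loss = -dist.log_prob(action) * reward  # REINFORCE policy gradient
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```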

Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning

Procedia PDF Downloads 127
1609 Navigating Neural Pathways to Success with Students on the Autism Spectrum

Authors: Panda Krouse

Abstract:

This work is a marriage of the science of Applied Behavior Analysis and an educator’s look at neuroscience. The focus is integrating what we know about the anatomy of the brain in autism with evidence-based practices in education. It is a bold attempt to present links between neurological research and the application of evidence-based practices in education. In researching this work, no articles making these connections were found. Areas of structural difference in the brain are considered and aligned with evidence-based strategies. A brief literature review identifies how the identified areas affect overt behavior, which is what we, as educators, can see and measure. Giving further justification and validation of our practices in education from a second scientific field is significant for continued improvement in interventions for students on the autism spectrum.

Keywords: autism, evidence based practices, neurological differences, education intervention

Procedia PDF Downloads 64
1608 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach

Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier

Abstract:

Emotion plays a key role in many applications, such as healthcare, where it helps to gather patients’ emotional behavior. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' There are certain emotions which are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised Machine Learning models, including state-of-the-art Deep Learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computing is the limited amount of annotated data. The existing labelled emotion datasets are highly subject to the perception of the annotator. We address the first issue of feature selection by exploiting the use of traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms the popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN based approach by 10.2%. To tackle the second problem of subjectivity in stress labels, we use Lovheim’s cube, which is a 3-dimensional projection of emotions. Monoamine neurotransmitters are chemical messengers in the brain that transmit signals involved in perceiving emotions. The cube aims at explaining the relationship between these neurotransmitters and the positions of emotions in 3D space. The learnt emotion representations from the Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This proposed approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim’s cube. We believe that this work is the first step towards creating a connection between Artificial Intelligence and the chemistry of human emotions.
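
A hedged sketch of two ingredients named above, MFCC inputs and the three-component PCA projection; the Emo-CNN layers themselves are not specified in the abstract and are omitted, and the audio clip is a stand-in:

```python
import librosa
from sklearn.decomposition import PCA

# Stand-in audio clip bundled with librosa (downloaded on first use)
y, sr = librosa.load(librosa.ex("trumpet"))

# MFCC matrix treated as an "image" input for a CNN: (n_mfcc, frames)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)

# Pretend each frame's MFCC vector is a learnt emotion representation,
# then project to 3-D as the paper does for the cube mapping.
coords = PCA(n_components=3).fit_transform(mfcc.T)
print(mfcc.shape, coords.shape)
```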

Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube

Procedia PDF Downloads 153
1607 Spatial Cognition and 3-Dimensional Vertical Urban Design Guidelines

Authors: Hee Sun (Sunny) Choi, Gerhard Bruyns, Wang Zhang, Sky Cheng, Saijal Sharma

Abstract:

The main focus of this paper is to propose a comprehensive framework for the cognitive measurement and modelling of the built environment. This will involve exploring and measuring neural mechanisms. The aim is to create a foundation for further studies in this field that are consistent and rigorous. Additionally, this framework will facilitate collaboration with cognitive neuroscientists by establishing a shared conceptual basis. The goal of this research is to develop a human-centric approach for urban design that is scientific and measurable, producing a set of urban design guidelines that incorporate cognitive measurement and modelling. By doing so, the broader intention is to design urban spaces that prioritize human needs and well-being, making them more liveable.

Keywords: vertical urbanism, human centric design, spatial cognition and psychology, vertical urban design guidelines

Procedia PDF Downloads 81
1606 Muslims in Diaspora Negotiating Islam through Muslim Public Sphere and the Role of Media

Authors: Sabah Khan

Abstract:

The idea of universal Islam tends to exaggerate the extent of homogeneity in Islamic beliefs and practices across Muslim communities. In the age of migration, various Muslim communities are in diaspora. The immediate implications of this are: what happens to Islam in diaspora? How does Islam get represented in new forms? Such pertinent questions need to be dealt with. This paper shall draw on the idea of religious transnationalism, primarily transnational Islam. There are multiple ways to conceptualize transnational phenomena with reference to Islam, in terms of the flow of people, transnational organizations and networks, Ummah-oriented solidarity, and the new Muslim public sphere. This paper specifically deals with the new Muslim public sphere. It primarily refers to the space and networks enabled by new media and communication technologies, whereby Muslim identity and Islamic normativity are rehearsed and debated by people in different locales. A new sense of public is emerging across Muslim communities, which needs to be contextualized. This paper uses both primary and secondary data: primary data elicited through content analysis of audio-visual material on social media, and secondary sources of information ranging from books and articles to journals. The basic aim of the paper is to focus on the emerging Muslim public sphere and the role of media in expanding the public spheres of Islam. It also explores how Muslims in diaspora negotiate Islam and Islamic practices through media and the new Muslim public sphere. This paper cogently weaves in discussions, firstly, of the re-intellectualization of Islamic discourse in the public sphere, in other words, how Muslims have come to reimagine their collective identity and critically look at fundamental principles and authoritative tradition; and secondly, of the alternative forms of Islam emerging among young Muslims in diaspora, in other words, how young Muslims search for unorthodox ways and media for religious articulation, including music, clothing, and TV. This includes the transmission and distribution of Islam in diaspora in terms of emerging ‘media Islam’ or ‘soundbite Islam’. The new Muslim public sphere has offered an arena to a large number of participants to critically engage with Islam, which leads not only to a critical engagement with traditional forms of Islamic authority but also to emerging alternative forms of Islam and Islamic practices.

Keywords: Islam, media, Muslims, public sphere

Procedia PDF Downloads 269
1605 Opportunities and Challenges of Digital Diplomacy in the Public Diplomacy of the Islamic Republic of Iran

Authors: Somayeh Pashaee

Abstract:

The ever-increasing growth of the Internet and the development of information and communication technology have prompted the politicians of different countries to use virtual networks as an efficient tool for their foreign policy. Communication between governments and countries, even those in the farthest places from each other, through electronic networks has caused vast changes in the way of statecraft and governance. Importantly, diplomacy, which has always been based on information and communication, has been affected by the new prevailing conditions and new technologies more than other areas and has faced greater changes. The emergence of virtual space and the formation of new communication tools in the field of public diplomacy have led to the redefinition of the framework of diplomacy and politics in the international arena and the appearance of a new aspect of diplomacy called digital diplomacy. Digital diplomacy denotes the change of relations from a face-to-face and traditional way to a non-face-to-face and new way, and its purpose is to solve foreign policy issues using virtual space. Digital diplomacy, by affecting diplomatic procedures and changing them, explains the role of technology in the visualization and implementation of diplomacy in different ways. The purpose of this paper is to investigate the position of digital diplomacy in the public diplomacy of the Islamic Republic of Iran. The paper tries to answer two questions in a descriptive-analytical way: considering the progress of communication and the role of virtual space in the service of diplomacy, what is the approach of the Islamic Republic of Iran towards digital diplomacy and the use of a new way of establishing foreign relations in public diplomacy? And what capacities and risks does the country face in using this new type of diplomacy? In this paper, various theoretical concepts in the field of public diplomacy and modern diplomacy, including those of Geoff Berridge, Charles Kegley, Hans Tuch, and Ronald Peter Barston, as well as the theoretical framework of Marcus Holmes on digital diplomacy, will be used as a conceptual basis to support the analysis. As a result, in order to better achieve the political goals of the country, especially in foreign policy, the approach of the Islamic Republic of Iran to public diplomacy with a focus on digital diplomacy should be strengthened and revised. Today, emphasizing only the advancement of diplomacy through traditional methods may weaken Iran’s position in public opinion in other countries.

Keywords: digital diplomacy, public diplomacy, islamic republic of Iran, foreign policy, opportunities and challenges

Procedia PDF Downloads 113
1604 Features Reduction Using Bat Algorithm for Identification and Recognition of Parkinson Disease

Authors: P. Shrivastava, A. Shukla, K. Verma, S. Rungta

Abstract:

Parkinson's disease is a chronic neurological disorder that directly affects human gait. It leads to slowness of movement and causes muscle rigidity and tremors. Gait serves as a primary outcome measure for studies aiming at early recognition of the disease. Using gait techniques, this paper implements an efficient binary bat algorithm for early detection of Parkinson's disease by selecting the optimal features required for classifying affected patients from others. Data from 166 people, both healthy and affected, were collected, and optimal feature selection was done using PSO and the bat algorithm. The reduced dataset was then classified using a neural network. The experiments indicate that the binary bat algorithm outperforms traditional PSO and the genetic algorithm and gives a fairly good recognition rate even with the reduced dataset.
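
A compact, illustrative binary bat algorithm for wrapper feature selection; the parameters and dataset are stand-ins, not the paper's 166-person gait data:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)      # stand-in dataset
n_bats, n_feat, iters = 10, X.shape[1], 10
fmin, fmax, loudness, pulse = 0.0, 2.0, 0.5, 0.5

def fitness(mask):
    # Wrapper fitness: cross-validated accuracy on the selected features
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

pos = rng.integers(0, 2, (n_bats, n_feat)).astype(bool)  # binary positions
vel = np.zeros((n_bats, n_feat))
fit = np.array([fitness(p) for p in pos])
best_i = fit.argmax()
best, best_fit = pos[best_i].copy(), fit[best_i]

for _ in range(iters):
    for i in range(n_bats):
        freq = fmin + (fmax - fmin) * rng.random()
        vel[i] += (pos[i].astype(float) - best.astype(float)) * freq
        prob = 1.0 / (1.0 + np.exp(-vel[i]))      # sigmoid transfer function
        cand = rng.random(n_feat) < prob
        if rng.random() > pulse:                  # local walk around the best bat
            cand = best ^ (rng.random(n_feat) < 0.1)
        f = fitness(cand)
        if f >= fit[i] and rng.random() < loudness:
            pos[i], fit[i] = cand, f
            if f > best_fit:
                best, best_fit = cand.copy(), f

print(f"selected {best.sum()} of {n_feat} features, CV accuracy {best_fit:.3f}")
```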

Keywords: parkinson, gait, feature selection, bat algorithm

Procedia PDF Downloads 543
1603 Bayesian Networks Scoping the Climate Change Impact on Winter Wheat Freezing Injury Disasters in Hebei Province, China

Authors: Xiping Wang, Shuran Yao, Liqin Dai

Abstract:

Many studies report that winters are getting warmer and minimum air temperatures are clearly rising, important evidence of climate warming. Exacerbated air temperature fluctuation, which tends to bring more severe weather variation, is another important consequence of recent climate change and has induced more disasters for crop growth in certain regions. Hebei Province is an important winter wheat-growing province in the north of China that has recently endured more winter freezing injury, affecting local winter wheat crop management. A winter wheat freezing injury assessment Bayesian Network framework was established with the objectives of estimating, assessing, and predicting winter wheat freezing disasters in Hebei Province. In this framework, freezing disasters were classified into three severity degrees (SI) across the three types of freezing, i.e., freezing caused by severe cold at any time in winter, by long extremely cold spells in winter, and by freeze-after-thaw early in the season after winter. The factors influencing winter wheat freezing SI include the time of freezing occurrence, the growth status of seedlings, soil moisture, winter wheat variety, the longitude of the target region, and the most variable climate factors. The climate factors included in this framework are the daily mean and range of air temperature, the extreme minimum temperature and number of days during a severe cold weather process, the number of days with temperatures lower than the critical temperature values, and the accumulated negative temperature in a potential freezing event. The Bayesian Network model was evaluated using actual weather data and crop records at selected sites in Hebei Province. With the multi-stage influences of the various factors, the forecast and assessment of the event-based target variables (freezing injury occurrence and its damage to winter wheat production) were shown to be better scoped by the Bayesian Network model.
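
A hedged miniature of such a framework using pgmpy (assuming a recent pgmpy API); the variables, states, and probabilities are illustrative placeholders, not the paper's calibrated values:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two toy inputs driving a severity-of-injury (SI) node
model = BayesianNetwork([("MinTemp", "SI"), ("SoilMoisture", "SI")])

cpd_temp = TabularCPD("MinTemp", 2, [[0.7], [0.3]])       # mild / extreme
cpd_soil = TabularCPD("SoilMoisture", 2, [[0.6], [0.4]])  # wet / dry
cpd_si = TabularCPD(
    "SI", 3,  # severity degrees 1..3
    [[0.80, 0.50, 0.40, 0.10],   # P(SI=low  | MinTemp, SoilMoisture)
     [0.15, 0.30, 0.40, 0.40],   # P(SI=mid  | ...)
     [0.05, 0.20, 0.20, 0.50]],  # P(SI=high | ...)
    evidence=["MinTemp", "SoilMoisture"], evidence_card=[2, 2])

model.add_cpds(cpd_temp, cpd_soil, cpd_si)
assert model.check_model()

infer = VariableElimination(model)
print(infer.query(["SI"], evidence={"MinTemp": 1}))  # extreme-cold scenario
```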

Keywords: bayesian networks, climatic change, freezing injury, winter wheat

Procedia PDF Downloads 407
1602 Fractal-Wavelet Based Techniques for Improving the Artificial Neural Network Models

Authors: Reza Bazargan lari, Mohammad H. Fattahi

Abstract:

Natural resources management, including water resources, requires reliable estimations of time-variant environmental parameters. Small improvements in the estimation of environmental parameters would have great effects on management decisions. Noise reduction using wavelet techniques is an effective approach for the pre-processing of practical data sets. Predictability enhancement of the river flow time series is assessed using fractal approaches before and after applying wavelet-based pre-processing. Time series correlation and persistency, the minimum sufficient length for training the predicting model, and the maximum valid length of predictions were also investigated through a fractal assessment.
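
A brief sketch of wavelet de-noising as pre-processing for a flow series (synthetic data; the soft universal threshold is one common choice, not necessarily the authors'):

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
flow = np.sin(2 * np.pi * 5 * t) + 0.4 * rng.standard_normal(t.size)

# Multilevel wavelet decomposition of the noisy series
coeffs = pywt.wavedec(flow, "db4", level=4)

# Estimate the noise level from the finest detail coefficients,
# then soft-threshold all detail bands (universal threshold)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thresh = sigma * np.sqrt(2 * np.log(flow.size))
coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]

denoised = pywt.waverec(coeffs, "db4")  # series passed on to the ANN
```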

Keywords: wavelet, de-noising, predictability, time series fractal analysis, valid length, ANN

Procedia PDF Downloads 368
1601 Comparison of Frequency-Domain Contention Schemes in Wireless LANs

Authors: Li Feng

Abstract:

In IEEE 802.11 networks, it is well known that the traditional time-domain contention often leads to low channel utilization. The first frequency-domain contention scheme, time to frequency (T2F), has recently been proposed to improve channel utilization and has attracted a great deal of attention. In this paper, we survey the latest research progress on weighted frequency-domain contention. We present the basic ideas and working principles of these related schemes and point out their differences. This paper is useful for further study of frequency-domain contention.

Keywords: 802.11, wireless LANs, frequency-domain contention, T2F

Procedia PDF Downloads 458
1600 Fabrication of High-Aspect Ratio Vertical Silicon Nanowire Electrode Arrays for Brain-Machine Interfaces

Authors: Su Yin Chiam, Zhipeng Ding, Guang Yang, Danny Jian Hang Tng, Peiyi Song, Geok Ing Ng, Ken-Tye Yong, Qing Xin Zhang

Abstract:

Brain-machine interfaces (BMI) are a field rich in exploration opportunities, in which manipulation of neural activity is used to interconnect with a myriad of external devices. This research and intensive development have evolved into various areas, from the medical field and the gaming and entertainment industries to the safety and security field. The technology has been extended to therapy for neurological disorders such as obsessive-compulsive disorder and Parkinson’s disease by introducing current pulses to specific regions of the brain. Nonetheless, developing a brain-machine interface system that observes, records, and alters neural signals in real time will require a significant amount of effort to overcome the obstacles to improving this system without delay in response. To date, the feature size of interface devices and the density of the electrode population remain limitations in achieving seamless performance in BMI. Currently, the size of BMI devices ranges from 10 to 100 microns in terms of electrode diameter. Hence, to accommodate precise monitoring at the single-cell level, smaller and denser nanoscale nanowire electrode arrays are vital in fabrication. In this paper, we showcase the fabrication of high-aspect-ratio vertical silicon nanowire electrode arrays using microelectromechanical systems (MEMS) methods. Nanofabrication of the nanowire electrodes involves deep reactive ion etching, thermal oxide thinning, electron-beam lithography patterning, sputtering of metal targets, and bottom anti-reflection coating (BARC) etch. Metallization of the nanowire electrode tip is a prominent process for optimizing the nanowire’s electrical conductivity, and this step remains a challenge during fabrication. Metal electrodes were lithographically defined, and yet these metal contacts outline a size scale that is larger than nanometer-scale building blocks, hence further limiting potential advantages. Therefore, we present an integrated contact solution that overcomes this size constraint through a self-aligned nickel silicidation process on the tips of the vertical silicon nanowire electrodes. A 4 x 4 array of vertical silicon nanowire electrodes with a diameter of 290 nm and a height of 3 µm has been successfully fabricated.

Keywords: brain-machine interfaces, microelectromechanical systems (MEMS), nanowire, nickel silicide

Procedia PDF Downloads 433
1599 Analysis of Waiting Time and Drivers Fatigue at Manual Toll Plaza and Suggestion of an Automated Toll Tax Collection System

Authors: Muhammad Dawood Idrees, Maria Hafeez, Arsalan Ansari

Abstract:

Toll tax collection is the earliest method of tax collection and revenue generation. This revenue is utilized for the development of road networks, their maintenance, and connecting roads and highways across the country. Pakistan is one of the biggest countries, covering a wide area of land, and road networks and motorways are an important means of connecting cities. Every day millions of people use motorways, and they have to stop at toll plazas to pay toll tax, as the majority of toll plazas collect toll tax manually. The purpose of this study is to calculate the waiting time of vehicles on the Karachi-Hyderabad (M-9) motorway, as Karachi is the biggest city of Pakistan and hundreds of thousands of people use this route to reach other cities. Currently, toll tax collection is a manual system, which is a major cause of long waiting times at toll plazas. This study calculates the waiting time of vehicles, the fuel consumed during waiting time, and the manpower employed at the toll plaza, as the whole process is manual and also leads to the mental and physical fatigue of drivers. All wastage of resources is also calculated, and a feasible automatic toll tax collection system is proposed which is beneficial not only in reducing waiting time but also in reducing fuel consumption, the manpower employed, and physical and mental fatigue. A cost comparison in terms of wastage is also shown between the manual and automatic (E-Z Pass) toll tax collection systems. The results of this study reveal that, if an automatic toll collection system is implemented on the Karachi to Hyderabad motorway (M-9), there will be a significant reduction in the waiting time of vehicles, which leads to reductions in fuel consumption, environmental pollution, and the mental and physical fatigue of drivers. All these reductions are also calculated in terms of money (Pakistani rupees), and it is found that millions of rupees can be saved by using the automatic toll collection system, which will help improve the economy of the country.
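
A hedged back-of-envelope (assumed demand and service times, not the paper's field data) comparing the mean M/M/1 queueing delay for manual versus electronic service at a single booth:

```python
# Assumed arrival rate: 200 vehicles/hour at one booth
arrival = 200 / 3600.0  # vehicles per second

for name, service_time in [("manual booth", 14.0), ("E-Z Pass lane", 3.0)]:
    mu = 1.0 / service_time                 # service rate, vehicles/second
    rho = arrival / mu                      # booth utilisation
    # M/M/1 mean time in queue: Wq = rho / (mu - lambda), if stable
    wait = rho / (mu - arrival) if arrival < mu else float("inf")
    print(f"{name}: utilisation {rho:.2f}, mean queue wait {wait:.1f} s")
```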

Keywords: toll tax collection, waiting time, wastages, driver fatigue

Procedia PDF Downloads 146
1598 DeClEx-Processing Pipeline for Tumor Classification

Authors: Gaurav Shinde, Sai Charan Gongiguntla, Prajwal Shirur, Ahmed Hambaba

Abstract:

Health issues are increasing significantly, putting a substantial strain on healthcare services. This has accelerated the integration of machine learning in healthcare, particularly following the COVID-19 pandemic. We introduce DeClEx, a pipeline that ensures that data mirrors real-world settings by incorporating Gaussian noise and blur, and that employs autoencoders to learn intermediate feature representations. Subsequently, our convolutional neural network, paired with spatial attention, provides accuracy comparable to state-of-the-art pre-trained models while achieving a threefold improvement in training speed. Furthermore, we provide interpretable results using explainable AI techniques. In short, DeClEx integrates denoising and deblurring, classification, and explainability in a single pipeline.
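
A minimal sketch of the degradation step described above, applying Gaussian blur and noise so training data mirrors real-world acquisition; the autoencoder and attention-CNN stages are not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.random((128, 128)).astype(np.float32)  # stand-in scan

# Blur first, then add Gaussian noise, clipping back to valid intensities
blurred = gaussian_filter(image, sigma=1.5)
degraded = np.clip(blurred + rng.normal(0.0, 0.05, image.shape), 0.0, 1.0)
```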

Keywords: machine learning, healthcare, classification, explainability

Procedia PDF Downloads 54
1597 Transnational Initiatives, Local Perspectives: The Potential of Australia-Asia BRIDGE School Partnerships Project to Support Teacher Professional Development in India

Authors: Atiya Khan

Abstract:

Recent research on the condition of school education in India has reaffirmed the importance of quality teacher professional development, especially in light of the rapid changes in teaching methods, learning theories, and curriculum, and the major shifts in information and technology that education systems are experiencing around the world. However, the quality of programs of teacher professional development in India is often uneven, and in some cases they are non-existent. The educational authorities in India have long recognized this and have developed a range of programs to assist in-service teacher education. But these programs have been mostly inadequate at improving the quality of teachers in India. Policy literature and reports indicate that the unevenness of these programs, and more generally the lack of quality teacher professional development in India, are due to factors such as the large number of teachers, budgetary constraints, top-down decision making, teacher overload, lack of infrastructure, and little or no follow-up. The disparity between the government’s stated goals for quality teacher professional development in India and its inability to meet the learning needs of teachers suggests that new interventions are needed. The realization that globalization has brought about an increase in the social, cultural, political, and economic interconnectedness between countries has also given rise to transnational opportunities for education systems, such as India’s, aiming to build their capacity to support teacher professional development. Moreover, new developments in communication technologies seem to present a plausible means of achieving high-quality professional development for teachers through the creation of social learning spaces, such as transnational learning networks. This case study investigates the potential of one such transnational learning network to support the quality of teacher professional development in India, namely the Australia-Asia BRIDGE School Partnerships Project. It explores the participation of some fifteen teachers and their principals from BRIDGE participating schools in the Delhi region of India, focusing on their professional development expectations of the BRIDGE program and accounting for their experiences in the program, in order to determine the program’s potential for the professional development of the teachers in this study.

Keywords: case study, Australia-Asia BRIDGE Project, teacher professional development, transnational learning networks

Procedia PDF Downloads 264
1596 Inter-Complex Dependence of Production Technique and Preforms Construction on the Failure Pattern of Multilayer Homo-Polymer Composites

Authors: Ashraf Nawaz Khan, R. Alagirusamy, Apurba Das, Puneet Mahajan

Abstract:

Thermoplastic-based fibre composites are acquiring a share of the market for conventional thermoset composites. However, replacing a thermoset with a thermoplastic composite has never been an easy task: the inherent high viscosity of thermoplastic resin leads to poor interface properties. In this work, a homo-polymer towpreg is produced through an electrostatic powder spray coating methodology. The produced flexible towpreg offers a low melt-flow distance during the consolidation of the laminate. The reduced melt-flow distance yields a homogeneous fibre/matrix distribution (and low void content) on consolidation. The composite laminates have been fabricated with two manufacturing techniques, the conventional film-stacking (FS) technique and the powder-coating (PC) technique. This helps in understanding the distinct responses of the produced laminates under applied load, since the laminates produced through the two techniques comprise the same constituent fibre and matrix (constant fibre volume fraction). The changed behaviour is observed mainly due to the different fibre/matrix configurations within the laminate. The interface adhesion influences the load transfer between the fibre and matrix; therefore, it influences the elastic, plastic, and failure patterns of the laminates. Moreover, the effect of preform geometries (plain weave and satin weave structures) is also studied for the corresponding composite laminates in terms of various mechanical properties. Fracture analysis is carried out to study the effect of resin at the interlacement points through micro-CT analysis. The PC laminate reveals considerably smaller matrix-rich and matrix-deficient zones in comparison to the FS laminate. Different loads (tensile, shear, fracture toughness, and drop-weight impact tests) are applied to the laminates, and the corresponding damage behaviour is analysed in the successive stages of failure. The PC composite has shown superior mechanical properties in comparison to the FS composite. The damage that occurs in the laminate is captured through SEM analysis to identify the prominent modes of failure, such as matrix cracking, fibre breakage, delamination, debonding, and other phenomena.

Keywords: composite, damage, fibre, manufacturing

Procedia PDF Downloads 136
1595 Application of an Artificial Neural Network to Determine the Risk of Malignant Tumors from the Images Resulting from the Asymmetry of Internal and External Thermograms of the Mammary Glands

Authors: Amdy Moustapha Drame, Ilya V. Germashev, E. A. Markushevskaya

Abstract:

Breast cancer is among the main problems of medicine; a significant number of women around the world continue to die from it. Therefore, the detection of malignant breast tumors is an urgent task. For many years, various technologies for detecting these tumors have been used, in particular thermal imaging, in order to determine different levels of breast cancer development. These periodic screening methods are a diagnostic tool for women and may become an alternative to older methods such as mammography. This article proposes a model for the identification of malignant neoplasms of the mammary glands by the asymmetry of internal and external thermal imaging fields.
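
An illustrative asymmetry feature (not the authors' pipeline): the difference between a thermogram and its mirror image, the kind of signal such a model could take as input:

```python
import numpy as np

rng = np.random.default_rng(0)
thermo = rng.random((64, 64))  # stand-in thermogram of one gland

# Left/right asymmetry map: deviation of the image from its mirror
asymmetry = np.abs(thermo - np.fliplr(thermo))
print("mean asymmetry:", asymmetry.mean())
```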

Keywords: asymmetry, breast cancer, tumors, deep learning, thermogram, convolutional transformation, classification

Procedia PDF Downloads 59
1594 A Bottom-Up Approach for the Synthesis of Highly Ordered Fullerene-Intercalated Graphene Hybrids

Authors: A. Kouloumpis, P. Zygouri, G. Potsi, K. Spyrou, D. Gournis

Abstract:

Much of the research effort on graphene focuses on its use as a building block for the development of new hybrid nanostructures with well-defined dimensions and behavior, suitable for applications in, among others, gas storage, heterogeneous catalysis, gas/liquid separations, nanosensing, and biology. Towards this aim, here we describe a new bottom-up approach, which combines self-assembly with the Langmuir-Schaefer technique, for the production of fullerene-intercalated graphene hybrid materials. This new method uses graphene nanosheets as a template for the grafting of various fullerene C60 molecules (pure C60, bromo-fullerenes C60Br24, and fullerols C60(OH)24) in a two-dimensional array, and allows for perfect layer-by-layer growth with control at the molecular level. Our film preparation approach involves a bottom-up layer-by-layer process that includes the formation of a hybrid organo-graphene Langmuir film hosting fullerene molecules within its interlayer spacing. A dilute water solution of chemically oxidized graphene (GO) was used as the subphase in the Langmuir-Blodgett deposition system, while an appropriate amino surfactant (that binds covalently with the GO) was applied for the formation of hybridized organo-GO. After the horizontal lift of a hydrophobic substrate, a surface modification of the GO platelets was performed by bringing the surface of the transferred Langmuir film into contact with a second amino surfactant solution (capable of interacting strongly with the fullerene derivatives). In the final step, the hybrid organo-graphene film was lowered into the solution of the appropriate fullerene derivative. Multilayer films were constructed by repeating this procedure. Hybrid fullerene-based thin films deposited on various hydrophobic substrates were characterized by X-ray diffraction (XRD), X-ray reflectivity (XRR), FTIR and Raman spectroscopies, Atomic Force Microscopy, and optical measurements. Acknowledgments: This research has been co-financed by the European Union (European Social Fund, ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) Research Funding Program: THALES. Investing in knowledge society through the European Social Fund (no. 377285).

Keywords: hybrids, graphene oxide, fullerenes, langmuir-blodgett, intercalated structures

Procedia PDF Downloads 326
1593 Human Brain Organoids-on-a-Chip Systems to Model Neuroinflammation

Authors: Feng Guo

Abstract:

Human brain organoids, 3D brain tissue cultures derived from human pluripotent stem cells, hold promising potential for modeling neuroinflammation in a variety of neurological diseases. However, challenges remain in generating standardized human brain organoids that can recapitulate key physiological features of the human brain. Here, this study presents a series of organoid-on-a-chip systems for generating better human brain organoids and modeling neuroinflammation. By employing 3D printing and microfluidic 3D cell culture technologies, these systems enable the reliable, scalable, and reproducible generation of human brain organoids. Compared with conventional protocols, this method increased neural progenitor proliferation and reduced the heterogeneity of human brain organoids. As a proof-of-concept application, the method was applied to model substance use disorders.

Keywords: human brain organoids, microfluidics, organ-on-a-chip, neuroinflammation

Procedia PDF Downloads 200
1592 Automatic Content Curation of Visual Heritage

Authors: Delphine Ribes Lemay, Valentine Bernasconi, André Andrade, Lara Défayes, Mathieu Salzmann, Frédéric Kaplan, Nicolas Henchoz

Abstract:

Digitization and preservation of large heritage collections induce high maintenance costs to keep up with technical standards and ensure sustainable access. Creating impactful usage is instrumental to justify the resources for long-term preservation. The Museum für Gestaltung of Zurich holds one of the biggest poster collections in the world, from which 52’000 posters were digitised. In the process of building a digital installation to valorize the collection, one objective was to develop an algorithm capable of predicting the next poster to show according to the ones already displayed. The work presented here describes the steps to build an algorithm able to automatically create sequences of posters reflecting associations performed by curators and professional designers. This challenge has similarities with the domain of song playlist algorithms. Recently, artificial intelligence techniques, and more specifically deep-learning algorithms, have been used to facilitate their generation. Promising results were found thanks to Recurrent Neural Networks (RNN) trained on manually generated playlists and paired with clusters of features extracted from songs. We used the same principles to create the proposed algorithm, but applied to a challenging medium: posters. First, a convolutional autoencoder was trained to extract features from the posters. The 52’000 digital posters were used as a training set. The poster features were then clustered. Next, an RNN learned to predict the next cluster according to the previous ones. The RNN training set was composed of poster sequences extracted from a collection of books from the Gestaltung Museum of Zurich dedicated to displaying posters. Finally, within the predicted cluster, the poster with the best proximity to the previous poster is selected. The mean square distance between poster features was used to compute this proximity. To validate the predictive model, we compared sequences of 15 posters produced by our model to randomly and manually generated sequences. Manual sequences were created by a professional graphic designer. We asked 21 participants working as professional graphic designers to sort the sequences from the one with the strongest graphic line to the one with the weakest and to motivate their answer with a short description. The sequences produced by the designer were ranked first 60%, second 25%, and third 15% of the time. The sequences produced by our predictive model were ranked first 25%, second 45%, and third 30% of the time. The sequences produced randomly were ranked first 15%, second 29%, and third 55% of the time. Compared to designer sequences, and as reported by participants, model and random sequences lacked thematic continuity. According to the results, the proposed model is able to generate better poster sequencing than random sampling, and occasionally our algorithm is even able to outperform a professional designer. As a next step, the proposed algorithm should include the possibility of creating sequences according to a selected theme. To conclude, this work shows the potential of artificial intelligence techniques to learn from existing content and provide a tool to curate large sets of data, with a permanent renewal of the presented content.
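
A toy of the sequencing core (the convolutional autoencoder, clustering, and curated book corpus are omitted): a GRU learns next-cluster prediction over sequences of cluster ids:

```python
import torch

torch.manual_seed(0)
n_clusters, emb_dim, hid_dim = 6, 16, 32

class NextCluster(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(n_clusters, emb_dim)
        self.rnn = torch.nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = torch.nn.Linear(hid_dim, n_clusters)
    def forward(self, seq):
        h, _ = self.rnn(self.emb(seq))
        return self.out(h[:, -1])  # logits for the next cluster

model = NextCluster()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

# Stand-in "curated" sequences: cluster i tends to be followed by (i+1) % 6
seqs = torch.stack([torch.arange(i, i + 5) % n_clusters for i in range(50)])
x, y = seqs[:, :-1], seqs[:, -1]
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("predicted next cluster:", model(x[:1]).argmax(1).item())
```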

Keywords: artificial intelligence, digital humanities, serendipity, design research

Procedia PDF Downloads 183
1591 The Role of Oral and Intestinal Microbiota in European Badgers

Authors: Emma J. Dale, Christina D. Buesching, Kevin R. Theis, David W. Macdonald

Abstract:

This study investigates the oral and intestinal microbiomes of wild-living European badgers (Meles meles) and will relate inter-individual differences to social contact networks, somatic and reproductive fitness, varying susceptibility to bovine tuberculosis (bTB), and olfactory advertisement. Badgers are an interesting model for this research, as they show great variation in body condition despite living in complex social networks and having access to the same resources. This variation in somatic fitness, in turn, affects breeding success, particularly in females. We postulate that microbiota have a central role to play in determining the success of an individual. Our preliminary results, characterising the microbiota of individual badgers, indicate unique compositions of microbiota communities within social groups of badgers. This basal information will inform further questions related to the extent to which microbiota influence fitness. Hitherto, the potential role of microbiota has not been considered in determining host condition, nor in other key fitness variables, namely communication and resistance to disease. Badgers deposit their faeces in communal latrines, which play an important role in olfactory communication. Odour profiles of anal and subcaudal gland secretions are highly individual-specific and encode information about group membership and fitness-relevant parameters, and their chemical composition is strongly dependent on symbiotic microbiota. As badgers sniff and lick (using their vomeronasal organ) and over-mark faecal deposits of conspecifics, these microbial communities can be expected to vary with social contact networks. This is particularly important in the context of bTB, where badgers are assumed to transmit bTB to cattle as well as conspecifics. Interestingly, we have found that some individuals are more susceptible to bTB than others. As acquired immunity, and thus potential susceptibility to infectious diseases, is known to depend also on symbiotic microbiota in other mustelids, a role particularly for oral microbiota cannot currently be ruled out as a potential explanation for inter-individual differences in susceptibility to bTB infection in badgers. Three times a year, badgers are caught in the context of a long-term population study that began in 1987. As all badgers receive an individual tattoo upon first capture, age, natal as well as previous and current social group membership, and other life-history parameters are known for all animals. Swabs (subcaudal 'scent gland', anal, genital, nose, mouth, and ear) and faecal samples will be taken from all individuals and stored at -80°C until processing. Microbial samples will be processed and identified at Wayne State University’s Theis (Host-Microbe Interactions) Lab, using high-throughput sequencing (16S rRNA-encoding gene amplification and sequencing). Acknowledgments: Gas chromatography/mass spectrometry analyses (in the context of olfactory communication) will be performed through an established collaboration with Dr. Veronica Tinnesand at Telemark University, Norway.

Keywords: communication, energetics, fitness, free-ranging animals, immunology

Procedia PDF Downloads 186
1590 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain’s subconscious and conscious functions work, we must conquer the physics of Unity, which leads to duality’s algorithm, where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like ‘time is relative,’ but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles its measurement around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds, the eyes dump their sensory data into the thalamus. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation. Basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available because other observation times are slower than thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What’s interesting is that time dilation is not the problem; it’s the solution. Einstein said there was no universal time.

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 123
1589 Automatic Method for Classification of Informative and Noninformative Images in Colonoscopy Video

Authors: Nidhal K. Azawi, John M. Gauch

Abstract:

Colorectal cancer is one of the leading causes of cancer death in the US and the world, which is why millions of colonoscopy examinations are performed annually. Unfortunately, noise, specular highlights, and motion artifacts corrupt many images in a typical colonoscopy exam. The goal of our research is to produce automated techniques to detect and correct or remove these noninformative images from colonoscopy videos, so physicians can focus their attention on informative images. In this research, we first automatically extract features from images. Then we use machine learning and deep neural networks to classify colonoscopy images as either informative or noninformative. Our results show that we achieve image classification accuracy between 92% and 98%. We also show how the removal of noninformative images, together with image alignment, can aid in the creation of image panoramas and other visualizations of colonoscopy images.
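
A hedged sketch of the classification stage with two plausible stand-in features, a focus proxy and a specular-highlight fraction; the deep-network variant and real colonoscopy frames are not reproduced:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def frame_features(frame):
    # Variance of a second-difference response as a focus/blur proxy,
    # plus the fraction of near-saturated pixels (specular highlights)
    focus = np.abs(np.diff(frame, 2, axis=0)).var()
    highlights = (frame > 0.95).mean()
    return [focus, highlights]

# Stand-in frames: high-contrast "informative" vs flat "noninformative"
sharp = rng.random((100, 64, 64))
flat = rng.random((100, 64, 64)) * 0.2 + 0.4
X = np.array([frame_features(f) for f in np.concatenate([sharp, flat])])
y = np.array([1] * 100 + [0] * 100)

print(LogisticRegression().fit(X, y).score(X, y))
```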

Keywords: colonoscopy classification, feature extraction, image alignment, machine learning

Procedia PDF Downloads 250
1588 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning

Authors: Hossein Havaeji, Tony Wong, Thien-My Dao

Abstract:

1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) uses BT to drive SCS transparency, security, durability, and process integrity, as SCS data is not always visible, available, or trusted. The cost of operating BT in the SCS is a common problem in many organizations. These costs must be estimated, as they can affect existing cost control strategies. To account for system and deployment costs, the following hurdle must be overcome: in most cases, the costs of developing and running BT in an SCS are not yet clear. Many industries aiming to use BT pay special attention to BT installation cost, which has a direct impact on the total cost of the SCS, and predicting it may help managers decide whether BT offers an economic advantage. The first purpose of this research is to identify the main BT installation cost components in an SCS needed for deeper cost analysis, and to categorize them in more detail so that they can be used in the prediction process. The second objective is to determine a suitable Supervised Learning technique for predicting the costs of developing and running BT in an SCS in a particular case study. The last aim is to investigate how the cost of running BT contributes to the total cost of the SCS. 2. Work Performed: Applied successfully in various fields, Supervised Learning is a method in which a dataset is framed, prepared, and used to train a model; the resulting model predicts an outcome measurement from previously unseen input data. The following steps are conducted to pursue the objectives of this study. The first step is a literature review to identify the different cost components of BT installation in an SCS. Based on the literature review, we choose Supervised Learning methods suitable for BT installation cost prediction in an SCS; according to the literature, algorithms that provide a powerful tool to classify BT installation components and predict BT installation cost include the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models constitutes the third step. Finally, we identify the model with the best predictive performance for finding the minimum BT installation cost in an SCS. 3. Expected Results and Conclusion: This study proposes a cost prediction of BT installation in an SCS with the help of Supervised Learning algorithms. We first select a case study in the field of BT-enabled SCS, and then use several Supervised Learning algorithms to predict BT installation cost in the SCS. We continue by finding the best predictive performance for developing and running BT in an SCS. Finally, the paper will be presented at the conference.
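As a hedged sketch of the second step (the cost-component names and the synthetic data below are illustrative assumptions, not figures from the study), the fragment fits one of the cited algorithms, Support Vector Regression, to predict total installation cost from a handful of component features and reports a cross-validated R² score.

```python
# Hedged sketch (feature names and synthetic data are assumptions, not
# figures from the paper): Support Vector Regression predicting total BT
# installation cost in an SCS from a few cost-component features.
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
# Hypothetical components: node count, integration effort (h),
# licence count, staff training (h).
X = rng.uniform([5, 100, 1, 10], [500, 5000, 50, 500], size=(n, 4))
cost = X @ np.array([120.0, 45.0, 800.0, 60.0]) + rng.normal(0, 5000, n)

# Standardize both features and target so the RBF kernel behaves well.
model = TransformedTargetRegressor(
    regressor=make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
    transformer=StandardScaler(),
)
print("cross-validated R^2:", cross_val_score(model, X, cost, cv=5).mean())
```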

Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning

Procedia PDF Downloads 119
1587 Building a Blockchain-Based Internet of Things

Authors: Rob van den Dam

Abstract:

Today’s Internet of Things (IoT) comprises more than a billion intelligent devices, connected via wired and wireless communications. The expected proliferation of hundreds of billions more places us at the threshold of a transformation sweeping across the communications industry. Yet we found that the IoT architectures and solutions that currently work for billions of devices won’t necessarily scale to tomorrow’s hundreds of billions of devices because of high costs, lack of privacy, lack of future-proofing, limited functional value, and broken business models. As the IoT scales exponentially, decentralized networks have the potential to reduce infrastructure and maintenance costs for manufacturers. Decentralization also promises increased robustness by removing the single points of failure that exist in traditional centralized networks. By shifting the power in the network from the center to the edges, devices gain greater autonomy and can become points of transactions and economic value creation for owners and users. To validate the underlying technology vision, IBM jointly developed with Samsung Electronics an autonomous, decentralized, peer-to-peer proof-of-concept (PoC). The primary objective of this PoC was to establish a foundation on which to demonstrate several capabilities that are fundamental to building a decentralized IoT. Though many commercial systems in the future will exist as hybrid centralized-decentralized models, the PoC demonstrated a fully distributed system. The PoC (a) validated the future vision for decentralized systems to extensively augment today’s centralized solutions, (b) demonstrated foundational IoT tasks without the use of centralized control, and (c) proved that empowered devices can engage autonomously in marketplace transactions. The PoC opens the door for the communications and electronics industry to further explore the challenges and opportunities of potential hybrid models that can address the complexity and variety of requirements posed by an internet that continues to scale. Contents: (a) the new approach for an IoT that will be secure and scalable, (b) the three foundational technologies that are key for the future IoT, (c) the related business models and user experiences, (d) how such an IoT will create an 'Economy of Things', (e) the role of users, devices, and industries in the IoT future, (f) the winners in the IoT economy.

Keywords: IoT, internet, wired, wireless

Procedia PDF Downloads 335
1586 An Approach towards a Smart Future: ICT Infrastructure Integrated into Urban Water Networks

Authors: Ahsan Ali, Mayank Ostwal, Nikhil Agarwal

Abstract:

According to a World Bank report, millions of people across the globe still do not have access to improved water services. With the uninterrupted growth of cities and urban populations, there is a mounting need to safeguard the sustainable expansion of cities, ensuring the efficient functioning of urban components and high living standards for residents. The water and sanitation network of an urban development is one of the most essential parts of its critical infrastructure. Growth in the urban population leads to increased water demand, and thus local water resources are severely strained. 'Smart water' refers to water and wastewater infrastructure that can manage limited resources and the energy used to transport them; it enables the sustainable consumption of water resources through a coordinated water management system that integrates Information and Communication Technology (ICT) solutions, aimed at maximizing socioeconomic benefits without compromising environmental values. This paper presents a case study from a medium-sized city in northwestern Pakistan. Currently, water is being contaminated due to the proximity between water and sewer pipelines in the study area, leading to public health issues, and the scarce groundwater is also being polluted by unsafe grey water infiltration. This research addresses the design of a smart urban water network that integrates ICT with the urban water network. The proximity between the existing water supply network and the sewage network is analyzed, and a design for a new water supply system is proposed; real-time maps of the existing urban utility networks are produced with the help of GIS applications. The issue of grey water infiltration is addressed by providing sustainable solutions based on locally available materials, keeping in mind the economic conditions of the area. To deal with the current growth of the urban population, it is vital to develop new water resources; hence, distinctive and cost-effective procedures to harness rainwater are suggested as part of the study.
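A minimal sketch of the proximity analysis (the coordinates and the 1 m clearance threshold are illustrative assumptions, not survey data): using Shapely geometries as stand-ins for GIS layers, each water main is checked against the sewer network and flagged when the separation falls below the assumed safe clearance.

```python
# Illustrative proximity check (synthetic coordinates and an assumed
# 1 m clearance threshold): flag water-supply segments that run too
# close to the sewer network, with Shapely as a stand-in for GIS layers.
from shapely.geometry import LineString

water_mains = {
    "W1": LineString([(0, 0), (50, 0)]),
    "W2": LineString([(0, 10), (50, 10)]),
}
sewer_lines = [LineString([(0, 0.5), (50, 0.5)])]

CLEARANCE_M = 1.0  # assumed minimum safe separation

for name, main in water_mains.items():
    gap = min(main.distance(sewer) for sewer in sewer_lines)
    status = "REROUTE" if gap < CLEARANCE_M else "ok"
    print(f"{name}: nearest sewer {gap:.2f} m -> {status}")
```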

Keywords: GIS, smart water, sustainability, urban water management

Procedia PDF Downloads 214
1585 A Semantic E-Learning and E-Assessment System of Learners

Authors: Wiem Ben Khalifa, Dalila Souilem, Mahmoud Neji

Abstract:

The evolution of the Social Web and the Semantic Web leads us to ask how the personalization of learning can be supported by means of intelligent filtering of educational resources published on digital networks. We recommend personalized learning paths articulated around an initial course defined upstream. After reviewing the context and the stakes of personalization, we also suggest anchoring the personalization of learning in a community of interest within a group of learners enrolled in the same training. This reflection is supported by the presentation of an active, semantic learning system dedicated to building personalized, made-to-measure courses delivered in due time.
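A minimal sketch of the kind of ontology-driven filtering alluded to above (a toy illustration under assumed data, not the authors’ system): a resource matches a learner’s interest if its topic is the interest itself or any narrower concept in a small subject ontology.

```python
# Toy sketch (an assumption, not the authors' system): filter learning
# resources by walking a small subject ontology, so a learner interested
# in "semantic web" also matches resources tagged with narrower concepts.
ONTOLOGY = {  # concept -> narrower concepts (illustrative)
    "semantic web": ["ontology", "rdf", "sparql"],
    "e-learning": ["e-assessment", "adaptive courses"],
}

def narrower(concept):
    """All concepts subsumed by `concept`, including itself."""
    found = {concept}
    for child in ONTOLOGY.get(concept, []):
        found |= narrower(child)
    return found

def filter_resources(resources, interest):
    wanted = narrower(interest)
    return [r for r in resources if r["topic"] in wanted]

resources = [
    {"title": "Intro to RDF", "topic": "rdf"},
    {"title": "Grading rubrics", "topic": "e-assessment"},
]
print(filter_resources(resources, "semantic web"))  # -> the RDF resource
```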

Keywords: Semantic Web, semantic system, ontology, evaluation, e-learning

Procedia PDF Downloads 332