Search results for: extracting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 403

73 Gestalt in Music and Brain: A Non-Linear Chaos Based Study with Detrended/Adaptive Fractal Analysis

Authors: Shankha Sanyal, Archi Banerjee, Sayan Biswas, Sourya Sengupta, Sayan Nag, Ranjan Sengupta, Dipak Ghosh

Abstract:

The term ‘gestalt’ is widely used in psychology to describe the tendency of the human mind to perceive an object not in parts but as a 'unified' whole. Music, in general, is polyphonic, i.e. a combination of a number of pure tones (frequencies) mixed together in a manner that sounds harmonious. Studying the human brain's response to different frequency groups of an acoustic signal can give excellent insight into the neural and functional architecture of brain functions. Hence, the study of music cognition using neuro-biosensors is a rapidly emerging field of research. In this work, we have tried to analyze the effect of different frequency bands of music on the various frequency rhythms of the human brain obtained from EEG data. Four widely popular Rabindrasangeet clips were subjected to the wavelet transform method to extract five resonant frequency bands from the original music signal. These frequency bands were initially analyzed with detrended/adaptive fractal analysis (DFA/AFA) methods. A listening test was conducted on a pool of 100 respondents to assess the frequency band in which the music becomes non-recognizable. Next, these resonant frequency bands were presented to 20 subjects as auditory stimuli, and EEG signals were recorded simultaneously at 19 different locations of the brain. The recorded EEG signals were noise-cleaned and again subjected to the DFA/AFA techniques on the alpha, theta and gamma frequency ranges. Thus, we obtained the scaling exponents from the two methods in the alpha, theta and gamma EEG rhythms corresponding to different frequency bands of music. From the analysis of the music signal, it is seen that loss of recognition is proportional to the loss of long-range correlation in the signal. From the EEG signal analysis, we obtain frequency-specific arousal-based responses in different lobes of the brain as well as in specific EEG bands corresponding to the musical stimuli. In this way, we look to identify a specific frequency band beyond which the music becomes non-recognizable and below which, in spite of the absence of other bands, the music is perceivable to the audience. This revelation can be of immense importance in the field of cognitive music therapy and for researchers of creativity.
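A minimal sketch of the detrended fluctuation analysis (DFA) scaling-exponent computation referred to above, assuming one noise-cleaned EEG rhythm is available as a NumPy array; the window sizes and the synthetic signal are illustrative, not the study's data.

```python
import numpy as np

def dfa_exponent(signal, scales=(16, 32, 64, 128, 256)):
    """Estimate the DFA scaling (Hurst-like) exponent of a 1-D signal."""
    x = np.asarray(signal, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated, mean-removed profile
    flucts = []
    for s in scales:
        n_win = len(profile) // s
        f2 = []
        for i in range(n_win):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, 1)     # local linear trend
            detrended = seg - np.polyval(coeffs, t)
            f2.append(np.mean(detrended ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # scaling exponent = slope of log F(s) versus log s
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

# Usage (synthetic stand-in for an EEG rhythm): alpha near 1.0 suggests long-range correlation
eeg_channel = np.random.randn(4096)
print(dfa_exponent(eeg_channel))
```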

Keywords: AFA, DFA, EEG, gestalt in music, Hurst exponent

Procedia PDF Downloads 332
72 Exploring the Use of Mobile Technologies in Schools in Oman; Opportunities and Challenges

Authors: Muna Al-Siyabi

Abstract:

When students bring mobile devices into the classroom, the devices are frequently viewed as distractions from daily educational practice rather than as tools for developing twenty-first-century skills. Such skills may involve sorting and extracting information, solving problems and evaluating results. Mobile devices, such as smartphones and tablets, have great potential for learning. Currently, schools and universities are embracing these devices with the aim of enhancing education. In Oman, mobile technologies have been introduced in the last ten years in two private schools to keep pace with technological advancement. The researcher set out to examine the benefits and challenges of employing mobile learning in these two schools with the aim of informing the implementation of mobile technologies in more schools in Oman. A total of 16 teachers and 237 students responded to questionnaires, and seven teachers and three student focus groups (of 13 students) were involved in interviews to explore how mobile technologies are used in these two schools. The questionnaires indicated that 87.5% of the sample teachers considered mobile learning helpful for learning and teaching. The teachers believed that mobile learning could promote learning, help teaching, offer vast resources, motivate students and save lesson time. Moreover, interviews with the teachers showed that mobile learning could offer several benefits such as immediacy, saving lesson time, supporting differentiation, opportunities to learn anywhere, showing understanding, and offering vast resources. Most of the sample were also facing technical and classroom management challenges when employing mobile technologies in their lessons. In the interviews, most teachers complained of the difficulty of controlling their classes when students had mobile devices, which distracted their attention. They reported that their students were distracted by games and needed to be trained to use mobile technologies for educational purposes. Most teachers recommended that certain parameters or restrictions be established in any mobile learning project to restrict the usage of mobile technologies to educational purposes. In addition, teachers emphasised that students needed to be trained on the advantages and limitations of mobile technologies. Teachers also recommended that pedagogical training for using mobile technologies be considered when implementing mobile learning in schools. These findings reveal that, despite the challenges of managing their classes, teachers believe that mobile learning has great potential for learning. These results imply that mobile learning can be effectively implemented in schools in Oman if certain factors and restrictions are considered.

Keywords: effective implementation, challenges, mobile learning, opportunities

Procedia PDF Downloads 216
71 Cascade Multilevel Inverter-Based Grid-Tie Single-Phase and Three-Phase-Photovoltaic Power System Controlling and Modeling

Authors: Syed Masood Hussain

Abstract:

An effective control method, including system-level control and pulse width modulation, for a quasi-Z-source cascade multilevel inverter (qZS-CMI) based grid-tie photovoltaic (PV) power system is proposed. The system-level control achieves grid-tie current injection, independent maximum power point tracking (MPPT) for separate PV panels, and dc-link voltage balance for all quasi-Z-source H-bridge inverter (qZS-HBI) modules. A recent upsurge in the study of photovoltaic (PV) power generation has emerged, since PV panels directly convert solar radiation into electric power without harming the environment. However, the stochastic fluctuation of solar power is inconsistent with the desired stable power injected into the grid, owing to variations of solar irradiation and temperature. To fully exploit the solar energy, extracting the PV panels’ maximum power and feeding it into the grid at unity power factor become most important. Important contributions in this direction have been made by the cascade multilevel inverter (CMI). Nevertheless, the H-bridge inverter (HBI) module lacks a boost function, so the inverter kVA rating has to be doubled for a PV voltage range of 1:2, and the different PV panel output voltages result in imbalanced dc-link voltages. A front-end dc–dc boost converter could address this, but each HBI module then becomes a two-stage inverter, and the many extra dc–dc converters not only increase the complexity of the power circuit and control and the system cost but also decrease the efficiency. Recently, Z-source/quasi-Z-source cascade multilevel inverter (ZS/qZS-CMI) based PV systems were proposed. They possess the advantages of both traditional CMI and Z-source topologies. In order to properly operate the ZS/qZS-CMI, power injection, independent control of the dc-link voltages, and pulse width modulation (PWM) are necessary. The main contributions of this paper include: 1) a novel multilevel space vector modulation (SVM) technique for the single-phase qZS-CMI, implemented without additional resources; and 2) a grid-connected control for the qZS-CMI based PV system, where all the PV panel voltage references from their independent MPPTs are used to control the grid-tie current, together with a dual-loop dc-link peak voltage control.
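As an illustration of the independent per-panel MPPT mentioned above, here is a minimal perturb-and-observe sketch; the step size and measurement variables are illustrative assumptions, not the paper's specific qZS-CMI control loop.

```python
def perturb_and_observe(v, p, v_prev, p_prev, v_ref, step=0.5):
    """One perturb-and-observe MPPT iteration; returns the updated voltage reference.

    v, p           : latest panel voltage and power measurements
    v_prev, p_prev : measurements from the previous iteration
    v_ref          : voltage reference handed to the inner control loop
    step           : perturbation size in volts (illustrative value)
    """
    if p > p_prev:
        # Power increased: keep perturbing in the same direction.
        v_ref += step if v > v_prev else -step
    else:
        # Power dropped: reverse the perturbation direction.
        v_ref -= step if v > v_prev else -step
    return v_ref
```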

Keywords: quasi-Z-source inverter, photovoltaic power system, space vector modulation, cascade multilevel inverter

Procedia PDF Downloads 543
70 From Theory to Practice: Harnessing Mathematical and Statistical Sciences in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in diverse domains has created an urgent need for effective utilization of mathematical and statistical sciences in data analytics. This abstract explores the journey from theory to practice, emphasizing the importance of harnessing mathematical and statistical innovations to unlock the full potential of data analytics. Drawing on a comprehensive review of existing literature and research, this study investigates the fundamental theories and principles underpinning mathematical and statistical sciences in the context of data analytics. It delves into key mathematical concepts such as optimization, probability theory, statistical modeling, and machine learning algorithms, highlighting their significance in analyzing and extracting insights from complex datasets. Moreover, this abstract sheds light on the practical applications of mathematical and statistical sciences in real-world data analytics scenarios. Through case studies and examples, it showcases how mathematical and statistical innovations are being applied to tackle challenges in various fields such as finance, healthcare, marketing, and social sciences. These applications demonstrate the transformative power of mathematical and statistical sciences in data-driven decision-making. The abstract also emphasizes the importance of interdisciplinary collaboration, as it recognizes the synergy between mathematical and statistical sciences and other domains such as computer science, information technology, and domain-specific knowledge. Collaborative efforts enable the development of innovative methodologies and tools that bridge the gap between theory and practice, ultimately enhancing the effectiveness of data analytics. Furthermore, ethical considerations surrounding data analytics, including privacy, bias, and fairness, are addressed within the abstract. It underscores the need for responsible and transparent practices in data analytics, and highlights the role of mathematical and statistical sciences in ensuring ethical data handling and analysis. In conclusion, this abstract highlights the journey from theory to practice in harnessing mathematical and statistical sciences in data analytics. It showcases the practical applications of these sciences, the importance of interdisciplinary collaboration, and the need for ethical considerations. By bridging the gap between theory and practice, mathematical and statistical sciences contribute to unlocking the full potential of data analytics, empowering organizations and decision-makers with valuable insights for informed decision-making.

Keywords: data analytics, mathematical sciences, optimization, machine learning, interdisciplinary collaboration, practical applications

Procedia PDF Downloads 93
69 Trajectory Tracking of Fixed-Wing Unmanned Aerial Vehicle Using Fuzzy-Based Sliding Mode Controller

Authors: Feleke Tsegaye

Abstract:

This work mainly focuses on trajectory tracking of a fixed-wing unmanned aerial vehicle (FWUAV) using a fuzzy-based sliding mode controller (FSMC) for surveillance applications. Unmanned aerial vehicles (UAVs) are general-purpose aircraft built to fly autonomously. This technology is applied in a variety of sectors, including the military, to improve defense, surveillance, and logistics. The model of a FWUAV is complex due to its high non-linearity and coupling effects. In this thesis, input decoupling is achieved by extracting the dominant inputs during controller design and treating the remaining inputs as uncertainty. The proper and steady flight maneuvering of UAVs under uncertain and unstable circumstances is the most critical problem for researchers studying UAVs. An FSMC technique is suggested to tackle the complexity of FWUAV systems. The trajectory tracking control algorithm primarily uses the sliding-mode (SM) variable structure control method to address the system’s control issue. In the SM control, a fuzzy logic control (FLC) algorithm is utilized in place of the discontinuous switching term of the SM controller to reduce the chattering impact. In the reaching and sliding stages of SM control, Lyapunov theory is used to assure finite-time convergence. A comparison between the conventional SM controller and the suggested controller is made with respect to the chattering effect as well as tracking performance. It is evident that the chattering is effectively reduced, the suggested controller provides a quick response with a minimum steady-state error, and the controller is robust in the face of unknown disturbances. The designed control strategy is simulated with the nonlinear model of the FWUAV in the MATLAB®/Simulink® environment. The simulation results show that the suggested controller operates effectively, maintains the aircraft’s stability, and holds the aircraft’s targeted flight path despite the presence of uncertainty and disturbances.
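A minimal sketch of a first-order sliding-mode control law of the kind discussed above, with the discontinuous sign term replaced by a smooth tanh approximation standing in for the fuzzy logic block; the gains, sliding-surface slope, and boundary-layer width are illustrative assumptions rather than the thesis's tuned values.

```python
import numpy as np

def smc_control(e, e_dot, lam=2.0, k=5.0, phi=0.1):
    """Sliding-mode control law u = -k * switch(s) on the surface s = e_dot + lam * e.

    e, e_dot : tracking error and its derivative
    lam      : sliding-surface slope
    k        : switching gain (must dominate the bounded uncertainty)
    phi      : boundary-layer width; tanh(s/phi) smooths the sign(s) term,
               which is the chattering-reduction role the fuzzy block plays in the FSMC
    """
    s = e_dot + lam * e
    return -k * np.tanh(s / phi)

# Usage: control input for a 0.2 rad tracking error with a small error rate
print(smc_control(0.2, -0.05))
```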

Keywords: fixed-wing UAVs, sliding mode controller, fuzzy logic controller, chattering, coupling effect, surveillance, finite-time convergence, Lyapunov theory, flight path

Procedia PDF Downloads 57
68 Assessing Local Authorities’ Interest in Addressing Urban Challenges through Nature Based Solutions in Romania

Authors: Athanasios A. Gavrilidis, Mihai R. Nita, Larissa N. Stoia, Diana A. Onose

Abstract:

Contemporary global environmental challenges must be addressed primarily at the local level. Cities are under continuous pressure as they must ensure a high quality of life for their citizens and at the same time adapt to and address specific environmental issues. Innovative solutions using natural features or mimicking natural systems are endorsed by the scientific community as efficient approaches both for mitigating the effects of climate change and the decrease in environmental quality and for maintaining high standards of living for urban dwellers. The aim of this study was to assess whether Romanian city authorities are considering nature-based innovations as solutions for their planning, management, and environmental issues. Data were gathered by applying 140 questionnaires to urban authorities throughout the country. The questionnaire was designed to assess local policy makers' perspective on the efficiency of nature-based innovations as a tool to address specific challenges. It also focused on extracting data about financing sources and the challenges they must overcome to adopt nature-based approaches. The results gathered from the municipalities participating in our study were statistically processed and revealed that Romanian city managers acknowledge the benefits of nature-based innovations, but investments in this sector are not among their top priorities. More than 90% of the selected cities agreed that in the last 10 years their major concern was to expand the grey infrastructure (roads and public amenities) using traditional approaches. When asked how they would react if faced with different socio-economic and environmental challenges, local urban managers indicated investments in nature-based solutions as a priority only in the case of biodiversity loss and extreme weather, while for the other 14 proposed scenarios they would embrace a business-as-usual approach. Our study indicates that while new concepts of sustainable urban planning emerge within the scientific community, local authorities need more time to understand and implement them. Without the proper knowledge, personnel, policies, or dedicated budgets, local administrators will not embrace nature-based innovations as solutions to their challenges.

Keywords: nature based innovations, perception analysis, policy making, urban planning

Procedia PDF Downloads 174
67 Sustainable Valorization of Wine Production Waste: Unlocking the Potential of Grape Pomace and Lees in the Vinho Verde Region

Authors: Zlatina Genisheva, Pedro Ferreira-Santos, Margarida Soares, Cândida Vilarinho, Joana Carvalho

Abstract:

The wine industry produces significant quantities of waste, much of which remains underutilized as a potential raw material. Typically, this waste is either discarded in the fields or incinerated, leading to environmental concerns. By-products of wine production, like lees and grape pomace, are readily available at relatively low cost and hold promise as raw materials for biochemical conversion into valuable products. Reusing these waste materials is crucial, not only for reducing environmental impact but also for enhancing profitability. The Vinhos Verdes demarcated region, the largest wine-producing area in Portugal, has remained relatively stagnant over time. This project aims to offer an alternative income source for producers in the region while also expanding the limited existing research in this area. The main objective of this project is the study of the sustainable valorization of grape pomace and lees from the production of DOC Vinho Verde. Extraction tests were performed to obtain high-value compounds, targeting phenolic compounds from grape pomace and protein-rich extracts from lees. An environmentally friendly technique, microwave extraction, was used for this process. This method is not only efficient but also aligns with the principles of green chemistry, reducing the use of harmful solvents and minimizing energy consumption. The findings from this study have the potential to open new revenue streams for the region’s wine producers while promoting environmental sustainability. The optimal conditions for extracting proteins from lees involve the use of NaOH at 150 °C. Regardless of the solvent employed, the ideal temperature for obtaining extracts rich in polyphenol compounds and exhibiting strong antioxidant activity is also 150 °C. For grape pomace, extracts with a high concentration of polyphenols and significant antioxidant properties were obtained at 210 °C. However, the highest total tannin concentrations were achieved at 150 °C, while the maximum total flavonoid content was obtained at 170 °C.

Keywords: antioxidants, circular economy, polyphenol compounds, waste valorization

Procedia PDF Downloads 17
66 Re-identification Risk and Mitigation in Federated Learning: Human Activity Recognition Use Case

Authors: Besma Khalfoun

Abstract:

In many current Human Activity Recognition (HAR) applications, users' data is frequently shared and centrally stored by third parties, posing a significant privacy risk. This practice makes these entities attractive targets for extracting sensitive information about users, including their identity, health status, and location, thereby directly violating users' privacy. To tackle the issue of centralized data storage, a relatively recent paradigm known as federated learning has emerged. In this approach, users' raw data remains on their smartphones, where they train the HAR model locally. However, users still share updates of their local models originating from raw data. These updates are vulnerable to several attacks designed to extract sensitive information, such as determining whether a data sample is used in the training process, recovering the training data with inversion attacks, or inferring a specific attribute or property from the training data. In this paper, we first introduce PUR-Attack, a parameter-based user re-identification attack developed for HAR applications within a federated learning setting. It involves associating anonymous model updates (i.e., local models' weights or parameters) with the originating user's identity using background knowledge. PUR-Attack relies on a simple yet effective machine learning classifier and produces promising results. Specifically, we have found that by considering the weights of a given layer in a HAR model, we can uniquely re-identify users with an attack success rate of almost 100%. This result holds when considering a small attack training set and various data splitting strategies in the HAR model training. Thus, it is crucial to investigate protection methods to mitigate this privacy threat. Along this path, we propose SAFER, a privacy-preserving mechanism based on adaptive local differential privacy. Before sharing the model updates with the FL server, SAFER adds the optimal noise based on the re-identification risk assessment. Our approach can achieve a promising tradeoff between privacy, in terms of reducing re-identification risk, and utility, in terms of maintaining acceptable accuracy for the HAR model.
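A minimal sketch of the adaptive local-noise idea described above, where a client clips and perturbs its model update before sending it to the server; the Gaussian mechanism and the scaling of the noise by an assessed risk score are illustrative assumptions, not the exact SAFER mechanism.

```python
import numpy as np

def privatize_update(weights, base_sigma=0.01, risk_score=1.0, clip_norm=1.0):
    """Clip a local model update and add Gaussian noise scaled by an assessed risk.

    weights    : flattened model update (NumPy array) produced by local training
    base_sigma : baseline noise standard deviation
    risk_score : re-identification risk assessment in [0, 1]; higher risk -> more noise
    clip_norm  : L2 clipping bound applied before noising
    """
    w = np.asarray(weights, dtype=float)
    norm = np.linalg.norm(w)
    if norm > clip_norm:
        w = w * (clip_norm / norm)            # bound each client's contribution
    sigma = base_sigma * (1.0 + risk_score)   # illustrative adaptive scaling
    return w + np.random.normal(0.0, sigma, size=w.shape)

# Usage: perturb a toy update before sharing it with the FL server
print(privatize_update(np.ones(5) * 0.1, risk_score=0.8))
```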

Keywords: federated learning, privacy risk assessment, re-identification risk, privacy preserving mechanisms, local differential privacy, human activity recognition

Procedia PDF Downloads 11
65 High Performance Liquid Cooling Garment (LCG) Using ThermoCore

Authors: Venkat Kamavaram, Ravi Pare

Abstract:

Modern warfighters experience extreme environmental conditions in many of their operational and training activities. In temperatures exceeding 95°F, the body can no longer regulate its temperature through convection and radiation. In this case, the only cooling mechanism is evaporation. However, evaporative cooling is often compromised by excessive humidity. Natural cooling mechanisms can be further compromised by clothing and protective gear, which trap hot air and moisture close to the body. Creating an efficient heat-extraction apparel system that is also lightweight and does not hinder the dexterity or mobility of personnel working in extreme temperatures is a difficult technical challenge, and one that needs to be addressed to increase the probability of future success for the US military. To address this challenge, Oceanit Laboratories, Inc. has developed and patented a Liquid Cooled Garment (LCG) more effective than any on the market today. Oceanit’s LCG is a form-fitting garment with a network of thermally conductive tubes that extracts body heat and can be worn under all authorized and chemical/biological protective clothing. Oceanit specifically designed and developed ThermoCore®, a thermally conductive polymer, for use in this apparel, optimizing the product for thermal conductivity, mechanical properties, manufacturability, and performance temperatures. Thermal manikin tests were conducted in accordance with ASTM F2371, Standard Test Method for Measuring the Heat Removal Rate of Personal Cooling Systems Using a Sweating Heated Manikin, in an environmental chamber using a 20-zone sweating thermal manikin. The manikin test results have shown that Oceanit’s LCG provides significantly higher heat extraction under the same environmental conditions than the currently fielded Environmental Control Vest (ECV), while at the same time reducing the weight. Oceanit’s LCG vests performed nearly 30% better in extracting body heat while weighing 15% less than the ECV. There are no cooling garments on the market that provide the same thermal extraction performance, form factor, and reduced weight as Oceanit’s LCG. The two cooling garments that are commercially available and most commonly used are the Environmental Control Vest (ECV) and the Microclimate Cooling Garment (MCG).

Keywords: thermally conductive composite, tubing, garment design, form fitting vest, thermocore

Procedia PDF Downloads 114
64 Using Corpora in Semantic Studies of English Adjectives

Authors: Oxana Lukoshus

Abstract:

The methods of corpus linguistics, a well-established field of research, are being increasingly applied in cognitive linguistics. Corpus data are especially useful for various quantitative studies of grammatical and other aspects of language. The main objective of this paper is to demonstrate how present-day corpora can be applied in semantic studies in general and in semantic studies of adjectives in particular. Polysemantic adjectives have been the subject of numerous studies, but most of them have been carried out on dictionaries. Undoubtedly, dictionaries are viewed as one of the basic data sources, but only at the initial steps of research: the author usually starts with the analysis of the lexicographic data, after which s/he comes up with a hypothesis. In the research conducted, the three polysemantic synonyms true, loyal and faithful have been analyzed in terms of the differences and similarities in their semantic structure. A corpus-based approach to the study of the above-mentioned adjectives involves the following. After the analysis of the dictionary data, the following corpora were consulted to study the distributional patterns of the words under study: the British National Corpus (BNC) and the Corpus of Contemporary American English (COCA). These corpora are continually updated and contain thousands of examples of the words under research, which makes them a useful and convenient data source. For the purpose of this study, there were no special requirements regarding the genre, mode or time of the texts included in the corpora. Out of the range of possibilities offered by corpus-analysis software (e.g. word lists, statistics of word frequencies, etc.), the most useful tool for the semantic analysis was extracting a list of co-occurrences for the given search words. Searching by lemmas, e.g. true, true to, and grouping the results by lemmas has proved to be the most efficient corpus feature for the adjectives under study. Following the search process, the corpora provided a list of co-occurrences, which were then analyzed and classified. Not every co-occurrence was relevant for the analysis. For example, phrases like ‘An enormous sense of responsibility to protect the minds and hearts of the faithful from incursions by the state was perceived to be the basic duty of the church leaders’ or ‘True,’ said Phoebe, ‘but I'd probably get to be a Union Official immediately’ were left out, as in the first example ‘the faithful’ is a substantivized adjective and in the second ‘true’ is used alone, with no other parts of speech. The subsequent analysis of the corpus data provided the grounds for the distribution groups of the adjectives under study, which were then investigated with the help of a semantic experiment. To sum up, the corpus-based approach has proved to be a powerful, reliable and convenient tool for obtaining data for further semantic study.
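A minimal sketch of the kind of co-occurrence extraction described above, counting the words that appear within a small window of a target adjective in a plain-text corpus; the corpus file name and window size are hypothetical, and real BNC/COCA queries go through those corpora's own interfaces.

```python
import re
from collections import Counter

def collocates(corpus_path, target, window=2):
    """Count words occurring within `window` tokens of `target` (case-insensitive)."""
    counts = Counter()
    with open(corpus_path, encoding="utf-8") as fh:
        for line in fh:
            tokens = re.findall(r"[a-z']+", line.lower())
            for i, tok in enumerate(tokens):
                if tok == target:
                    lo, hi = max(0, i - window), i + window + 1
                    counts.update(t for t in tokens[lo:hi] if t != target)
    return counts

# Usage (hypothetical file): most frequent neighbours of "faithful"
# print(collocates("corpus_sample.txt", "faithful").most_common(20))
```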

Keywords: corpora, corpus-based approach, polysemantic adjectives, semantic studies

Procedia PDF Downloads 314
63 Concepts of Creation and Destruction as Cognitive Instruments in World View Study

Authors: Perizat Balkhimbekova

Abstract:

Evolutionary changes in the cognitive world view taking place in the last decades are followed by changes in the perception of key concepts related to a certain lingua-cultural sphere. Such concepts also reflect a person's attitude to essential processes in the sphere of concepts, e.g. opposite operations like creation and destruction. These changes in people's lives and thinking are displayed in the language world view. In order to reveal the content of mental structures and concepts, we should use language means as observable results of people's cognitive activity. The semantics of words, free phrases and idioms should be considered an authoritative source of information concerning concepts. The regularized set of concepts in people's consciousness forms the sphere of concepts. Cognitive linguistics widely discusses the sphere of concepts as its crucial category, defining it as the field of knowledge made up of concepts. It is considered that a sphere of concepts comprises various types of association and forms conceptual fields. As material for the given research, data from the Russian National Corpus and the British National Corpus were used. It is necessary to point out that data provided by computational studies are intrinsic and verifiable, so we have used them in order to obtain reliable results. The study procedure was based on such techniques as extracting the contexts containing the concepts of creation/destruction from the Russian National Corpus (RNC) and the British National Corpus (BNC); analyzing and interpreting those contexts on the basis of a cognitive approach; and finding the correspondence between the given concepts in the Russian and English world views. The key problem of our study is to find the correspondence between the elements of the world view represented by the opposite concepts of creation and destruction. Findings: The concept of "destruction" indicates a process which leads to the full or partial destruction of an object, in other words, the loss of the object's primary essence: its structure, properties, distinctive signs and initial integrity. The concept of "creation", on the contrary, comprises positive characteristics and represents activity aimed at improving a certain object and at creating ideal models of the world. On the other hand, destruction is represented much more widely in the RNC than creation (1254 cases of the first concept compared with 192 cases of the second). Our hypothesis consists in the antinomy represented by the aforementioned concepts. Being opposite both in respect of semantics and pragmatics, and from the point of view of axiology, they are at the same time complementary and interrelated concepts.

Keywords: creation, destruction, concept, world view

Procedia PDF Downloads 345
62 Sorting Maize Haploids from Hybrids Using Single-Kernel Near-Infrared Spectroscopy

Authors: Paul R Armstrong

Abstract:

Doubled haploids (DHs) have become an important breeding tool for creating maize inbred lines, although several bottlenecks in the DH production process limit the wider development, application, and adoption of the technique. DH kernels are typically sorted manually and represent about 10% of the seeds in a much larger pool where the remaining 90% are hybrid siblings. This introduces time constraints on DH production, and manual sorting is often not accurate. Automated sorting based on the chemical composition of the kernel can be effective, but devices, namely NMR, have not achieved the sorting speed needed to be a cost-effective replacement for manual sorting. This study evaluated a single-kernel near-infrared reflectance spectroscopy (skNIR) platform to accurately identify DH kernels based on oil content. The skNIR platform is a higher-throughput device, approximately 3 seeds/s, that uses spectra to predict the oil content of each kernel; the maize crosses were intentionally developed to create larger-than-normal oil differences, 1.5%-2%, between DH and hybrid kernels. Spectra from the skNIR were used to construct a partial least squares regression (PLS) model for oil and for a categorical reference value of 1 (DH kernel) or 2 (hybrid kernel), and then used to sort several crosses to evaluate performance. Two approaches were used for sorting. The first used a general PLS model, developed from all crosses to predict oil content, to sort each induction cross; the second developed a specific model from a single induction cross, for which approximately fifty DH and one hundred hybrid kernels were used. This second approach used a categorical reference value of 1 and 2, instead of oil content, for the PLS model, and the kernels selected for the calibration set were manually referenced based on traditional commercial methods using the coloration of the tip cap and germ areas. The generalized PLS oil model statistics were R2 = 0.94 and RMSE = 0.93% for kernels spanning an oil content of 2.7% to 19.3%. Sorting by this model resulted in extracting 55% to 85% of the haploid kernels from the four induction crosses. Using the second method of generating a model for each cross yielded model statistics ranging from R2 = 0.96 to 0.98 and RMSE from 0.08 to 0.10. Sorting in this case resulted in 100% correct classification but required models that were specific to each cross. In summary, the first, generalized oil model method could be used to sort a significant number of kernels from a kernel pool but did not come close to the accuracy of developing a sorting model from a single cross. The penalty for the second method is that a PLS model needs to be developed for each individual cross. In conclusion, both methods could find useful application in the sorting of DH from hybrid kernels.
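A minimal sketch of the PLS-based sorting step described above, fitting scikit-learn's PLSRegression to single-kernel spectra against a categorical 1/2 reference; the synthetic arrays, number of components, and 1.5 decision threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X: (n_kernels, n_wavelengths) NIR spectra; y: 1 for DH, 2 for hybrid (synthetic data)
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 200))
y = np.repeat([1, 2], [50, 100]).astype(float)

pls = PLSRegression(n_components=10)
pls.fit(X, y)

# Sort new kernels: predictions below 1.5 are taken as haploid (DH)
pred = pls.predict(X[:5]).ravel()
labels = np.where(pred < 1.5, "DH", "hybrid")
print(labels)
```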

Keywords: NIR, haploids, maize, sorting

Procedia PDF Downloads 302
61 Object Oriented Classification Based on Feature Extraction Approach for Change Detection in Coastal Ecosystem across Kochi Region

Authors: Mohit Modi, Rajiv Kumar, Manojraj Saxena, G. Ravi Shankar

Abstract:

Change detection of coastal ecosystems plays a vital role in monitoring and managing natural resources along coastal regions. The present study mainly focuses on the decadal change in the Kochi islands connecting the urban flatland areas and the coastal regions where sand deposits have taken place. With this in view, change detection has been monitored in the Kochi area to apprehend the urban growth and industrialization leading to a decrease in the wetland ecosystem. The region lies between 76°11'19.134"E to 76°25'42.193"E and 9°52'35.719"N to 10°5'51.575"N on the south-western coast of India. The IRS LISS-IV satellite image has been processed using a rule-based algorithm to classify the LULC and to interpret the changes between 2005 and 2015. The approach takes two steps, i.e. extracting features as a single GIS vector layer using different parametric values and dissolving them. The multi-resolution segmentation has been carried out at scales ranging from 10 to 30. The different classes, like aquaculture, agricultural land, built-up, wetlands, etc., were extracted using parameters like NDVI, mean layer values and texture-based features, with corresponding threshold values, using a rule-set algorithm. The objects obtained in the segmentation process were visualized overlaying the satellite image at a scale of 15. This layer was further segmented using the spectral difference segmentation rule between the objects. These individual class layers were dissolved into the basic segmented layer of the image and were interpreted in a vector-based GIS programme to achieve higher accuracy. The result shows a rapid increase of 40% in the industrial area based on the industrial area statistics of 2005. There is a decrease in the wetland area, which has been converted into built-up land. New roads have been constructed connecting the islands to urban areas as well as highways. An increase in the coastal region has been observed due to sand deposition. The outcome is well supported by quantitative assessments, which will enable a rich understanding of land use/land cover change for appropriate policy intervention and further monitoring.
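A minimal sketch of the NDVI parameter used in the rule set above, computed from the red and near-infrared bands as NumPy arrays; the synthetic bands and the 0.3 threshold are illustrative assumptions, not the thresholds used in the study.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Usage (synthetic bands): threshold NDVI to flag vegetated objects in a rule set
nir_band = np.random.rand(100, 100)
red_band = np.random.rand(100, 100)
vegetation_mask = ndvi(nir_band, red_band) > 0.3   # illustrative threshold
print(vegetation_mask.mean())                       # fraction of flagged pixels
```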

Keywords: land use land cover, multiresolution segmentation, NDVI, object based classification

Procedia PDF Downloads 183
60 Analysis of Brownfield Soil Contamination Using Local Government Planning Data

Authors: Emma E. Hellawell, Susan J. Hughes

Abstract:

Brownfield sites are currently being redeveloped for residential use. Information on soil contamination on these former industrial sites is collected as part of the planning process by the local government. This research project analyses this untapped resource of environmental data, using site investigation data submitted to a local Borough Council in Surrey, UK. Over 150 site investigation reports were collected and interrogated to extract relevant information. The study involved three phases. Phase 1 was the development of a database for soil contamination information from local government reports. This database contained information on the source, history, and quality of the data together with the chemical information on the soil that was sampled. Phase 2 involved obtaining site investigation reports for development within the study area and extracting the required information for the database. Phase 3 was the data analysis and interpretation of key contaminants to evaluate typical levels of contaminants, their distribution within the study area, and how these results relate to current guideline levels of risk for future site users. Preliminary results for a pilot study using a sample of the dataset have been obtained. The pilot study showed some inconsistency in the quality of the reports and measured data, and careful interpretation of the data is required. Analysis of the information found high levels of lead in shallow soil samples, with mean and median levels exceeding the current guidance for residential use. The data also showed elevated (but below guidance) levels of potentially carcinogenic polyaromatic hydrocarbons. Of particular concern from the data was the high detection rate for asbestos fibers, which were found at low concentrations in 25% of the soil samples tested (although the sample set was small). Contamination levels of the remaining chemicals tested were all below the guidance level for residential site use. These preliminary pilot study results will be expanded, and results for the whole local government area will be presented at the conference. The pilot study has demonstrated the potential of this extensive dataset to provide greater information on local contamination levels. This can help inform regulators and developers and lead to more targeted site investigations, improving risk assessments and brownfield development.
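A minimal sketch of the kind of comparison described above, summarizing contaminant concentrations against a guideline value with pandas; the records and guideline figures below are made-up placeholders, not the actual measurements or UK guidance values.

```python
import pandas as pd

# Hypothetical extract from the site-investigation database
samples = pd.DataFrame({
    "contaminant": ["lead", "lead", "lead", "benzo(a)pyrene", "benzo(a)pyrene"],
    "concentration_mg_kg": [310.0, 95.0, 480.0, 1.2, 3.4],
})
guidelines = {"lead": 200.0, "benzo(a)pyrene": 5.0}   # placeholder guideline values

summary = samples.groupby("contaminant")["concentration_mg_kg"].agg(["mean", "median"])
summary["guideline"] = summary.index.map(guidelines)
summary["exceeds_guideline"] = summary["median"] > summary["guideline"]
print(summary)
```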

Keywords: Brownfield development, contaminated land, local government planning data, site investigation

Procedia PDF Downloads 137
59 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA sequences, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings to create a meaningful, numerical representation of DNA sequences, while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence each genome has on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied directly on data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, the DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
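A minimal sketch of step (i) above, splitting reads into overlapping k-mers and building a k-mer vocabulary; the k value, minimum count, and toy reads are illustrative, and the embeddings themselves would be learned separately.

```python
from collections import Counter

def kmer_tokens(read, k=6):
    """Split a DNA read into overlapping k-mers, the 'words' used for embedding."""
    read = read.upper()
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def build_vocabulary(reads, k=6, min_count=2):
    """Count k-mers across reads and keep those seen at least `min_count` times."""
    counts = Counter()
    for read in reads:
        counts.update(kmer_tokens(read, k))
    return {kmer: i for i, (kmer, c) in enumerate(counts.most_common()) if c >= min_count}

# Usage (toy reads)
reads = ["ACGTACGTGGTT", "TTGGTACGTACG"]
print(kmer_tokens(reads[0])[:3])                 # ['ACGTAC', 'CGTACG', 'GTACGT']
print(len(build_vocabulary(reads, k=4, min_count=1)))
```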

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 125
58 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates the indispensability of HPC in meeting the scientific and engineering challenges of the twenty-first century, and shows how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. The paper also indicates solutions to optimization problems and the benefits for Big Data and computational biology, and illustrates the current state of the art and the future generation of HPC computing with Big Data in biology.

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 363
57 An EEG-Based Scale for Comatose Patients' Vigilance State

Authors: Bechir Hbibi, Lamine Mili

Abstract:

Understanding the condition of comatose patients can be difficult, but it is crucial to their optimal treatment. Consequently, numerous scoring systems have been developed around the world to categorize patient states based on physiological assessments. Although validated and widely adopted by medical communities, these scores still present numerous limitations and obstacles. Even with the addition of further tests and extensions, these scoring systems have not been able to overcome certain limitations, and it appears unlikely that they will be able to do so in the future. On the other hand, physiological tests are not the only way to extract information about comatose patients. EEG signal analysis has contributed extensively to the understanding of the human brain and human consciousness and has been used by researchers in the classification of different levels of disease. The use of EEG in the ICU has become an urgent matter in several cases and has been recommended by medical organizations. In this field, the EEG is used to investigate epilepsy, dementia, brain injuries, and many other neurological disorders. It has recently also been used to detect pain activity in some regions of the brain, to detect stress levels, and to evaluate sleep quality. In our recent work, our aim was to use multifractal analysis, a very successful method for handling multifractal signals and extracting features, to establish a state-of-awareness scale for comatose patients based on their electrical brain activity. The results show that this score could be instantaneous and could overcome many of the limitations that the physiological scales face. Indeed, multifractal analysis stands out as a highly effective tool for characterizing non-stationary and self-similar signals. It demonstrates strong performance in extracting the properties of fractal and multifractal data, including signals and images. As such, we leverage this method, along with other features derived from EEG recordings of comatose patients, to develop a scale. This scale aims to accurately depict the vigilance state of patients in intensive care units and to address many of the limitations inherent in physiological scales such as the Glasgow Coma Scale (GCS) and the FOUR score. The results of applying version V0 of this approach to 30 patients with known GCS showed that the EEG-based score similarly describes the states of vigilance but also distinguishes between the states of 8 sedated patients to whom the GCS could not be applied. Therefore, our approach could show promising results for patients with disabilities, patients given painkillers, and other categories to whom physiological scores cannot be applied.

Keywords: coma, vigilance state, EEG, multifractal analysis, feature extraction

Procedia PDF Downloads 67
56 Integrating Virtual Reality and Building Information Model-Based Quantity Takeoffs for Supporting Construction Management

Authors: Chin-Yu Lin, Kun-Chi Wang, Shih-Hsu Wang, Wei-Chih Wang

Abstract:

A construction superintendent needs to know not only the quantities of cost items or materials completed each day, in order to develop a daily report or calculate the daily progress (earned value), but also the quantities of materials (e.g., reinforced steel and concrete) to be ordered (or moved onto the jobsite) for performing the in-progress or ready-to-start construction activities (e.g., erection of reinforced steel and concrete pouring). These daily construction management tasks require great effort to extract accurate quantities in a short time (they usually must be completed right before getting off work every day). As a result, most superintendents can only provide these quantity data based either on what they see on the site (high inaccuracy) or on the extraction of quantities from two-dimensional (2D) construction drawings (high time consumption). Hence, the current practice of providing daily completed-quantity data needs improvement in terms of accuracy and efficiency. Recently, the three-dimensional (3D) building information model (BIM) technique has been widely applied to support the construction quantity takeoff (QTO) process. The capability of virtual reality (VR) allows a building to be viewed from a first-person viewpoint. Thus, this study proposes an innovative system integrating VR (using 'Unity') and BIM (using 'Revit') to extract quantities to support the above daily construction management tasks. The use of VR allows a system user to be present in a virtual building and thus to assess the construction progress more objectively from the office. This VR- and BIM-based system is also facilitated by an integrated database (consisting of the information and data associated with the BIM model, QTO, and costs). Each day, a superintendent can work through a BIM-based virtual building to quickly identify (via a developed VR shooting function) the building components (or objects) that are in progress or finished on the jobsite. He then specifies a percentage (e.g., 20%, 50% or 100%) of completion for each identified building object based on his observation of the jobsite. Next, the system generates the quantities completed that day by multiplying the specified percentage by the full quantities of the cost items (or materials) associated with the identified object. A building construction project located in northern Taiwan is used as a case study to test the benefits (i.e., accuracy and efficiency) of the proposed system in quantity extraction for supporting the development of daily reports and the ordering of construction materials.
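A minimal sketch of the completed-quantity calculation described above, multiplying a specified completion percentage by the full quantities associated with each identified BIM object; the object names and quantity records are hypothetical placeholders for the system's integrated database.

```python
# Hypothetical full quantities per BIM object, keyed by cost item/material
full_quantities = {
    "column_C12": {"rebar_kg": 850.0, "concrete_m3": 2.4},
    "slab_S03":   {"rebar_kg": 3200.0, "concrete_m3": 18.5},
}

# Completion percentages the superintendent specifies after the VR walkthrough
completion = {"column_C12": 1.00, "slab_S03": 0.50}

def completed_quantities(full_quantities, completion):
    """Quantities completed today = completion percentage x full object quantities."""
    done = {}
    for obj, pct in completion.items():
        for item, qty in full_quantities.get(obj, {}).items():
            done[item] = done.get(item, 0.0) + pct * qty
    return done

print(completed_quantities(full_quantities, completion))
# e.g. {'rebar_kg': 2450.0, 'concrete_m3': 11.65}
```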

Keywords: building information model, construction management, quantity takeoffs, virtual reality

Procedia PDF Downloads 132
55 Extraction of Scandium (Sc) from an Ore with Functionalized Nanoporous Silicon Adsorbent

Authors: Arezoo Rahmani, Rinez Thapa, Juha-Matti Aalto, Petri Turhanen, Jouko Vepsalainen, Vesa-Pekka Lehto, Joakim Riikonen

Abstract:

Production of scandium (Sc) is a complicated process because Sc is found only in low concentrations in ores, and its concentration is very low compared with other metals. Therefore, the utilization of typical extraction processes such as solvent extraction is problematic in scandium extraction. An adsorption/desorption method can be used, but it is challenging to prepare materials that have good selectivity, high adsorption capacity, and high stability. Therefore, efficient and environmentally friendly methods for Sc extraction are needed. In this study, a nanoporous composite material was developed for extracting Sc from an Sc ore. The nanoporous composite material offers several advantageous properties such as a large surface area, high chemical and mechanical stability, fast diffusion of the metals in the material, and the possibility to construct a filter out of the material with good flow-through properties. The nanoporous silicon material was produced by first stabilizing the surfaces with a silicon carbide layer and then functionalizing the surface with bisphosphonates that act as metal chelators. The surface area and porosity of the material were characterized by N₂ adsorption, and the morphology was studied by scanning electron microscopy (SEM). The bisphosphonate content of the material was studied by thermogravimetric analysis (TGA). The concentration of metal ions in the adsorption/desorption experiments was measured with inductively coupled plasma mass spectrometry (ICP-MS). The maximum capacity of the material, obtained from the adsorption isotherms, was 25 µmol/g Sc at pH 1 and 45 µmol/g Sc at pH 3. The selectivity of the material towards Sc in artificial solutions containing several metal ions was studied at pH 1 and pH 3. The results show good selectivity of the nanoporous composite towards the adsorption of Sc. Scandium was less efficiently adsorbed from the solution leached from the Sc ore because of excessive amounts of iron (Fe), aluminum (Al) and titanium (Ti), which disturbed the adsorption process. For example, the concentration of Fe was more than 4500 ppm, while the concentration of Sc was only 3 ppm, approximately 1500 times lower. Precipitation methods were developed to lower the concentration of the metals other than Sc. The optimal pH for precipitation was found to be pH 4. The concentrations of Fe, Al and Ti were decreased by 99%, 70% and 99.6%, respectively, while the concentration of Sc decreased by only 22%. Despite the large reduction in the concentration of other metals, more work is needed to further increase the relative concentration of Sc compared with other metals in order to extract it efficiently using the developed nanoporous composite material. Nevertheless, the developed material may provide an affordable, efficient and environmentally friendly method to extract Sc on a large scale.
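A minimal sketch of how a maximum adsorption capacity of the kind quoted above can be estimated by fitting a Langmuir isotherm to batch adsorption data with SciPy; the equilibrium concentrations and uptakes below are made-up illustrative numbers, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """Langmuir isotherm: q = q_max * K_L * C / (1 + K_L * C)."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

# Illustrative equilibrium concentrations (ppm) and uptakes (umol/g)
c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
q    = np.array([8.0, 13.0, 19.0, 24.0, 26.0, 27.0])

(q_max, k_l), _ = curve_fit(langmuir, c_eq, q, p0=[30.0, 1.0])
print(f"fitted maximum capacity q_max = {q_max:.1f} umol/g, K_L = {k_l:.2f}")
```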

Keywords: adsorption, nanoporous silicon, ore solution, scandium

Procedia PDF Downloads 146
54 The Extent of Virgin Olive-Oil Prices' Distribution Revealing the Behavior of Market Speculators

Authors: Fathi Abid, Bilel Kaffel

Abstract:

The olive tree, the winter olive harvest, and the production of olive oil, better known to professionals as the crushing operation, have long interested institutional traders such as olive-oil offices, private companies in the food industry refining and extracting pomace olive oil, and public and private export-import companies specializing in olive oil. The major problem facing olive oil producers each winter campaign is, contrary to what might be expected, not whether the harvest will be good but whether the sale price will allow them to cover production costs and achieve a reasonable profit margin. These questions are entirely legitimate given the importance of the issue and the heavy complexity of the uncertainty and competition, made tougher by high levels of indebtedness and by the experience and expertise of speculators and producers whose objectives sometimes conflict. The aim of this paper is to study the formation mechanism of olive oil prices in order to learn about speculators’ behavior and expectations in the market, how they contribute through their industry knowledge and financial alliances, and the size of the financial challenge that may be involved for them in building private information networks globally to take advantage. The methodology used in this paper is based on two stages: in the first stage, we study econometrically the formation mechanisms of the olive oil price in order to understand market participants' behavior by implementing ARMA, SARMA, GARCH and stochastic diffusion process models; the second stage is devoted to prediction, using a combined wavelet-ANN approach. Our main findings indicate that olive oil market participants interact with each other in a way that promotes the formation of stylized facts. Unstable participant behavior creates the volatility clustering, non-linear dependence and cyclicity phenomena. By imitating each other in some periods of the campaign, different participants contribute to the fat tails observed in the olive oil price distribution. The best prediction model for the olive oil price is based on a backpropagation artificial neural network approach with input information based on wavelet decomposition and recent past history.
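A minimal sketch of the first, econometric stage described above, fitting an ARMA(1,1) mean model with statsmodels and a GARCH(1,1) volatility model with the arch package; the price series here is synthetic, standing in for the olive oil price data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

# Synthetic stand-in for an olive oil price series (the real data are not reproduced here)
rng = np.random.default_rng(1)
prices = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)))
returns = 100 * prices.pct_change().dropna()

arma = ARIMA(returns, order=(1, 0, 1)).fit()                         # ARMA(1,1) mean dynamics
garch = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")   # volatility clustering

print(arma.params)
print(garch.params)
```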

Keywords: olive oil price, stylized facts, ARMA model, SARMA model, GARCH model, combined wavelet-artificial neural network, continuous-time stochastic volatility model

Procedia PDF Downloads 339
53 Durham Region: How to Achieve Zero Waste in a Municipal Setting

Authors: Mirka Januszkiewicz

Abstract:

The Regional Municipality of Durham is the upper level of a two-tier municipal and regional structure comprised of eight lower-tier municipalities. With a population of 655,000 in both urban and rural settings, the Region covers approximately 2,537 square kilometers and neighbors the City of Toronto, Ontario, Canada, to the east. The Region has been focused on diverting waste from disposal since the development of its Long Term Waste Management Strategy Plan for 2000-2020. With a 54 percent solid waste diversion rate, the focus now is on achieving 70 percent diversion on the path to zero waste, using local waste management options whenever feasible. The Region has an Integrated Waste Management System that consists of a weekly curbside collection of recyclable printed paper and packaging and source-separated organics; a seasonal collection of leaf and yard waste; a bi-weekly collection of residual garbage; and a twice-annual collection of intact, sealed household batteries. The Region also maintains three Waste Management Facilities for residential drop-off of household hazardous waste, polystyrene, construction and demolition debris, and electronics. Special collection events are scheduled in the spring, summer and fall months for reusable items, household hazardous waste, and electronics. The Region is in the final commissioning stages of an energy-from-waste facility for residual waste disposal that will recover energy from non-recyclable wastes. This facility is state of the art and is equipped for the installation of carbon capture technology in the future. Despite all of these diversion programs and efforts, there is still room for improvement. Recent residential waste studies revealed that over 50% of the residual waste placed at the curb and destined for incineration could be recycled. To move towards a zero waste community, the Region is looking to more advanced technologies for extracting the maximum recycling value from residential waste. Plans are underway to develop a pre-sort facility to remove organics and recyclables from the residual waste stream, including from the growing multi-residential sector. Organics would then be treated anaerobically to generate biogas and fertilizer products for beneficial use within the Region. This project could increase the Region’s diversion rate beyond 70 percent and enhance the Region’s climate change mitigation goals. Zero waste is an ambitious goal in a changing regulatory and economic environment. Decision makers must be willing to consider new and emerging technologies and embrace change to succeed.

Keywords: municipal waste, residential, waste diversion, zero waste

Procedia PDF Downloads 219
52 Optimization of Heat Insulation Structure and Heat Flux Calculation Method of Slug Calorimeter

Authors: Zhu Xinxin, Wang Hui, Yang Kai

Abstract:

Heat flux is one of the most important test parameters in ground thermal protection tests. The slug calorimeter is selected as the main sensor for measuring heat flux in arc wind tunnel tests due to its convenience and low cost. However, because of excessive lateral heat transfer and shortcomings in the calculation method, the heat flux measurement error of the slug calorimeter is large. In order to enhance measurement accuracy, the heat insulation structure and the heat flux calculation method of the slug calorimeter were improved. A heat transfer model of the slug calorimeter was built according to the energy conservation principle. Based on this model, an insulating sleeve with a hollow structure was designed, which greatly decreases lateral heat transfer, and the slug with its hollow insulating sleeve was encapsulated in a package shell. The improved insulation structure reduces heat loss and ensures that the heat transfer characteristics are almost the same during calibration and testing. A heat flux calibration test was carried out in an arc lamp system, and the results show that the test accuracy and precision of the slug calorimeter are greatly improved. Meanwhile, a simulation model of the slug calorimeter was built and used to calculate the heat flux values over different temperature-rise time periods. The results show that extracting the temperature rise rate as early as possible leads to a smaller heat flux calculation error. The influence of thermal contact resistance on the calculation error was then analyzed with the simulation model; the contact resistance between the slug and the insulating sleeve was identified as the main influencing factor. A direct comparison calibration correction method was proposed based on heat flux calibration alone, and a numerical calculation correction method was proposed based on both the heat flux calibration and the simulation model, once the contact resistance between the slug and the insulating sleeve had been determined from the simulation. The simulation and test results show that both methods greatly reduce the heat flux measurement error. Finally, the improved slug calorimeter was tested in the arc wind tunnel. The test results show that the repeatability error of the improved slug calorimeter is less than 3%, the deviation between measurements from different slug calorimeters is less than 3% in the same flow field, and the deviation between the slug calorimeter and a Gordon gauge is less than 4% in the same flow field.
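
For context, the energy conservation principle behind the slug calorimeter reduces, in its simplest form, to estimating the heat flux from the slope of the slug's temperature rise, q = rho * c * L * dT/dt. The Python sketch below is a minimal illustration of that calculation using a linear fit over an early time window, consistent with the finding above that the temperature rise rate should be extracted as early as possible; the material properties, fit window and synthetic trace are placeholders, not values from the paper.

```python
# Minimal slug-calorimeter heat flux estimate: q = rho * c * L * dT/dt
# Material properties and fit window are illustrative placeholders.
import numpy as np

def slug_heat_flux(t, T, rho, c, L, fit_window=(0.1, 0.5)):
    """Estimate heat flux (W/m^2) from a slug temperature trace.

    t   : time array (s)
    T   : slug temperature array (K)
    rho : slug density (kg/m^3)
    c   : specific heat (J/(kg*K))
    L   : slug thickness (m)
    fit_window : (start, end) times over which dT/dt is fitted;
                 the early part of the rise is preferred, as noted above.
    """
    mask = (t >= fit_window[0]) & (t <= fit_window[1])
    dTdt = np.polyfit(t[mask], T[mask], 1)[0]   # slope of a linear fit
    return rho * c * L * dTdt

# Example with a synthetic temperature trace and copper-like properties
t = np.linspace(0.0, 1.0, 200)
T = 300.0 + 80.0 * t                            # idealized linear rise
q = slug_heat_flux(t, T, rho=8900.0, c=385.0, L=0.003)
print(f"estimated heat flux: {q / 1e6:.2f} MW/m^2")
```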

Keywords: correction method, heat flux calculation, heat insulation structure, heat transfer model, slug calorimeter

Procedia PDF Downloads 117
51 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitivity to noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel; it plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable, and different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence, and the performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done on the cloud environment available at the University of Technology and Management, India, and the results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. PFRSVM effectively resolves the effects of outliers, imbalance and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
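
The sketch below illustrates one ingredient of the approach described above: a fuzzy membership function based on the center and radius of each class in feature space, used as a per-sample weight when fitting a kernel SVM, so that noisy samples far from their class center have less influence on the decision surface. It is a single-machine sketch with a textbook membership formula and an RBF kernel; the paper's rough-set component and its Hadoop/MapReduce parallelization are not reproduced here.

```python
# Fuzzy membership-weighted SVM sketch (single machine, illustrative membership formula)
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def fuzzy_memberships(X, y, delta=1e-6):
    """Membership m_i = 1 - d(x_i, class center) / (class radius + delta)."""
    m = np.empty(len(y))
    for cls in np.unique(y):
        idx = np.where(y == cls)[0]
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        m[idx] = 1.0 - d / (d.max() + delta)
    return m

X, y = make_classification(n_samples=500, n_features=10, flip_y=0.05, random_state=0)
weights = fuzzy_memberships(X, y)

# Noisy samples far from their class center receive small weights,
# reducing their influence on the fitted decision surface.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X, y, sample_weight=weights)
print("training accuracy:", clf.score(X, y))
print("support vectors per class:", clf.n_support_)
```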

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 490
50 Analysing Competitive Advantage of IoT and Data Analytics in Smart City Context

Authors: Petra Hofmann, Dana Koniel, Jussi Luukkanen, Walter Nieminen, Lea Hannola, Ilkka Donoghue

Abstract:

The Covid-19 pandemic forced people to isolate and become physically less connected. The pandemic has not only reshaped people’s behaviours and needs but also accelerated digital transformation (DT). DT of cities has become an imperative with the outlook of converting them into smart cities in the future. Embedding digital infrastructure and smart city initiatives as part of the normal design, construction, and operation of cities provides a unique opportunity to improve the connection between people. The Internet of Things (IoT) is an emerging technology and one of the drivers of DT. It has disrupted many industries by introducing different services and business models, and IoT solutions are being applied in multiple fields, including smart cities. As IoT and data are fundamentally linked together, IoT solutions can only create value if the data generated by the IoT devices is analysed properly. By extracting relevant conclusions and actionable insights using established techniques, data analytics contributes significantly to the growth and success of IoT applications and investments. Companies must grasp DT and be prepared to redesign their offerings and business models to remain competitive in today’s marketplace. As there are many IoT solutions available today, the amount of data is tremendous, and the challenge for companies is to understand which solutions to focus on, how to prioritise them, and which data to use to differentiate themselves from the competition. This paper explains how IoT and data analytics can impact competitive advantage and how companies should approach IoT and data analytics to translate them into concrete offerings and solutions in the smart city context. The study was carried out as qualitative, literature-based research, and a case study is provided to validate the preservation of a company’s competitive advantage through smart city solutions. The results of the research provide insights into the different factors and considerations related to creating competitive advantage through IoT and data analytics deployment in the smart city context. Furthermore, this paper proposes a framework that merges the factors and considerations with examples of offerings and solutions in smart cities. The data collected through IoT devices, and the intelligent use of it, can create a competitive advantage for companies operating in the smart city business. Companies should take into consideration the five forces of competition that shape industries and pay attention to the technological, organisational, and external contexts which define the factors for consideration of competitive advantages in the field of IoT and data analytics. Companies that can utilise these key assets in their businesses will most likely conquer the markets and have a strong foothold in the smart city business.

Keywords: data analytics, smart cities, competitive advantage, internet of things

Procedia PDF Downloads 133
49 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission

Authors: Tingwei Shu, Dong Zhou, Chengjun Guo

Abstract:

Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission in situations of large data volume, low SNR and restricted bandwidth. With the development of deep learning, semantic communication has matured further and is gradually being applied in fields such as the Internet of Things, Unmanned Aerial Vehicle cluster communication, and remote sensing scenarios. We propose an improved semantic communication system for situations where the data volume is huge and spectrum resources are limited during the transmission of remote sensing images. At the transmitter, we need to extract the semantic information of remote sensing images, but there are some problems: the traditional semantic communication system based on a Convolutional Neural Network (CNN) cannot take into account both the global and local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder, instead of the mainstream CNN-based one, to extract the image semantic features. In this paper, we first perform pre-processing operations on remote sensing images to improve their resolution in order to obtain images with more semantic information: we use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally perform the inverse wavelet transform to obtain the preprocessed image. We then adopt the improved Vision-Transformer structure as the semantic encoder to extract and transmit the semantic information of remote sensing images. The Vision-Transformer structure can better train on the huge data volume and extract better image semantic features, and it adopts a multi-layer self-attention mechanism to better capture the correlation between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism to linear so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a CNN-based semantic communication system and with image coding methods such as BPG and JPEG, verifying that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
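
The pre-processing step described above can be sketched as follows: decompose the image into wavelet sub-bands, upsample the high-frequency components with bilinear interpolation and the low-frequency component with bicubic interpolation, then apply the inverse wavelet transform. This is a minimal single-level, grayscale illustration using PyWavelets and OpenCV; the wavelet family, scale factor and single-channel handling are assumptions rather than the paper's exact configuration.

```python
# Wavelet-based resolution enhancement sketch (single level, grayscale; illustrative settings)
import cv2
import numpy as np
import pywt

def wavelet_upscale(img, wavelet="haar", scale=2):
    """Upscale an image by interpolating its wavelet sub-bands separately.

    Low-frequency (approximation) band -> bicubic interpolation
    High-frequency (detail) bands      -> bilinear interpolation
    """
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float32), wavelet)
    new_size = (cA.shape[1] * scale, cA.shape[0] * scale)   # (width, height) for cv2
    cA_up = cv2.resize(cA, new_size, interpolation=cv2.INTER_CUBIC)
    cH_up = cv2.resize(cH, new_size, interpolation=cv2.INTER_LINEAR)
    cV_up = cv2.resize(cV, new_size, interpolation=cv2.INTER_LINEAR)
    cD_up = cv2.resize(cD, new_size, interpolation=cv2.INTER_LINEAR)
    out = pywt.idwt2((cA_up, (cH_up, cV_up, cD_up)), wavelet)
    return np.clip(out, 0, 255).astype(np.uint8)

# Example usage with a hypothetical remote sensing tile
img = cv2.imread("rs_tile.png", cv2.IMREAD_GRAYSCALE)       # hypothetical file
enhanced = wavelet_upscale(img)
print(img.shape, "->", enhanced.shape)
```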

Keywords: semantic communication, transformer, wavelet transform, data processing

Procedia PDF Downloads 78
48 Unveiling Adorno’s Concern for Revolutionary Praxis and Its Enduring Significance: A Philosophical Analysis of His Writings on Sociology and Philosophy

Authors: Marie-Josee Lavallee

Abstract:

Adorno’s reputation as an abstract and pessimistic thinker who indulged in a critique of capitalist society and culture without bothering to open prospects for change, and who had no interest in political activism, has recently begun to be questioned. This paper, which has a twofold objective, pushes revisionist readings a step further by putting forward the thesis that revolutionary praxis was an enduring concern for Adorno, surfacing throughout his entire work. It also holds that his understanding of the relationships between theory and praxis, which will be explained by referring to Ernst Bloch’s distinction between the warm and cold currents of Marxism, can help to interpret the paralysis of revolutionary practice in our own time in a new light. Philosophy and its tasks were an enduring topic of Adorno’s work from the 1930s to Negativ Dialektik. The writings in which he develops these ideas stand among his most obscure and abstract, so that their strong ties to the political have remained mainly overlooked. Adorno’s undertaking of criticizing and ‘redeeming’ philosophy and metaphysics is inseparable from a care for retrieving the capacity to act in the world and to change it. Philosophical problems are immanent to sociological problems, and vice versa, he underlines in his Metaphysik. Begriff und Probleme. The issue of truth cannot be severed from the contingent context of a given idea. As a critical undertaking extracting its contents from reality, which is what philosophy should be from Adorno’s perspective, philosophy has the potential to fully reveal the reification of the individual and of consciousness resulting from capitalist economic and cultural domination, thus opening the way to resistance and revolutionary change. While this project, in keeping with his usual method, is sketched mainly in negative terms, it also exhibits positive contours which depict a socialist society. Only in the latter could human suffering end and mutilated individuals experience reconciliation in an authentic way. That Adorno’s continuous plea for philosophy’s self-critique and renewal hides an enduring concern for revolutionary praxis emerges clearly from a careful philosophical analysis of his writings on philosophy and a selection of his sociological work, coupled with references to his correspondence. This study points to the need for a serious re-evaluation of Adorno’s relationship to the political, one that will bear on the interpretation of his whole oeuvre. In the second place, Adorno’s dialectical conception of theory and praxis is enlightening for our own time, since it suggests that we are experiencing a phase of creative latency rather than an insurmountable impasse.

Keywords: Frankfurt school, philosophy and revolution, revolutionary praxis, Theodor W. Adorno

Procedia PDF Downloads 123
46 Epigenetic and Archeology: A Quest to Re-Read Humanity

Authors: Salma A. Mahmoud

Abstract:

Epigenetics, or the alteration of gene expression by extragenetic factors, has emerged as one of the most promising areas to address some of the gaps in our current knowledge of patterns of human variation. In the last decade, research investigating epigenetic mechanisms has flourished in many fields and witnessed significant progress, paving the way for a new era of integrated research, especially between anthropology/archeology and the life sciences. Skeletal remains are considered the most significant source of information for studying human variation across history, and by utilizing these valuable remains we can interpret past events, cultures and populations. In addition to their archeological, historical and anthropological importance, studying bones has great implications for other fields such as medicine and science. Bones can also hold the secrets of the future, as they can act as predictive tools for health, societal characteristics and dietary requirements. Bones in their basic form are composed of cells (osteocytes) that are affected by both genetic and environmental factors; genetics alone can explain only a small part of their variability. The primary objective of this project is to examine the epigenetic landscape/signature within the bones of archeological remains as a novel marker that could reveal new ways to conceptualize chronological events, gender differences, social status and ecological variations. We attempt here to address discrepancies in common variants such as the methylome, as well as novel epigenetic regulators such as chromatin remodelers, which to the best of our knowledge have not yet been investigated by anthropologists/paleoepigenetists, using a plethora of techniques (biological, computational, and statistical). Moreover, extracting epigenetic information from bones will highlight the importance of osseous material as a vector for studying human beings in several contexts (social, cultural and environmental), and strengthen its essential role as a model system that can be used to investigate and reconstruct various cultural, political and economic events. We also address all the steps required to plan and conduct an epigenetic analysis from bone materials (modern and ancient), as well as discussing the key challenges facing researchers aiming to investigate this field. In conclusion, this project will serve as a primer for bioarcheologists/anthropologists and human biologists interested in incorporating epigenetic data into their research programs. Understanding the roles of epigenetic mechanisms in bone structure and function will be very helpful for a better comprehension of bone biology and will highlight its essential place as an interdisciplinary vector and a key material in archeological research.

Keywords: epigenetics, archeology, bones, chromatin, methylome

Procedia PDF Downloads 108
45 Role of Community Youths in Conservation of Forests and Protected Areas of Bangladesh

Authors: Obaidul Fattah Tanvir, Zinat Ara Afroze

Abstract:

Communities living adjacent to forests and Protected Areas, especially in South Asian countries, commonly extract resources for their living and livelihoods. Because of the way it is done, this extraction of resources destroys the biophysical features of the area. Deforestation, wildlife poaching, illegal logging, unauthorized hill cutting, etc. are some of the serious issues of concern for the sustainability of natural resources, which have a direct impact on the environment and climate as a whole. To ensure community involvement in state conservation initiatives, community-based forest management, commonly known as co-management, has been in practice in six South Asian countries: India, Nepal, Sri Lanka, Pakistan, Bhutan and Bangladesh. Involving communities in forest management was first initiated in Bangladesh in 1979 and developed into an effective co-management approach through several paradigm shifts. This idea of co-management was institutionalized through a Government Order (GO) by the Ministry of Environment and Forests, Government of Bangladesh, on November 23, 2009. This GO clearly defines the structure and functions of co-management and its different bodies, and the Bangladesh Forest Department has been working in association with communities to conserve and manage the forests and Protected Areas of Bangladesh following this legal document. Demographically, young people constitute the largest segment of the population in Bangladesh. This group, if properly sensitized, can have a valuable impact on conservation initiatives, both by the community and by the government. This study traced the major factors that motivate community youths to work effectively with different tiers of co-management organizations in the conservation of forests and Protected Areas of Bangladesh. For the purpose of this study, three focus group discussions (FGDs) were conducted with 30 youths from the communities living around the Protected Areas of Cox’s Bazar, in the south-east corner of Bangladesh, who are actively involved in co-management organizations. Key informant interviews (KIIs) were conducted with five key officials of the Forest Department stationed at Cox’s Bazar, two FGDs were conducted with representatives of seven co-management organizations working in the Cox’s Bazar region, and the community outreach approaches used for forest conservation by three private organizations and projects were reviewed. Secondary literature was also reviewed on the history and evolution of co-management in Bangladesh and the six South Asian countries. This study found that innovative community outreach activities financed by the public and private sectors and involving youths and the community as a whole have played a pivotal role in the conservation of forests and Protected Areas of the region. This approach can be replicated in other regions of Bangladesh as well as in other South Asian countries where co-management is in practice.

Keywords: community, co-management, conservation, forests, protected areas, youth

Procedia PDF Downloads 281
44 Web and Smart Phone-based Platform Combining Artificial Intelligence and Satellite Remote Sensing Data to Geoenable Villages for Crop Health Monitoring

Authors: Siddhartha Khare, Nitish Kr Boro, Omm Animesh Mishra

Abstract:

Recent food price hikes may signal the end of an era of predictable global grain crop plenty due to climate change, population expansion, and dietary changes. Food consumption will treble in 20 years, requiring enormous production expenditure. Over the past decade, climate and atmospheric conditions have changed along with rainfall and seasonal cycles. India’s tropical agriculture relies on evapotranspiration and monsoons, and in places with limited resources, global environmental change affects agricultural productivity and farmers’ capacity to adjust to changing moisture patterns. Motivated by these difficulties, satellite remote sensing can be combined with near-surface imaging data (smartphones, UAVs, and PhenoCams) to enable phenological monitoring and rapid evaluation of the field-level consequences of extreme weather events on smallholder agricultural output. To accomplish this, we must digitally map every village’s agricultural field boundaries and crop types. With the improvement of satellite remote sensing technologies, a geo-referenced database can be created for rural Indian agricultural fields, and using AI, we can design digital agricultural solutions for individual farms. The main objective is to geo-enable each farm, along with its seasonal crop information, by combining Artificial Intelligence (AI) with satellite and near-surface data, and then to carry out long-term crop monitoring through in-depth field analysis and scanning of fields with satellite-derived vegetation indices. We developed an AI-based algorithm to characterize time-lapse vegetation growth from PhenoCam or smartphone images, and an Android application through which users can collect images of their fields. These images are sent to our local server, where further AI-based processing is carried out. We are creating digital boundaries for individual farms and connecting these farms with our smartphone application to collect information about farmers and their crops in each season. We extract satellite-based information for each farm through the Google Earth Engine APIs and merge it, according to each farm’s location, with the crop data collected through our app in order to create a database that provides information on crop quality at each location.
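
As an illustration of the satellite side of the workflow described above, the sketch below pulls a Sentinel-2 NDVI time series for a single digitized farm boundary through the Google Earth Engine Python API. The collection ID and band names are standard Earth Engine identifiers, but the farm polygon coordinates, date range and cloud threshold are placeholder assumptions, and this is not the authors' actual pipeline.

```python
# Sentinel-2 NDVI time series for one digitized farm polygon via Google Earth Engine
# (placeholder polygon, dates and cloud threshold; not the authors' actual pipeline)
import ee

ee.Initialize()

# Hypothetical farm boundary as (longitude, latitude) pairs
farm = ee.Geometry.Polygon([[
    [92.79, 26.14], [92.80, 26.14], [92.80, 26.15], [92.79, 26.15],
]])

def add_ndvi(image):
    # NDVI from Sentinel-2 near-infrared (B8) and red (B4) bands
    return image.addBands(image.normalizedDifference(["B8", "B4"]).rename("NDVI"))

collection = (
    ee.ImageCollection("COPERNICUS/S2_SR")
    .filterBounds(farm)
    .filterDate("2023-06-01", "2023-11-30")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
    .map(add_ndvi)
)

def farm_mean_ndvi(image):
    stats = image.select("NDVI").reduceRegion(
        reducer=ee.Reducer.mean(), geometry=farm, scale=10)
    return ee.Feature(None, {"date": image.date().format("YYYY-MM-dd"),
                             "ndvi": stats.get("NDVI")})

series = ee.FeatureCollection(collection.map(farm_mean_ndvi)).getInfo()["features"]
for f in series:
    print(f["properties"]["date"], f["properties"]["ndvi"])
```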

Keywords: artificial intelligence, satellite remote sensing, crop monitoring, android and web application

Procedia PDF Downloads 100