Search results for: internet of things based light system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 41038

26428 Measuring Greenhouse Gas Exchange from Paddy Field Using Eddy Covariance Method in Mekong Delta, Vietnam

Authors: Vu H. N. Khue, Marian Pavelka, Georg Jocher, Jiří Dušek, Le T. Son, Bui T. An, Ho Q. Bang, Pham Q. Huong

Abstract:

Agriculture is an important economic sector in Vietnam, and wet rice cultivation is its most widespread activity. Rice cultivation is also known to be a main contributor to national greenhouse gas emissions. To better understand greenhouse gas exchange from these activities and to investigate the factors influencing carbon cycling and sequestration in this type of ecosystem, the first eddy covariance station has been operating since 2019 in a paddy field in Long An province, Mekong Delta. The station is equipped with state-of-the-art instruments for CO₂ and CH₄ gas exchange and micrometeorological measurements. In this study, data from the station were processed following the Integrated Carbon Observation System (ICOS) standards for CO₂, while CH₄ was processed manually and gap-filled using a random forest model from methane-gapfill-ml, a machine learning package, as there is no standard method for CH₄ flux gap-filling yet. Finally, the carbon-equivalent (Cₑ) balance based on the CO₂ and CH₄ fluxes was estimated. The results show that in 2020, even though a new water management practice (alternate wetting and drying) was applied to reduce methane emissions, the paddy field released 928 g Cₑ.m⁻².yr⁻¹; in 2021, this was reduced to 707 g Cₑ.m⁻².yr⁻¹. At the provincial level, rice cultivation in Long An, with a total area of 498,293 ha, released 4.6 million tons of Cₑ in 2020 and 3.5 million tons of Cₑ in 2021.
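The carbon-equivalent balance combines the CO₂ and CH₄ fluxes on a common scale. A minimal sketch of one common way to do this, assuming both fluxes are expressed in grams of carbon and CH₄ is weighted by a 100-year global warming potential of 28 (the exact GWP value and conversion convention used by the authors are not stated in the abstract):

```python
# Sketch: combine annual CO2 and CH4 fluxes into a carbon-equivalent (Ce) balance.
# Assumptions (not stated in the abstract): fluxes are in g C m^-2 yr^-1 and
# CH4 is weighted by a 100-year GWP of 28 on a mass-of-gas basis.

M_C, M_CO2, M_CH4 = 12.0, 44.0, 16.0  # molar masses, g/mol

def carbon_equivalent(f_co2_c, f_ch4_c, gwp_ch4=28.0):
    """Return the Ce balance in g Ce m^-2 yr^-1."""
    ch4_mass = f_ch4_c * (M_CH4 / M_C)        # g CH4 from g CH4-C
    co2_eq_mass = ch4_mass * gwp_ch4          # g CO2-equivalent
    ch4_as_ce = co2_eq_mass * (M_C / M_CO2)   # back to g C-equivalent
    return f_co2_c + ch4_as_ce

# Hypothetical example: 800 g CO2-C and 12 g CH4-C per m^2 per year
total = carbon_equivalent(800.0, 12.0)
```

Because the GWP multiplier is large, even a small CH₄ flux dominates the balance under this convention.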

Keywords: eddy covariance, greenhouse gas, methane, rice cultivation, Mekong Delta

26427 Design and Development of Herbal Formulations: Challenges and Solutions

Authors: B. Sathyanarayana

Abstract:

According to the World Health Organization, more than 80% of the world population uses medicines made from herbal and natural materials. These have stood the test of time for their safety, efficacy, cultural acceptability and lesser side effects. Quality assurance and control measures, such as national quality specifications and standards for herbal materials, good manufacturing practices (GMP) for herbal medicines, labelling, and licensing schemes for manufacturing, imports and marketing, should be in place in every country where herbal medicines are regulated. These measures are vital for ensuring the safety and efficacy of herbal medicines. For herbal products other than classical formulations, the challenge begins at the design stage itself. The selection of herbal ingredients, the officinal parts to be used, and their proportions are vital. Once the formulation is designed, utmost care should be taken to produce a standardized product of assured quality and safety. Quality control measures should cover the validation of the quality and identity of raw materials, in-process control (as per SOP and GMP norms), and the final product. Quality testing and safety and efficacy studies of the final product are required to ensure the safe and effective use of herbal products in human beings. Medicinal plants, being materials of natural origin, are subject to great variation, which makes it difficult to fix quality standards, especially for polyherbal preparations. Manufacturing also needs modification according to the type of ingredients present. Hence, it becomes essential to develop a standard operating procedure (SOP) for each specific herbal product. The present paper throws light on the challenges encountered during the design and development of herbal products.

Keywords: herbal product, challenges, quality, safety, efficacy

26426 Impact of Curvatures in the Dike Line on Wave Run-up and Wave Overtopping, ConDike-Project

Authors: Malte Schilling, Mahmoud M. Rabah, Sven Liebisch

Abstract:

Wave run-up and overtopping are the relevant parameters for dimensioning the crest height of dikes. Various experimental as well as numerical studies have investigated these parameters under different boundary conditions (e.g. wave conditions, structure type). Particularly for dike design in Europe, a common approach is formulated in which wave and structure properties are parameterized. This approach, however, assumes equal run-up heights and overtopping discharges along the longitudinal axis, whereas convex dikes have a heterogeneous crest by definition. Hence, local differences in a convex dike line are expected to cause wave-structure interactions different from those at a straight dike. This study aims to assess both run-up and overtopping at convexly curved dikes. To cast light on the relevance of curved dikes for the design approach mentioned above, physical model tests were conducted in a 3D wave basin of the Ludwig-Franzius-Institute Hannover. A dike with a slope of 1:6 (height over length) was tested under both regular waves and TMA wave spectra. Significant wave heights ranged from 7 to 10 cm and peak periods from 1.06 to 1.79 s. Run-up and overtopping were assessed behind the curved and straight sections of the dike, and both measurements were compared with those for a straight dike line. It was observed that convex curvatures in the longitudinal dike line cause a redirection of incident waves, leading to a concentration around the center point. The measurements prove that both run-up heights and overtopping rates are higher than on the straight dike. It can be concluded that deviations from a straight longitudinal dike line have an impact on design parameters and imply uncertainties within the design approach in force. It is therefore recommended to consider these influencing factors in such cases.
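The "common approach" referred to above parameterizes run-up through wave and structure properties. As a hedged illustration only (a EurOtop-style estimate for a smooth straight slope, not the exact formulation or coefficients used in this study), the 2%-exceedance run-up can be written in terms of the surf-similarity (Iribarren) number:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def iribarren(slope, hm0, tm):
    """Surf-similarity number from slope (tan alpha), wave height and period."""
    l0 = G * tm ** 2 / (2.0 * math.pi)   # deep-water wavelength
    return slope / math.sqrt(hm0 / l0)

def runup_2pct(slope, hm0, tm, gamma=1.0):
    """EurOtop-style Ru2% estimate for a smooth slope; gamma lumps together
    the reduction factors (roughness, berm, obliqueness). Illustrative only."""
    xi = iribarren(slope, hm0, tm)
    ru = 1.65 * gamma * xi * hm0
    cap = gamma * hm0 * (4.0 - 1.5 / math.sqrt(xi))  # upper bound for large xi
    return min(ru, cap)

# Model-scale example close to the tested conditions: 1:6 slope, H = 0.10 m, T = 1.79 s
ru = runup_2pct(1.0 / 6.0, 0.10, 1.79)
```

Such a formula yields a single run-up height for the whole dike line, which is exactly the assumption the curved-dike experiments call into question.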

Keywords: convex dike, longitudinal curvature, overtopping, run-up

26425 The Role of Evaluation for Effective and Efficient Change in Higher Education Institutions

Authors: Pattaka Sa-Ngimnet

Abstract:

That the university as we have known it is no longer serving the needs of the vast majority of students and potential students has been a topic of much discussion. Institutions of higher education, in this age of global culture, are in a process of metamorphosis. Technology is being used to allow more students, older students, working students and disabled students, who cannot attend conventional classes, to have greater access to higher education through the internet. But change must come about only after much evaluation and experimentation, or education will simply become a commodity as, in some cases, it already has. This paper is concerned with the meaning and methods of change and evaluation as they are applied to institutions of higher education. Organizations generally have different goals and different approaches for achieving success. However, the means of reaching those goals require rational and effective planning. Any plan for successful change in any institution must take into account both effectiveness and efficiency and the differences between them. “Effectiveness” refers to an adequate means of achieving an objective. “Efficiency” refers to the ability to achieve an objective without waste of time or resources (The Free Dictionary). So an effective means may not be efficient, and an efficient means may not be effective. The goal is to reach a synthesis of effectiveness and efficiency that maximizes both to the extent that each is limited by the other. The focus of this paper, then, is to determine how an educational institution can become either successful or oppressive depending on the kinds of planning, evaluation and change that operate by and on the administration. If the plan is concerned only with efficiency, the institution can easily become oppressive and lose sight of its purpose of educating students. If it is overly concentrated on effectiveness, the students may receive a superior education in the short run, but the institution will face operating difficulties. In becoming only goal-oriented, institutions also face problems. Simply stated, if the institution reaches its goals, the stakeholders may become satisfied and fail to change and keep up with the needs of the times. So goals should be seen only as benchmarks in a process of becoming ever better at providing quality education. Constant and consistent evaluation is the key to making all these factors come together in a successful process of planning, testing and changing the plans as needed. The focus of the evaluation has to be considered: evaluations must take into account the progress and needs of students, the methods and skills of instructors, the resources available from the institution, and the styles and objectives of administrators. Thus the role of evaluation is pivotal in providing for the maximum of both effective and efficient change in higher education institutions.

Keywords: change, effectiveness, efficiency, education

26424 Gender Quotas in Italy: Effects on Corporate Performance

Authors: G. Bruno, A. Ciavarella, N. Linciano

Abstract:

The proportion of women in the boardroom has traditionally been low around the world. Over the last decades, several jurisdictions opted for active intervention, which triggered tangible progress in female representation. In Europe, many countries have implemented boardroom diversity policies in the form of legal quotas (Norway, Italy, France, Germany) or governance code amendments (United Kingdom, Finland). Policy action rests, among other things, on the assumption that gender-balanced boards result in improved corporate governance and performance. The investigation of the relationship between female boardroom representation and firm value is therefore key on policy grounds. The evidence gathered so far, however, has not produced conclusive results, partly because empirical studies on the impact of voluntary female board representation have had to tackle endogeneity, due either to differences in unobservable characteristics across firms that may affect their gender policies and governance choices, or to potential reverse causality. In this paper, we study the relationship between the presence of female directors and corporate performance in Italy, where Law 120/2011, envisaging mandatory quotas, introduced an exogenous shock in board composition which may make it possible to overcome reverse causality. Our sample comprises firms listed on the Italian Stock Exchange and the members of their boards of directors over the period 2008-2016. The study relies on two different databases, both drawn from CONSOB, referring respectively to directors' and companies' characteristics. On methodological grounds, information on directors is treated at the individual level, by matching each company with its directors every year. This allows identifying all time-invariant, possibly correlated, elements of latent heterogeneity that vary across firms and board members, such as firm immaterial assets and directors' skills and commitment. Moreover, we estimate dynamic panel data specifications, so as to accommodate non-instantaneous adjustments of firm performance and gender diversity to institutional and economic changes. In all cases, robust inference is carried out taking into account the two-dimensional clustering of observations over companies and over directors. The study shows the existence of a U-shaped impact of the percentage of women in the boardroom on profitability, as measured by Return on Equity (ROE) and Return on Assets (ROA). Female representation yields a positive impact when it exceeds a certain threshold, ranging between about 18% and 21% of the board members, depending on the specification. Given the average board size, i.e., around ten members over the period considered, this implies that a significant effect of gender diversity on corporate performance starts to emerge when at least two women hold a seat. This evidence supports the idea underpinning the critical mass theory, i.e., the hypothesis that women can influence board decisions only once their number reaches a critical mass.
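A U-shaped relationship of this kind is typically captured by including the female share and its square in the regression; the estimated threshold is then the turning point of the fitted quadratic. A minimal illustration with synthetic data (not the authors' dynamic panel specification, whose estimator is far more involved):

```python
import numpy as np

def u_shape_threshold(x, y):
    """Fit y = c2*x^2 + c1*x + c0 and return the turning point -c1 / (2*c2)."""
    c2, c1, _ = np.polyfit(x, y, 2)
    return -c1 / (2.0 * c2)

# Synthetic example: performance bottoms out at a 20% female share,
# mimicking an effect that turns positive beyond the threshold.
x = np.linspace(0.0, 0.5, 50)        # share of women on the board
y = 3.0 * (x - 0.20) ** 2 + 0.05     # stylized profitability response
threshold = u_shape_threshold(x, y)
```

In the paper's setting the threshold is an estimated quantity with a confidence interval; the sketch only shows where the 18-21% figure conceptually comes from.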

Keywords: gender diversity, quotas, firm performance, corporate governance

26423 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is an important field in biometric technology, and more and more researchers have used EEG signals as a data source for biometrics. However, EEG-based biometrics also has some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, namely sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE) and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distances from each data point to every other point within the same cluster and to all data points in the closest cluster are determined. Silhouettes thus provide a measure of how well a data point was classified when it was assigned to a cluster, and of the separation between clusters. This renders silhouettes potentially well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare the performance of each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. The results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) authentication performance differed between electrodes (p<0.01); and (3) there was no significant difference in authentication performance among the feature sets (except PE). Conclusion: The combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
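The silhouette computation described above can be sketched in a few lines. This is a generic implementation from the textbook definition, not the authors' code:

```python
import numpy as np

def silhouette_values(X, labels):
    """Per-sample silhouette s = (b - a) / max(a, b), where a is the mean
    intra-cluster distance and b the mean distance to the nearest other cluster."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    s = np.zeros(len(X))
    for i in range(len(X)):
        same_others = labels == labels[i]
        same_others[i] = False                      # exclude the point itself
        a = D[i, same_others].mean() if same_others.any() else 0.0
        b = min(D[i, labels == c].mean() for c in np.unique(labels) if c != labels[i])
        s[i] = (b - a) / max(a, b) if max(a, b) > 0 else 0.0
    return s

# Two well-separated clusters give silhouettes close to 1.
X = [[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]]
score = silhouette_values(X, [0, 0, 1, 1]).mean()
```

Averaging the per-sample values gives the overall score used to compare clusterings of different EEG feature sets.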

Keywords: personal authentication, k-means clustering, electroencephalogram, EEG, silhouettes

26422 Advancements in Mathematical Modeling and Optimization for Control, Signal Processing, and Energy Systems

Authors: Zahid Ullah, Atlas Khan

Abstract:

This abstract focuses on advancements in mathematical modeling and optimization techniques that play a crucial role in enhancing the efficiency, reliability, and performance of control, signal processing, and energy systems. In this era of rapidly evolving technology, mathematical modeling and optimization offer powerful tools to tackle the complex challenges faced by these systems. This abstract presents the latest research and developments in mathematical methodologies, encompassing areas such as control theory, system identification, signal processing algorithms, and energy optimization. It highlights the interdisciplinary nature of mathematical modeling and optimization, showcasing their applications in a wide range of domains, including power systems, communication networks, industrial automation, and renewable energy. It explores key mathematical techniques, such as linear and nonlinear programming, convex optimization, stochastic modeling, and numerical algorithms, that enable the design, analysis, and optimization of complex control and signal processing systems. Furthermore, the abstract emphasizes the importance of addressing real-world challenges in control, signal processing, and energy systems through innovative mathematical approaches, and discusses the integration of mathematical models with data-driven approaches, machine learning, and artificial intelligence to enhance system performance, adaptability, and decision-making capabilities. It also underscores the significance of bridging the gap between theoretical advancements and practical applications, recognizing the need for practical implementation of mathematical models and optimization algorithms in real-world systems, with attention to scalability, computational efficiency, and robustness. In summary, this abstract showcases advancements in mathematical modeling and optimization techniques for control, signal processing, and energy systems, highlights their interdisciplinary nature and applications across various domains, and emphasizes the importance of practical implementation and integration with emerging technologies to drive innovation and improve the performance of control, signal processing, and energy systems.

Keywords: mathematical modeling, optimization, control systems, signal processing, energy systems, interdisciplinary applications, system identification, numerical algorithms

26421 Simulation of Elastic Bodies through Discrete Element Method, Coupled with a Nested Overlapping Grid Fluid Flow Solver

Authors: Paolo Sassi, Jorge Freiria, Gabriel Usera

Abstract:

In this work, a finite volume fluid flow solver is coupled with a discrete element method module for simulating the dynamics of free and elastic bodies in interaction with the fluid and with one another. The open source fluid flow solver, caffa3d.MBRi, can work with nested overlapping grids so that the grid can easily be refined in the region where the bodies are moving. To do so, it is necessary to implement a recognition function able to identify the specific mesh block in which each body is moving; the set of overlapping finer grids can then be displaced along with the set of bodies being simulated. The interaction between the bodies and the fluid is computed through two-way coupling. The velocity field of the fluid is first interpolated to determine the drag force on each object. After solving the objects' displacements, subject to the elastic bonding among them, the force is applied back onto the fluid through a Gaussian smoothing over the cells near the position of each object. The fishnet is represented as lumped masses connected by elastic lines. The internal forces are derived from the elasticity of these lines, and the external forces are due to drag, gravity, buoyancy and the load acting on each element of the system. When solving the system of ordinary differential equations that represents the motion of the elastic and flexible bodies, the fourth-order Runge-Kutta solver was found to be the best tool in terms of performance, but it requires a finer discretization than the fluid solver for the system to converge, which demands greater computing power. The coupled solver is demonstrated by simulating the interaction between the fluid, an elastic fishnet and a set of free bodies captured by the net as they are dragged by the fluid. The deformation of the net, as well as the wake produced in the fluid stream, are well captured by the method, without requiring the fluid solver mesh to adapt to the evolving geometry. Application of the same strategy to the simulation of elastic structures subject to the action of wind is also possible with the method presented, and one such application is currently under development.
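The fourth-order Runge-Kutta scheme mentioned above advances the coupled position-velocity system in four stages per step. A generic sketch on a single lumped mass attached to an elastic line (the actual solver's fluid coupling and multi-body bonding are omitted):

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h / 2.0, y + h / 2.0 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Test problem: a lumped mass on an elastic line, x'' = -(k/m) * x,
# written as the first-order system y = [x, v].
k_over_m = 4.0
f = lambda t, y: np.array([y[1], -k_over_m * y[0]])

y = np.array([1.0, 0.0])   # initial displacement 1, at rest
t, h = 0.0, 0.01
for _ in range(100):       # integrate to t = 1
    y = rk4_step(f, t, y, h)
    t += h
# Exact solution: x(t) = cos(2t), v(t) = -2 sin(2t)
```

The fourth-order accuracy is what allows the stiff elastic bonds to be integrated stably, at the cost of four force evaluations per step.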

Keywords: computational fluid dynamics, discrete element method, fishnets, nested overlapping grids

26420 Artificial Neural Network Based Approach for Estimation of Individual Vehicle Speed under Mixed Traffic Condition

Authors: Subhadip Biswas, Shivendra Maurya, Satish Chandra, Indrajit Ghosh

Abstract:

Developing a speed model is a challenging task, particularly under mixed traffic conditions, where the traffic composition plays a significant role in determining vehicular speed. The present research models individual vehicular speed in the context of mixed traffic on an urban arterial. Traffic speed and volume data were collected from three midblock arterial road sections in New Delhi. Using the field data, a volume-based speed prediction model was developed using an Artificial Neural Network (ANN). The model developed in this work is capable of estimating speed for each individual vehicle category. Validation results show a great deal of agreement between the observed speeds and the values predicted by the model. It was also observed that the ANN-based model performs better than other existing models in terms of accuracy. Finally, a sensitivity analysis was performed with the model to examine the effects of traffic volume and its composition on individual speeds.
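As a hedged illustration of the volume-based ANN idea (not the authors' network, whose architecture is not given in the abstract), a minimal one-hidden-layer network trained by gradient descent on synthetic, normalized volume-speed data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, normalized data: speed falls roughly linearly as volume rises.
X = rng.uniform(0.0, 1.0, size=(200, 1))                   # normalized volume
y = 1.0 - 0.5 * X + rng.normal(0.0, 0.02, size=(200, 1))   # normalized speed

# One hidden layer of tanh units, trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, size=(1, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0.0, 0.5, size=(8, 1)); b2 = np.zeros((1, 1))
lr = 0.1

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

loss0 = float(np.mean((forward(X)[1] - y) ** 2))

for _ in range(3000):
    H, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)                   # dLoss/dPred (MSE)
    gW2, gb2 = H.T @ g, g.sum(axis=0, keepdims=True)
    gH = (g @ W2.T) * (1.0 - H ** 2)                # backprop through tanh
    gW1, gb1 = X.T @ gH, gH.sum(axis=0, keepdims=True)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

loss1 = float(np.mean((forward(X)[1] - y) ** 2))
```

A per-category model like the one in the paper would add the traffic composition (e.g. vehicle-class shares) as further input columns of X.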

Keywords: speed model, artificial neural network, arterial, mixed traffic

26419 HKIE Accreditation: A Comparative Study on the Old and New Criteria

Authors: Peter P. K. Chiu

Abstract:

This paper reports a comparative study of the old and new criteria for the professional accreditation of programmes by the Hong Kong Institution of Engineers (HKIE). The major change in the criteria is the adoption of outcome-based accreditation and the measurement of the attainment of outcomes, which is very different from past academic practice and has made preparation for the accreditation exercise considerably more difficult. Through this comparative study, the major differences between the two sets of criteria are identified, and a methodology is devised to help academics handle the issues arising from the adoption of the new criteria, thereby saving a great deal of effort.

Keywords: Hong Kong Institution of Engineers, outcome-based accreditation, Sydney Accord, Washington Accord

26418 Therapeutical Role of Copper Oxide Nanoparticles (CuO NPs) for Breast Cancer Therapy

Authors: Dipranjan Laha, Parimal Karmakar

Abstract:

Metal oxide nanoparticles are well known to generate oxidative stress and deregulate normal cellular activities. Among these, the transition metal oxide copper oxide nanoparticles (CuO NPs) are more compelling than others and are able to modulate different cellular responses. In this work, we synthesized and characterized CuO NPs by various biophysical methods. These CuO NPs (~30 nm) induce autophagy in the human breast cancer cell line MCF7 in a time- and dose-dependent manner. Cellular autophagy was tested by MDC staining, induction of green fluorescent protein light chain 3 (GFP-LC3B) foci by confocal microscopy, transfection of the pBABE-puro mCherry-EGFP-LC3B plasmid, and western blotting of the autophagy marker proteins LC3B, beclin1 and ATG5. Further, inhibition of autophagy by 3-methyladenine (3-MA) decreased the LD50 dose of CuO NPs. Such cell death was associated with the induction of apoptosis, as revealed by FACS analysis, cleavage of PARP, dephosphorylation of Bad and an increased cleavage product of caspase 3. siRNA-mediated inhibition of the autophagy-related gene beclin1 demonstrated similar results. Finally, the induction of apoptosis by 3-MA in CuO NP-treated cells was observed by TEM. This study indicates that CuO NPs are a potent inducer of autophagy, which may be a cellular defense against CuO NP-mediated toxicity, and that inhibition of autophagy switches the cellular response to apoptosis. A combination of CuO NPs with an autophagy inhibitor is thus essential to induce apoptosis in breast cancer cells. Acknowledgments: The authors acknowledge the Department of Biotechnology (No. BT/PR14661/NNT/28/494/2010), Government of India, for financial support of this research work.

Keywords: nanoparticle, autophagy, apoptosis, siRNA-mediated inhibition

26417 Satellite Imagery Classification Based on Deep Convolution Network

Authors: Zhong Ma, Zhuping Wang, Congxin Liu, Xiangzeng Liu

Abstract:

Satellite imagery classification is a challenging problem with many practical applications. In this paper, we designed a deep convolutional neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large-scale variance in satellite images, we introduced the inception module, which has multiple filters of different sizes at the same level, as the building block of our DCNN model. Second, we proposed a genetic-algorithm-based method to efficiently search for the best hyper-parameters of the DCNN in a large search space. The proposed method is evaluated on a benchmark database. The results of the proposed hyper-parameter search method show that it guides the search towards better regions of the parameter space. Based on the hyper-parameters found, we built our DCNN models and evaluated their performance on satellite imagery classification; the results show that the classification accuracy of the proposed models outperforms that of the state-of-the-art method.
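A genetic-algorithm hyper-parameter search can be sketched generically. In this runnable sketch the expensive DCNN training is replaced by a cheap stand-in fitness with a known optimum; in a real run the fitness would be validation accuracy after training, and the gene ranges and operators here are illustrative assumptions, not the paper's:

```python
import random

random.seed(42)

# Stand-in fitness with a known optimum at log10(lr) = -3 and 64 filters.
def fitness(ind):
    log_lr, n_filters = ind
    return -(log_lr + 3.0) ** 2 - ((n_filters - 64) / 32.0) ** 2

def random_ind():
    return [random.uniform(-6.0, -1.0), random.randint(8, 128)]

def mutate(ind):
    child = ind[:]
    child[0] += random.gauss(0.0, 0.3)                        # perturb log lr
    child[1] = min(128, max(8, child[1] + random.randint(-8, 8)))
    return child

def crossover(a, b):
    return [random.choice(genes) for genes in zip(a, b)]      # uniform crossover

pop = [random_ind() for _ in range(20)]
f0 = max(fitness(ind) for ind in pop)

for _ in range(40):                        # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                     # truncation selection (elitist)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = max(pop, key=fitness)
```

Because the top half of the population is carried over unchanged, the best fitness can never decrease across generations, which is the "guides the search towards better regions" behavior noted in the abstract.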

Keywords: satellite imagery classification, deep convolution network, genetic algorithm, hyper-parameter optimization

26416 Comparing Deep Architectures for Selecting Optimal Machine Translation

Authors: Despoina Mouratidis, Katia Lida Kermanidis

Abstract:

Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular ones in automatic MT evaluation are score-based, such as the BLEU score, while others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information such as part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. The framework uses vector representations of two machine-produced translations, one from a statistical machine translation model (SMT) and one from a neural machine translation model (NMT). The vector representations consist of automatically extracted word embeddings and string-like language-independent features, and are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested in this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than some well-known baseline approaches, such as Random Forest (RF) and Support Vector Machine (SVM). The best accuracy results are obtained when LSTM layers are used in the schema, while dense layers give a better balance between the classes, as the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis is carried out. In this context, problems have been identified with certain figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to investigate why all the classifiers led to worse accuracy results for Italian than for Greek, given that the linguistic features employed are language-independent.
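The embedding- and similarity-based features feeding such a network can be illustrated generically. Cosine similarity between averaged sentence embeddings and length ratios are typical choices; the paper's exact feature set is not fully specified here, so this is an assumed, illustrative set:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def pairwise_features(emb_smt, emb_nmt, emb_ref, len_smt, len_nmt, len_ref):
    """Language-independent features for classifying which MT output is better:
    embedding similarity of each output to the reference, similarity between
    the two outputs, and length ratios (illustrative, not the paper's exact set)."""
    return np.array([
        cosine(emb_smt, emb_ref),
        cosine(emb_nmt, emb_ref),
        cosine(emb_smt, emb_nmt),
        len_smt / len_ref,
        len_nmt / len_ref,
    ])

# Toy sentence embeddings (e.g. averaged word vectors)
ref = np.array([1.0, 0.0, 1.0])
smt = np.array([1.0, 0.1, 0.9])   # close to the reference
nmt = np.array([0.2, 1.0, 0.1])   # far from the reference
feats = pairwise_features(smt, nmt, ref, 7, 9, 8)
```

Such a feature vector, one per sentence pair, is what the dense, CNN, or LSTM classifier would consume.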

Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification

26415 Forecasting Impacts on Vulnerable Shorelines: Vulnerability Assessment Along the Coastal Zone of Messologi Area - Western Greece

Authors: Evangelos Tsakalos, Maria Kazantzaki, Eleni Filippaki, Yannis Bassiakos

Abstract:

The coastal areas of the Mediterranean have been extensively affected by the transgressive event that followed the Last Glacial Maximum, and many studies have been conducted on the stratigraphic configuration of coastal sediments around the Mediterranean. The coastal zone of the Messologi area, western Greece, consists of low-relief beaches with low cliffs and eroded dunes, which, in combination with the rising sea level and the tectonic subsidence of the area, has led to substantial coastal erosion. Coastal vulnerability assessment is a useful means of identifying areas of coastline that are vulnerable to the impacts of climate change and coastal processes, highlighting potential problem areas. Commonly, coastal vulnerability assessment takes the form of an ‘index’ that quantifies the relative vulnerability along a coastline. Here we make use of the coastal vulnerability index (CVI) methodology of Thieler and Hammar-Klose, considering geological features, coastal slope, relative sea-level change, shoreline erosion/accretion rates, mean significant wave height and mean tide range, to assess the present-day vulnerability of the coastal zone of the Messologi area. In light of this, an impact assessment is performed under three different sea level rise scenarios, and adaptation measures to control climate change events are proposed. This study contributes toward coastal zone management practices in low-lying areas with little data information, assisting decision-makers in adopting the best adaptation options to overcome sea level rise impacts on vulnerable areas similar to the coastal zone of Messologi.
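The Thieler and Hammar-Klose CVI combines the six ranked variables as the square root of their product divided by the number of variables. A direct implementation (the example ranks are hypothetical, not values from this study):

```python
import math

def coastal_vulnerability_index(ranks):
    """CVI = sqrt((a*b*c*d*e*f) / n), with each variable ranked from 1
    (very low vulnerability) to 5 (very high), after Thieler and Hammar-Klose."""
    if len(ranks) != 6:
        raise ValueError("expected ranks for the six CVI variables")
    product = math.prod(ranks)
    return math.sqrt(product / len(ranks))

# Hypothetical shoreline segment: geology=4, slope=3, sea-level change=4,
# erosion/accretion=5, wave height=2, tide range=2
cvi = coastal_vulnerability_index([4, 3, 4, 5, 2, 2])
```

Segments are then typically binned into relative classes (low to very high) by the quartiles of the CVI values along the studied coastline.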

Keywords: coastal vulnerability index, coastal erosion, sea level rise, GIS

26414 Acceleration of Lagrangian and Eulerian Flow Solvers via Graphics Processing Units

Authors: Pooya Niksiar, Ali Ashrafizadeh, Mehrzad Shams, Amir Hossein Madani

Abstract:

There are many computationally demanding applications in science and engineering that need efficient algorithms implemented on high performance computers. Recently, Graphics Processing Units (GPUs) have drawn much attention compared to traditional CPU-based hardware and have opened up new improvement venues in scientific computing. One particular application area is Computational Fluid Dynamics (CFD), in which mature CPU-based codes need to be converted to GPU-based algorithms to take advantage of this new technology. In this paper, numerical solutions of two classes of discrete fluid flow models on both CPU and GPU are discussed and compared. Test problems include an Eulerian model of a two-dimensional incompressible laminar flow case and a Lagrangian model of a two-phase flow field. The CUDA programming standard is used to employ an NVIDIA GPU with 480 cores, and a serial C++ code is run on a single core of an Intel quad-core CPU. Up to two orders of magnitude of speed-up are observed on the GPU for a certain range of grid resolutions or particle numbers. As expected, the Lagrangian formulation is better suited to parallel computation on the GPU, although the Eulerian formulation also shows a significant speed-up.
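Why the Lagrangian formulation parallelizes so well can be seen from its data-parallel structure: every particle is updated independently from the local fluid velocity. A CPU sketch contrasting a per-particle loop with a vectorized (GPU-like) update, assuming a simple linear drag model and a uniform fluid velocity (not the paper's CUDA kernels):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
pos = rng.uniform(0.0, 1.0, size=(n, 3))   # particle positions
vel = np.zeros((n, 3))                      # particle velocities
u_fluid = np.array([1.0, 0.0, 0.0])         # uniform fluid velocity for the sketch
tau, dt = 0.1, 0.01                         # particle response time, time step

def step_loop(pos, vel):
    """Serial update: one particle at a time, like a single CPU core."""
    pos, vel = pos.copy(), vel.copy()
    for i in range(len(pos)):
        vel[i] += dt / tau * (u_fluid - vel[i])   # linear drag toward fluid
        pos[i] += dt * vel[i]
    return pos, vel

def step_vectorized(pos, vel):
    """Data-parallel update: all particles at once; each row is independent,
    which is exactly what maps onto one GPU thread per particle."""
    vel = vel + dt / tau * (u_fluid - vel)
    pos = pos + dt * vel
    return pos, vel

p1, v1 = step_loop(pos, vel)
p2, v2 = step_vectorized(pos, vel)
```

The two updates are arithmetically identical; only the absence of cross-particle dependencies makes the second form trivially parallel.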

Keywords: CFD, Eulerian formulation, graphics processing units, Lagrangian formulation

26413 Synthetic Aperture Radar Remote Sensing Classification Using the Bag of Visual Words Model to Land Cover Studies

Authors: Reza Mohammadi, Mahmod R. Sahebi, Mehrnoosh Omati, Milad Vahidi

Abstract:

Classification of high resolution polarimetric Synthetic Aperture Radar (PolSAR) images plays an important role in land cover and land use management. Recently, classification algorithms based on the Bag of Visual Words (BOVW) model have attracted significant interest among scholars and researchers in and out of the field of remote sensing. In this paper, a BOVW model with pixel-based low-level features has been implemented to classify a subset of a San Francisco Bay PolSAR image, acquired by RADARSAT-2 in C-band. We have used a segment-based decision-making strategy and compared the result with that of a traditional Support Vector Machine (SVM) classifier. The proposed algorithm achieved an overall classification accuracy of 90.95%, showing that it is comparable with state-of-the-art methods. In addition to increasing the classification accuracy, the proposed method reduces the undesirable speckle effect of SAR images.
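The core of a BOVW representation is assigning each low-level feature to its nearest codeword and histogramming the assignments; the histogram then feeds a classifier such as an SVM. A minimal sketch, not the authors' implementation (codebook construction, e.g. by k-means over training features, is omitted; all names are illustrative):

```python
def bovw_histogram(features, codebook):
    """Map low-level feature vectors to a normalized histogram of
    visual words.

    Each feature is assigned to its nearest codeword by squared
    Euclidean distance; the image or segment is then represented by
    the relative frequency of each word.
    """
    hist = [0] * len(codebook)
    for f in features:
        best = min(range(len(codebook)),
                   key=lambda k: sum((a - b) ** 2
                                     for a, b in zip(f, codebook[k])))
        hist[best] += 1
    n = len(features)
    return [h / n for h in hist]
```

In a segment-based strategy, one such histogram is built per segment rather than per pixel, which is part of what suppresses speckle.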

Keywords: Bag of Visual Words (BOVW), classification, feature extraction, land cover management, Polarimetric Synthetic Aperture Radar (PolSAR)

Procedia PDF Downloads 193
26412 An Online 3D Modeling Method Based on a Lossless Compression Algorithm

Authors: Jiankang Wang, Hongyang Yu

Abstract:

This paper proposes a portable online 3D modeling method. The method first utilizes a depth camera to collect data and compresses the depth data using a frame-by-frame lossless data compression method. The color image is encoded using the H.264 encoding format. After the cloud obtains the color image and depth image, a 3D modeling method based on BundleFusion is used to complete the 3D modeling. The results of this study indicate that the method is portable, operates online, and is highly efficient, and that it has a wide range of application prospects.
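A frame-by-frame lossless scheme of this kind can be sketched as residual coding against the previous depth frame followed by a general-purpose lossless compressor: depth changes little between frames, so residuals cluster near zero and deflate well. This is an illustrative assumption about the design, not the paper's actual codec:

```python
import struct
import zlib

def compress_depth_frame(frame, prev=None):
    """Losslessly compress one depth frame (a flat list of integer
    depth values). With a previous frame available, only the
    per-pixel residuals are encoded."""
    residual = frame if prev is None else [c - p for c, p in zip(frame, prev)]
    # Pack as signed 32-bit ints (residuals of 16-bit depths can be
    # negative and exceed the 16-bit range), then deflate losslessly.
    raw = struct.pack(f'<{len(residual)}i', *residual)
    return zlib.compress(raw)

def decompress_depth_frame(blob, length, prev=None):
    """Exact inverse: inflate, unpack, and undo the residual step."""
    residual = list(struct.unpack(f'<{length}i', zlib.decompress(blob)))
    return residual if prev is None else [r + p for r, p in zip(residual, prev)]
```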

Keywords: 3D reconstruction, BundleFusion, lossless compression, depth image

Procedia PDF Downloads 69
26411 Comparative Analysis of Political Parties and Political Behavior: The Trend for Democratic Principles

Authors: Mary Edokpa Fadal, Frances Agweda

Abstract:

Considering the volatile and evolving political environment in developing countries, it is important to review effective leadership practices that focus on transformational and systematic political development and values. If attitudes towards partisan politics, and the politics played by political parties, deviate from accepted, safe, efficient, and practical standards, political parties will struggle endlessly to maintain a system that works. The analysis is situated in the context of political parties and partisan political behavior in contemporary societies and developing nations. Recent empirical research shows that most political parties are largely inactive in playing their instrumental roles in the political system, such as unifying, simplifying, and stabilizing the political process. This is traced to the problem of ethnic politics dominated by tribalism. The rising clamor for political development calls for restructuring the center of the polity and correcting the flaws in the political system. The paper argues that political parties and political actors are among the vital instruments for attaining the societal goal of democratic principles for peace and durability. Issues of ethnic and partisan politics are also discussed as they relate to questions of political ideology. The paper examines issues surrounding the practice of political parties and their activities in the democratic development of a society, helping to resolve questions of politics and governance in developing countries. These issues are seen as aberrations that have characterized politics and political behavior, especially with respect to transparency and the fulfilment of parties' purpose of existence.
The paper further argues that the transition of developing states depends largely on political structures, party politics, and the nature of constitutionalism following the democratic awakening. It concludes that politics and political behavior are human factors that play a vital role in the development of contemporary societies and drive nations toward their goals. The study relies on documentary and primary sources of data collection and on empirical analysis.

Keywords: development, ethnicity, partisan politics, political behavior, political parties

Procedia PDF Downloads 207
26410 The Colombian Special Jurisdiction for Peace, a Transitional Justice Mechanism That Prioritizes Reconciliation over Punishment: A Content Analysis of the Colombian Peace Agreement

Authors: Laura Mendez

Abstract:

Tribunals for the prosecution of crimes against humanity have been implemented in recent history via international intervention or imposed by one side of the conflict, as in the cases of Rwanda, Iraq, Argentina, and Chile. However, the creation of a criminal tribunal as the result of a peace agreement between formerly warring parties has been unique to the Colombian peace process. As such, the Colombian Special Jurisdiction for Peace (SJP), or JEP for its Spanish acronym, is viewed as a site of social contestation where actors shape its design and implementation. This study contributes to the transitional justice literature by analyzing how the framing of the creation of the Colombian tribunal reveals the parties' interests. The analysis frames the interests of the power-brokers, i.e., the government and the Revolutionary Armed Forces of Colombia (FARC), and the victims in light of the tribunal’s functions. The purpose of this analysis is to understand how the interests of the parties are embedded in the design of the SJP. This paper argues that the creation of the SJP rests on restorative justice, for which the victim, not the perpetrator, is at the center of prosecution. The SJP’s approach to justice moves from prosecution as punishment to prosecution as sanctions. The SJP’s alternative sanctions, focused on truth, reparation, and restoration, are designed to humanize both the victim and the perpetrator in order to achieve reconciliation. The findings also show that requiring the perpetrator to perform labor to repair the victim as an alternative form of sanction aims to foster relations of reintegration and social learning between victims and perpetrators.

Keywords: transitional justice mechanisms, criminal tribunals, Colombia, Colombian Jurisdiction for Peace, JEP

Procedia PDF Downloads 105
26409 Sustainable Composites for Aircraft Cabin Interior Applications

Authors: Fiorenzo Lenzi, Doris Abt, Besnik Bytyqi

Abstract:

Recent developments in composite materials for the interior cabin market provide more sustainable solutions for industrial applications. One contribution comes from epoxy-based prepregs recently developed to substitute phenolic prepregs in order to reduce the environmental impact of their production process and to eliminate health and safety issues related to their handling. Another example is the use of Mica-based products for improving the fire protection of interior cabin parts. Minerals, such as Mica, can be used as reinforcement in composites to reduce the heat release rate or, more traditionally, to improve the burn-through performance of fuselage and cargo lining components.

Keywords: prepreg, epoxy, Mica, battery protection

Procedia PDF Downloads 69
26408 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as dynamical systems. However, calibrating such a dynamical system is difficult, because it amounts to a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yields in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical processes but imposes strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression, and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network, and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate crop prediction capacity.
The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the ability to calibrate the mechanistic model from easily accessible datasets offers several complementary perspectives: the mechanistic model can potentially help to identify the stresses suffered by the crop or the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
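The evaluation machinery described above can be sketched as follows. MAEP is computed here relative to the observed yield, which is one plausible reading of the percentage figures; the helper names and the unshuffled fold assignment are ours:

```python
import math

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def maep(y_true, y_pred):
    """Mean absolute error of prediction, expressed as a fraction of
    the observed value (an assumed convention for the % figures)."""
    n = len(y_true)
    return sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / n

def kfold_splits(n, k=5):
    """(train_indices, test_indices) pairs for k-fold cross-validation,
    assigning records round-robin without shuffling."""
    folds = [list(range(i, n, k)) for i in range(k)]
    return [(sorted(set(range(n)) - set(f)), f) for f in folds]
```

Each regression method is fitted on the train indices of every split and scored on the held-out fold; the reported MAEP is the average over the five folds.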

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 220
26407 Aeroelastic Analysis of Engine Nacelle Strake Considering Geometric Nonlinear Behavior

Authors: N. Manoj

Abstract:

The aeroelastic behavior of an engine nacelle strake subjected to unsteady aerodynamic flows is investigated in this paper. Geometric nonlinear characteristics and modal parameters of the nacelle strake are studied under dynamic loading conditions. Here, a Navier-Stokes (N-S) based Finite Volume solver is coupled with a Finite Element (FE) based nonlinear structural solver to investigate the nonlinear characteristics of the nacelle strake over a range of dynamic pressures at various phases of flight, such as takeoff, climb, and cruise conditions. The combination of high fidelity models for both aerodynamics and structural dynamics is used to predict the nonlinearities of the strake (chine). The methodology adopted for the present aeroelastic analysis is a partitioned, time-domain coupling of the CFD and CSD solvers, and it is validated against experimental and numerical aeroelastic data for a cropped delta wing model with a proven record. The present strake geometry is derived from theoretical formulation. The amplitudes and frequencies obtained from the coupled solver at various dynamic pressures are discussed, which gives a better understanding of their impact on the aerodynamic design-sizing of the strake.

Keywords: aeroelasticity, finite volume, geometric nonlinearity, limit cycle oscillations, strake

Procedia PDF Downloads 278
26406 An Android Geofencing App for Autonomous Remote Switch Control

Authors: Jamie Wong, Daisy Sang, Chang-Shyh Peng

Abstract:

A geofence is a virtual fence defined by a preset physical radius around a target location. A geofencing app provides location-based services that define the actionable operations to perform upon the crossing of a geofence. Geofencing requires continual location tracking, which can consume a noticeable amount of battery power. Additionally, location updates need to be frequent and accurate so that actions can be triggered within an expected time window after the mobile user crosses the geofence. In this paper, we build an Android mobile geofencing application to remotely and autonomously control a power switch.
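The core geofence test is a great-circle distance check against the preset radius, with edge detection so the switch action fires only on entering or leaving. A hedged sketch of that logic (not the app's actual Android code, which would use the platform's geofencing APIs; function names are ours):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres, assuming a mean Earth radius
    of 6371 km (haversine formula)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def geofence_event(prev_inside, lat, lon, fence_lat, fence_lon, radius_m):
    """Return (inside_now, event); event is 'ENTER', 'EXIT', or None.
    Only a state change triggers an event, so repeated updates inside
    the fence do not retrigger the switch."""
    inside = haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m
    if inside and not prev_inside:
        return inside, "ENTER"   # e.g. switch the remote outlet on
    if prev_inside and not inside:
        return inside, "EXIT"    # e.g. switch it off
    return inside, None
```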

Keywords: location based service, geofence, autonomous, remote switch

Procedia PDF Downloads 305
26405 Peripheral Inflammation and Neurodegeneration; A Potential for Therapeutic Intervention in Alzheimer’s Disease, Parkinson’s Disease, and Amyotrophic Lateral Sclerosis

Authors: Lourdes Hanna, Edward Poluyi, Chibuikem Ikwuegbuenyi, Eghosa Morgan, Grace Imaguezegie

Abstract:

Background: Degeneration of the central nervous system (CNS), also known as neurodegeneration, describes an age-associated, progressive loss of the structure and function of neuronal material, leading to functional and mental impairments. Main body: Neuroinflammation contributes to the continuous worsening of neurodegenerative states, which are characterised by functional and mental impairments due to the progressive loss of the structure and function of neuronal material. Some of the most common neurodegenerative diseases include Alzheimer’s disease (AD), Parkinson’s disease (PD), and amyotrophic lateral sclerosis (ALS). Whilst neuroinflammation is a key contributor to the progression of such disease states, it is not the sole cause, as multiple factors contribute. Theoretically, non-steroidal anti-inflammatory drugs (NSAIDs) have the potential to target neuroinflammation and so reduce the severity of disease states. Whilst some animal models investigating the effects of NSAIDs on the risk of neurodegenerative diseases have shown a beneficial effect, this finding has not been consistent. Conclusion: Further investigation using more advanced research methods is required to better understand neuroinflammatory pathways and to determine whether a window for NSAID efficacy still exists.

Keywords: intervention, central nervous system, neurodegeneration, neuroinflammation

Procedia PDF Downloads 64
26404 Effect of Solvents in the Extraction and Stability of Anthocyanin from the Petals of Caesalpinia pulcherrima for Natural Dye-Sensitized Solar Cell

Authors: N. Prabavathy, R. Balasundaraprabhu, S. Shalini, Dhayalan Velauthapillai, S. Prasanna, N. Muthukumarasamy

Abstract:

The dye sensitized solar cell (DSSC) has become a significant research area due to its fundamental and scientific importance in the field of energy conversion. Synthetic dyes as sensitizers in DSSCs are efficient and durable, but they are costly, toxic, and tend to degrade. Natural sensitizers contain plant pigments such as anthocyanin, carotenoid, flavonoid, and chlorophyll, which promote light absorption as well as the injection of charges into the conduction band of TiO₂ through the sensitizer. However, the efficiency of natural dyes is not up to the mark, mainly due to the instability of pigments such as anthocyanin. The stability issues are mainly due to the effect of the solvent used to extract the anthocyanins and its pH. Taking this factor into consideration, in the present work, anthocyanins were extracted from the flower Caesalpinia pulcherrima (C. pulcherrima) with various solvents, and their respective stability and pH values are discussed. Using citric acid as the solvent to extract anthocyanin has shown better stability than other solvents. It also helps in enhancing the sensitization properties of anthocyanins with titanium dioxide (TiO₂) nanorods. The IPCE spectra show a higher photovoltaic performance for dye sensitized TiO₂ nanorods when citric acid is used as the solvent. The natural DSSC using citric acid as solvent shows a higher efficiency compared to other solvents. Hence, citric acid proves to be a safe solvent for natural DSSCs, boosting the photovoltaic performance while maintaining the stability of the anthocyanins.

Keywords: Caesalpinia pulcherrima, citric acid, dye sensitized solar cells, TiO₂ nanorods

Procedia PDF Downloads 275
26403 Combustion Analysis of Suspended Sodium Droplet

Authors: T. Watanabe

Abstract:

Combustion analysis of suspended sodium droplet is performed by solving numerically the Navier-Stokes equations and the energy conservation equations. The combustion model consists of the pre-ignition and post-ignition models. The reaction rate for the pre-ignition model is based on the chemical kinetics, while that for the post-ignition model is based on the mass transfer rate of oxygen. The calculated droplet temperature is shown to be in good agreement with the existing experimental data. The temperature field in and around the droplet is obtained as well as the droplet shape variation, and the present numerical model is confirmed to be effective for the combustion analysis.
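The two-regime reaction model described above can be sketched schematically: kinetics-limited before ignition, oxygen-transport-limited after. All numerical constants below are placeholders for illustration, not the paper's calibrated values:

```python
import math

def sodium_reaction_rate(T, ignited,
                         A=1.0e6, Ea=8.0e4, R=8.314,
                         k_mass=0.05, c_o2=0.21):
    """Schematic two-regime rate for a burning sodium droplet.

    Pre-ignition: Arrhenius chemical kinetics, strongly temperature
    dependent. Post-ignition: limited by the mass transfer rate of
    oxygen to the droplet surface, so (in this sketch) independent
    of droplet temperature. A, Ea, k_mass, and c_o2 are illustrative
    placeholder values.
    """
    if not ignited:
        return A * math.exp(-Ea / (R * T)) * c_o2
    return k_mass * c_o2
```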

Keywords: analysis, combustion, droplet, sodium

Procedia PDF Downloads 199
26402 Photoelastic Analysis and Finite Element Analysis of a Stress Field Developed in a Double Edge Notched Specimen

Authors: A. Bilek, M. Beldi, T. Cherfi, S. Djebali, S. Larbi

Abstract:

Finite element analysis and photoelasticity are used to determine the stress field developed in a double edge notched specimen loaded in tension. The specimen is cut from a birefringent plate. Experimental isochromatic fringes are obtained with circularly polarized light on the analyzer of a regular polariscope. The fringes represent the loci of points of equal maximum shear stress. In order to obtain the stress values corresponding to the fringe orders recorded in the notched specimen, particularly in the neighborhood of the notches, a calibration disc made of the same material is loaded in compression along its diameter to determine the photoelastic fringe value. This fringe value is also used in the finite element solution in order to obtain the simulated photoelastic fringes, the isochromatics as well as the isoclinics. A color scale is used by the software to represent the simulated fringes over the whole model. The stress concentration factor can be readily obtained at the notches. Good agreement is obtained between the experimental and the simulated fringe patterns and between the graphs of the shear stress, particularly in the neighborhood of the notches. The purpose of this paper is to show that the isochromatic and isoclinic fringe patterns in a stressed model can be obtained rapidly and accurately by finite element analysis, as the experimental procedure can be time consuming. Stress fields can therefore be analyzed in three-dimensional models, as long as the meshing and the boundary conditions are properly set in the program.
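Converting a recorded fringe order to stress uses the stress-optic law: the principal-stress difference equals the fringe order times the material fringe value divided by the model thickness, and the maximum in-plane shear stress is half that difference. A minimal sketch of both that conversion and the stress concentration factor (function names are ours):

```python
def max_shear_stress(N, f_sigma, h):
    """Stress-optic law: sigma1 - sigma2 = N * f_sigma / h, where N is
    the isochromatic fringe order, f_sigma the material fringe value
    (from the calibration disc), and h the model thickness. The
    maximum in-plane shear stress is half the principal-stress
    difference."""
    return N * f_sigma / (2.0 * h)

def stress_concentration_factor(tau_notch, tau_nominal):
    """Ratio of the shear stress read at the notch to the nominal
    (far-field) value."""
    return tau_notch / tau_nominal
```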

Keywords: isochromatic fringe, isoclinic fringe, photoelasticity, stress concentration factor

Procedia PDF Downloads 217
26401 On the Use of Machine Learning for Tamper Detection

Authors: Basel Halak, Christian Hall, Syed Abdul Father, Nelson Chow Wai Kit, Ruwaydah Widaad Raymode

Abstract:

The attack surface of computing devices is becoming very large, driven by the sheer increase in interconnected devices, reaching 50 billion in 2025, which makes it easier for adversaries to gain direct access and perform well-known physical attacks. The impact of the increased security vulnerability of electronic systems is exacerbated for devices that are part of critical infrastructure or used in military applications, where the likelihood of being targeted is very high. This continuously evolving landscape of security threats calls for a new generation of defense methods that are equally effective and adaptive. This paper proposes an intelligent defense mechanism to protect against physical tampering; it consists of a tamper detection system enhanced with machine learning capabilities, which allows it to recognize normal operating conditions, classify known physical attacks, and identify new types of malicious behavior. A prototype of the proposed system has been implemented, and its functionality has been successfully verified for two types of normal operating conditions and four further forms of physical attack. In addition, a systematic threat modeling analysis and security validation was carried out, which indicated that the proposed solution provides protection against threats including information leakage, loss of data, and disruption of operation.
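One simple way to combine "classify known attacks" with "identify new types" is a distance-based classifier with a novelty threshold: a reading close to a trained class is labeled with it, while a reading far from everything seen in training is flagged as a new malicious behavior. The sketch below is an illustrative stand-in (nearest-centroid), not the authors' actual model:

```python
def classify_reading(x, centroids, threshold):
    """Nearest-centroid classification of a sensor feature vector.

    centroids maps each trained label ('normal' conditions or a known
    physical attack) to its mean feature vector from training data.
    A reading whose distance to every trained centroid exceeds the
    threshold is flagged as a previously unseen attack type.
    """
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

    label, d = min(((lbl, dist(x, c)) for lbl, c in centroids.items()),
                   key=lambda t: t[1])
    if d > threshold:
        return "unknown-attack"
    return label
```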

Keywords: anti-tamper, hardware, machine learning, physical security, embedded devices, IoT

Procedia PDF Downloads 138
26400 Preparedness is Overrated: Community Responses to Floods in a Context of (Perceived) Low Probability

Authors: Kim Anema, Matthias Max, Chris Zevenbergen

Abstract:

For any flood risk manager the 'safety paradox' has to be a familiar concept: low probability leads to a sense of safety, which leads to more investments in the area, which leads to higher potential consequences, keeping the aggregated risk (probability × consequences) at the same level. Therefore, it is important to mitigate potential consequences apart from probability. However, when the (perceived) probability is so low that there is no recognizable trend for society to adapt to, addressing the potential consequences will always be the lagging point on the agenda. Preparedness programs fail because of a lack of interest and urgency; policy makers are distracted by their day-to-day business, and there is always a more urgent issue to spend the taxpayer's money on. The leading question in this study was how to address the social consequences of flooding in a context of (perceived) low probability. Disruptions of everyday urban life, large or small, can be caused by a variety of (un)expected things, of which flooding is only one possibility. Variability like this is typically addressed with resilience, and we used the concept of Community Resilience as the framework for this study. Drawing on face-to-face interviews, an extensive questionnaire, and publicly available statistical data, we explored the 'whole society response' to two recent urban flood events: the Brisbane Floods (AUS) in 2011 and the Dresden Floods (GE) in 2013. In Brisbane, we studied how the societal impacts of the floods were counteracted by both authorities and the public, and in Dresden we were able to validate our findings. A large part of the reactions, both public and institutional, to these two urban flood events were not fuelled by preparedness or proper planning.
Instead, the more important success factors in counteracting social impacts, such as demographic changes in neighborhoods and (non-)economic losses, were dynamics like community action, flexibility and creativity from authorities, leadership, informal connections, and a shared narrative. These proved to be the determining factors for the quality and speed of recovery in both cities. The resilience of the community in Brisbane was good, due to (i) the approachability of (local) authorities, (ii) a big group of ‘secondary victims’, and (iii) clear leadership. All three of these elements were amplified by the use of social media and/or web 2.0 by both the communities and the authorities involved. The numerous contacts and social connections made through the web were fast, need-driven and, in their own way, orderly. Similarly, in Dresden, large groups of 'unprepared', ad hoc organized citizens managed to work together with authorities in a way that was effective and sped up recovery. The concept of community resilience is better fitted than 'social adaptation' to deal with the potential consequences of an (im)probable flood. Community resilience is built on capacities and dynamics that are part of everyday life and which can be invested in pre-event to minimize the social impact of urban flooding. Investing in these might even have beneficial trade-offs in other policy fields.

Keywords: community resilience, disaster response, social consequences, preparedness

Procedia PDF Downloads 339
26399 Controlling Interactions and Non-Equilibrium Steady State in Spinning Active Matter Monolayers

Authors: Joshua Paul Steimel, Michael Pappas, Ethan Hall

Abstract:

Particle-particle interactions are critical in determining the state of an active matter system. Unique and ubiquitous non-equilibrium behavior like swarming, vortexing, spiraling, and much more is governed by interactions between active units or particles. In hybrid active-passive matter systems, the attraction between spinning active units in a 2D monolayer of passive particles is controlled by the mechanical behavior of the passive monolayer. We demonstrate here that the range and dynamics of this attraction can be controlled by changing the composition of the passive monolayer through the addition of dopant passive particles. These dopant particles effectively pin dislocation motion in the passive medium and reduce the probability of the defect motion required to erode the bridge of passive particles between active spinners, thus reducing the range of attraction. Additionally, adding an out-of-plane component to the magnetic moment, creating a top-like motion, produces a short-range repulsion between the top-like particles. At inter-top distances of less than four particle diameters, the tops repel, but beyond that distance they attract, up to 13 particle diameters apart. The tops were also able to locally and transiently anneal the passive monolayer. Thus, we demonstrate that by tuning several parameters of the hybrid active matter system, one can observe very different emergent behavior.

Keywords: active matter, colloids, ferromagnetic, annealing

Procedia PDF Downloads 91