Search results for: geometric feature
489 Analyzing the Influence of Hydrometeorological Extremes, Geological Setting, and Social Demographics on Public Health
Authors: Irfan Ahmad Afip
Abstract:
The main objective of this research is to accurately identify the likely severity of a Leptospirosis outbreak in a given area based on the features fed into a multivariate regression model. The research question is whether the possibility of an outbreak in a specific area is influenced by features such as social demographics and hydrometeorological extremes. If the occurrence of an outbreak is subject to these features, then epidemic severity will differ from area to area depending on the environmental setting, because the features influence both the possibility and the severity of an outbreak. Specifically, the research objective was three-fold, namely: (a) to identify the relevant multivariate features and visualize patterns in the data, (b) to develop a multivariate regression model based on the selected features and determine the possibility of a Leptospirosis outbreak in an area, and (c) to compare the predictive ability of the multivariate regression model with that of machine learning algorithms. Several secondary data features were collected from locations in the state of Negeri Sembilan, Malaysia, on the basis that they would be relevant to determining outbreak severity in the area. The relevant features then become inputs to a multivariate regression model; a linear regression model is a simple and quick solution for creating prognostic capability, and a multivariate regression model has proven to have more precise prognostic capability than univariate models. The expected outcome of this research is to establish a correlation between social demographic and hydrometeorological features and the Leptospirosis bacteria; it will also contribute to understanding the underlying relationship between the pathogen and the ecosystem. The relationship established can be beneficial for health departments or urban planners in inspecting and preparing for future outbreaks through event detection and system health monitoring.
Keywords: geographical information system, hydrometeorological, leptospirosis, multivariate regression
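As a rough illustration of the regression step outlined in the abstract above, the sketch below fits a multivariate linear model to synthetic data. The feature names (extreme rainfall, flood days, population density) and all numbers are assumed placeholders, not the variables or dataset of the study.

```python
# Minimal sketch of the multivariate regression step described above.
# Feature names and data are illustrative placeholders, not the study's dataset.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical hydrometeorological and social-demographic features per area
X = np.column_stack([
    rng.gamma(2.0, 50.0, 100),   # e.g. extreme rainfall (mm)
    rng.integers(0, 15, 100),    # e.g. flood days per year
    rng.uniform(50, 5000, 100),  # e.g. population density
])
y = 0.01 * X[:, 0] + 0.8 * X[:, 1] + 0.001 * X[:, 2] + rng.normal(0, 1, 100)  # synthetic severity

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "R^2:", model.score(X, y))
print("predicted severity for a new area:", model.predict([[120.0, 5, 800.0]]))
```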
488 Virtue, Truth, Freedom, And The History Of Philosophy
Authors: Ashley DelCorno
Abstract:
G. E. M. Anscombe’s 1958 essay “Modern Moral Philosophy” and the tradition of virtue ethics that followed have given rise to the restoration (or, more plainly, the resurrection) of Aristotle as something of an authority figure. Alasdair MacIntyre and Martha Nussbaum are proponents, for example, not just of Aristotle’s relevancy but also of his apparent implicit authority. That said, it’s not clear that the schema imagined by virtue ethicists accurately describes moral life or that it does not inadvertently work to impoverish genuine decision-making. If the label ‘virtue’ is categorically denied to some groups (while arbitrarily afforded to others), it can only turn on itself, thus rendering ridiculous its own premise. Likewise, as an inescapable feature of virtue ethics, Aristotelian binaries like ‘virtue/vice’ and ‘voluntary/involuntary’ offer up false dichotomies that may seriously compromise an agent’s ability to conceptualize choices that are truly free and rooted in meaningful criteria. Here, this topic is analyzed through a feminist lens predicated on the known paradoxes of patriarchy. The work of feminist theorists Jacqui Alexander, Katharine Angel, Simone de Beauvoir, bell hooks, Audre Lorde, Imani Perry, and Amia Srinivasan serves as important guideposts, and the argument here is built from a key tenet of black feminist thought regarding scarcity and possibility. Above all, it’s clear that though the philosophical tradition of virtue ethics presents itself as recovering the place of agency in ethics, its premises possess crippling limitations toward the achievement of this goal. These include, most notably, virtue ethics’ binding analysis of history, as well as its axiomatic attachment to obligatory clauses, its problematic reading-in of Aristotle, and its arbitrary commitment to predetermined and competitively patriarchal ideas of what counts as a virtue.
Keywords: feminist history, the limits of utopic imagination, curatorial creation, truth, virtue, freedom
487 The Suitability of Agile Practices in Healthcare Industry with Regard to Healthcare Regulations
Authors: Mahmood Alsaadi, Alexei Lisitsa
Abstract:
Nowadays, medical devices rely completely on software, whether as whole software or as embedded software; therefore, organizations that develop medical device software can benefit from adopting agile practices. Using agile practices in healthcare software development would bring benefits such as producing a high-quality product at low cost and in a short period. However, medical device software development companies have faced challenges in adopting agile practices. These are due to the gaps that exist between agile practices and the requirements of healthcare regulations, such as documentation, traceability, and formality. This research paper conducts a study to investigate the adoption rate of agile practices in medical device software development and extracts and outlines the requirements of healthcare regulations such as the Food and Drug Administration (FDA), the Health Insurance Portability and Accountability Act (HIPAA), and the Medical Device Directive (MDD) that affect the software development life cycle directly or indirectly. Moreover, this paper evaluates the suitability of using agile practices in healthcare industries by analyzing the most popular agile practices, such as eXtreme Programming (XP), Scrum, and Feature-Driven Development (FDD), from the healthcare industry's point of view and in comparison with the requirements of healthcare regulations. Finally, the authors propose an agile mixture model that consists of different practices from different agile methods. As a result, the adoption rate of agile practices in healthcare industries is still low, and agile practices should be enhanced with regard to the requirements of healthcare regulations in order to be used in healthcare software development organizations. The proposed agile mixture model may therefore assist in minimizing the gaps existing between healthcare regulations and agile practices and increase the adoption rate in the healthcare industry. As this research paper is part of an ongoing project, an evaluation of the agile mixture model will be conducted in the near future.
Keywords: adoption of agile, agile gaps, agile mixture model, agile practices, healthcare regulations
486 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection
Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine
Abstract:
Atrial fibrillation (AF) has been considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients place AF among the most challenging medical issues. Automatic, early, and fast AF detection is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation. However, the published results do not show satisfactory classification accuracy. This work was aimed at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (AF Termination Challenge Database, MIT-BIH AF, Normal Sinus Rhythm RR Interval Database, and MIT-BIH Normal Sinus Rhythm Databases) were used for assessment. All time series were segmented into 1-min RR interval windows and then four specific features were calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find important features to discriminate between AF and normal sinus rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than existing algorithms do, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.
Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine
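The pipeline described above (1-min RR windows, feature calculation, PCA, prototype-based classification) might look roughly like the sketch below. The four HRV descriptors and the NearestCentroid classifier are stand-ins chosen for illustration; the paper's exact features and its LVQ network are not reproduced here.

```python
# Sketch of the windowing / feature / classification pipeline described above.
# The four HRV features and the prototype classifier are stand-ins (assumptions);
# the study's exact features and its LVQ network are not reproduced here.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestCentroid

def rr_features(rr_ms):
    """Four illustrative features from one 1-min window of RR intervals (ms)."""
    diff = np.diff(rr_ms)
    return np.array([
        rr_ms.mean(),                         # mean RR
        rr_ms.std(ddof=1),                    # SDNN
        np.sqrt(np.mean(diff ** 2)),          # RMSSD
        np.mean(np.abs(diff) > 50.0),         # pNN50 (fraction)
    ])

rng = np.random.default_rng(1)
nsr = [rng.normal(800, 30, 70) for _ in range(50)]    # regular rhythm windows
af = [rng.normal(700, 150, 80) for _ in range(50)]    # irregular (AF-like) windows
X = np.array([rr_features(w) for w in nsr + af])
y = np.array([0] * 50 + [1] * 50)                     # 0 = NSR, 1 = AF

X2 = PCA(n_components=2).fit_transform(X)             # feature reduction
clf = NearestCentroid().fit(X2, y)                    # simple prototype classifier
print("training accuracy:", clf.score(X2, y))
```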
485 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System
Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu
Abstract:
In long-haul and high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, and the effects are remarkable. However, different impairment compensation algorithms have caused an increase in transmission delay. With the widespread application of deep neural networks (DNN) in communication, multi-impairment compensation based on DNN will be a promising scheme. In this paper, we propose and apply a DNN to compensate for multiple impairments of the 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system. The models can optimize the constellation mapping signals at the transmitter and compensate for multiple impairments of the OFDM decoded signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for the 16-QAM coherent optical OFDM signal and demonstrate and analyze transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN. This shows that a DNN with a specific loss function and network structure can optimize the transmitted signal, learn the channel features, and compensate for multiple impairments in fiber transmission effectively.
Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission
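A heavily simplified sketch of the idea of DNN-based impairment compensation is given below: a small fully connected network learns to map distorted 16-QAM symbols back to the transmitted constellation. The toy phase-rotation/nonlinearity "channel", the network size and the training settings are all assumptions made for illustration, not the paper's fiber model, DSP chain or architecture.

```python
# Toy sketch of DNN-based impairment compensation: a small fully connected
# network learns to map distorted received 16-QAM symbols back to the
# transmitted constellation. The "channel" below is an invented toy
# distortion, not the paper's fiber model or network architecture.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)
levels = np.array([-3, -1, 1, 3], dtype=float)
tx = rng.choice(levels, (20000, 2)) / np.sqrt(10)          # normalized 16-QAM (I, Q)

# Toy impairment: phase rotation + cubic nonlinearity + noise
iq = tx[:, 0] + 1j * tx[:, 1]
rx = iq * np.exp(1j * 0.2) + 0.1 * iq * np.abs(iq) ** 2
rx += rng.normal(0, 0.03, rx.shape) + 1j * rng.normal(0, 0.03, rx.shape)
X = np.column_stack([rx.real, rx.imag])

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),                               # corrected (I, Q)
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, tx, epochs=5, batch_size=256, verbose=0)
print("residual MSE after compensation:", model.evaluate(X, tx, verbose=0))
```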
484 Enhancing the Interpretation of Group-Level Diagnostic Results from Cognitive Diagnostic Assessment: Application of Quantile Regression and Cluster Analysis
Authors: Wenbo Du, Xiaomei Ma
Abstract:
With the empowerment of Cognitive Diagnostic Assessment (CDA), various domains of language testing and assessment have been investigated to dig out more diagnostic information. What is noticeable is that most of the extant empirical CDA-based research puts much emphasis on individual-level diagnostic purposes, with very few studies concerned with learners’ group-level performance. Even though personalized diagnostic feedback is the unique feature that differentiates CDA from other assessment tools, group-level diagnostic information cannot be overlooked, as it might be more practical in a classroom setting. Additionally, the group-level diagnostic information obtained via current CDA always results in a “flat pattern”, that is, the mastery/non-mastery of all tested skills accounts for the two highest proportions. In that case, the outcome does not bring much more benefit than the original total score. To address these issues, the present study attempts to apply cluster analysis for group classification and quantile regression analysis to pinpoint learners’ performance at different proficiency levels (beginner, intermediate and advanced), and thus to enhance the interpretation of the CDA results extracted from a group of EFL learners’ reading performance on a diagnostic reading test designed by the PELDiaG research team from a key university in China. The results show that the EM method in cluster analysis yields more appropriate classification results than CDA does, and that quantile regression analysis does picture more insightful characteristics of learners with different reading proficiencies. The findings are helpful and practical for instructors in refining the EFL reading curriculum and tailoring instructional plans based on the group classification results and the quantile regression analysis. Meanwhile, these innovative statistical methods could also make up for the deficiencies of CDA and push forward the development of language testing and assessment in the future.
Keywords: cognitive diagnostic assessment, diagnostic feedback, EFL reading, quantile regression
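The two statistical steps described above could be sketched as follows: EM-based clustering with a Gaussian mixture to form proficiency groups, and quantile regression to examine a skill's effect at different points of the score distribution. The variables (total_score, skill_mastery) and the synthetic data are hypothetical stand-ins for the PELDiaG test data.

```python
# Sketch of the two statistical steps described above: EM-based clustering
# (GaussianMixture) to group learners, and quantile regression to examine a
# skill's effect at different points of the score distribution. The variables
# are hypothetical stand-ins for the diagnostic reading test data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
df = pd.DataFrame({"skill_mastery": rng.uniform(0, 1, 300)})
df["total_score"] = 40 + 35 * df["skill_mastery"] + rng.normal(0, 8, 300)

# EM clustering into three proficiency groups (beginner / intermediate / advanced)
gmm = GaussianMixture(n_components=3, random_state=0)
df["group"] = gmm.fit_predict(df[["total_score", "skill_mastery"]])

# Quantile regression at the 25th, 50th and 75th percentiles of total_score
for q in (0.25, 0.50, 0.75):
    fit = smf.quantreg("total_score ~ skill_mastery", df).fit(q=q)
    print(f"q={q}: slope = {fit.params['skill_mastery']:.2f}")
```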
483 Analysis of Non-Conventional Roundabout Performance in Mixed Traffic Conditions
Authors: Guneet Saini, Shahrukh, Sunil Sharma
Abstract:
Traffic congestion is the most critical issue faced by those in the transportation profession today. Over the past few years, roundabouts have been recognized as a measure to promote efficiency at intersections globally. In developing countries like India, this type of intersection still faces a lot of issues, such as bottleneck situations, long queues and increased waiting times, due to increasing traffic, which in turn affects the performance of the entire urban network. This research is a case study of a roundabout that is non-conventional in terms of geometric design, in a small town in India. These types of roundabouts should be analyzed for their functionality in the mixed traffic conditions prevalent in many developing countries. Microscopic traffic simulation is an effective tool to analyze traffic conditions and estimate various measures of operational performance of intersections, such as capacity, vehicle delay, queue length and Level of Service (LOS) of an urban roadway network. This study involves the analysis of an unsymmetrical, non-circular, 6-legged roundabout known as “Kala Aam Chauraha” in the small town of Bulandshahr in Uttar Pradesh, India, using the VISSIM simulation package, which is the most widely used software for microscopic traffic simulation. For coding in VISSIM, data were collected from the site during morning and evening peak hours of a weekday and then analyzed for base model building. The model was calibrated on driving behavior and vehicle parameters, and an optimal set of calibrated parameters was obtained, followed by validation of the model to obtain a base model which can replicate the real field conditions. This calibrated and validated model was then used to analyze the prevailing operational traffic performance of the roundabout, which was then compared with a proposed alternative intended to improve the efficiency of the roundabout network and to accommodate pedestrians in the geometry. The study results show that the proposed alternative is an advantage over the present roundabout, as it considerably reduces congestion, vehicle delay and queue length and hence successfully improves roundabout performance without compromising pedestrian safety. The study proposes similar designs for the modification of existing non-conventional roundabouts experiencing excessive delays and queues in order to improve their efficiency, especially in the case of developing countries. From this study, it can be concluded that there is a need to improve the current geometry of such roundabouts to ensure better traffic performance and the safety of drivers and pedestrians negotiating the intersection, and hence this proposal may be considered as a best fit.
Keywords: operational performance, roundabout, simulation, VISSIM
482 Recurrent Neural Networks for Complex Survival Models
Authors: Pius Marthin, Nihal Ata Tutkun
Abstract:
Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to the strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on the deep learning approach to survival modeling; however, its application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that obliterates the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF) and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)
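A minimal, heavily simplified sketch of a recurrent survival model in this spirit is shown below: an LSTM reads a sequence of time-varying covariates and outputs, per interval, probabilities for "no event" versus two competing causes. It is a generic discrete-time stand-in trained on random data and does not reproduce the paper's CmpXRnnSurv_AE (no RIW attention, no external auto-encoder).

```python
# Minimal sketch of a recurrent survival model in the spirit described above:
# an LSTM reads a padded sequence of time-varying covariates and outputs, for
# each interval, probabilities of "no event" vs. two competing causes.
# This is a simplified discrete-time stand-in, not the paper's CmpXRnnSurv_AE.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(4)
n, t, p = 500, 20, 6                    # subjects, max intervals, covariates
X = rng.normal(size=(n, t, p)).astype("float32")
y = rng.integers(0, 3, size=(n, t))     # 0 = no event, 1 = cause 1, 2 = cause 2

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(t, p)),
    tf.keras.layers.Masking(mask_value=0.0),            # ignore zero-padded intervals
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(3, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

# Cause-specific event probabilities per interval for the first subject
print(model.predict(X[:1], verbose=0)[0][:3])
```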
481 Lexical Features and Motivations of Product Reviews on Selected Philippine Online Shops
Authors: Jimmylen Tonio, Ali Anudin, Rochelle Irene G. Lucas
Abstract:
Alongside the progress of electronic-business websites, consumers have become more comfortable with online shopping. It has become customary for consumers, prior to purchasing a product or availing of services, to consult online review information as a basis for evaluating and deciding whether or not they should push through with their procurement of the product or service. Subsequently, after purchasing, consumers tend to post their own comments on the product on the same e-business websites. Because of this, product reviews (PRs) have become an indispensable feature in online businesses, equally beneficial for both business owners and consumers. This study explored the linguistic features and motivations of online product reviews on selected Philippine online shops, LAZADA and SHOPEE. Specifically, it looked into the lexical features of the PRs, the factors that motivated consumers to write the product reviews, and the difference in lexical preferences between males and females when writing reviews. The findings revealed the following: 1. Formality of words in online product reviews primarily involves non-standard spelling, followed by abbreviated word forms, colloquial contractions and the use of coined/novel words; 2. Paralinguistic features in online product reviews are dominated by the use of emoticons, capital letters and punctuation, followed by the use of pictures/photos and lastly by paralinguistic expressions; 3. The factors that motivate consumers to write product reviews varied: online product reviewers are predominantly driven by the motivation of venting negative feelings, followed by helping the company, helping other consumers, positive self-enhancement, advice seeking and lastly social benefits; and 4. Gender affects the word frequencies of online product reviews, while the negation words, personal pronouns, formality of words, and paralinguistic features utilized by male and female online product reviewers do not differ.
Keywords: lexical choices, motivation, online shop, product reviews
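A rough sketch of extracting a few of the paralinguistic and non-standard lexical features mentioned above (emoticons, capitalised words, repeated punctuation, colloquial contractions) from a review text is given below; the regex patterns and the example review are illustrative only, not the study's coding scheme.

```python
# Illustrative feature counters for a review text. The patterns are simple
# assumptions, not the study's annotation scheme.
import re

PATTERNS = {
    "emoticons": r"[:;=8][\-o\*']?[\)\]\(\[dDpP/]",
    "all_caps_words": r"\b[A-Z]{2,}\b",
    "repeated_punctuation": r"[!?.]{2,}",
    "colloquial_contractions": r"\b(?:gonna|wanna|gotta|dunno|thru|pls)\b",
}

def lexical_profile(review: str) -> dict:
    """Count occurrences of a few illustrative paralinguistic/lexical features."""
    counts = {}
    for name, pattern in PATTERNS.items():
        flags = re.IGNORECASE if name == "colloquial_contractions" else 0
        counts[name] = len(re.findall(pattern, review, flags))
    return counts

print(lexical_profile("SUPER fast delivery!!! item looks great :) gonna order again thru the app"))
```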
480 The Feasibility Evaluation Of The Compressed Air Energy Storage System In The Porous Media Reservoir
Authors: Ming-Hong Chen
Abstract:
In this study, the mechanical and financial feasibility of a compressed air energy storage (CAES) system in a porous media reservoir in Taiwan is evaluated. In 2035, Taiwan aims to install 16.7 GW of wind power and 40 GW of photovoltaic (PV) capacity. However, renewable energy sources often generate more electricity than needed, particularly during winter. Consequently, Taiwan requires long-term, large-scale energy storage systems to ensure the security and stability of its power grid. Currently, the primary large-scale energy storage options are Pumped Hydro Storage (PHS) and Compressed Air Energy Storage (CAES). Taiwan has not ventured into CAES-related technologies due to geological and cost constraints. However, with the imperative of achieving net-zero carbon emissions by 2050, there is a substantial need for the development of a considerable amount of renewable energy. PHS has matured, boasting an overall installed capacity of 4.68 GW. CAES, presenting a similar scale and power generation duration to PHS, is now under consideration. Taiwan's geological composition, being a porous medium unlike salt caverns, introduces flow field resistance affecting gas injection and extraction. This study employs a program analysis model to establish the system performance analysis capabilities of CAES. A finite volume model is then used to assess the impact of the porous media, and the findings are fed back into the system performance analysis for correction. Subsequently, the financial implications are calculated and compared with the existing literature. For Taiwan, the strategic development of CAES technology is crucial, not only for meeting energy needs but also for decentralizing energy allocation, a feature of great significance in regions lacking alternative natural resources.
Keywords: compressed-air energy storage, efficiency, porous media, financial feasibility
479 Using Fractal Architectures for Enhancing the Thermal-Fluid Transport
Authors: Surupa Shaw, Debjyoti Banerjee
Abstract:
Enhancing heat transfer in compact volumes is a challenge when constrained by cost issues, especially those associated with requirements for minimizing pumping power consumption. This is particularly acute for electronic chip cooling applications. Technological advancements in microelectronics have led to the development of chip architectures that involve increased power consumption. As a consequence, packaging technologies are saddled with needs for higher rates of power dissipation in smaller form factors. The increasing circuit density, higher heat flux values for dissipation and the significant decrease in the size of electronic devices are posing thermal management challenges that need to be addressed with a better design of the cooling system. Maximizing the surface area of heat exchanging surfaces (e.g., extended surfaces or “fins”) can enable dissipation of higher levels of heat flux. Fractal structures have been shown to maximize surface area in compact volumes. Self-replicating structures at multiple length scales are called “fractals” (i.e., objects with fractional dimensions, unlike regular geometric objects such as spheres or cubes, whose volumes and surface area values scale as integer powers of the length scale dimensions). Fractal structures are expected to provide an appropriate technology solution to meet these challenges for enhanced heat transfer in microelectronic devices by maximizing the surface area available to heat exchanging fluids within compact volumes. In this study, the effect of different fractal micro-channel architectures and flow structures on the enhancement of transport phenomena in heat exchangers is explored by parametric variation of the fractal dimension. This study proposes a model that would enable cost-effective solutions for thermal-fluid transport for energy applications. The objective of this study is to ascertain the sensitivity of various parameters (such as heat flux and pressure gradient as well as pumping power) to variation in the fractal dimension. The role of the fractal parameters will be instrumental in establishing the most effective design for the optimum cooling of microelectronic devices. This can help establish the requirement of minimal pumping power for enhancement of heat transfer during cooling. Results obtained in this study show that the proposed models for fractal architectures of microchannels significantly enhanced heat transfer due to augmentation of surface area in the branching networks of varying length scales.
Keywords: fractals, microelectronics, constructal theory, heat transfer enhancement, pumping power enhancement
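As a back-of-the-envelope illustration of why fractal branching augments surface area, the sketch below sums the wetted area of a self-similar network of cylindrical channels over successive generations. The branching ratio and the length/diameter scaling factors are assumed values, not the channel dimensions used in the study.

```python
# Back-of-the-envelope sketch of why fractal branching increases surface area:
# each generation splits every channel into `b` children whose length and
# diameter shrink by fixed ratios. All numbers are assumed for illustration.
import math

def network_surface_area(generations, b=2, L0=0.02, D0=0.002, rL=0.7, rD=0.6):
    """Total wetted area (m^2) of cylindrical channels over all generations."""
    total = 0.0
    for g in range(generations + 1):
        n = b ** g                      # channels in this generation
        L = L0 * rL ** g                # channel length
        D = D0 * rD ** g                # channel diameter
        total += n * math.pi * D * L    # lateral area of each cylinder
    return total

for gens in range(6):
    print(f"{gens} generations -> surface area = {network_surface_area(gens) * 1e4:.2f} cm^2")
```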
478 The Grievances Theory versus Transnationalism and the Cameroon Anglophone Question, 1961-2017
Authors: Nkatow Mafany Christian
Abstract:
No other period in human history has offered such great opportunities for grievances not only to last long but also to be manifested across international boundaries. This state of affairs is likely a feature of the advent of social media. The Anglophone Question in Cameroon has been a problem of poor constitutional arrangements that can be traced to 1961, when the former French Cameroon reunified with the former British Southern Cameroons following a plebiscite in which the latter overwhelmingly voted to reunify with the former. Though Southern/Anglophone Cameroonians complained of perceived marginalization and an attempt by the majority French section to assimilate them, the manifestation was subtle and took place only through protests, petitions, strike movements and demonstrations. However, with the advent of social media, a new crop of leaders emerged in the diaspora, including the US, Canada, Europe, Asia and the Middle East, to champion the manifestations, leading to the violence and conflicts that have bedeviled the region since 2017. The feeling of political subjugation, economic exploitation, social suppression and cultural assimilation among Anglophone Cameroonians united them under diaspora leaders against the government of Cameroon, calling for the creation of a separate state for Anglophones. This paper draws from this lead-up to analyze the current Anglophone Crisis in Cameroon in the light of the Grievance Theory and Transnationalism. The paper appeals to field experience, interviews, official sources, documentation, and the internet to support its central thesis. On the strength of its sources, the paper submits that social media is a potent source of conflicts and makes nonsense of the principle of sovereignty and territorial integrity through its capacity to promote the transnational manifestation of grievances.
Keywords: grievance, transnationalism, anglophone crisis, Cameroon, crisis and social media
477 Capacity of Cold-Formed Steel Warping-Restrained Members Subjected to Combined Axial Compressive Load and Bending
Authors: Maryam Hasanali, Syed Mohammad Mojtabaei, Iman Hajirasouliha, G. Charles Clifton, James B. P. Lim
Abstract:
Cold-formed steel (CFS) elements are increasingly being used as main load-bearing components in the modern construction industry, including low- to mid-rise buildings. In typical multi-storey buildings, CFS structural members act as beam-column elements since they are exposed to combined axial compression and bending actions, both in moment-resisting frames and stud wall systems. Current design specifications, including the American Iron and Steel Institute (AISI S100) and the Australian/New Zealand Standard (AS/NZS 4600), neglect the beneficial effects of warping-restrained boundary conditions in the design of beam-column elements. Furthermore, while a non-linear relationship governs the interaction of axial compression and bending, the combined effect of these actions is taken into account through a simplified linear expression combining pure axial and flexural strengths. This paper aims to evaluate the reliability of the well-known Direct Strength Method (DSM) as well as design proposals found in the literature to provide a better understanding of the efficiency of the code-prescribed linear interaction equation in the strength predictions of CFS beam-columns and of the effects of warping-restrained boundary conditions on their behavior. To this end, experimentally validated finite element (FE) models of CFS elements under compression and bending were developed in ABAQUS software, accounting for both non-linear material properties and geometric imperfections. The validated models were then used for a comprehensive parametric study containing 270 FE models, covering a wide range of key design parameters, such as length (i.e., 0.5, 1.5, and 3 m), thickness (i.e., 1, 2, and 4 mm) and cross-sectional dimensions, under ten different load eccentricity levels. The results of this parametric study demonstrated that using the DSM led to strength predictions for beam-column members that were conservative by up to 55%, depending on the element's length and thickness. This can be attributed to the errors associated with (i) the absence of warping-restrained boundary condition effects, (ii) the equations for the calculation of buckling loads, and (iii) the linear interaction equation. While the influence of warping restraint is generally less than 6%, the code-suggested interaction equation led to an average error of 4% to 22%, depending on the element length. This paper highlights the need to provide more reliable design solutions for CFS beam-column elements for practical design purposes.
Keywords: beam-columns, cold-formed steel, finite element model, interaction equation, warping-restrained boundary conditions
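The code-prescribed linear interaction check referred to above has the generic form P/Pn + M/Mn <= 1.0; a minimal sketch (with resistance/safety factors omitted and placeholder capacities, not values from the study) is given below.

```python
# Minimal sketch of the linear beam-column interaction check referred to above,
# in the generic form P/Pn + M/Mn <= 1.0 (resistance/safety factors omitted for
# brevity). The capacities below are placeholders, not values from the study.
def linear_interaction_ok(P, M, Pn, Mn):
    """Return (utilisation, passes) for a combined axial + bending demand."""
    utilisation = P / Pn + M / Mn
    return utilisation, utilisation <= 1.0

# Example: demand of 40 kN axial + 2.5 kNm bending against assumed capacities
print(linear_interaction_ok(P=40.0, M=2.5, Pn=120.0, Mn=6.0))
```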
476 Water Quality Calculation and Management System
Authors: H. M. B. N Jayasinghe
Abstract:
Water is found almost everywhere on Earth, but water resources contain a lot of pollution, and some diseases can be spread through water to living beings. Water must therefore undergo a number of treatments to make it drinkable, and purification technology for wastewater is essential, so wastewater treatment plants play a major role in these issues. The procedures that follow the water treatment process have always been based on manual calculations and recordings. Water purification plants involve many manual processes, which makes the overall process time-consuming, so the final evaluation and the chemical and biological treatment processes are delayed. To prevent these drawbacks, computerized, programmable calculation and analytical techniques are to be introduced to the laboratory staff. An automated system is a solution that guarantees rational selection. A decision support system is a way to model data and make quality decisions based upon it; it is widely used around the world for various kinds of process automation. Decision support systems that just collect data and organize it effectively are usually called passive models: they do not suggest a specific decision but only reveal information. This web-based system uses global positioning data with a map location facility. Its most valuable feature is an SMS and e-mail alert service to inform the appropriate person of a critical issue. The system is built on HTML, MySQL, PHP, and other web development technologies. Existing computerized water chemistry analyses are limited in scope; the swimming pool water quality calculator is one example. The validity of the system has been verified by test runs and comparison with existing plant data. The automated system will make work easier, both in productivity and in quality.
Keywords: automated system, wastewater, purification technology, map location
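A simplified sketch of the alert logic described above is given below: measured parameters are compared against limits, and out-of-range values are flagged for an SMS/e-mail notification. The parameter names and limits are illustrative assumptions, not regulatory values or the system's actual configuration.

```python
# Simplified sketch of the alert logic: compare measured water quality
# parameters against limits and flag which ones need an SMS/e-mail
# notification. Parameter names and limits are illustrative only.
LIMITS = {"pH": (6.5, 8.5), "turbidity_NTU": (0.0, 5.0), "residual_chlorine_mgL": (0.2, 1.0)}

def check_sample(sample: dict) -> list:
    """Return a list of alert messages for out-of-range parameters."""
    alerts = []
    for name, value in sample.items():
        low, high = LIMITS[name]
        if not (low <= value <= high):
            alerts.append(f"ALERT: {name} = {value} outside [{low}, {high}]")
    return alerts

print(check_sample({"pH": 9.1, "turbidity_NTU": 2.3, "residual_chlorine_mgL": 0.05}))
```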
475 Analysis of Solid Waste Management Practices and the Implications for Human Health and the Environment: A Case Study of Kayamandi Informal Settlement
Authors: Peter Iyobosa Asemota
Abstract:
This study on solid waste management practices addressed aspects of the environmental and health impacts resulting from poor management of solid waste. The study was occasioned by the observed rate and volume of illegal and indiscriminate dumping of solid waste materials, especially in informal settlements. The main focus of this study was to establish the impact of waste management practices on human health and the environment. The study, therefore, presents a critical analysis of the state of solid waste management in the study area and the implications for human health and the environment. The study was carried out in Kayamandi informal settlement within Stellenbosch municipality. The sustainable management of solid waste is very important in order to minimize the environmental and public health risks associated with improper solid waste management. There is no denying the fact that the problems of waste management will become critical as time goes on because of improper and inefficient waste management practices. Towns and cities exhibit the burdens of waste management, which is a characteristic feature of most African cities. The study critically assesses the implementation of waste management practices by the residents of the informal settlement; identifies the factors affecting management issues in the operation of the solid waste management system by the municipality; and identifies factors militating against the implementation of waste management policies and legislation. Furthermore, a waste assessment study was carried out to assess the generation and composition of the waste stream and also to determine the attitudes and behavior of the residents with regard to waste management practices. Findings from the study revealed that Kayamandi is not different from other informal settlements with regard to waste management. People are of the opinion that solid waste management is the sole responsibility of municipal authorities, and as such, the government should be responsible for bearing the cost of solid waste management.
Keywords: environment, waste, waste composition, waste stream, policy, waste categories, sanitary landfill, waste collection, integrated solid waste management
474 Spatio-Temporal Analysis of Drought in Cholistan Region, Pakistan: An Application of Standardized Precipitation Index
Authors: Qurratulain Safdar
Abstract:
Drought is a temporary aberration, in contrast to aridity, which is a permanent feature of climate. It takes place in virtually all types of climatic regions, ranging from high to low rainfall areas. Due to the wide latitudinal extent of Pakistan, there is seasonal and annual variability in rainfall, and the south-central part of the country is arid and hyper-arid. This study focuses on the spatio-temporal analysis of droughts in the arid and hyper-arid Cholistan region using the standardized precipitation index (SPI) approach. The study assesses the extent of drought recurrence and the temporal vulnerability to drought in the Cholistan region. Initially, the paper describes the geographic setup of the study area along with a brief description of the drought conditions that prevail in Pakistan. The study also provides a scientific foundation by preparing a literature review and theoretical framework in line with the selected parameters and indicators. Data were collected from both primary and secondary sources; rainfall and temperature data were obtained from the Pakistan Meteorological Department. By applying a geostatistical approach, the standardized precipitation index (SPI) was calculated for the study region, and the spatio-temporal variability of drought and its severity were explored. As a result, an in-depth spatial analysis of drought conditions in the Cholistan area was obtained. Parallel to this, drought-prone areas with seasonal variation were also identified using Kriging spatial interpolation techniques in a GIS environment. The study revealed that there is temporal variation in drought occurrences, both in the time series and in the SPI values. The paper concludes with a suggested strategic plan to minimize the impacts of drought.
Keywords: Cholistan desert, climate anomalies, meteorological droughts, standardized precipitation index
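A generic sketch of an SPI calculation is shown below: a gamma distribution is fitted to the aggregated precipitation series, and the cumulative probabilities are mapped onto standard-normal quantiles, with a simplified correction for zero-rainfall months. The synthetic rainfall series is a placeholder, and the implementation is not necessarily identical to the study's workflow.

```python
# Generic sketch of a Standardized Precipitation Index (SPI) calculation:
# fit a gamma distribution to an aggregated precipitation series and map the
# cumulative probabilities onto standard-normal quantiles. The rainfall series
# is synthetic, not the Pakistan Meteorological Department data.
import numpy as np
from scipy import stats

def spi(precip):
    precip = np.asarray(precip, dtype=float)
    nonzero = precip[precip > 0]
    q = 1.0 - nonzero.size / precip.size                  # probability of zero rainfall
    shape, _, scale = stats.gamma.fit(nonzero, floc=0)    # fit gamma to non-zero values
    cdf = q + (1.0 - q) * stats.gamma.cdf(precip, shape, loc=0, scale=scale)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))   # SPI = standard-normal quantile

rng = np.random.default_rng(5)
rainfall = rng.gamma(1.2, 18.0, 120)                      # synthetic monthly totals (mm)
rainfall[rng.random(120) < 0.15] = 0.0                    # some dry months
values = spi(rainfall)
print("months with severe drought (SPI <= -1.5):", int(np.sum(values <= -1.5)))
```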
473 Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification
Authors: Halimatu S. Abdullahi, Ray E. Sheriff, Fatima Mahieddine
Abstract:
Agriculture is essential to the continuous existence of human life, as humans directly depend on it for the production of food. The exponential rise in population calls for a rapid increase in food, with the application of technology to reduce laborious work and maximize production. Technology can aid and improve agriculture in several ways, from pre-planning to post-harvest, by the use of computer vision technology through image processing: determining the soil nutrient composition; the right-amount, right-time, right-place application of farm input resources like fertilizers, herbicides and water; weed detection; early detection of pests and diseases; etc. This is precision agriculture, which is thought to be the solution required to achieve our goals. There has been significant improvement in the area of image processing and data processing, which has been a major challenge. A database of images is collected through remote sensing and analyzed, and a model is developed to determine the right treatment plans for different crop types and different regions. Features of images from vegetation need to be extracted, classified, segmented and finally fed into the model. Different techniques have been applied to these processes, from the use of neural networks, support vector machines and fuzzy logic approaches to, most recently, the deep learning approach of convolutional neural networks, which has been the most effective approach, generating excellent results for image classification. A deep convolutional neural network is used to determine the soil nutrients required in a plantation for maximum production. The experimental results on the developed model yielded an average accuracy of 99.58%.
Keywords: convolution, feature extraction, image analysis, validation, precision agriculture
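A minimal sketch of a small convolutional classifier of the kind described above is given below. The input size, the number of nutrient classes and the random stand-in data are assumptions, not the study's dataset or architecture.

```python
# Minimal sketch of a small convolutional network classifying image patches
# into soil-nutrient classes. Input size, class count and the synthetic data
# are placeholders, not the study's dataset or architecture.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 4                                       # assumed nutrient categories
X = np.random.rand(200, 64, 64, 3).astype("float32")  # stand-in image patches
y = np.random.randint(0, NUM_CLASSES, 200)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```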
472 Numerical Analysis of Laminar Reflux Condensation from Gas-Vapour Mixtures in Vertical Parallel Plate Channels
Authors: Foad Hassaninejadafarahani, Scott Ormiston
Abstract:
Reflux condensation occurs in vertical channels and tubes when there is an upward core flow of vapor (or a gas-vapor mixture) and a downward flow of the liquid film. The understanding of this condensation configuration is crucial in the design of reflux condensers and distillation columns, and in loss-of-coolant safety analyses of nuclear power plant steam generators. The unique feature of this flow is the upward flow of the vapor-gas mixture (or pure vapor) that retards the liquid flow via shear at the liquid-mixture interface. The present model solves the full, elliptic governing equations in both the film and the gas-vapor core flow. The computational mesh is non-orthogonal and adapts dynamically to the phase interface, thus producing a sharp and accurate interface. Shear forces and heat and mass transfer at the interface are accounted for fundamentally. This modeling is a big step ahead of current capabilities by removing the limitations of previous reflux condensation models, which inherently cannot account for the detailed local balances of shear, mass, and heat transfer at the interface. Discretisation has been done based on a finite volume method and a co-located variable storage scheme. An in-house computer code was developed to implement the numerical solution scheme. Detailed results are presented for laminar reflux condensation from steam-air mixtures flowing in vertical parallel plate channels. The results include velocity and pressure profiles, as well as axial variations of film thickness, Nusselt number and interface gas mass fraction.
Keywords: reflux, condensation, CFD two-phase, Nusselt number
471 Flexible Design Solutions for Complex Free form Geometries Aimed to Optimize Performances and Resources Consumption
Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu
Abstract:
By using smart digital tools, such as generative design (GD) and digital fabrication (DF), highly topical problems concerning the optimization of resources (materials, energy, time) can be solved, and applications or products of the free-form type can be created. With the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object - a column-type object with connections used to obtain an adaptive 3D surface - by using the parametric design methodology and by exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms or swarm intelligence, each of these procedures having limitations which make them applicable only in certain cases. In the paper, the design process stages and the shape-grammar-type algorithm are presented. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process, creating many 3D spatial forms using an algorithm conceived in order to apply its generating logic onto different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned in order to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process) and unique geometric models of high performance.
Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture
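As a toy illustration of the parametric principle described above, the sketch below generates a column-like surface from a handful of parameters; changing the parameter values regenerates the whole geometry. It is not the paper's shape-grammar algorithm, just a minimal parametric surface.

```python
# Toy illustration of the parametric principle: a column-like surface is
# generated from a handful of parameters, and changing those values
# regenerates the whole geometry. Not the paper's shape-grammar algorithm.
import numpy as np

def column_surface(height=3.0, base_radius=0.4, taper=0.5, lobes=6, amplitude=0.05,
                   twist=np.pi / 4, nu=60, nv=40):
    """Return x, y, z arrays of a twisted, lobed, tapering column surface."""
    u = np.linspace(0, 2 * np.pi, nu)           # angle around the column
    v = np.linspace(0, 1, nv)                   # normalized height
    U, V = np.meshgrid(u, v)
    radius = base_radius * (1 - taper * V) * (1 + amplitude * np.cos(lobes * U + twist * V))
    return radius * np.cos(U), radius * np.sin(U), height * V

x, y, z = column_surface()
print("generated grid:", x.shape,
      "radius range:", float(np.hypot(x, y).min()), float(np.hypot(x, y).max()))
```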
470 Changes in the Median Sacral Crest Associated with Sacrocaudal Fusion in the Greyhound
Authors: S. M. Ismail, H-H Yen, C. M. Murray, H. M. S. Davies
Abstract:
A recent study reported a 33% incidence of complete sacrocaudal fusion in greyhounds compared to a 3% incidence in other dogs. In the dog, the median sacral crest is formed by the fusion of the sacral spinous processes. Separation of the 1st spinous process from the median crest of the sacrum in the dog has been reported as a diagnostic tool for type one lumbosacral transitional vertebra (LTV). LTV is a congenital spinal anomaly, which includes either sacralization of the caudal lumbar part or lumbarization of the most cranial sacral segment of the spine. In this study, the absence or reduction of fusion (presence of separation) between the 1st and 2nd spinous processes of the median sacral crest has been identified in association with sacrocaudal fusion in the greyhound, without any feature of LTV. In order to provide quantitative data on the absence or reduction of fusion in the median sacral crest between the 1st and 2nd sacral spinous processes in association with sacrocaudal fusion, 204 dog sacrums free of any pathological changes (192 greyhounds, 9 beagles and 3 labradors) were grouped based on the occurrence and type of fusion and on the presence, absence, or reduction of the median sacral crest between the 1st and 2nd sacral spinous processes. Sacrums were described and classified as follows: F: complete fusion (crest is present), N: absence (fusion is absent), and R: short crest (fusion reduced but not absent). Regarding the incidence of sacrocaudal fusion in the 204 sacrums, 57% of the sacrums were standard (3 vertebrae) and 43% were fused (4 vertebrae). Type of sacrum had a significant (p < .05) association with the absence and reduction of fusion between the 1st and 2nd sacral spinous processes of the median sacral crest. In the 108 greyhounds with standard sacrums (3 vertebrae), the percentages of F, N and R were 45%, 23% and 23%, respectively, while in the 84 fused (4 vertebrae) sacrums, the percentages of F, N and R were 3%, 87% and 10%, respectively, and these percentages were significantly different between standard (3 vertebrae) and fused (4 vertebrae) sacrums (p < .05). This indicates that absence of spinous process fusion in the median sacral crest was found in a large percentage of the greyhounds in this study and was particularly prevalent in those with sacrocaudal fusion; therefore, in this breed at least, absence of sacral spinous process fusion may be unlikely to be associated with LTV.
Keywords: greyhound, median sacral crest, sacrocaudal fusion, sacral spinous process
469 Applying Laser Scanning and Digital Photogrammetry for Developing an Archaeological Model Structure for Old Castle in Germany
Authors: Bara' Al-Mistarehi
Abstract:
Documentation and assessment of the conservation state of an archaeological structure is a significant procedure in any management plan. However, it has always been a challenge to apply this with a low-cost and safe methodology, and it is also a time-demanding procedure. Therefore, a low-cost, efficient methodology for documenting the state of a structure is needed. Within the scope of this research, this paper employs digital photogrammetry and laser scanning on one of the most significant structures in Germany, the Old Castle (German: Altes Schloss). The site is well known for its unique features. However, the castle suffers from serious deterioration threats because of the environmental conditions and the absence of continuous monitoring, maintenance and repair plans. Digital photogrammetry is a generally accepted technique for the collection of 3D representations of the environment. For this reason, this image-based technique has been extensively used to produce high-quality 3D models of heritage sites and historical buildings for documentation and presentation purposes. Additionally, terrestrial laser scanners are used, which directly measure 3D surface coordinates based on the run-time of reflected light pulses. These systems feature high data acquisition rates, good accuracy and high spatial data density. Despite the potential of each single approach, in this research work maximum benefit is expected from a combination of data from both digital cameras and terrestrial laser scanners. Within the paper, the usage, application and advantages of the technique are investigated in terms of building a highly realistic 3D textured model of some parts of the old castle. The model will be used as a diagnostic tool for the conservation state of the castle and as a means of monitoring future changes.
Keywords: digital photogrammetry, terrestrial laser scanners, 3D textured model, archaeological structure
468 Thick Data Techniques for Identifying Abnormality in Video Frames for Wireless Capsule Endoscopy
Authors: Jinan Fiaidhi, Sabah Mohammed, Petros Zezos
Abstract:
Capsule endoscopy (CE) is an established noninvasive diagnostic modality for investigating small bowel disease. CE has a pivotal role in assessing patients with suspected bleeding or identifying evidence of active Crohn's disease in the small bowel. However, CE produces lengthy videos of at least eighty thousand frames, captured at a rate of 2 frames per second. Gastroenterologists cannot dedicate 8 to 15 hours to reading the CE video frames to arrive at a diagnosis. This is why analyzing CE videos with modern artificial intelligence techniques becomes a necessity. However, machine learning, including deep learning, has failed to report robust results because of the lack of large samples to train its neural nets. In this paper, we describe a thick data approach that learns from a few anchor images. We use sound datasets like KVASIR and CrohnIPI to filter candidate frames that include interesting anomalies in any CE video. We identify candidate frames based on feature extraction to provide representative measures of the anomaly, like the size of the anomaly and its color contrast compared to the image background, and later feed these features to a decision tree that can classify the candidate frames as showing a condition like Crohn's disease. Our thick data approach achieved an accuracy in detecting Crohn's disease, based on the presence of ulcer areas in the candidate frames, of 89.9% for KVASIR and 83.3% for CrohnIPI. We are continuing our research to fine-tune our approach by adding more thick data methods to enhance diagnostic accuracy.
Keywords: thick data analytics, capsule endoscopy, Crohn’s disease, siamese neural network, decision tree
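A sketch of the feature-plus-decision-tree idea described above is given below: each candidate frame yields a relative anomaly size and a colour contrast against the background, and a decision tree classifies the frames. The synthetic frames and the bright/red patch heuristic are illustrative assumptions, not the authors' exact feature definitions.

```python
# Sketch of the "thick data" idea: a couple of simple per-frame features
# (relative anomaly size, colour contrast with the background) feed a decision
# tree. The synthetic frames are stand-ins for real CE data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)

def frame_features(frame, mask):
    """Relative anomaly area and mean colour contrast against the background."""
    if not mask.any():
        return [0.0, 0.0]
    area_ratio = float(mask.mean())
    contrast = float(np.abs(frame[mask].mean(axis=0) - frame[~mask].mean(axis=0)).mean())
    return [area_ratio, contrast]

def synthetic_frame(has_ulcer):
    frame = rng.normal(120.0, 10.0, (64, 64, 3))
    mask = np.zeros((64, 64), dtype=bool)
    if has_ulcer:
        size = int(rng.integers(4, 12))
        mask[20:20 + size, 20:20 + size] = True
        frame[mask] += np.array([60.0, -20.0, -20.0])   # brighter, redder patch
    return frame, mask

X, y = [], []
for label in [0, 1] * 50:                               # 100 synthetic frames
    frame, mask = synthetic_frame(bool(label))
    X.append(frame_features(frame, mask))
    y.append(label)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```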
467 Analysis of Extracellular Vesicles Interactomes of two Isoforms of Tau Protein via SHSY-5Y Cell Lines
Authors: Mohammad Aladwan
Abstract:
Alzheimer’s disease (AD) is a widespread dementing illness with a complex and poorly understood etiology. An important role in improving our understanding of the AD process is played by the modeling of disease-associated changes in the phosphorylation of tau, a protein known to mediate events essential to the onset and progression of AD. A main feature of AD is the abnormal phosphorylation of tau protein and the presence of neurofibrillary tangles. In order to evaluate the respective roles in AD of the microtubule-binding region (MTBR) and of the alternatively spliced exons in the N-terminal projection domain, we have constructed SHSY-5Y cell lines that stably overexpress four different species of tau protein (4R2N, 4R0N, N(E-2), N(E+2)). Since the toxicity and spreading of tau lesions in AD depend on the interactions of tau with other proteins, we have performed a proteomic analysis of exosome-fraction interactomes for cell lysate and media samples isolated from the SHSY-5Y cell lines. Functional analysis of the tau interactomes based on gene ontology (GO) terms was performed using the String 10.5 database program. The highest numbers of exosome proteome proteins and tau-associated proteins were found with the 4R2N isoform in the cell lysate (2771 and 159, respectively), with a high strength of connectivity (78%) between proteins, while the N(E-2) isoform had the highest numbers of proteins and tau-associated proteins in the media proteomes (1829 and 205). Moreover, known AD markers were significantly enriched in secreted interactomes relative to lysate interactomes in the SHSY-5Y cells expressing tau isoforms lacking exons 2 and 3 in the N-terminal region. Tau lacking exon 2 (E-2) may therefore mediate tau secretion and spreading to different cells. Enriched functions in the secreted E-2 interactome include signaling and developmental pathways that have been linked to a) tau misprocessing and lesion development and b) tau secretion, and which, therefore, could play novel roles in AD pathogenesis.
Keywords: Alzheimer's disease, dementia, tau protein, neurodegenerative disease
466 A Novel Methodology for Browser Forensics to Retrieve Searched Keywords from Windows 10 Physical Memory Dump
Authors: Dija Sulekha
Abstract:
Nowadays, a good percentage of reported cybercrimes involve the use of the Internet, directly or indirectly, in committing the crime. Usually, web browsers leave traces of browsing activities on the host computer’s hard disk, which can be used by investigators to identify the Internet-based activities of the suspect. But criminals involved in organized crime disable the browser's file generation feature to hide the evidence of illegal activities carried out through the Internet. In such cases, even though browser files are not generated on the storage media of the system, traces of recent and ongoing activities are generated in the physical memory of the system. As a result, the analysis of a physical memory dump collected from the suspect's machine retrieves a lot of forensically crucial information related to the suspect's browsing history. This information enables cyber forensic investigators to concentrate on a few highly relevant selected artefacts while doing the offline forensic analysis of the storage media. This paper addresses the reconstruction of web browsing activities by conducting live forensics to identify searched terms, downloaded files, visited sites, email headers, email IDs, etc. from a physical memory dump collected from Windows 10 systems. Well-known entry points are available for retrieving all the above artefacts except searched terms. The paper describes a novel methodology to retrieve the searched terms from Windows 10 physical memory. The searched terms retrieved in this way can be used for advanced file and keyword searches in the storage media files reconstructed from the file system recovery in offline forensics.
Keywords: browser forensics, digital forensics, live forensics, physical memory forensics
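A simplified sketch of carving searched terms out of a raw memory image is given below: the dump is scanned for search-engine URL patterns containing a query parameter. Real Windows 10 analysis also has to handle UTF-16LE strings and per-process memory reconstruction; the patterns and the example path are illustrative assumptions, not the paper's methodology.

```python
# Simplified sketch of carving searched terms out of a raw memory image by
# scanning for search-engine URL patterns. Real Windows 10 analysis also has
# to handle UTF-16LE strings and process-memory reconstruction; the patterns
# below are illustrative, not the paper's exact methodology.
import re
from urllib.parse import unquote_plus

QUERY_PATTERNS = [
    rb"https?://www\.google\.[a-z.]+/search\?[^\x00\s]*?q=([^&\x00\s]+)",
    rb"https?://www\.bing\.com/search\?[^\x00\s]*?q=([^&\x00\s]+)",
]

def carve_search_terms(dump_path, chunk_size=64 * 1024 * 1024):
    """Scan a raw dump in chunks and collect decoded query-string values."""
    terms = set()
    with open(dump_path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            for pattern in QUERY_PATTERNS:
                for match in re.findall(pattern, chunk):
                    terms.add(unquote_plus(match.decode("ascii", errors="ignore")))
    return terms

# Example usage (the path is hypothetical):
# print(carve_search_terms(r"C:\cases\suspect01\memdump.raw"))
```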
465 Analysis of Turkish Government Cultural Portal for Supporting Gastronomy Tourism
Authors: Hilmi Rafet Yüncü
Abstract:
Today, the Internet plays a very important role in promoting products and services all over the world. Companies and destinations in the tourism industry use the Internet to sell and promote their core products directly to potential tourists. Internet technologies have redefined the relationships between tourists, tourism companies, and travel agents. The new relationship allows for accessing and tapping tourism information and services. Internet technologies make new opportunities available to the tourism industry, including travel, accommodation, and tourist destination organizations. Websites are important tools in the marketing of a destination, and most people research the destination via the Internet before arriving. Governments have a considerable role in the process of marketing tourism destinations. Governments make policies and regulations; furthermore, they help to market destinations to potential tourists. Governments have a comprehensive overview of the sector, allowing them to see changes in the tourism market and design better policies, programs and marketing plans. At the same time, governments support the development of alternative tourism in the country with regulations and marketing tools. The aim of this study is to analyse the website of the governmental tourism portal in Turkey to determine its effectiveness in supporting gastronomy tourism. The Turkish government has established a culture portal for foreign and local tourists. The portal provides local and general information about the tourism attractions of the cities and of Turkey. There are 81 official cities in Turkey, and all of these cities will be analysed to determine how effectively gastronomy tourism is marketed by the Turkish government. A content analysis will be conducted on the portal's website, covering food content, recipes and the gastronomic features of the cities.
Keywords: culture portal, gastronomy tourism, government, Turkey
464 Insight into the Visual Attentional Correlates Underpinning Autistic-Like Traits in Fragile X and Down Syndrome
Authors: Jennifer M. Glennon, Hana D'Souza, Luke Mason, Annette Karmiloff-Smith, Michael S. C. Thomas
Abstract:
Genetic syndrome groups that feature high rates of autism comorbidity, like Down syndrome (DS) and fragile X syndrome (FXS), have been presented as useful models for understanding risk and protective factors involved in the emergence of autistic traits. Yet despite reaching clinical thresholds, these ‘syndromic’ forms of autism appear to differ in important ways from the idiopathic or ‘non-syndromic’ autism phenotype. To uncover the true nature of these comorbidities, it is necessary to extend definitions of autism to include the cognitive characteristics of the disorder and to then apply this broadened conceptualisation to the study of syndromic autism profiles. The current study employs a variety of well-established eye-tracking paradigms to assess visual attentional performance in children with DS and FXS who reach thresholds for autism on the Social Communication Questionnaire. It investigates whether autism profiles in these children are accompanied by visual orienting difficulties (‘sticky attention’), decreased social attention, and enhanced visual search performance, all of which are characteristic of the idiopathic autism phenotype. Data is collected from children with DS and FXS aged between 6 and 10 years, in addition to two control groups matched on age and intellectual ability (i.e., children with idiopathic autism and neurotypical controls). Cross-sectional developmental trajectory analyses are conducted to enable visuo-attentional profile comparisons. Significant differences in the visuo-attentional processes underpinning autism presentations in children with FXS and DS are hypothesised, supporting notions of syndrome specificity. The study provides insight into the complex heterogeneity associated with syndromic autism presentations and autism per se, with clinical implications for the utility of autism intervention programmes in DS and FXS populations.
Keywords: autism, down syndrome, fragile X syndrome, eye tracking
Procedia PDF Downloads 239463 Stress and Rhythm in the Educated Nigerian Accent of English
Authors: Nkereke M. Essien
Abstract:
This paper examines stress in the Educated Nigerian Accent of English (ENAE) with the aim of analyzing the stress and rhythmic patterns of Nigerian English. We also aim to isolate differences and similarities in the stress patterns studied, to determine what forms the accent of Educated Nigerian English (ENE) and marks it off from other groups or Englishes of the world, and to characterize it and provide documented evidence for its existence. Nigerian stress and rhythmic patterns differ significantly from British English patterns; consequently, ENE features more stressed syllables than native-speaker varieties. This excess of stressed syllables produces contiguous strong syllables ("Ss") in the rhythmic flow of ENE, which brings about a "jerky" rhythm that distorts communication. To test this claim, ten (10) Nigerian speakers educated in the English language were selected by a stratified random sampling technique from two federal universities in Nigeria; this group belongs to the educated class, or standard variety. Their performance was compared to that of a Briton (the control). The metrical system of analysis was used. The respondents read a set of words and utterances, which were recorded and analyzed perceptually, acoustically, and statistically using a one-way Analysis of Variance (ANOVA). The Tukey-Kramer post hoc test, the Wilcoxon Matched Pairs Signed Ranks test, and the Praat analysis software were used in the analysis. The findings revealed that Educated Nigerian English speakers produce more stressed syllables, spending more time pronouncing stressed syllables and sometimes less time pronouncing unstressed syllables, and their overall tempo was faster. The ENE speakers used tone to mark prominence, whereas the native speaker, typified by the control, used stress. We conclude that the stress pattern of the ENE speakers is significantly different from the native-speaker variety represented by the control's performance.Keywords: accent, Nigerian English, rhythm, stress
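The statistical pipeline named in the abstract (one-way ANOVA, Tukey-Kramer post hoc, Wilcoxon matched-pairs test) can be sketched as follows in Python; the syllable-duration values are simulated placeholders, not the study's measurements, and pairwise_tukeyhsd is used as a stand-in for the Tukey-Kramer procedure since it accommodates unequal group sizes.

```python
# Hedged sketch of the statistical comparisons described above, on simulated
# syllable durations; values and group sizes are illustrative only.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical stressed-syllable durations (ms), per speaker group.
ene_durations = rng.normal(220, 25, 50)      # Educated Nigerian English speakers
control_durations = rng.normal(180, 20, 50)  # British control

# One-way ANOVA across groups.
f_stat, p_anova = stats.f_oneway(ene_durations, control_durations)

# Tukey(-Kramer) post hoc comparison; handles unequal group sizes.
durations = np.concatenate([ene_durations, control_durations])
groups = ["ENE"] * len(ene_durations) + ["Control"] * len(control_durations)
tukey = pairwise_tukeyhsd(endog=durations, groups=groups)

# Wilcoxon matched-pairs signed-ranks test on paired items (e.g., same word list).
w_stat, p_wilcoxon = stats.wilcoxon(ene_durations, control_durations)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")
print(tukey.summary())
print(f"Wilcoxon: W={w_stat:.1f}, p={p_wilcoxon:.4f}")
```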
Procedia PDF Downloads 240462 Microbial Electrochemical Remediation System: Integrating Wastewater Treatment with Simultaneous Power Generation
Authors: Monika Sogani, Zainab Syed, Adrian C. Fisher
Abstract:
Pollution by estrogenic compounds has caught the attention of researchers because even a slight increase of estrogens in water bodies has a significant impact on the aquatic system. These compounds belong to the class of endocrine disrupting compounds (EDCs) and are able to mimic hormones or interfere with the action of endogenous hormones. A microbial electrochemical remediation system (MERS) is employed here to exploit an electro-phototrophic bacterium and evaluate its capacity to biodegrade the hormone ethinylestradiol (EE2) under anaerobic conditions with simultaneous power generation. A MERS based on an electro-phototrophic bacterium offers a tailored wastewater treatment solution for a developing country like India, which has huge solar potential. It is a clean energy-generating technology, as it requires only sunlight, water, nutrients, and carbon dioxide to operate. Its main advantage over other technologies is that its primary fuel, sunlight, is available indefinitely. When grown in light with organic compounds, these photosynthetic bacteria generate ATP by cyclic photophosphorylation and use carbon compounds to build cell biomass (photoheterotrophic growth). The cells degraded EE2 and generated hydrogen as part of the nitrogen-fixation process. Two MERS designs were studied, and the better design showed a maximum EE2 decrease of 88.45% over a period of 14 days. This research provides better insight into microbial electricity generation and self-sustaining wastewater treatment facilities. Such new waste-to-energy treatment models need to be followed and implemented to build a resource-efficient and sustainable economy.Keywords: endocrine disrupting compounds, ethinylestradiol, microbial electrochemical remediation systems, wastewater treatment
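As a back-of-the-envelope illustration of the reported 88.45% EE2 decrease over 14 days, the following sketch estimates an apparent removal rate constant under a first-order decay assumption, which the abstract itself does not state.

```python
# Hedged sketch: removal kinetics for the reported 88.45 % EE2 decrease over
# 14 days, assuming first-order decay (an assumption not stated in the
# abstract; included only to illustrate the arithmetic).
import math

removal_fraction = 0.8845   # reported maximum EE2 decrease (better MERS design)
duration_days = 14.0        # reported treatment period

# First-order model: C(t) = C0 * exp(-k * t)  =>  k = -ln(1 - removal) / t
k = -math.log(1.0 - removal_fraction) / duration_days
half_life = math.log(2.0) / k

print(f"Apparent first-order rate constant k ≈ {k:.3f} per day")
print(f"Corresponding EE2 half-life ≈ {half_life:.1f} days")
# Prints roughly k ≈ 0.154 per day and a half-life of about 4.5 days.
```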
Procedia PDF Downloads 118461 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper discusses recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. The space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical constraints and geometric constraints. Real-time implementation capability is then discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements of MPC are discussed: the prediction model, the constraint formulation, and the objective (cost) function. The prediction model can be linear time-invariant or time-varying depending on the geometry of the orbit, whether circular or elliptic. Input and output constraints can both be given as linear inequalities written in the same form; moreover, recent convexification techniques for non-convex geometric constraints (e.g., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Because MPC implementation relies on solving constrained optimization problems in real time, computational aspects are also examined; in particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements that sensors and actuators place on the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used, respectively. Finally, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with the conjecture that the MPC paradigm is a promising framework at the crossroads of space applications and could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy
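A minimal sketch of the generic constrained MPC formulation summarized in the abstract (prediction model, constraints, quadratic cost) is given below; the double-integrator dynamics, horizon, weights, and bounds are illustrative assumptions rather than values from the paper, and cvxpy is used only as a convenient modelling tool for the resulting quadratic program.

```python
# Hedged sketch of a generic constrained MPC problem: a linear time-invariant
# prediction model, input constraints, and a quadratic cost. The discretized
# double integrator stands in for relative-motion dynamics; all numbers below
# are assumptions, not values from the paper.
import numpy as np
import cvxpy as cp

dt = 1.0                        # sample time [s] (assumed)
n, m, N = 4, 2, 20              # states (2D pos+vel), inputs, horizon (assumed)

# Prediction model x_{k+1} = A x_k + B u_k.
A = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])

Q = np.diag([10.0, 10.0, 1.0, 1.0])      # state weights (assumed)
R = 0.1 * np.eye(m)                      # input weights (assumed)
u_max = 0.5                              # actuator bound (assumed)
x0 = np.array([50.0, -30.0, 0.0, 0.0])   # initial relative state (assumed)

x = cp.Variable((n, N + 1))
u = cp.Variable((m, N))

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],   # dynamics
                    cp.norm(u[:, k], "inf") <= u_max]           # input bound
cost += cp.quad_form(x[:, N], Q)                                # terminal cost

problem = cp.Problem(cp.Minimize(cost), constraints)
problem.solve()
print("status:", problem.status, "first input:", u.value[:, 0])
```

In a receding-horizon implementation, only the first input of the optimal sequence would be applied and the problem re-solved at the next sampling instant, which is what makes the real-time computation requirements discussed above critical.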
Procedia PDF Downloads 110460 Factor Structure of the Korean Version of Multidimensional Experiential Avoidance Questionnaire (MEAQ)
Authors: Juyeon Lee, Sungeun You
Abstract:
Experiential avoidance is one's tendency to avoid painful internal experiences: unwanted adverse thoughts, emotions, and physical sensations. The Multidimensional Experiential Avoidance Questionnaire (MEAQ) is a measure of experiential avoidance; the original scale consists of 62 items with six subfactors: behavioral avoidance, distress aversion, procrastination, distraction/suppression, repression/denial, and distress endurance. The purpose of this study was to examine the factor structure of the MEAQ in a Korean sample. Three hundred community adults and university students aged 18 to 35 participated in an online survey assessing experiential avoidance (MEAQ and Acceptance and Action Questionnaire-II; AAQ-II), depression (Patient Health Questionnaire-9; PHQ-9), anxiety (Generalized Anxiety Disorder-7; GAD-7), negative affect (Positive and Negative Affect Scale; PANAS), neuroticism (Big Five Inventory; BFI), and quality of life (Satisfaction with Life Scale; SWLS). Principal axis factor analysis with direct oblimin rotation was conducted to examine the subfactors of the MEAQ. Results indicated that the six-factor structure of the original scale was adequate. Eight of the 62 items were removed due to insufficient factor loadings: 3 items of behavioral avoidance (e.g., “When I am hurting, I would do anything to feel better”), 2 items of repression/denial (e.g., “I work hard to keep out upsetting feelings”), and 3 items of distress aversion (e.g., “I prefer to stick to what I am comfortable with, rather than try new activities”). The MEAQ was positively associated with the AAQ-II (r = .47, p < .001), PHQ-9 (r = .37, p < .001), GAD-7 (r = .34, p < .001), PANAS (r = .35, p < .001), and neuroticism (r = .24, p < .001), and negatively correlated with the SWLS (r = -.38, p < .001). Internal consistency was good for the MEAQ total score (Cronbach’s α = .90) as well as for all six subfactors (Cronbach’s α = .83 to .87). The findings support the multidimensional nature of experiential avoidance and the validity of the MEAQ in a sample of Korean adults.Keywords: avoidance, experiential avoidance, factor structure, MEAQ
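A hedged sketch of the analysis pipeline (principal-axis factoring with oblimin rotation and Cronbach's alpha) is given below using the third-party factor_analyzer package; the randomly generated responses stand in for the study's 300 respondents and 62 items, and the 0.40 loading cutoff is an assumed example, not the criterion reported.

```python
# Hedged sketch of the analysis pipeline: principal-axis factoring with direct
# oblimin rotation plus Cronbach's alpha. The random Likert responses below
# are placeholders (results on them are meaningless); they stand in for the
# study's 300 respondents x 62 MEAQ items.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.integers(1, 7, size=(300, 62)),           # 1-6 Likert
                    columns=[f"item{i + 1}" for i in range(62)])

# Six-factor solution with principal-axis extraction and oblimin rotation.
fa = FactorAnalyzer(n_factors=6, method="principal", rotation="oblimin")
fa.fit(data)
loadings = pd.DataFrame(fa.loadings_, index=data.columns)
weak = loadings.abs().max(axis=1) < 0.40     # example retention cutoff (assumed)
print("Items with insufficient loadings:", list(loadings.index[weak]))

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

print("MEAQ total alpha:", round(cronbach_alpha(data), 2))
```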
Procedia PDF Downloads 365