Search results for: efficient features selection
9481 Robust Inference with a Skew T Distribution
Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici
Abstract:
There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance, and Actuarial Science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than those of a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or least square estimates, which are known to be biased and inefficient in such cases.
Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally, and hence the well-known least square method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors having a non-normal pattern. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of regression parameters are plausibly robust to the distributional assumptions and to various data anomalies as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness
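The abstract's efficiency claim can be illustrated with a toy Monte Carlo comparison. This is not the authors' modified maximum likelihood method; it is a minimal sketch showing why moment-based location estimates lose efficiency under fat tails, using a 20% trimmed mean as a stand-in robust estimator and a contaminated normal as a crude fat-tailed proxy:

```python
import random
import statistics

def trimmed_mean(xs, prop=0.2):
    """Mean after discarding prop of the observations in each tail."""
    xs = sorted(xs)
    k = int(len(xs) * prop)
    return statistics.fmean(xs[k:len(xs) - k])

def fat_tailed_sample(n, rng):
    """Contaminated normal: 90% N(0,1), 10% N(0,25) -- a crude fat-tail proxy."""
    return [rng.gauss(0, 5 if rng.random() < 0.1 else 1) for _ in range(n)]

rng = random.Random(42)
mean_est, trim_est = [], []
for _ in range(2000):
    s = fat_tailed_sample(50, rng)
    mean_est.append(statistics.fmean(s))
    trim_est.append(trimmed_mean(s))

# Sampling variance of each location estimator over the replications:
var_mean = statistics.pvariance(mean_est)
var_trim = statistics.pvariance(trim_est)
print(var_trim < var_mean)  # the robust estimator is more efficient here
```

Under normal errors the two estimators would be nearly tied; the gap opens only when the tails fatten, which mirrors the abstract's argument for robust inference.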
Procedia PDF Downloads 403
9480 Lessons from Vernacular Architecture for Lightweight Construction
Authors: Alireza Taghdiri, Sara Ghanbarzade Ghomi
Abstract:
With the gravity load reduction in the structural and non-structural components, lightweight construction is achieved along with improvements in efficiency and functional specifications. The advantages of lightweight construction can be examined on two levels. The first is the mass reduction of the load bearing structure, which increases the internal useful space; the other is the mass reduction of the building, which decreases the effects of seismic load. In order to achieve this goal, the essential building material specifications and the optimum load bearing geometry of structural systems and elements have to be considered, so selecting lightweight materials, particularly lightweight aggregate for building components, is the first step of lightweight construction. In the next step, in addition to selecting prominent samples of Iran's traditional architecture, the process of improvement of these works is analyzed from the viewpoints of structural efficiency and lightweighting, and the practical methods of lightweight construction are extracted. The optimum design of the load bearing geometry of the structural system has to be considered not only in the structural system elements but also in their composition, and the selection of dimensions, proportions, forms and optimum orientations can lead to maximum material efficiency in bearing loads and stresses.
Keywords: gravity load, light-weighting structural system, load bearing geometry, seismic behavior
Procedia PDF Downloads 550
9479 Efficient Positioning of Data Aggregation Point for Wireless Sensor Network
Authors: Sifat Rahman Ahona, Rifat Tasnim, Naima Hassan
Abstract:
Data aggregation is a helpful technique for reducing the data communication overhead in a wireless sensor network. One of the important tasks in data aggregation is the positioning of the aggregator points. A lot of work has been done on data aggregation, but the efficient positioning of the aggregator points has received little attention. In this paper, the authors focus on the positioning, or placement, of the aggregation points in a wireless sensor network. The authors propose an algorithm to select the aggregator positions for a scenario in which aggregator nodes are more powerful than sensor nodes.
Keywords: aggregation point, data communication, data aggregation, wireless sensor network
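The abstract does not give the authors' placement algorithm, but the underlying idea, placing an aggregator to minimize communication cost to the sensors it serves, can be sketched as a 1-median search over candidate positions (the distance function as a proxy for radio energy cost is an assumption of this sketch):

```python
import math

def best_aggregator(candidates, sensors):
    """Pick the candidate position minimizing total Euclidean distance
    to all sensor nodes (a proxy for radio energy cost)."""
    def total_dist(p):
        return sum(math.dist(p, s) for s in sensors)
    return min(candidates, key=total_dist)

sensors = [(0, 0), (4, 0), (0, 4), (4, 4)]
candidates = [(0, 0), (2, 2), (4, 4)]
print(best_aggregator(candidates, sensors))  # (2, 2): the central choice
```

A real deployment would add constraints (radio range, residual energy, hop counts); this only shows the placement objective in its simplest form.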
Procedia PDF Downloads 165
9478 Trabecular Bone Radiograph Characterization Using Fractal, Multifractal Analysis and SVM Classifier
Authors: I. Slim, H. Akkari, A. Ben Abdallah, I. Bhouri, M. Hedi Bedoui
Abstract:
Osteoporosis is a common disease characterized by low bone mass and deterioration of micro-architectural bone tissue, which provokes an increased risk of fracture. This work treats the texture characterization of trabecular bone radiographs. The aim was to analyze, according to clinical research, a group of 174 subjects: 87 osteoporotic patients (OP) with various bone fracture types and 87 control cases (CC). To characterize osteoporosis, Fractal and MultiFractal (MF) methods were applied to the images for feature (attribute) extraction. In order to improve the results, a new method of MF spectrum based on the q-structure function calculation was proposed, and a combination of Fractal and MF attributes was used. A Support Vector Machine (SVM) was applied as a classifier to distinguish between OP patients and CC subjects. The feature fusion (fractal and MF) allowed a good discrimination between the two groups, with an accuracy rate of 96.22%.
Keywords: fractal, micro-architecture analysis, multifractal, osteoporosis, SVM
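The fractal features the abstract refers to are typically estimated by box counting. As a minimal sketch of that idea (not the authors' pipeline, which also uses multifractal spectra and an SVM), the fractal dimension of a point set can be read off as the slope of log N(ε) versus log(1/ε), where N(ε) is the number of occupied boxes at box size ε:

```python
import math

def box_count_dimension(points, scales):
    """Estimate fractal dimension by box counting: count occupied boxes
    N(e) at each box size e, then fit log N(e) ~ D * log(1/e)."""
    logs = []
    for e in scales:
        boxes = {(int(x // e), int(y // e)) for x, y in points}
        logs.append((math.log(1 / e), math.log(len(boxes))))
    # least-squares slope of log N versus log(1/e)
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    num = sum((x - mx) * (y - my) for x, y in logs)
    den = sum((x - mx) ** 2 for x, _ in logs)
    return num / den

# A densely sampled straight segment should have dimension close to 1.
line = [(i / 10000, i / 10000) for i in range(10000)]
d = box_count_dimension(line, [0.1, 0.05, 0.02, 0.01])
print(round(d, 2))  # ~1.0
```

On a binarized trabecular texture the same counting is applied to the bone-pixel set, and osteoporotic micro-architecture tends to shift the estimated dimension.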
Procedia PDF Downloads 396
9477 Investigation of Surface Electromyograph Signal Acquired from the around Shoulder Muscles of Upper Limb Amputees
Authors: Amanpreet Kaur, Ravinder Agarwal, Amod Kumar
Abstract:
Surface electromyography (sEMG) measures muscle activity through sensors placed on the skin, which pick up the electrical signal generated by active muscles. Much of the research has focused on detecting signals from upper limb amputees using the activity of the triceps and biceps muscles. The purpose of this study was to correlate phantom movement and sEMG activity in the residual stump muscles of transhumeral amputees from the shoulder muscles. Eight non-amputees and seven right-hand amputees were recruited for this study. sEMG data were collected from the trapezius, pectoralis and teres muscles for elevation, protraction and retraction of the shoulder. The contrast between the muscle actions of amputees and non-amputees was investigated. Subsequently, to investigate the class separability of the different shoulder motions, an analysis of variance on the experimentally recorded data was carried out. The results were analyzed to recognize different shoulder movements and represent a step towards a surface electromyography controlled system for amputees. Differences in F ratio (p < 0.05) values indicate a distinction in means; therefore, this analysis helps to determine the independent motions. The identified signals would be used by researchers to design more accurate and efficient controllers for upper-limb amputees.
Keywords: around shoulder amputation, surface electromyography, analysis of variance, features
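The F ratio the abstract relies on comes from a one-way ANOVA across motion classes. A minimal sketch of that computation (the feature values below are made up for illustration, not the study's data):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F ratio: between-group mean square over within-group
    mean square. A large F suggests the motion classes are separable."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy sEMG feature values (e.g. mean absolute value) for three shoulder motions
elevation   = [0.52, 0.55, 0.50, 0.53]
protraction = [0.31, 0.29, 0.33, 0.30]
retraction  = [0.42, 0.44, 0.40, 0.43]
f = one_way_anova_f([elevation, protraction, retraction])
print(f > 4.26)  # well above the 5% critical value of F(2, 9)
```

An F ratio exceeding the critical value at p < 0.05 is exactly the evidence the abstract cites for treating the motions as independent classes.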
Procedia PDF Downloads 438
9476 Chromatography Study of Fundamental Properties of Medical Radioisotope Astatine-211
Authors: Evgeny E. Tereshatov
Abstract:
Astatine-211 is considered one of the most promising radionuclides for Targeted Alpha Therapy. In order to develop reliable procedures to label biomolecules and utilize efficient delivery vehicle principles, one should understand the main chemical characteristics of astatine. The short half-life of 211At (~7.2 h) and the absence of any stable isotopes of this element are limiting factors towards studying the behavior of astatine. Our team has developed a procedure for rapid and efficient isolation of astatine from irradiated bismuth material in nitric acid media based on 3-octanone and 1-octanol extraction chromatography resins. This process has been automated, and it takes 20 min from the beginning of the target dissolution to the At-211 fraction elution. Our next step is to consider commercially available chromatography resins and their applicability in astatine purification in the same media. Results obtained, along with the corresponding sorption mechanisms, will be discussed.
Keywords: astatine-211, chromatography, automation, mechanism, radiopharmaceuticals
Procedia PDF Downloads 97
9475 The Integration of Iranian Traditional Architecture in the Contemporary Housing Design: A Case Study
Authors: H. Nejadriahi
Abstract:
Traditional architecture is a valuable source of inspiration, which needs to be studied and integrated into contemporary designs to achieve an identifiable contemporary architecture. The traditional architecture of Iran is among the distinguished examples of being contextually responsive, not only by considering the environmental conditions of a region but also in terms of respecting the socio-cultural values of its context. In order to apply these valuable features to current designs, they need to be adapted to today's conditions, needs and desires. In this paper, the main features of the traditional architecture of Iran are explained and interrogated in the formation of a contemporary house in Tehran, Iran. A table is also provided to compare the use of traditional design concepts in traditional houses and in the contemporary example. It is believed that such a study would increase the awareness of contemporary designers by providing them some clues on maintaining traditional values in current design layouts, particularly in the residential sector, which would ultimately improve the quality of space in contemporary architecture.
Keywords: contemporary housing design, Iran, Tehran, traditional architecture
Procedia PDF Downloads 476
9474 Assessment of the Number of Damaged Buildings from a Flood Event Using Remote Sensing Technique
Authors: Jaturong Som-ard
Abstract:
The heavy rainfall from 3rd to 22nd January 2017 swamped much of the area of Ranot district in southern Thailand. Due to the heavy rainfall, the district was flooded, causing considerable economic and social losses. The major objective of this study is to detect the flooding extent using Sentinel-1A data and to identify the number of damaged buildings there. The data were collected in two stages: pre-flooding and during the flood event. Calibration, speckle filtering, geometric correction, and histogram thresholding were performed on the data, based on intensity spectral values, to classify thematic maps. The maps were used to identify the flooding extent using change detection, along with the buildings digitized and collected on the JOSM desktop. The number of damaged buildings was counted within the flooding extent with respect to the building data. The total flooded area was observed to be 181.45 sq.km. These areas occurred mostly in the Ban khao, Ranot, Takhria, and Phang Yang sub-districts, respectively. The Ban khao sub-district had more occurrences than the others because this area is located at a lower altitude and closer to the Thale Noi and Thale Luang lakes. The numbers of damaged buildings were high in Khlong Daen (726 features), Tha Bon (645 features), and Ranot sub-district (604 features), respectively. The final flood extent map may be very useful for planning, prevention and management in areas of flood occurrence. The map of building damage can be used for quick response, recovery and mitigation in the affected areas by the organizations concerned.
Keywords: flooding extent, Sentinel-1A data, JOSM desktop, damaged buildings
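The thresholding-plus-change-detection step described above can be sketched in a few lines. Open water is a weak radar reflector, so pixels whose SAR backscatter intensity drops below a threshold between acquisitions are flagged as newly flooded (the intensity values and threshold below are illustrative, not from the study):

```python
def flood_extent(pre, during, threshold):
    """Classify each pixel as water when its SAR backscatter intensity
    falls below the threshold, then flag pixels that were dry pre-event
    but water during the event (change detection)."""
    newly_flooded = 0
    for before, after in zip(pre, during):
        if before >= threshold and after < threshold:
            newly_flooded += 1
    return newly_flooded

# Toy intensity grids (flattened): low values ~ water
pre_event    = [0.8, 0.7, 0.9, 0.1, 0.6, 0.8]
during_event = [0.1, 0.7, 0.2, 0.1, 0.1, 0.8]
print(flood_extent(pre_event, during_event, 0.3))  # 3 newly flooded pixels
```

Counting damaged buildings then amounts to intersecting the digitized building footprints with the flagged pixels.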
Procedia PDF Downloads 196
9473 Using Blockchain Technology to Promote Sustainable Supply Chains: A Survey of Previous Studies
Authors: Saleh Abu Hashanah, Abirami Radhakrishnan, Dessa David
Abstract:
Sustainable practices in the supply chain have been an area of focus requiring consideration of environmental, economic, and social sustainability. This paper aims to examine the use of blockchain as a disruptive technology to promote sustainable supply chains. Content analysis was used to analyze the uses of blockchain technology in sustainable supply chains. The results showed that blockchain technology features such as traceability, transparency, smart contracts, accountability, trust, immutability, anti-fraud, and decentralization promote sustainable supply chains. These features were found to have improved organizational efficiency in operations, transportation, and production, minimizing costs and reducing carbon emissions. In addition, blockchain technology has been found to elicit customer trust in the products.
Keywords: blockchain technology, sustainability, supply chains, economic sustainability, environmental sustainability, social sustainability
Procedia PDF Downloads 111
9472 The Sociology of the Facebook: An Exploratory Study
Authors: Liana Melissa E. de la Rosa, Jayson P. Ada
Abstract:
This exploratory study was conducted to determine the sociology of Facebook. Specifically, it aimed to describe the socio-demographic profile of the respondents in terms of age, sex, year level and monthly allowance; find out the respondents' common uses of Facebook; identify the features of Facebook that are commonly used by the respondents; understand the benefits and risks of using Facebook; determine how frequently the respondents use Facebook; and find out whether there is a significant relationship between the socio-demographic profile of the respondents and their Facebook usage. This study used an exploratory and correlational research design, employing a survey questionnaire as its main data gathering instrument. Students of the University of Eastern Philippines were selected as the respondents of this study through quota sampling: ten (10) students were randomly selected from each college of the university. Based on the findings of this study, the following conclusions were drawn. The majority of the respondents are aged between 18 and 21, female, third year students, with a monthly allowance of P2,000 and above. On the respondents' usage of Facebook, the majority use Facebook on a daily basis for one to two (1-2) hours every day, and most users access Facebook by renting a computer in an internet cafe. Most users have created their profiles mainly to connect with people and gain new friends. The most commonly used features of Facebook are: the photos application, like button, wall, notifications, friends, chat, networks, groups and "like" pages, status updates, messages and inbox, and events. Facebook features that are seldom used by the respondents are games, the news feed, user names, video sharing and notes, and the least used features are questions, the poke feature, credits and the marketplace.
The respondents stated that the major benefit Facebook has given its users is the ability to keep in touch with family members or friends, while the main risk identified is that users can become addicted to the Internet. In the tests of relationships between the respondents' use of Facebook and the four (4) socio-demographic profile variables (age, sex, year level, and monthly allowance), age, year level, and monthly allowance were found to be not significantly related to the respondents' use of Facebook, while gender was the only variable found to be significantly related.
Keywords: Facebook, sociology, social networking, exploratory study
Procedia PDF Downloads 294
9471 Forecasting Future Demand for Energy Efficient Vehicles: A Review of Methodological Approaches
Authors: Dimitrios I. Tselentis, Simon P. Washington
Abstract:
Considerable literature over the last few decades has focused on forecasting consumer demand for Energy Efficient Vehicles (EEVs). The methodological issues range from how to capture recent purchase decisions in revealed preference (RP) studies, to how to set up experiments in stated preference (SP) studies, to the choice of method for analyzing such data. This paper reviews the plethora of studies published on forecasting EEV demand since 1980 and provides a review and annotated bibliography of that literature as it pertains to this particular demand forecasting problem. This detailed review addresses the literature not only on transportation studies, but specifically on the problem of, and methodologies for, forecasting over the time horizons of planning studies, which may represent 10 to 20 year forecasts. The objectives of the paper are to identify where gaps in the existing literature lie and to articulate where promising methodologies might guide longer term forecasting. One of the key findings of this review is that many techniques are common to the field of new product demand forecasting and the field of predicting future EEV demand. Apart from SP and RP methods, some of the newer techniques that have emerged in the literature in the last few decades are survey-related approaches, product diffusion models, time-series modelling, computational intelligence models and other holistic approaches.
Keywords: demand forecasting, Energy Efficient Vehicles (EEVs), forecasting methodologies review, methodological approaches
Procedia PDF Downloads 492
9470 Narrative Identity Predicts Borderline Personality Disorder Features in Inpatient Adolescents up to Six Months after Admission
Authors: Majse Lind, Carla Sharp, Salome Vanwoerden
Abstract:
Narrative identity is the dynamic and evolving story individuals create about their personal pasts, presents, and presumed futures. This storied sense of self develops in adolescence and is crucial for fostering a sense of self-unity and purpose in life. A growing body of work has shown that several characteristics of narrative identity are disturbed in adults suffering from borderline personality disorder (BPD). Very little research, however, has explored the stories told by adolescents with BPD features. Investigating narrative identity early in the lifespan and in relation to personality pathology is crucial; BPD is a developmental disorder with early signs appearing already in adolescence. In the current study, we examine narrative identity (focusing on themes of agency and communion) coded from self-defining memories derived from the child attachment interview in 174 inpatient adolescents (M = 15.12, SD = 1.52) at the time of admission. The adolescents' social cognition was further assessed on the basis of their reactions to movie scenes (i.e., the MASC movie task). They also completed a trauma checklist and self-reported BPD features at three different time points (i.e., at admission, at discharge, and 6 months after admission). Preliminary results show that adolescents who told stories containing themes of agency and communion evinced better social cognition and reported lower emotional abuse on the trauma checklist. In addition, adolescents who disclosed stories containing lower levels of agency and communion demonstrated more BPD symptoms at all three time points, even when controlling for the occurrence of traumatic life events. Surprisingly, social cognitive abilities were not significantly associated with BPD features. These preliminary results underscore the importance of narrative identity as an indicator, and potential cause, of incipient personality pathology.
Thus, focusing on diminished themes of narrative-based agency and communion in early adolescence could be crucial in preventing the development of personality pathology over time.
Keywords: borderline personality disorder, inpatient adolescents, narrative identity, follow-ups
Procedia PDF Downloads 159
9469 Day/Night Detector for Vehicle Tracking in Traffic Monitoring Systems
Authors: M. Taha, Hala H. Zayed, T. Nazmy, M. Khalifa
Abstract:
Recently, traffic monitoring has attracted the attention of computer vision researchers. Many algorithms have been developed to detect and track moving vehicles. In fact, vehicle tracking in daytime and in nighttime cannot be approached with the same techniques, due to the extremely different illumination conditions. Consequently, traffic-monitoring systems need a component to differentiate between daytime and nighttime scenes. In this paper, an HSV-based day/night detector is proposed for traffic monitoring scenes. The detector employs the hue histogram and the value histogram on the top half of the image frame. Experimental results show that extracting the brightness features along with the color features within the top region of the image is effective for classifying traffic scenes. In addition, the detector achieves high precision and recall rates and is feasible for real-time applications.
Keywords: day/night detector, daytime/nighttime classification, image classification, vehicle tracking, traffic monitoring
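The core idea, classifying on HSV statistics of the top half of the frame, where the sky appears, can be sketched in a few lines. This is a simplified stand-in for the paper's detector (the paper uses full hue and value histograms; here only the mean brightness is used, and the threshold is an assumption):

```python
import colorsys

def is_daytime(rgb_pixels, height, width, value_threshold=0.5):
    """Classify a scene as day from the mean HSV 'value' (brightness)
    of the top half of a row-major RGB frame, where the sky appears."""
    top = rgb_pixels[: (height // 2) * width]
    values = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[2] for r, g, b in top]
    return sum(values) / len(values) > value_threshold

# 2x2 toy frames (row-major): bright sky on top -> day; dark top -> night
day_frame   = [(200, 220, 255), (210, 225, 255), (90, 90, 90), (80, 80, 80)]
night_frame = [(10, 10, 30), (15, 15, 35), (90, 90, 90), (80, 80, 80)]
print(is_daytime(day_frame, 2, 2), is_daytime(night_frame, 2, 2))  # True False
```

Restricting the statistics to the top region avoids being fooled by bright headlights or dark road surfaces in the lower half of the frame.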
Procedia PDF Downloads 561
9468 Framing the Dynamics and Functioning of Different Variants of Terrorist Organizations: A Business Model Perspective
Authors: Eisa Younes Alblooshi
Abstract:
Counterterrorism strategies, to be effective and efficient, require a sound understanding of the dynamics and interlinked organizational elements of the terrorist outfits being combated, both to guard against their strong points and to target their vulnerable zones in a timely fashion for optimal results. A unique model of the organizational imperatives was evolved in this research by likening terrorist organizations to traditional commercial ones, with a view to understanding in detail the dynamics of interconnectivity and dependencies, and the related compulsions facing the leaderships of such outfits, which provide counterterrorism agencies with opportunities for forging better strategies. It involved assessing the evolving organizational dynamics and imperatives of different types of terrorist organizations, enabling the researcher to construct a prototype model that defines the progression and linkages of the related organizational elements of such organizations. This required a detailed analysis of how the various elements are connected, with sequencing identified, as any outfit positions itself with respect to its external environment and internal dynamics. A case study focusing on a transnational, radical, religious, state-sponsored terrorist organization was conducted to validate the research findings and to further strengthen the specific counterterrorism strategies. Six different variants of the business model of terrorist organizations were identified, categorized by their outreach, mission, and status of state sponsorship. The variants represent the vast majority of the range of terrorist organizations acting locally or globally.
The model shows the progression and dynamics of these organizations through various dimensions, including mission, leadership, outreach, and state sponsorship status, resulting in the organizational structure, state of autonomy, preference divergence within the fold, recruitment core, propagation avenues, capacity to adapt, and, critically, their own life cycles. A major advantage of the model is the utility of mapping terrorist organizations according to their fit with the identified variants, allowing for flexibility and differences within, and enabling researchers and counterterrorism agencies to observe a neat blueprint of the organization's footprint, along with highlighting the areas to be evaluated for focused target zone selection and the timing of counterterrorism interventions. Special consideration is given to the dimension of financing, keeping in context the latest developments regarding cryptocurrencies, hawala, and global anti-money laundering initiatives. Specific counterterrorism strategies and intervention points have been identified for each of the respective model variants, with a view to the efficient and effective deployment of resources.
Keywords: terrorism, counterterrorism, model, strategy
Procedia PDF Downloads 159
9467 An Analysis of New Service Interchange Designs
Authors: Joseph E. Hummer
Abstract:
An efficient freeway system will be essential to the development of Africa, and interchanges are a key to that efficiency. Around the world, many interchanges between freeways and surface streets, called service interchanges, are of the diamond configuration, and interchanges using roundabouts or loop ramps are also popular. However, many diamond interchanges have serious operational problems, interchanges with roundabouts fail at high demand levels, and loops consume large amounts of expensive land. Newer service interchange designs provide other options. The most popular new interchange design in the US at the moment is the double crossover diamond (DCD), also known as the diverging diamond. The DCD has enormous potential but also several significant limitations. The objectives of this paper are to review new service interchange options and to highlight the main features of those alternatives. The paper tests four conventional and seven unconventional designs using seven measures related to efficiency, cost, and safety. The results show that no design is superior on all measures investigated. The DCD is better than most designs tested on most measures examined; however, the DCD was superior to all other designs only for bridge width, and it performed relatively poorly on capacity and on serving pedestrians. Based on the results, African freeway designers are encouraged to investigate the full range of alternatives that could work at the spot of interest. Diamonds and DCDs have their niches, but some of the other designs investigated could be optimal at some spots.
Keywords: interchange, diamond, diverging diamond, capacity, safety, cost
Procedia PDF Downloads 281
9466 Linking Soil Spectral Behavior and Moisture Content for Soil Moisture Content Retrieval at Field Scale
Authors: Yonwaba Atyosi, Moses Cho, Abel Ramoelo, Nobuhle Majozi, Cecilia Masemola, Yoliswa Mkhize
Abstract:
Spectroscopy has been widely used to understand the hyperspectral remote sensing of soils. Accurate and efficient measurement of soil moisture is essential for precision agriculture. The aim of this study was to understand the spectral behavior of soil at different soil water content levels and to identify the significant spectral bands for soil moisture content retrieval at field scale. The study consisted of 60 soil samples from a maize farm, divided into four different treatments representing different moisture levels. Spectral signatures were measured for each sample in the laboratory under artificial light using an Analytical Spectral Device (ASD) spectrometer, covering a wavelength range from 350 nm to 2500 nm with a spectral resolution of 1 nm. The results showed that the absorption features at 1450 nm, 1900 nm, and 2200 nm were particularly sensitive to soil moisture content and exhibited strong correlations with the water content levels. A continuum removal procedure was developed in the R programming language to enhance the absorption features of soil moisture and to precisely understand its spectral behavior at different water content levels. Statistical analysis using partial least squares regression (PLSR) models was performed to quantify the correlation between the spectral bands and soil moisture content. This study provides insights into the spectral behavior of soil at different water content levels and identifies the significant spectral bands for soil moisture content retrieval. The findings highlight the potential of spectroscopy for non-destructive and rapid soil moisture measurement, which can be applied in fields such as precision agriculture, hydrology, and environmental monitoring. However, it is important to note that the spectral behavior of soil can be influenced by factors such as soil type, texture, and organic matter content, and caution should be taken when applying the results to other soil systems.
The results of this study showed good agreement between measured and predicted values of soil moisture content, with high R2 and low root mean square error (RMSE) values. Model validation using independent data was satisfactory for all the studied soil samples. The results have significant implications for developing high-resolution and precise field-scale soil moisture retrieval models. These models can be used to understand the spatial and temporal variation of soil moisture content in agricultural fields, which is essential for managing irrigation and optimizing crop yield.
Keywords: soil moisture content retrieval, precision agriculture, continuum removal, remote sensing, machine learning, spectroscopy
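The continuum removal step mentioned in the abstract divides each spectrum by its upper convex hull so that absorption features appear as dips below 1.0. The study implemented this in R; the sketch below shows the same idea in Python with a toy five-band spectrum (the wavelengths and reflectances are illustrative, not the study's data):

```python
def continuum_removal(wavelengths, reflectance):
    """Divide a spectrum by its upper convex hull (the 'continuum')
    so absorption features show up as dips below 1.0."""
    pts = list(zip(wavelengths, reflectance))
    # Build the upper hull with a monotone-chain sweep (pop on non-convex turns)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    # Linearly interpolate the hull at every wavelength, then divide
    removed = []
    hi = 0
    for w, r in pts:
        while hi < len(hull) - 2 and hull[hi + 1][0] <= w:
            hi += 1
        (x1, y1), (x2, y2) = hull[hi], hull[hi + 1]
        cont = y1 + (y2 - y1) * (w - x1) / (x2 - x1) if x2 != x1 else y1
        removed.append(r / cont)
    return removed

wl = [1300, 1400, 1450, 1500, 1600]
refl = [0.60, 0.50, 0.40, 0.52, 0.62]  # dip near the 1450 nm water feature
cr = continuum_removal(wl, refl)
print(round(min(cr), 2))  # depth of the absorption relative to the continuum
```

The depth of the continuum-removed dip at the water bands (1450 nm, 1900 nm) is exactly the kind of enhanced feature fed into the PLSR retrieval.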
Procedia PDF Downloads 105
9465 A Comprehensive Review of Adaptive Building Energy Management Systems Based on Users’ Feedback
Authors: P. Nafisi Poor, P. Javid
Abstract:
Over the past few years, the idea of adaptive buildings, and specifically adaptive building energy management systems (ABEMS), has become popular. Well-performed energy management creates a balance between energy consumption and user comfort; therefore, in new energy management models, efficient energy consumption is not the sole factor, and the user's comfort is also considered in the calculations. One of the main ways of measuring this factor is by analyzing user feedback on the conditions to understand whether users are satisfied or not. This paper provides a comprehensive review of recent approaches to energy management systems based on users' feedback and then compares them on their efficiency and accuracy, to understand which approaches were more accurate and which resulted in a more efficient way of minimizing energy consumption while maintaining user comfort. It was concluded that the highest accuracy rate among the presented works was 95% in determining satisfaction, and up to 51.08% energy savings can be achieved without disturbing user comfort. Considering the growing interest in designing and developing adaptive buildings, these studies can support diverse inquiries on this subject and can be used as a resource for studies and research towards efficient energy consumption while maintaining the comfort of users.
Keywords: adaptive buildings, energy efficiency, intelligent buildings, user comfortability
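The feedback loop the reviewed systems share, trading energy against comfort, can be caricatured in a few lines. This is a hypothetical toy, not any reviewed system: a heating setpoint is lowered one step at a time and rolled back as soon as the (simulated) user feedback reports dissatisfaction:

```python
def tune_setpoint(initial, step, min_allowed, satisfied):
    """Lower a heating setpoint stepwise to save energy, stopping at the
    last value the user-feedback predicate still accepts (a toy ABEMS loop)."""
    sp = initial
    while sp - step >= min_allowed and satisfied(sp - step):
        sp -= step
    return sp

# Hypothetical feedback model: users accept anything at or above 20.0 C
print(tune_setpoint(23.0, 0.5, 18.0, lambda t: t >= 20.0))  # 20.0
```

Real ABEMS replace the `satisfied` predicate with a learned model of user feedback, which is where the 95% satisfaction-prediction accuracy cited above comes in.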
Procedia PDF Downloads 137
9464 Vulnerability Assessment of Reinforced Concrete Frames Based on Inelastic Spectral Displacement
Authors: Chao Xu
Abstract:
Selecting ground motion intensity measures reasonably is one of the most important issues affecting the selection of input ground motions and the reliability of vulnerability analysis results. In this paper, inelastic spectral displacement is used as an alternative intensity measure to characterize ground motion damage potential. The inelastic spectral displacement is calculated based on modal pushover analysis, and an inelastic spectral displacement based incremental dynamic analysis is developed. Probabilistic seismic demand analysis of a six-story and an eleven-story RC frame is carried out through cloud analysis and advanced incremental dynamic analysis. The sufficiency and efficiency of inelastic spectral displacement are investigated by means of regression and residual analysis and compared with elastic spectral displacement. Vulnerability curves are developed based on inelastic spectral displacement. The study shows that inelastic spectral displacement reflects the impact of different frequency components with periods larger than the fundamental period on inelastic structural response. The damage potential of ground motion on structures whose fundamental period is prolonged by structural softening can be captured by inelastic spectral displacement. Compared with elastic spectral displacement, inelastic spectral displacement is a more sufficient and efficient intensity measure, which reduces the uncertainty of vulnerability analysis and the impact of input ground motion selection on the vulnerability analysis result.
Keywords: vulnerability, probability seismic demand analysis, ground motion intensity measure, sufficiency, efficiency, inelastic time history analysis
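The cloud-analysis regression used to judge efficiency can be sketched as follows. This is an illustrative NumPy example on synthetic data; the coefficients 0.02 and 1.1 and the dispersion 0.3 are assumed for demonstration and are not values from the study:

```python
import numpy as np

# synthetic "cloud" of (intensity measure, demand) pairs;
# im stands in for inelastic spectral displacement, demand for e.g. drift ratio
rng = np.random.default_rng(0)
im = rng.lognormal(mean=0.0, sigma=0.5, size=200)
demand = 0.02 * im ** 1.1 * rng.lognormal(0.0, 0.3, 200)

# cloud analysis: linear regression in log-log space, ln(D) = a + b * ln(IM)
b, a = np.polyfit(np.log(im), np.log(demand), 1)
residuals = np.log(demand) - (a + b * np.log(im))
beta = residuals.std(ddof=2)  # dispersion: a smaller beta means a more efficient IM
```

Comparing the residual dispersion beta obtained with inelastic versus elastic spectral displacement is one standard way to quantify which intensity measure is more efficient.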
Procedia PDF Downloads 3569463 Analysis and Prediction of COVID-19 by Using Recurrent LSTM Neural Network Model in Machine Learning
Authors: Grienggrai Rajchakit
Abstract:
Coronavirus has been declared a pandemic by the WHO, and it spread all over the world within a few days. To control this spreading, maintaining social distance and taking self-preventive measures are the best strategies for every citizen. As of now, many researchers and scientists are continuing their research to find an exact vaccine. Machine learning models find that the coronavirus disease behaves in an exponential manner. To abolish the consequences of this pandemic, an efficient step should be taken to analyze this disease. In this paper, a recurrent neural network model is chosen to predict the number of active cases in a particular state. To make this prediction of active cases, we need a database. The COVID-19 database is downloaded from the KAGGLE website and analyzed by applying a recurrent LSTM neural network with univariate features to predict the number of active cases of patients suffering from the coronavirus. The downloaded database is divided into training and testing sets for the chosen neural network model. The model is trained with the training data set and tested with the testing dataset to predict the number of active cases in a particular state; here, we have concentrated on Andhra Pradesh state.
Keywords: COVID-19, coronavirus, KAGGLE, LSTM neural network, machine learning
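The data-preparation step for a univariate LSTM of the kind described can be sketched as follows. This is a hedged illustration on a synthetic exponential series, not the study's Kaggle pipeline; the lookback of 7 days and the 80/20 split are assumptions:

```python
import numpy as np

def make_windows(series, lookback):
    # turn a univariate series into (samples, lookback) inputs and next-step targets
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X), np.array(y)

# synthetic "active cases" curve (roughly exponential, as the abstract notes)
cases = 100.0 * np.exp(0.05 * np.arange(60))
X, y = make_windows(cases, lookback=7)

# chronological train/test split, as done with the downloaded database
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
# for an LSTM layer, X would then be reshaped to (samples, timesteps, 1)
```

With the windows in place, any recurrent model (e.g. a Keras LSTM) can be fit on X_train/y_train and evaluated on the held-out chronological tail.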
Procedia PDF Downloads 166
9462 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection
Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy
Abstract:
Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform by automatically extracting the features for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract automatic features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of produced information. In fact, we leverage long-range dependencies, whose absence is one of the main drawbacks of CNNs. We develop this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too soon, which drives the model toward over-fitting, because it is not able to determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic shape of the input tensor instead of a static one in the SoftMax layer, with a specified soft margin. In fact, it acts as a controller of how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting same-class labels and separating different-class labels in the normalized log domain. We apply a penalty to those predictions with high divergence from the ground-truth labels, shortening correct feature vectors and enlarging false prediction tensors; that is, we assign more weight to classes in conjunction with each other (namely, “hard labels to learn”).
By doing this, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on solving the weak convergence of the Adam optimizer for non-convex problems. Our optimizer works by an alternative gradient-updating procedure with an exponentially weighted moving average function for faster convergence, and exploits a weight decay method to drastically reduce the learning rate near optima and reach the dominant local minimum. We demonstrate the superiority of our proposed work by surpassing the first rank on three widely used facial expression recognition datasets, with 93.30% on FER-2013, a 16% improvement compared to the first rank after 10 years, reaching 90.73% on RAF-DB, and a 100% k-fold average accuracy on the CK+ dataset, and show that it provides top performance relative to other networks, which require much larger training datasets.
Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks
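The margin idea behind the proposed loss can be illustrated with a static soft-margin variant of the SoftMax cross-entropy; the paper's dynamic version adapts the margin, so the fixed margin=0.5 below is an assumption for demonstration only:

```python
import numpy as np

def soft_margin_softmax_loss(logits, labels, margin=0.5):
    # subtract a margin from the true-class logit before the softmax, forcing
    # the network to push the correct logit further above the others
    z = np.asarray(logits, dtype=float).copy()
    rows = np.arange(len(labels))
    z[rows, labels] -= margin
    z -= z.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()  # mean cross-entropy
```

With margin=0 this reduces to the conventional SoftMax loss; a positive margin makes the same predictions cost more, which is the "controller" effect described above.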
Procedia PDF Downloads 78
9461 Efficient DCT Architectures
Authors: Mr. P. Suryaprasad, R. Lalitha
Abstract:
This paper presents area- and delay-efficient architectures for the implementation of the one-dimensional and two-dimensional discrete cosine transform (DCT). These support different lengths (4, 8, 16, and 32). DCT blocks are used in different video coding standards for image compression. The 2D-DCT calculation is made using the 2D-DCT separability property, such that the whole architecture is divided into two 1D-DCT calculations by using a transpose buffer. Based on the existing 1D-DCT architecture, two different types of 2D-DCT architectures, folded and parallel, are implemented. Both of these structures use the same transpose buffer. The proposed transpose buffer occupies less area and achieves higher speed than the existing transpose buffer. Hence the area, power, and delay of both 2D-DCT architectures are reduced.
Keywords: transposition buffer, video compression, discrete cosine transform, high efficiency video coding, two dimensional picture
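The separability property the architecture exploits can be sketched in software: a 2D DCT is a row-wise 1D DCT, a transpose (the transpose buffer's role in hardware), and a second row-wise 1D DCT. A minimal NumPy illustration using the orthonormal DCT-II matrix:

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis matrix
    C = np.array([[np.cos(np.pi * k * (2 * i + 1) / (2 * n)) for i in range(n)]
                  for k in range(n)])
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def dct2(block):
    # separability: C @ X @ C.T is equivalent to two passes of 1D DCTs
    # with a transpose in between
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T
```

A constant block concentrates all of its energy in the DC coefficient, which is the behavior image-compression pipelines rely on.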
Procedia PDF Downloads 526
9460 An Improved Approach to Solve Two-Level Hierarchical Time Minimization Transportation Problem
Authors: Kalpana Dahiya
Abstract:
This paper discusses a two-level hierarchical time minimization transportation problem, which is an important class of transportation problems arising in industries. This problem has been studied by various researchers, and a number of polynomial-time iterative algorithms are available to find its solution. All the existing algorithms, though efficient, have some shortcomings. The current study proposes an alternate solution algorithm for the problem that is more efficient in terms of computational time than the existing algorithms. The results justifying the underlying theory of the proposed algorithm are given. Further, a detailed comparison of the computational behaviour of all the algorithms for randomly generated instances of this problem of different sizes validates the efficiency of the proposed algorithm.
Keywords: global optimization, hierarchical optimization, transportation problem, concave minimization
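The paper's own algorithm is not reproduced here, but the first level of such a hierarchy (minimizing the bottleneck shipment time) can be sketched as scanning the candidate times and checking feasibility with a max-flow; this is a standard textbook approach, offered only as an illustration of the problem class:

```python
from collections import deque

def feasible(times, supply, demand, T):
    # Edmonds-Karp max-flow: can all supply ship using only cells with time <= T?
    m, n = len(supply), len(demand)
    cap = {}
    def add(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)
    for i in range(m):
        add('S', ('r', i), supply[i])
    for j in range(n):
        add(('c', j), 'T', demand[j])
    for i in range(m):
        for j in range(n):
            if times[i][j] <= T:
                add(('r', i), ('c', j), sum(supply))
    flow = 0
    while True:
        parent, q = {'S': None}, deque(['S'])
        while q and 'T' not in parent:          # BFS for an augmenting path
            u = q.popleft()
            for (a, b), c in cap.items():
                if a == u and c > 0 and b not in parent:
                    parent[b] = u
                    q.append(b)
        if 'T' not in parent:
            return flow == sum(supply)
        path, v = [], 'T'
        while parent[v] is not None:            # walk back along the path
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[e] for e in path)
        for e in path:                          # push flow, update residuals
            cap[e] -= aug
            cap[(e[1], e[0])] += aug
        flow += aug

def min_bottleneck_time(times, supply, demand):
    # first hierarchical level: smallest time T admitting a feasible plan
    for T in sorted({t for row in times for t in row}):
        if feasible(times, supply, demand, T):
            return T
    return None
```

The second level of the hierarchy would then optimize a secondary objective among the plans attaining this bottleneck time.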
Procedia PDF Downloads 166
9459 Experimental Research on the Effect of Activating Temperature on Combustion and NOx Emission Characteristics of Pulverized Coal in a Novel Purification-Combustion Reaction System
Authors: Ziqu Ouyang, Kun Su
Abstract:
A novel efficient and clean coal combustion system, namely the purification-combustion system, was designed by the Institute of Engineering Thermal Physics, Chinese Academy of Sciences, in 2022. The purification system is composed of a mesothermal activating unit and a hyperthermal reductive unit, and the combustion system is composed of a mild combustion unit. In the purification-combustion system, deep in-situ removal of coal-N can be realized by matching the temperature and atmosphere in each unit, and thus NOx emission is controlled effectively. To identify methods for realizing efficient and clean coal combustion, this study investigated the effect of the activating temperature (822 °C, 858 °C, 933 °C, and 991 °C), the key factor affecting system operation, on the combustion and NOx emission characteristics of pulverized coal in a 30 kW purification-combustion test bench. The results showed that the activating temperature affected the combustion and NOx emission characteristics significantly. As the activating temperature increased, the temperature in the mild combustion unit first increased and then decreased, and the temperature change in the lower part was much greater than that in the upper part. Moreover, the main combustion region was always located at the top of the unit under different activating temperatures, and the combustion intensity along the unit weakened gradually. Increasing the activating temperature excessively could destroy the reductive atmosphere early in the upper part of the unit, which was not conducive to the full removal of coal-N in the reductive coal char. As the activating temperature increased, the combustion efficiency first increased and then decreased, while the NOx emission first decreased and then increased, illustrating that increasing the activating temperature properly promoted efficient and clean coal combustion, but there was a limit to this improvement.
In this study, the optimal activating temperature was 858 °C. Hence, this research illustrates that properly increasing the activating temperature can both improve combustion efficiency and reduce NOx emission, thereby guaranteeing clean and efficient coal combustion.
Keywords: activating temperature, combustion characteristics, NOx emission, purification-combustion system
Procedia PDF Downloads 96
9458 Brittle Fracture Tests on Steel Bridge Bearings: Application of the Potential Drop Method
Authors: Natalie Hoyer
Abstract:
Usually, steel structures are designed for the upper region of the steel toughness-temperature curve. To address the reduced toughness properties in the temperature transition range, additional safety assessments based on fracture mechanics are necessary. These assessments enable the appropriate selection of steel materials to prevent brittle fracture. In this context, recommendations were established in 2011 to regulate the appropriate selection of steel grades for bridge bearing components. However, these recommendations are no longer fully aligned with more recent insights: designing bridge bearings and their components in accordance with DIN EN 1337 and the relevant sections of DIN EN 1993 has led to an increasing trend of using large plate thicknesses, especially for long-span bridges. However, these plate thicknesses surpass the application limits specified in the national annex of DIN EN 1993-2. Furthermore, compliance with the regulations outlined in DIN EN 1993-1-10 regarding material toughness and through-thickness properties requires some further modifications. Therefore, these standards cannot be directly applied to the material selection for bearings without additional information. In addition, recent findings indicate that certain bridge bearing components are subjected to high fatigue loads, which must be considered in structural design, material selection, and calculations. To address this issue, the German Center for Rail Traffic Research initiated a research project aimed at developing a proposal to enhance the existing standards. This proposal seeks to establish guidelines for the selection of steel materials for bridge bearings to prevent brittle fracture, particularly for thick plates and components exposed to specific fatigue loads. The results derived from theoretical analyses, including finite element simulations and analytical calculations, are verified through component testing on a large scale.
During these large-scale tests, in which a brittle failure is deliberately induced in a bearing component, an artificially generated defect is introduced into the specimen at the predetermined hotspot. Subsequently, a dynamic load is imposed until the crack initiation process occurs, replicating realistic conditions akin to a sharp notch resembling a fatigue crack. To stop the action of the dynamic load in time, it is important to precisely determine the point at which the crack transitions from stable to unstable growth. To achieve this, the potential drop measurement method is employed. The paper discusses the choice of measurement method (alternating current potential drop (ACPD) or direct current potential drop (DCPD)), presents results from correlations with the created FE models, and proposes a new approach to introduce beach marks into the fracture surface within the framework of potential drop measurement.
Keywords: beach marking, bridge bearing design, brittle fracture, design for fatigue, potential drop
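For DCPD crack sizing, a widely used relation between the measured voltage ratio and crack length is Johnson's formula for a through-crack in a plate; the sketch below is a generic illustration of that formula, not the calibration actually used in the tests described, and all numerical inputs are assumptions:

```python
from math import pi, cos, acos, cosh, acosh

def johnson_crack_length(U, U0, a0, y, W):
    """Estimate crack length from a DCPD voltage ratio (Johnson's formula).

    U  : measured potential drop
    U0 : reference potential drop at the known initial crack length a0
    y  : half the voltage-probe spacing across the crack plane
    W  : specimen width (all lengths in consistent units)
    """
    k = cosh(pi * y / (2 * W))
    inner = (U / U0) * acosh(k / cos(pi * a0 / (2 * W)))
    return (2 * W / pi) * acos(k / cosh(inner))
```

At U = U0 the formula returns the initial crack length a0, and a rising voltage ratio maps monotonically to a growing crack, which is what allows the load to be stopped just before unstable growth.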
Procedia PDF Downloads 46
9457 Numerical Investigation on Feasibility of Electromagnetic Wave as Water Hardness Detection in Water Cooling System Industrial
Authors: K. H. Teng, A. Shaw, M. Ateeq, A. Al-Shamma'a, S. Wylie, S. N. Kazi, B. T. Chew
Abstract:
A numerical and experimental study of using a novel electromagnetic wave technique to detect water hardness concentration is presented in this paper. Simulation is a powerful and efficient engineering method that allows quick and accurate prediction of various engineering problems. The RF module is used in this research to predict and design electromagnetic wave propagation and the resonance effect of a guided wave to detect water hardness concentration in terms of frequency domain, eigenfrequency, and mode analysis. A cylindrical cavity resonator is simulated and designed for the electric field of the fundamental mode (TM010). With the finite volume method, the three-dimensional governing equations were discretized. Boundary conditions for the simulation were the cavity material (aluminum), two ports comprising a transmitting and a receiving port, and the assumption of vacuum inside the cavity. The designed model successfully simulated the fundamental mode and extracted the S21 transmission signal within the 2.1–2.8 GHz region. The signal spectrum under the effect of the port selection technique and the dielectric properties of different water concentrations was studied. A linear increase in magnitude is observed in the frequency domain as concentration increases. The numerical results were closely validated by the experimentally available data. Hence, it is concluded that the COMSOL simulation package is capable of providing acceptable data for microwave research.
Keywords: electromagnetic wave technique, frequency domain, signal spectrum, water hardness concentration
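For an ideal empty cylindrical cavity, the TM010 resonance depends only on the radius, which gives a quick sanity check on the simulated band. The radius below is an illustrative assumption chosen to land in the reported 2.1–2.8 GHz window, not a dimension from the paper:

```python
from math import pi

C0 = 299_792_458.0  # speed of light in vacuum, m/s
X01 = 2.404826      # first root of the Bessel function J0

def tm010_frequency(radius_m):
    # ideal-cavity TM010 resonance: f = c * x01 / (2 * pi * a);
    # independent of cavity height, and shifted downward in practice by
    # any dielectric (e.g. the water sample) loading the cavity
    return C0 * X01 / (2 * pi * radius_m)
```

For example, a radius of about 0.047 m places the empty-cavity resonance near 2.44 GHz, inside the studied S21 window.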
Procedia PDF Downloads 275
9456 Applied Methods for Lightweighting Structural Systems
Authors: Alireza Taghdiri, Sara Ghanbarzade Ghomi
Abstract:
With gravity load reduction in the structural and non-structural components, lightweight construction can be achieved, along with improvements in efficiency and functional specifications. The advantages of lightweight construction can be examined at two levels. The first is the mass reduction of the load-bearing structure, which results in increased internal useful space; the other is the mass reduction of the building, which decreases the effects of seismic load as a result. In order to achieve this goal, the essential building material specifications and the optimum load-bearing geometry of structural systems and elements have to be considered, so lightweight material selection, particularly lightweight aggregate for building components, is the first step of lightweight construction. In the next step, in addition to selecting prominent samples of Iran's traditional architecture, the process of improving these works is analyzed from the viewpoints of structural efficiency and lightweighting, and practical methods of lightweight construction are extracted. The optimum design of the load-bearing geometry of the structural system has to be considered not only in the structural system elements but also in their composition, and the selection of dimensions, proportions, forms, and optimum orientations can lead to maximum material efficiency in bearing loads and stresses.
Keywords: gravity load, lightweighting structural system, load bearing geometry, seismic behavior
Procedia PDF Downloads 525
9455 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that oftentimes incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally-, but not globally-, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e., sensors, CPUs, modular/auxiliary access, etc.) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs.
Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas such previous work has been focused on aerospace systems and conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) as well as popular multi-sensor data fusion models and techniques. Furthermore, statistical performance features of this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
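The core mechanics of tradespace exploration (enumerate every design combination, score each with a multi-attribute utility, keep the non-dominated set) can be sketched deterministically as below. Every component name and number is invented for illustration; the study's non-deterministic extension would replace the fixed utilities with distributions:

```python
import itertools

# hypothetical design variables: (name, utility contribution, cost)
SENSORS = [("lidar", 9, 120), ("radar", 6, 40), ("camera", 4, 15)]
CPUS    = [("edge", 3, 25), ("server", 8, 90)]
FUSION  = [("kalman", 5, 10), ("bayes", 7, 30)]

def enumerate_tradespace():
    # MATE-style exhaustive enumeration with a simple additive utility
    designs = []
    for s, c, f in itertools.product(SENSORS, CPUS, FUSION):
        designs.append(((s[0], c[0], f[0]), s[1] + c[1] + f[1], s[2] + c[2] + f[2]))
    return designs

def pareto_front(designs):
    # keep designs that no other design strictly dominates (>= utility, <= cost)
    front = []
    for name, u, cost in designs:
        dominated = any(u2 >= u and c2 <= cost and (u2 > u or c2 < cost)
                        for _, u2, c2 in designs)
        if not dominated:
            front.append((name, u, cost))
    return front
```

Plotting utility against cost for all enumerated designs and highlighting the Pareto front is the standard way MATE results are presented to stakeholders.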
Procedia PDF Downloads 191
9454 Event Related Brain Potentials Evoked by Carmen in Musicians and Dancers
Authors: Hanna Poikonen, Petri Toiviainen, Mari Tervaniemi
Abstract:
Event-related potentials (ERPs) evoked by simple tones in the brain have been extensively studied. However, in reality the music surrounding us is spectrally and temporally complex and dynamic. Thus, research using natural sounds is crucial to understanding the operation of the brain in its natural environment. Music is an excellent example of natural stimulation, which, in various forms, has always been an essential part of different cultures. In addition to sensory responses, music elicits vast cognitive and emotional processes in the brain. When compared to laymen, professional musicians have stronger ERP responses when processing individual musical features in simple tone sequences, such as changes in pitch, timbre and harmony. Here we show that the ERP responses evoked by rapid changes in individual musical features are more intense in musicians than in laymen, also while listening to long excerpts of the composition Carmen. Interestingly, for professional dancers, the amplitudes of the cognitive P300 response are weaker than for musicians but still stronger than for laymen. Also, the cognitive P300 latencies of musicians are significantly shorter, whereas the latencies of laymen are significantly longer. In contrast, sensory N100 responses do not differ in amplitude or latency between musicians and laymen. These results, acquired with a novel ERP methodology for natural music, suggest that we can take the leap of studying the brain with long pieces of natural music using the ERP method of electroencephalography (EEG), as has already been done with functional magnetic resonance imaging (fMRI), as these two brain imaging methods complement each other.
Keywords: electroencephalography, expertise, musical features, real-life music
Procedia PDF Downloads 485
9453 Economical and Technical Analysis of Urban Transit System Selection Using TOPSIS Method According to Constructional and Operational Aspects
Authors: Ali Abdi Kordani, Meysam Rooyintan, Sid Mohammad Boroomandrad
Abstract:
Nowadays, one of the most important problems in megacities is public transportation and satisfying citizens with this system in order to decrease traffic congestion and air pollution. Accordingly, to improve transit ridership and increase travel safety, new transportation systems such as Bus Rapid Transit (BRT), tram, and monorail have expanded, each with different merits and demerits. That is why comparing different systems for a systematic selection of public transportation systems in a big city like Tehran, which has numerous problems in terms of traffic and pollution, is essential. In this paper, the advantages and feasibility of using monorail, tram and BRT systems, which are widely used in most megacities all over the world, are investigated. For Tehran, using SPSS statistical analysis software and the TOPSIS method, these three modes are compared to each other and the results are assessed. Experts, who are experienced in the transportation field, answered the prepared matrix questionnaire to rate each public transportation mode (tram, monorail, and BRT). The results according to the experts' judgments show that monorail has the first priority, tram the second, and BRT the third, according to the considered indices: execution costs, wasted time, depreciation, pollution, operation costs, travel time, passenger satisfaction, benefit-to-cost ratio and traffic congestion.
Keywords: BRT, costs, monorail, pollution, tram
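The TOPSIS procedure itself (normalize, weight, measure distance to the ideal and anti-ideal alternatives, rank by closeness) can be sketched in a few lines of NumPy. The scores and weights below are invented for illustration and are not the experts' actual questionnaire data:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix  : alternatives x criteria score matrix
    weights : criterion weights
    benefit : True where larger is better, False for cost-type criteria
    """
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))  # vector normalisation
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_worst = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)  # closeness: higher = better
```

With two illustrative criteria (a cost-type one and a benefit-type one), an alternative that is best on both dominates the ranking, which mirrors how the nine indices above combine into a single priority order.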
Procedia PDF Downloads 181
9452 Multi-Stage Classification for Lung Lesion Detection on CT Scan Images Applying Medical Image Processing Technique
Authors: Behnaz Sohani, Sahand Shahalinezhad, Amir Rahmani, Aliyu Aliyu
Abstract:
Recently, medical imaging and specifically medical image processing has become one of the most dynamically developing areas of medical science. It has led to the emergence of new approaches to the prevention, diagnosis, and treatment of various diseases. In the process of diagnosing lung cancer, medical professionals rely on computed tomography (CT) scans, in which failure to correctly identify masses can lead to incorrect diagnosis or sampling of lung tissue. Identification and demarcation of masses in terms of detecting cancer within lung tissue are critical challenges in diagnosis. In this work, a segmentation system based on image processing techniques has been applied for detection purposes. In particular, the use and validation of a novel lung cancer detection algorithm have been presented through simulation. This has been performed employing CT images based on multilevel thresholding. The proposed technique consists of segmentation, feature extraction, and feature selection and classification. In more detail, the features with useful information are selected after feature extraction. Eventually, the output image of lung cancer is obtained with 96.3% accuracy and 87.25%. The purpose of feature extraction applying the proposed approach is to transform the raw data into a more usable form for subsequent statistical processing. Future steps will involve employing the current feature extraction method to achieve more accurate resulting images, including further details available to machine vision systems to recognise objects in lung CT scan images.
Keywords: lung cancer detection, image segmentation, lung computed tomography (CT) images, medical image processing
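The multilevel thresholding at the heart of the segmentation stage can be illustrated with an exhaustive multilevel Otsu variant: choose the thresholds that maximise the between-class variance of the grey-level histogram. This is a generic sketch (the paper's exact thresholding scheme is not specified here), and the 64-level histogram is an assumption to keep the search small:

```python
import numpy as np
from itertools import combinations

def multilevel_otsu(image, n_thresholds=2, levels=64):
    # exhaustive multilevel Otsu: choose thresholds maximising between-class
    # variance (equivalently, the weighted sum of squared class means)
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    grey = np.arange(levels)
    best, best_t = -1.0, None
    for t in combinations(range(1, levels), n_thresholds):
        edges = (0,) + t + (levels,)
        score = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            w = p[lo:hi].sum()          # class probability
            if w > 0:
                mu = (grey[lo:hi] * p[lo:hi]).sum() / w
                score += w * mu * mu    # class weight times squared class mean
        if score > best:
            best, best_t = score, t
    return best_t
```

On a CT slice this would separate, for example, background, healthy tissue, and denser mass candidates into distinct intensity classes before feature extraction.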
Procedia PDF Downloads 103