Search results for: Maximum Entropy Bootstrapping approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17253


16503 Nalanda ‘School of Joy’: Teaching Learning Strategies and Support System, for Implementing Child-Friendly Education in Bangladesh

Authors: Sufia Ferdousi

Abstract:

Child-friendly education (CFE) is very important for children, especially in the early years, because it fosters the holistic development of a child. Teachers play a key role in creating child-friendly education. This study intends to learn about child-friendly education in Bangladesh; its purpose is to explore how CFE is being practiced there. The study pursued this purpose through a case study investigation. One school, named Nalanda, was selected for the study because it claims to run through a CFE approach. The objectives of the study were to identify how this school differs from other schools in Bangladesh, to explore the overall teaching-learning system (curriculum, teaching strategies, and assessment), and to investigate the support system for child-friendly education provided to teachers through training or mentoring. The case study used qualitative methods to obtain maximum information from students, parents, teachers and school authorities. Data were collected through 3 classroom observations; interviews with 1 teacher, 1 head teacher and 1 trainer; and FGDs with 10 students and 6 parents. It was found that Nalanda differs from other schools in Bangladesh in terms of parents' motivation regarding the school curriculum and the sufficiency of teachers' knowledge of joyful, child-friendly learning. The students took part in extracurricular activities alongside the national curriculum. Teachers showed particular strength in teaching-learning strategies, use of materials and assessment, and Nalanda gives strong support for teacher training. In conclusion, the Nalanda School in Dhaka was found to meet the requirements of child-friendly education.

Keywords: child friendly education, overall teaching learning system, the requirements of child-friendly education, the alternative education approach

Procedia PDF Downloads 229
16502 Suitability of Black Box Approaches for the Reliability Assessment of Component-Based Software

Authors: Anjushi Verma, Tirthankar Gayen

Abstract:

Although reliability is an important attribute of quality, especially for mission-critical systems, no versatile model exists even today for the reliability assessment of component-based software. The existing black box models make various assumptions that may not always be realistic and may be quite contrary to the actual behaviour of software. They focus on observing the manner in which the system behaves without considering the structure of the system, the components composing the system, their interconnections, dependencies, usage frequencies, etc. As a result, the entropy (uncertainty) in assessments made with these models is quite high. Although some models are based on the operation profile, it can be extremely difficult to obtain the exact operation profile for a given operation. This paper discusses the drawbacks, deficiencies and limitations of black box approaches from the perspective of various authors and finally proposes a conceptual model for the reliability assessment of software.

Keywords: black box, faults, failure, software reliability

Procedia PDF Downloads 433
16501 Effect of Fast and Slow Tempo Music on Muscle Endurance Time

Authors: Rohit Kamal, Devaki Perumal Rajaram, Rajam Krishna, Sai Kumar Pindagiri, Silas Danielraj

Abstract:

Introduction: According to the WHO Global Health Observatory, at least 2.8 million people die each year as a result of being overweight or obese, mainly because of the adverse metabolic effects of excess weight on blood pressure, lipid profile (especially cholesterol) and insulin resistance. To achieve optimum health, WHO has set the target BMI range at 18.5 to 24.9 kg/m2. With the modernization of lifestyle, physical exercise in the form of work is no longer common, so an effective way to burn calories and achieve an optimum BMI is the need of the hour. Studies have shown that exercising for more than 60 minutes per day helps to maintain weight, and that to reduce weight exercise should be done for 90 minutes a day; moderate exercise for about 30 minutes is essential for burning calories. People with low endurance fail to perform even low-intensity exercise for a minimal time. Hence, it is necessary to find an effective method to increase endurance time. Methodology: This study was approved by the Institutional Ethical Committee of our college. After obtaining written informed consent, 25 apparently healthy males between 18 and 20 years of age were selected. Subjects with muscular disorders, hypertension or diabetes, smokers, alcoholics, and those taking drugs affecting muscle strength were excluded. To determine the endurance time, maximum voluntary contraction (MVC) was measured by asking the participants to squeeze a hand grip dynamometer as hard as possible and hold it for 3 seconds. This procedure was repeated thrice and the average of the three readings was taken as the maximum voluntary contraction.
The participant was then asked to squeeze the dynamometer and hold it at 70% of the maximum voluntary contraction while listening to fast tempo music played for about ten minutes; the participant then relaxed for ten minutes and held the dynamometer at 70% of MVC while listening to slow tempo music. To avoid the bias of habituation to the procedure, the order of the fast and slow tempo music was alternated. The time for which participants could hold at 70% of MVC, measured with a stopwatch, was taken as the endurance time. Results: The mean values of the endurance time during fast and slow tempo music were compared for all subjects. The mean MVC was 34.92 N. The mean endurance time was 21.8 (16.3) seconds with slow tempo music, which was longer than with fast tempo music, for which the mean endurance time was 20.6 (11.7) seconds. The preference was greater for slow tempo music than for fast tempo music. Conclusion: Music played during exercise, by some as yet unknown mechanism, helps to increase endurance time by alleviating the symptoms of lactic acid accumulation.

Keywords: endurance time, fast tempo music, maximum voluntary contraction, slow tempo music

Procedia PDF Downloads 284
16500 Modeling Karachi Dengue Outbreak and Exploration of Climate Structure

Authors: Syed Afrozuddin Ahmed, Junaid Saghir Siddiqi, Sabah Quaiser

Abstract:

Various studies have reported that global warming causes unstable climate and many serious impacts on the physical environment and public health. The increasing incidence of dengue fever is now a priority health issue and has become a health burden for Pakistan. This study investigates whether spatial patterns of the environment cause the emergence of, or an increasing rate of, dengue fever incidence, affecting the population and its health. The climatic (environmental) data and the dengue fever (DF) data were processed by coding, editing, tabulating, recoding and restructuring (re-tabulating), and finally different statistical methods, techniques and procedures were applied for the evaluation. Five climatic variables were studied: precipitation (P), maximum temperature (Mx), minimum temperature (Mn), humidity (H) and wind speed (W), collected from 1980 to 2012. The dengue cases in Karachi from 2010 to 2012 were reported on a weekly basis. Principal component analysis was applied to explore the climatic variables and/or the climatic structure that may influence the increase or decrease in the number of dengue fever cases in Karachi. PC1 for all periods represents general atmospheric conditions; PC2 for the dengue period is a contrast between precipitation and wind speed; PC3 is the weighted difference between maximum temperature and wind speed; PC4 for the dengue period is a contrast between maximum temperature and wind speed. Negative binomial and Poisson regression models were used to relate dengue fever incidence to the climatic variables and principal component scores. Relative humidity is estimated to increase the chances of dengue occurrence by 1.71%, maximum temperature by 19.48%, and minimum temperature by 11.51%. Wind speed affects the weekly occurrence of dengue fever negatively, by 7.41%.
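The principal-component step described above can be sketched as follows; this is an illustrative reconstruction with hypothetical data, not the study's actual records or code, and the regression on the resulting scores would be fitted separately:

```python
import numpy as np

def climate_pc_scores(climate, n_components=4):
    """PCA of standardized climate variables (columns e.g. P, Mx, Mn, H, W).

    Returns the weekly PC scores, the loadings, and the fraction of
    total variance explained by each component.
    """
    z = (climate - climate.mean(axis=0)) / climate.std(axis=0, ddof=1)
    corr = np.corrcoef(z, rowvar=False)        # correlation matrix
    eigval, eigvec = np.linalg.eigh(corr)      # symmetric eigendecomposition
    order = np.argsort(eigval)[::-1]           # largest variance first
    eigval, eigvec = eigval[order], eigvec[:, order]
    scores = z @ eigvec[:, :n_components]      # PC scores per week
    return scores, eigvec[:, :n_components], eigval / eigval.sum()
```

The scores returned here would then enter a Poisson or negative binomial regression as covariates, alongside the raw climatic variables.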

Keywords: principal component analysis, dengue fever, negative binomial regression model, Poisson regression model

Procedia PDF Downloads 427
16499 Probability Sampling in Matched Case-Control Study in Drug Abuse

Authors: Surya R. Niraula, Devendra B Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell

Abstract:

Background: Although random sampling is generally considered the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling, despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users who then identified “friend controls,” and the other using a random sample of non-drug users (controls) who then identified “friend cases.” Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using the bootstrap method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed a wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when the model was fitted to either data set (0.93 for the random-sample data vs. 0.91 for the snowball-sample data, p=0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be statistically superior to snowball sampling and may represent a viable alternative to it.
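The bootstrap precision check described above can be sketched generically; this is a hypothetical helper, not the authors' code, and a real analysis would refit the conditional logistic model on every resample rather than compute a simple statistic:

```python
import numpy as np

def bootstrap_se(data, statistic, n_boot=100, seed=42):
    """Standard error of `statistic` across n_boot resamples drawn
    with replacement from the rows of `data`."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = [statistic(data[rng.integers(0, n, size=n)]) for _ in range(n_boot)]
    return np.std(reps, ddof=1)
```

A bootstrap SE that is wide relative to the point estimate signals the kind of instability the snowball-sample model showed; stable SEs, as in the random-sample data set, leave significance conclusions unchanged.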

Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling

Procedia PDF Downloads 477
16498 Growth and Yield Potential of Quinoa Genotypes on Salt-Affected Soils

Authors: Shahzad M. A. Basra, Shahid Iqbal, Irfan Afzal, Hafeez-ur-Rehman

Abstract:

Quinoa, a facultative halophyte, is a new introduction to Pakistan owing to its superior nutritional profile and its abiotic stress tolerance, especially against salinity. The present study was conducted to explore the halophytic behavior of quinoa. Four quinoa genotypes (A1, A2, A7 and A9) were evaluated against high salinity (control, 100, 200, 300 and 400 mM). Evaluation was made on the basis of ionic analysis (Na+, K+ and K+:Na+ ratio in shoot) and root and shoot fresh and dry weight at the four-leaf stage. Seedling growth, i.e. fresh and dry weight of shoot and root, increased at 100 mM salinity and then decreased gradually with increasing salinity level in all genotypes. Mineral analysis indicated that A2 and A7 are more tolerant, having low Na+ and high K+ concentrations as compared to A1 and A9. The same genotypes were also evaluated against high salinity (control, 10, 20, 30 and 40 dS m-1) in pot culture during 2012-13. It was found that with an increase in salinity up to 10 dS m-1 the plant height, stem diameter and yield-related traits increased, but they decreased with further increases in salinity. The same trend was observed in ionic contents. Maximum grain yield was achieved by A7 (100 g plant-1) followed by A2 (82 g plant-1) at a salinity level of 10 dS m-1. The next phase was carried out in field settings using the salt-tolerant genotypes (A2 and A7) at the Crop Physiology Research Area Farm (non-saline soil as control) and Proka Farm (salt-affected, with EC up to 15 dS m-1), University of Agriculture, Faisalabad, and at the Soil Salinity Research Institute (SSRI) Farm, Pindi Bhattian (one normal field as control and two salt-affected fields with EC values up to 15 and 30 dS m-1) during 2013-14. Genotype A7 showed maximum growth and gave the maximum yield (3200 kg ha-1) at Proka Farm, which was statistically at par with the yields obtained on the normal soils of Faisalabad.
Genotype A7 also gave the maximum yield (2800 kg ha-1) on the normal field at Pindi Bhattian, followed by 2340 kg ha-1 on the salt-affected field (15 dS m-1) at the same location.

Keywords: quinoa, salinity, halophyte, genotype

Procedia PDF Downloads 551
16497 Model-Based Global Maximum Power Point Tracking at Photovoltaic String under Partial Shading Conditions Using Multi-Input Interleaved Boost DC-DC Converter

Authors: Seyed Hossein Hosseini, Seyed Majid Hashemzadeh

Abstract:

Solar energy is one of the remarkable renewable energy sources, with particular characteristics such as being unlimited, causing no environmental pollution, and being freely accessible. Generally, solar energy can be used in thermal and photovoltaic (PV) forms. The installation cost of a PV system is very high. Additionally, because of its dependence on environmental conditions such as solar radiation and ambient temperature, the electrical power generation of this system is unpredictable, and without power electronic devices there is no guarantee of maximum power delivery at its output. Maximum power point tracking (MPPT) should be used to extract the maximum power of a PV string. MPPT is one of the essential parts of a PV system; without it, it would be impossible to reach the maximum available PV string power, and high losses would occur in the PV system. One of the notable challenges in MPPT is partial shading conditions (PSC). Under PSC, the output photocurrent of a shaded PV module is less than the PV string current. The difference between these currents passes through the module's internal parallel resistance and creates a large negative voltage across the shaded modules. This significant negative voltage damages the shaded PV module; this condition is called the hot-spot phenomenon. An anti-parallel diode, known as the bypass diode, is inserted across the PV module to prevent this phenomenon. Because of the behavior of the bypass diodes under PSC, the P-V curve of the PV string has several peaks; the peak that yields the maximum available power is the global peak. Model-based global MPPT (GMPPT) methods can estimate the optimal point faster than other GMPPT approaches. Centralized, modular, and interleaved DC-DC converter topologies are the main structures that can be used for GMPPT in a PV string.
The centralized structure has several problems, such as current mismatch losses in the PV string, loss of power from shaded modules that are bypassed by their diodes under PSC, and the need to connect many PV modules in series to reach the desired voltage level. In the modular structure, each PV module is connected to a DC-DC converter; as the power demanded from the PV string increases, the number of DC-DC converters used in the PV system increases, so the cost of the modular structure is very high. Model-based GMPPT can instead be implemented through a multi-input interleaved boost DC-DC converter to increase power extraction from the PV string and reduce hot-spot and current mismatch errors under different environmental conditions and variable load circumstances. The interleaved boost DC-DC converter has many advantages over the other structures mentioned, such as high reliability and efficiency, better regulation of the DC voltage at the DC link, mitigation of notable errors such as module current mismatch and the hot-spot phenomenon, and reduced voltage stress on the power switches.
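The multi-peak P-V behavior under PSC can be illustrated with a minimal global-peak search; the two-hump curve below uses toy numbers, and a model-based method would predict such a curve from the PV model rather than sweep measurements:

```python
import numpy as np

def global_mpp(v, p):
    """Return the (voltage, power) pair at the global maximum of a
    P-V curve that may contain several local peaks under PSC."""
    i = int(np.argmax(p))
    return v[i], p[i]

# Toy two-peak string curve caused by bypass-diode conduction
# (illustrative shape only, not a physical PV model):
v = np.linspace(0.0, 60.0, 601)
p = 180.0 * np.exp(-((v - 20.0) / 6.0) ** 2) \
  + 240.0 * np.exp(-((v - 45.0) / 7.0) ** 2)
vg, pg = global_mpp(v, p)   # global peak, near 45 V here
```

A local hill-climbing tracker started near 20 V would lock onto the 180 W local peak; the point of GMPPT is to land on the 240 W global one.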

Keywords: solar energy, photovoltaic systems, interleaved boost converter, maximum power point tracking, model-based method, partial shading conditions

Procedia PDF Downloads 111
16496 Reduced Power Consumption by Randomization for DSI3

Authors: David Levy

Abstract:

The newly released Distributed System Interface 3 (DSI3) Bus Standard specification defines 3 modulation levels from which 16 valid symbols are coded. This structure creates power consumption variations, depending on the transmitted data, of a factor of more than 2 between minimum and maximum. The power generation unit therefore has to be built to sustain the worst-case maximum consumption at all times. This paper proposes a method to reduce both the average and the worst-case current consumption. The transmitter randomizes the data using several pseudo-random sequences, estimates the energy consumption of the generated frames, and selects for transmission the one that consumes the least. The transmitter also prepends the index of the pseudo-random sequence, which is not itself randomized, to allow the receiver to recover the original data using the correct sequence. We show that when the frame occupies most of the DSI3 synchronization period, average power consumption is reduced by up to 13% and worst-case power consumption by 17.7%.
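The select-cheapest-scrambling idea can be sketched as follows; the XOR scrambling and the per-symbol energy table are illustrative assumptions, not the DSI3 specification's actual coding or cost model:

```python
def frame_cost(symbols, cost):
    """Total (relative) transmit energy of a symbol sequence."""
    return sum(cost[s] for s in symbols)

def scramble_min_energy(frame, prbs_list, cost):
    """Scramble `frame` (4-bit symbols) with each candidate pseudo-random
    sequence, keep the cheapest result, and prepend the sequence index
    (which is sent un-scrambled so the receiver can undo it)."""
    scrambled = [[s ^ r for s, r in zip(frame, prbs)] for prbs in prbs_list]
    best = min(range(len(prbs_list)),
               key=lambda k: frame_cost(scrambled[k], cost))
    return [best] + scrambled[best]

def descramble(packet, prbs_list):
    """Receiver side: recover the original frame via the prepended index."""
    idx, body = packet[0], packet[1:]
    return [s ^ r for s, r in zip(body, prbs_list[idx])]
```

Averaged over data, the cheapest of several scrambled candidates costs less than any fixed encoding, which is where the average and worst-case reductions come from.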

Keywords: DSI3, energy, power consumption, randomization

Procedia PDF Downloads 519
16495 Architecture for QoS Based Service Selection Using Local Approach

Authors: Gopinath Ganapathy, Chellammal Surianarayanan

Abstract:

Services are growing rapidly, and they are generally aggregated into composite services to accomplish complex business processes. Several services may offer the same required function for a particular task in a composite service, so a choice has to be made among functionally similar alternatives. Quality of Service (QoS) acts as the discriminating factor in deciding which component services should be selected to satisfy a user's quality requirements during service composition. There are two categories of approaches for QoS-based service selection: global and local. Global approaches are known to be NP-hard in time and offer poor scalability in large-scale composition. As an alternative, local selection methods, which reduce the search space by breaking the large, complex problem of selecting services for the workflow into independent sub-problems of selecting services for individual tasks, have emerged. In this paper, a distributed architecture for QoS-based service selection using local selection is presented, with an overview of the local selection methodology. The architecture describes the core components needed to implement the local approach, namely the selection manager and the QoS manager, and their functions. The selection manager consists of two components: a constraint decomposer, which decomposes the given global (workflow-level) constraints into local (task-level) constraints, and a service selector, which selects for each task the service with maximum utility that satisfies the corresponding local constraints. The QoS manager manages the QoS information at two levels: the service-class level and the individual-service level. The architecture serves as an implementation model for local selection.
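The decompose-then-select flow can be sketched as follows; the equal split of the global budget and the dictionary shape of a service are illustrative assumptions (real decomposers weight tasks by their QoS profiles, and utility would be a weighted score over several QoS attributes):

```python
def local_select(tasks, global_latency_budget):
    """Pick one service per task: decompose the workflow-level latency
    budget equally across tasks (a naive constraint decomposer), then
    choose the maximum-utility service meeting each local budget."""
    local_budget = global_latency_budget / len(tasks)
    plan = []
    for candidates in tasks:
        feasible = [s for s in candidates if s["latency"] <= local_budget]
        if not feasible:            # no service satisfies the local constraint
            return None
        plan.append(max(feasible, key=lambda s: s["utility"]))
    return plan
```

Because each task is solved independently, the search cost grows linearly with the number of tasks, which is exactly the scalability advantage local selection has over the NP-hard global formulation.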

Keywords: architecture of service selection, local method for service selection, QoS based service selection, approaches for QoS based service selection

Procedia PDF Downloads 412
16494 Analysis of User Data Usage Trends on Cellular and Wi-Fi Networks

Authors: Jayesh M. Patel, Bharat P. Modi

Abstract:

Measurements of data usage on mobile devices have demonstrated that the total data demand from users is far higher than previously articulated by measurements based solely on a cellular-centric view of smartphone usage, and the ratio of Wi-Fi to cellular traffic varies significantly between countries. This paper presents a comparison between users' cellular data usage and Wi-Fi data usage. This insight helps operators to understand the growing importance and application of yield management strategies designed to squeeze maximum returns from their investments in the networks and devices that enable the mobile data ecosystem. The transition from unlimited data plans towards tiered pricing and, in the future, towards more value-centric pricing offers significant revenue upside potential for mobile operators; but without complete insight into all aspects of smartphone customer behavior, operators are unlikely to be able to capture the maximum return from this billion-dollar market opportunity.

Keywords: cellular, Wi-Fi, mobile, smart phone

Procedia PDF Downloads 346
16493 Effect of Filter Paper Technique in Measuring Hydraulic Capacity of Unsaturated Expansive Soil

Authors: Kenechi Kurtis Onochie

Abstract:

This paper shows the use of the filter paper technique in the measurement of the matric suction of unsaturated expansive soil around the Haspolat region of Lefkosa, North Cyprus, in order to establish the soil water characteristic curve (SWCC), or soil water retention curve (SWRC). The dry filter paper approach standardized by ASTM D 5298-03 (2003), in which the filter paper is initially dry, was adopted, using Whatman No. 42 filter paper for the matric suction measurement. The maximum dry density of the soil was obtained as 2.66 kg/cm³ and the optimum moisture content as 21%. The soil was found to have a high air entry value of 1847.46 kPa, indicating finer particles, and 25% hydraulic capacity using the filter paper technique. The filter paper technique proved very useful for measuring the hydraulic capacity of unsaturated expansive soil.

Keywords: SWCC, matric suction, filter paper, expansive soil

Procedia PDF Downloads 152
16492 Estimation of Stress-Strength Parameter for Burr Type XII Distribution Based on Progressive Type-II Censoring

Authors: A. M. Abd-Elfattah, M. H. Abu-Moussa

Abstract:

In this paper, the estimation of the stress-strength parameter R = P(Y < X) is considered, where X and Y, the strength and stress respectively, are two independent random variables following the Burr Type XII distribution. The samples taken for X and Y are progressively Type-II censored. The maximum likelihood estimator (MLE) of R is obtained when the common parameter is unknown. When the common parameter is known, the MLE, the uniformly minimum variance unbiased estimator (UMVUE) and the Bayes estimator of R = P(Y < X) are obtained, and the exact confidence interval of R based on the MLE is derived. The performance of the proposed estimators is compared using computer simulation.
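A complete-sample sketch of the known-common-shape case may help fix ideas; this is not the paper's progressively censored estimator (censoring adds extra terms to the likelihood), and it assumes the parameterization F(x) = 1 - (1 + x^c)^(-k) with common shape c, under which R = P(Y < X) reduces to kY/(kX + kY):

```python
import math
import random

def burr_sample(c, k, n, rng):
    """Inverse-CDF draws from Burr XII: F(x) = 1 - (1 + x^c)^(-k)."""
    return [((1.0 - rng.random()) ** (-1.0 / k) - 1.0) ** (1.0 / c)
            for _ in range(n)]

def k_mle(xs, c):
    """Complete-sample MLE of k when the common shape c is known:
    k_hat = n / sum(log(1 + x^c))."""
    return len(xs) / sum(math.log1p(x ** c) for x in xs)

def r_mle(strength, stress, c):
    """MLE of R = P(Y < X) via invariance: plug the k MLEs into
    R = kY / (kX + kY)."""
    kx, ky = k_mle(strength, c), k_mle(stress, c)
    return ky / (kx + ky)
```

By the invariance property of MLEs, plugging the shape estimates into the closed form for R yields the MLE of R itself.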

Keywords: Burr Type XII distribution, progressive type-II censoring, stress-strength model, unbiased estimator, maximum-likelihood estimator, uniformly minimum variance unbiased estimator, confidence intervals, Bayes estimator

Procedia PDF Downloads 441
16491 Study of Rehydration Process of Dried Squash (Cucurbita pepo) at Different Temperatures and Dry Matter-Water Ratios

Authors: Sima Cheraghi Dehdezi, Nasser Hamdami

Abstract:

Air-drying is the most widely employed method for preserving fruits and vegetables. Most dried products must be rehydrated by immersion in water prior to use, so the study of rehydration kinetics, in order to optimize the rehydration phenomenon, is of great importance. Rehydration typically comprises three simultaneous processes: the imbibition of water into the dried material, the swelling of the rehydrated product, and the leaching of soluble solids into the rehydration medium. In this research, squash (Cucurbita pepo) fruits were cut into slices 0.4 cm thick and 4 cm in diameter. The squash slices were blanched in a steam chamber for 4 min, cooled to room temperature, and dehydrated in a hot air dryer, under an air flow of 1.5 m/s and an air temperature of 60°C, to a moisture content of 0.1065 kg H2O per kg d.m. Dehydrated samples were kept in polyethylene bags and stored at 4°C. Squash slices of specified weight were rehydrated by immersion in distilled water at different temperatures (25, 50, and 75°C) and various dry matter-water ratios (1:25, 1:50, and 1:100), agitated at 100 rpm. At specified time intervals, up to 300 min, the squash samples were removed from the water, and the weight, moisture content and rehydration indices of the samples were determined. The texture characteristics were examined over a 180 min period. The results showed that rehydration time and temperature had significant effects on the moisture content, water absorption capacity (WAC), dry matter holding capacity (DHC), rehydration ability (RA), and maximum force and stress of the dried squash slices. The dry matter-water ratio had a significant effect (p˂0.01) on all squash slice properties except DHC. The moisture content, WAC and RA of the squash slices increased, whereas DHC and texture firmness (maximum force and stress) decreased, with rehydration time. The maximum moisture content, WAC and RA and the minimum DHC, force and stress were observed in squash slices rehydrated in 75°C water.
The lowest moisture content, WAC and RA and the highest DHC, force and stress were observed in squash slices immersed in water at a 1:100 dry matter-water ratio. In general, for all rehydration conditions, the highest water absorption rate occurred during the first minutes of the process and then decreased. The highest rehydration rate and amount of water absorption occurred at 75°C.
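Rehydration curves of this shape (fast initial uptake that slows towards an equilibrium) are commonly summarized with the two-parameter Peleg model; the sketch below uses illustrative constants, not values fitted to the squash data:

```python
def peleg_moisture(t, m0, k1, k2):
    """Peleg rehydration model: M(t) = M0 + t / (k1 + k2 * t).

    m0 : initial moisture (kg water / kg dry matter)
    k1 : rate constant (a lower k1 means faster initial uptake,
         as observed at higher water temperature)
    k2 : capacity constant (equilibrium moisture = m0 + 1/k2)
    """
    return m0 + t / (k1 + k2 * t)
```

Fitting k1 at each water temperature would quantify the observation that the highest absorption rate occurs in the first minutes and at 75°C.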

Keywords: dry matter-water ratio, squash, maximum force, rehydration ability

Procedia PDF Downloads 302
16490 Neuro-Fuzzy Based Model for Phrase Level Emotion Understanding

Authors: Vadivel Ayyasamy

Abstract:

The present approach deals with the identification of emotions and the classification of emotional patterns at the phrase level with respect to positive and negative orientation. The proposed approach considers emotion-triggering terms, their co-occurring terms and the associated sentences for recognizing emotions, and uses Part-of-Speech tagging and Emotion Actifiers for classification. Sentence patterns are broken into phrases, and a Neuro-Fuzzy model is used for classification, resulting in 16 patterns of emotional phrases. Suitable intensities are assigned to capture the degree of emotional content present in the semantics of the patterns. These emotional phrases are assigned weights, which support deciding the positive or negative orientation of the emotions. The approach uses web documents for experimental purposes; the proposed classification approach performs well and achieves good F-scores.

Keywords: emotions, sentences, phrases, classification, patterns, fuzzy, positive orientation, negative orientation

Procedia PDF Downloads 360
16489 Development of IDF Curves for Precipitation in Western Watershed of Guwahati, Assam

Authors: Rajarshi Sharma, Rashidul Alam, Visavino Seleyi, Yuvila Sangtam

Abstract:

The intensity-duration-frequency (IDF) relationship of rainfall amounts is one of the most commonly used tools in water resources engineering, for the planning, design and operation of water resources projects and for protecting various engineering projects against design floods. The establishment of such relationships was reported as early as 1932 (Bernard); since then, many sets of relationships have been constructed for several parts of the globe. The objective of this research is to derive the IDF relationship of rainfall for the western watershed of Guwahati, Assam. These relationships are useful in the design of urban drainage works, e.g. storm sewers, culverts and other hydraulic structures. In the study, rainfall depths for 10 years, viz. 2001 to 2010, were collected from the Regional Meteorological Centre, Borjhar, Guwahati. First, the data were used to construct mass curves for rainfall durations of more than 7 hours, to calculate the maximum intensity and to form the intensity-duration curves. Gumbel's frequency analysis technique was then used to calculate, from the maximum intensities, the probable maximum rainfall intensities for return periods of 2 yr, 5 yr, 10 yr, 50 yr and 100 yr. Finally, regression analysis was used to develop the intensity-duration-frequency (IDF) curves, from which the values of the constants ‘a’, ‘b’ and ‘c’ were found. The value of ‘a’ for which the sum of squared deviations is minimum was found to be 40, with corresponding values of ‘c’ and ‘b’ of 0.744 and 1981.527, respectively. The results showed that in all cases the correlation coefficient is very high, indicating the goodness of fit of the formulae for estimating IDF curves in the region of interest.
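Gumbel's frequency analysis step can be sketched with the standard method-of-moments frequency factor; the mean and standard deviation below are placeholders, not the Guwahati statistics:

```python
import math

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def gumbel_intensity(mean, std, T):
    """Method-of-moments Gumbel (EV1) estimate of the T-year event:
        x_T = mean + K_T * std,
    with the frequency factor
        K_T = -(sqrt(6)/pi) * (gamma + ln(ln(T / (T - 1)))).
    """
    k_t = -(math.sqrt(6.0) / math.pi) \
          * (EULER_GAMMA + math.log(math.log(T / (T - 1.0))))
    return mean + k_t * std
```

Applying this to the annual maximum intensity series for each duration gives the intensity at each return period, to which the IDF regression is then fitted.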

Keywords: intensity-duration-frequency relationship, mass curve, regression analysis, correlation coefficient

Procedia PDF Downloads 223
16488 Survey of the Role of Contextualism in the Designing of Cultural Constructions Based on Rapoport Views

Authors: E. Zarei, M. Bazaei, A. Seifi, A. Keshavarzi

Abstract:

Amos Rapoport, on the basis of his anthropological approach, believed that space originates from the human body and that space and the human body influence each other. As a holistic approach in architecture, contextualism describes a collection of views in philosophy which emphasize the context in which an action, utterance, or expression occurs, and argues that, in some important respect, the action, utterance, or expression can only be understood relative to that context. Within this approach, the main goal of the study, examining the role of the cultural component in the shaping of contextualist construction, based on Amos Rapoport's anthropological approach, was pursued by a descriptive-analytic method. The results of the research indicate that in contextualist design, reference to cultural aspects is as necessary as the physical dimensions of a construction. Rapoport believed that the shape of a construction is influenced by cultural aspects, and he suggested a kind of mutual interaction between human and environment that should be considered in housing. The main goal of contextual architecture is to establish an interaction between environment, human and culture; according to this approach, a desirable design should be in harmony with these.

Keywords: Amos Rapoport, anthropology, contextual architecture, culture

Procedia PDF Downloads 384
16487 From User's Requirements to UML Class Diagram

Authors: Zeineb Ben Azzouz, Wahiba Ben Abdessalem Karaa

Abstract:

The automated extraction of a UML class diagram from natural language requirements is a highly challenging task. Many approaches, frameworks and tools have been presented in this field; nonetheless, experiments with these tools have shown that no approach works best all the time. In this context, we propose a new, accurate approach to facilitate the automatic mapping from textual requirements to a UML class diagram. Our approach integrates the best properties of statistical Natural Language Processing (NLP) techniques to reduce ambiguity when analysing natural language requirements text. In addition, it follows the best practices defined by conceptual modelling experts to determine patterns indispensable for the extraction of the basic elements and concepts of the class diagram. Once the relevant information for the class diagram is captured, an XMI document is generated and imported into a CASE tool to build the corresponding UML class diagram.

Keywords: class diagram, user’s requirements, XMI, software engineering

Procedia PDF Downloads 453
16486 Influence of Composite Adherents Properties on the Dynamic Behavior of Double Lap Bonded Joint

Authors: P. Saleh, G. Challita, R. Hazimeh, K. Khalil

Abstract:

In this paper, a 3D FEM analysis was carried out on a double lap bonded joint with composite adherents subjected to dynamic shear. The adherents are made of carbon/epoxy, while the adhesive is the epoxy Araldite 2031. The maximum average shear stress and the stress homogeneity in the adhesive layer were examined. Three fiber textures with the same fiber volume fraction were considered: UD, 2.5D and 3D; a parametric study varying the thickness and the fiber texture type of the 2.5D laminate was then carried out. Moreover, adherent dissimilarity was also investigated. It was found that the main parameter influencing the behavior is the longitudinal stiffness of the adherents: an increase in the adherents' longitudinal stiffness induces an increase in the maximum average shear stress in the adhesive layer and an improvement in the shear stress homogeneity within the joint. No remarkable improvement was observed for dissimilar adherents.
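
The two quantities examined can be made concrete with a small sketch; the stress values below are hypothetical, and the coefficient of variation is used here as one possible homogeneity indicator (the paper does not define its measure):

```python
import statistics

def shear_stats(tau):
    """Average shear stress and a simple homogeneity indicator.
    A lower coefficient of variation (std/mean) means a more
    homogeneous stress distribution along the overlap."""
    mean = statistics.fmean(tau)
    cv = statistics.pstdev(tau) / mean
    return mean, cv

# Hypothetical shear stresses (MPa) sampled along the adhesive layer
tau = [12.0, 14.0, 13.0, 15.0]
mean, cv = shear_stats(tau)
```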

Keywords: adhesive, composite adherents, impact shear, finite element

Procedia PDF Downloads 426
16485 Using Gene Expression Programming in Learning Process of Rough Neural Networks

Authors: Sanaa Rashed Abdallah, Yasser F. Hassan

Abstract:

The paper introduces an approach in which rough sets, gene expression programming and rough neural networks are used cooperatively for learning and classification support. The objective of the gene expression programming rough neural networks (GEP-RNN) approach is to obtain newly classified data with minimum error in the training and testing process. The starting point of the GEP-RNN approach is an information system, and its output is a rough neural network structure that includes the weights and thresholds yielding minimum classification error.
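
The rough-set side of the approach starts from an information system whose indiscernibility classes approximate a target concept; a minimal sketch of the standard lower/upper approximations:

```python
def approximations(equiv_classes, target):
    """Rough-set lower and upper approximations of a target concept.
    equiv_classes: indiscernibility classes (sets) partitioning the universe.
    Lower = union of classes fully inside the target (certainly members);
    upper = union of classes overlapping the target (possibly members)."""
    lower, upper = set(), set()
    for c in equiv_classes:
        if c <= target:
            lower |= c
        if c & target:
            upper |= c
    return lower, upper

# Hypothetical information system: 5 objects in 3 indiscernibility classes
blocks = [{1, 2}, {3}, {4, 5}]
lower, upper = approximations(blocks, {1, 2, 3, 4})
```

The boundary region (upper minus lower) is where a classifier such as a rough neural network must still decide.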

Keywords: rough sets, gene expression programming, rough neural networks, classification

Procedia PDF Downloads 357
16484 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach

Authors: Nina Ponikvar, Katja Zajc Kejžar

Abstract:

While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At the macro level, economic effects are usually studied with one of two mainstream approaches, i.e. either the multiplier approach or the value chain approach. Consequently, the comparability of empirical results is limited due to the use of different methodological approaches in the literature, and it is often unclear on what criteria the chosen approach was selected. Our aim is to draw attention to the difference in the scope of effects encompassed by the two most frequent methodological approaches to the valuation of the economic effects of cultural heritage at the macroeconomic level, i.e. the multiplier approach and the value chain approach. We show that while the multiplier approach provides a systematic, theory-based view of economic impacts, it requires more data and analysis, whereas the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify the part of the contribution of cultural heritage that is hidden in other sectors. Yet it is not possible to determine clearly whether the value chain method overestimates or underestimates the actual economic impact, since the direct effects risk being overestimated and double counted while not all indirect and induced effects are considered. Accordingly, the two approaches are not substitutes but complements. Consequently, a direct comparison of the estimated impacts is not possible and should not be made, given their different scope. To illustrate the difference, we apply both approaches to the case of Slovenia in the 2015-2022 period and measure the economic impact of the cultural heritage sector in terms of turnover, gross value added and employment. The empirical results clearly show that the estimate obtained with the multiplier approach is more conservative, while the estimates based on value added capture a much broader range of impacts. According to the multiplier approach, each euro in the cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value-added approach, the indirect economic effect of the "narrow" heritage sectors is amplified by the impact of cultural heritage activities on other sectors. Accordingly, every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors. In addition, each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.
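
The reported figures imply a simple Type II output multiplier, which can be checked with one line of arithmetic:

```python
# Type II output multiplier implied by the reported coefficients:
# each euro of direct output carries 0.14 euros of indirect and
# 0.44 euros of induced effects.
direct = 1.00
indirect = 0.14
induced = 0.44
output_multiplier = direct + indirect + induced  # total effect per euro
```

That is, every euro spent in the heritage sector generates 1.58 euros of total output under the multiplier approach, versus the roughly 6 euros of sales attributed by the value-added approach.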

Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, Slovenia

Procedia PDF Downloads 60
16483 Quantification of the Erosion Effect on Small Caliber Guns: Experimental and Numerical Analysis

Authors: Dhouibi Mohamed, Stirbu Bogdan, Chabotier André, Pirlot Marc

Abstract:

The effects of erosion and wear on the performance of small caliber guns have been analyzed through numerical and experimental studies, but mainly qualitative observations were performed, and correlations between the volume change of the chamber and the maximum pressure remain limited. This paper focuses on the development of a numerical model to predict the evolution of the maximum pressure as the interior shape of the chamber changes over the weapon's life phases. To fulfill this goal, an experimental campaign, followed by a numerical simulation study, is carried out. Two test barrels, 5.56x45 mm NATO and 7.62x51 mm NATO, are considered. First, a Coordinate Measuring Machine (CMM) with a contact scanning probe is used to measure the interior profile of the barrels after each 300-shot cycle until they are worn out. Simultaneously, the EPVAT (Electronic Pressure Velocity and Action Time) method with a Weibel radar is used to measure (i) the chamber pressure, (ii) the action time, and (iii) the bullet velocity in each barrel. Second, a numerical simulation study is carried out: a coupled interior ballistic model is developed using the dynamic finite element program LS-DYNA, and two different models are elaborated, (i) a coupled Eulerian-Lagrangian model using fluid-structure interaction (FSI) techniques, and (ii) a coupled thermo-mechanical finite element model using a lumped parameter model (LPM) as a subroutine. These numerical models are validated against three experimental results: (i) the muzzle velocity, (ii) the chamber pressure, and (iii) the surface morphology of fired projectiles. Results show good agreement between experiments and numerical simulations. Next, a comparison between the two models is conducted: the projectile motions, the dynamic engraving resistances and the maximum pressures are compared and analyzed. Finally, using the obtained database, a statistical correlation between the muzzle velocity, the maximum pressure and the chamber volume is established.
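
The final correlation step can be sketched as follows; the wear data below are hypothetical placeholders, not the study's measurements, and a linear fit with a correlation coefficient stands in for whatever statistical model the authors actually used:

```python
import numpy as np

# Hypothetical wear data: chamber volume increase vs. measured maximum pressure.
volume = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # % chamber volume increase
p_max = np.array([380.0, 372.0, 365.0, 356.0, 350.0])  # maximum pressure (MPa)

# Linear trend and strength of the (expected negative) correlation
slope, intercept = np.polyfit(volume, p_max, 1)
r = np.corrcoef(volume, p_max)[0, 1]
```

A strongly negative r on such data would support using chamber volume as a predictor of pressure loss over the barrel's life.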

Keywords: engraving process, finite element analysis, gun barrel erosion, interior ballistics, statistical correlation

Procedia PDF Downloads 193
16482 Self-Organization-Based Approach for Embedded Real-Time System Design

Authors: S. S. Bendib, L. W. Mouss, S. Kalla

Abstract:

This paper proposes a self-organization-based approach for real-time systems design. The issue addressed is the mapping of an application onto an architecture of heterogeneous processors while optimizing both makespan and reliability. Since this problem is NP-hard, a heuristic algorithm is used to obtain approximate solutions efficiently. The proposed approach takes into consideration the quality as well as the diversity of solutions: an alternate treatment of the two objectives produces solutions of good quality, while a self-organization mechanism based on the neighborhood structure reorganizes solutions and consequently enhances their diversity. The produced solutions make different compromises between makespan and reliability, giving users the possibility to select the solution suited to their needs.
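
The compromise set offered to the user corresponds to the non-dominated solutions in the (makespan, reliability) plane; a minimal sketch with hypothetical schedules:

```python
def pareto_front(solutions):
    """Keep solutions not dominated in (makespan, reliability):
    a schedule is dominated if another one is at least as fast
    and at least as reliable (and differs from it)."""
    return [s for s in solutions
            if not any(o["makespan"] <= s["makespan"]
                       and o["reliability"] >= s["reliability"]
                       and o != s
                       for o in solutions)]

# Hypothetical mappings produced by the heuristic
solutions = [
    {"makespan": 10, "reliability": 0.90},
    {"makespan": 12, "reliability": 0.95},
    {"makespan": 15, "reliability": 0.90},  # dominated by the first
]
front = pareto_front(solutions)
```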

Keywords: embedded real-time systems design, makespan, reliability, self-organization, compromises

Procedia PDF Downloads 116
16481 Tuning Cubic Equations of State for Supercritical Water Applications

Authors: Shyh Ming Chern

Abstract:

Cubic equations of state (EoS), popular due to their simple mathematical form, ease of use, semi-theoretical nature and reasonable accuracy, are normally fitted to vapor-liquid equilibrium P-v-T data. As a result, they often show poor accuracy in the region near and above the critical point. In this study, the performance of the renowned Peng-Robinson (PR) and Patel-Teja (PT) EoS's around the critical region has been examined against the P-v-T data of water. Both display large deviations at the critical point. For instance, the PR EoS exhibits discrepancies as high as 47% for the specific volume, 28% for the enthalpy departure and 43% for the entropy departure at the critical point. It is shown that incorporating P-v-T data of the supercritical region into the retuning of a cubic EoS can dramatically improve its performance above the critical point. Adopting a retuned acentric factor of 0.5491 instead of the genuine value of 0.344 for water in the PR EoS, and a new F of 0.8854 instead of the original value of 0.6898 for water in the PT EoS, reduces the discrepancies to about one third or less.
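
A minimal sketch of the pressure-explicit Peng-Robinson form shows how swapping in the retuned acentric factor changes the predicted pressure; the supercritical state point chosen below is illustrative, not one of the study's test conditions:

```python
import math

R = 8.314  # J/(mol*K)

def pr_pressure(T, v, Tc, Pc, omega):
    """Peng-Robinson EoS: P = RT/(v-b) - a(T)/(v^2 + 2bv - b^2),
    with the standard kappa(omega) and alpha(T) correlations."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    return R * T / (v - b) - a / (v**2 + 2.0 * b * v - b**2)

# Water critical constants (approximate) and an illustrative supercritical state
Tc, Pc = 647.1, 22.064e6          # K, Pa
T, v = 700.0, 5.0e-5              # K, molar volume in m^3/mol

p_orig = pr_pressure(T, v, Tc, Pc, omega=0.344)    # genuine acentric factor
p_tuned = pr_pressure(T, v, Tc, Pc, omega=0.5491)  # retuned value from the study
```

Above the critical temperature the retuned omega lowers alpha(T) and hence the attractive term, shifting the predicted pressure; the study's claim is that this shift brings the EoS closer to measured supercritical P-v-T data.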

Keywords: equation of state, EoS, supercritical water, SCW

Procedia PDF Downloads 514
16480 Growth Performance and Nutrient Digestibility of Cirrhinus mrigala Fingerlings Fed on Sunflower Meal Based Diet Supplemented with Phytase

Authors: Syed Makhdoom Hussain, Muhammad Afzal, Farhat Jabeen, Arshad Javid, Tasneem Hameed

Abstract:

A feeding trial was conducted with Cirrhinus mrigala fingerlings to study the effects of microbial phytase at graded levels (0, 500, 1000, 1500, and 2000 FTU kg-1) in a sunflower meal-based diet on growth performance and nutrient digestibility. Chromic oxide was added to the diets as an indigestible marker. Three replicate groups of 15 fish (average weight 5.98 g fish-1) were fed once a day, and feces were collected twice daily. The results showed improved growth and feed performance of C. mrigala fingerlings in response to phytase supplementation. Maximum growth performance was obtained by fish fed test diet III with a phytase level of 1000 FTU kg-1. Similarly, nutrient digestibility was significantly increased (p<0.05) by phytase supplementation: at the 1000 FTU kg-1 level, digestibility coefficients for the sunflower meal-based diet increased by 15.76%, 17.70%, and 12.70% for crude protein, crude fat and apparent gross energy, respectively, compared with the reference diet. Again, the maximum response in nutrient digestibility was recorded at a phytase level of 1000 FTU kg-1. It was concluded that phytase supplementation of a sunflower meal-based diet at 1000 FTU kg-1 is optimal to release adequate chelated nutrients for maximum growth performance of C. mrigala fingerlings. Our results also suggest that phytase supplementation of sunflower meal-based diets can support sustainable aquaculture by reducing feed cost and nutrient discharge through feces into the aquatic ecosystem.
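
With an indigestible marker such as chromic oxide, apparent digestibility coefficients are conventionally computed with the standard marker-ratio formula; a sketch with hypothetical concentrations (the paper does not report its raw marker data):

```python
def apparent_digestibility(marker_diet, marker_feces, nutrient_diet, nutrient_feces):
    """Apparent digestibility coefficient (%) via an indigestible marker:
    ADC = 100 * (1 - (marker in diet / marker in feces)
                     * (nutrient in feces / nutrient in diet))."""
    return 100.0 * (1.0 - (marker_diet / marker_feces)
                    * (nutrient_feces / nutrient_diet))

# Hypothetical values: 1% Cr2O3 in diet concentrated to 2% in feces,
# crude protein falling from 30% in diet to 12% in feces
adc = apparent_digestibility(1.0, 2.0, 30.0, 12.0)
```

Because the marker passes through undigested, its concentration rises in the feces in proportion to how much of the rest of the diet was absorbed.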

Keywords: sunflower meal, Cirrhinus mrigala, growth, nutrient digestibility, phytase

Procedia PDF Downloads 286
16479 Image Retrieval Based on Multi-Feature Fusion for Heterogeneous Image Databases

Authors: N. W. U. D. Chathurani, Shlomo Geva, Vinod Chandran, Proboda Rajapaksha

Abstract:

Selecting an appropriate image representation is the most important factor in implementing an effective Content-Based Image Retrieval (CBIR) system. This paper presents a multi-feature fusion approach for efficient CBIR, based on the distance distribution of features and relative feature weights at the time of query processing. It is a simple yet effective approach that is free from the effects of feature dimensions, ranges, internal feature normalization and the choice of distance measure, and it can easily be adopted in any feature combination to improve retrieval quality. The proposed approach is empirically evaluated using two benchmark datasets for image classification (a subset of the Corel dataset, and Oliva and Torralba) and compared with existing approaches. Its significantly improved performance over the independently evaluated baseline of previously proposed feature fusion approaches confirms its effectiveness.
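
The general idea can be sketched as follows: distances from each feature are rescaled by their own distribution before a weighted sum, so features with different dimensionalities and ranges contribute comparably. The mean is used as the scale factor and the distances are hypothetical; the paper's exact normalization is not reproduced here:

```python
import numpy as np

def fuse_distances(per_feature_dists, weights):
    """Fuse per-feature query-to-database distance vectors.
    Each vector is divided by its own mean, so the fusion is
    insensitive to each feature's native scale."""
    fused = np.zeros(len(per_feature_dists[0]))
    for d, w in zip(per_feature_dists, weights):
        d = np.asarray(d, dtype=float)
        fused += w * (d / d.mean())
    return fused

color_d = [0.2, 0.8, 0.5]      # hypothetical color-feature distances (small range)
texture_d = [120.0, 40.0, 80.0]  # hypothetical texture distances (large range)
ranking = np.argsort(fuse_distances([color_d, texture_d], [0.5, 0.5]))
```

Without the per-feature rescaling, the texture distances would dominate the ranking purely because of their larger magnitude.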

Keywords: feature fusion, image retrieval, membership function, normalization

Procedia PDF Downloads 330
16478 An Efficient Hybrid Approach Based on Multi-Agent System and Emergence Method for the Integration of Systematic Preventive Maintenance Policies

Authors: Abdelhadi Adel, Kadri Ouahab

Abstract:

This paper proposes a hybrid algorithm for the integration of systematic preventive maintenance policies into hybrid flow shop scheduling to minimize makespan. We have implemented a metaheuristic-based problem-solving approach for optimizing the processing time. The approach is inspired by the behavior of the human body: it hybridizes a multi-agent system with mechanisms drawn from genetics. To solve such a complex problem, we use advanced operators such as uniform crossover and single-point mutation. The approach is applied to three preventive maintenance policies, intended to maximize availability or to maintain a minimum level of reliability during the production chain, under the assumption that machines may be unavailable periodically during production scheduling. The results show that our algorithm outperforms existing algorithms.
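
The two genetic operators named above can be sketched as follows, assuming a simple job-to-machine chromosome encoding (the paper's actual encoding is not specified):

```python
import random

def uniform_crossover(a, b, rng):
    """Each gene is taken from either parent with equal probability."""
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

def single_point_mutation(chrom, n_machines, rng):
    """Reassign one randomly chosen job to a random machine."""
    c = list(chrom)
    c[rng.randrange(len(c))] = rng.randrange(n_machines)
    return c

rng = random.Random(0)
p1 = [0, 1, 2, 0, 1]  # hypothetical: gene i = machine assigned to job i
p2 = [2, 2, 0, 1, 0]
child = uniform_crossover(p1, p2, rng)
mutant = single_point_mutation(child, 3, rng)
```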

Keywords: multi-agent systems, emergence, genetic algorithm, makespan, systematic maintenance, scheduling, hybrid flow shop scheduling

Procedia PDF Downloads 317
16477 Post-Quantum Resistant Edge Authentication in Large Scale Industrial Internet of Things Environments Using Aggregated Local Knowledge and Consistent Triangulation

Authors: C. P. Autry, A. W. Roscoe, Mykhailo Magal

Abstract:

We discuss the theoretical model underlying 2BPA (two-band peer authentication), a practical alternative to conventional authentication of entities and data in IoT. In essence, this involves assembling a virtual map of authentication assets in the network, typically leading to many paths of confirmation between any pair of entities. This map is continuously updated, confirmed, and evaluated. The value of authentication along multiple disjoint paths becomes very clear, and we require analogues of triangulation to extend authentication along extended paths and deliver it along all possible paths. We discover that if an attacker wants to make an honest node falsely believe she has authenticated another, then the length of the authentication paths is of little importance. This is because optimal attack strategies correspond to minimal cuts in the authentication graph and do not contain multiple edges on the same path. The authentication provided by disjoint paths normally is additive (in entropy).
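
The correspondence between optimal attack strategies and minimal cuts follows Menger's theorem: the number of edge-disjoint authentication paths between two nodes equals the minimum edge cut separating them. A stdlib-only sketch on a toy authentication graph (the node names and edges are illustrative):

```python
from collections import deque

def edge_disjoint_paths(edges, s, t):
    """Number of edge-disjoint s-t paths in an undirected graph,
    computed as unit-capacity max flow (BFS augmentation)."""
    cap, adj = {}, {}
    for u, v in edges:
        for a, b in ((u, v), (v, u)):
            cap[(a, b)] = cap.get((a, b), 0) + 1
            adj.setdefault(a, set()).add(b)
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:           # BFS for an augmenting path
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in parent and cap.get((u, v), 0) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:           # push one unit along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] = cap.get((v, u), 0) + 1
            v = u
        flow += 1

# Toy graph: A reaches C directly, via B, and via D: three disjoint routes
n_paths = edge_disjoint_paths(
    [("A", "B"), ("B", "C"), ("A", "C"), ("A", "D"), ("D", "C")], "A", "C")
```

In the 2BPA setting, a larger count of disjoint paths means an attacker must compromise correspondingly more edges, matching the additive-entropy observation above.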

Keywords: authentication, edge computing, industrial IoT, post-quantum resistance

Procedia PDF Downloads 181
16476 Assessment of Planet Image for Land Cover Mapping Using Soft and Hard Classifiers

Authors: Lamyaa Gamal El-Deen Taha, Ashraf Sharawi

Abstract:

Planet imagery is a new data source from Planet Labs. This research is concerned with the assessment of Planet imagery for land cover mapping. Two pixel-based classifiers and one subpixel-based classifier were compared. Firstly, rectification of the Planet image was performed. Secondly, minimum distance, maximum likelihood and neural network classifications of the Planet image were compared. Thirdly, the overall classification accuracy and the kappa coefficient were calculated. The results indicate that for land cover mapping, neural network classification performs best, followed by the maximum likelihood classifier and then minimum distance classification.
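
The simplest of the three classifiers compared, minimum distance to class means, can be sketched with hypothetical two-band spectral data:

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel to the class with the nearest spectral mean
    (Euclidean distance in band space)."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)

# Hypothetical 2-band class means (e.g. water vs. vegetation reflectance)
means = np.array([[0.10, 0.20],
                  [0.80, 0.70]])
px = np.array([[0.15, 0.25],
               [0.75, 0.65]])
labels = minimum_distance_classify(px, means)
```

Maximum likelihood additionally weights distances by per-class covariance, which is one reason it tends to outperform this baseline.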

Keywords: planet image, land cover mapping, rectification, neural network classification, multilayer perceptron, soft classifiers, hard classifiers

Procedia PDF Downloads 168
16475 Qualifying Aggregates Produced in Kano, Nigeria for Use in the Superpave Design Method

Authors: Ahmad Idris, Bishir Kado, Murtala Umar, Armaya`u Suleiman Labo

Abstract:

Superpave is short for Superior Performing Asphalt Pavements and represents a basis for specifying component materials, asphalt mixture design and analysis, and pavement performance prediction. This technology is the result of long research projects conducted under the Strategic Highway Research Program (SHRP). This research was aimed at examining the suitability of aggregates found in Kano for use in the Superpave design method. Aggregate samples were collected from different sources in Kano, Nigeria, and their engineering properties, as they relate to the Superpave design requirements, were determined. The average coarse aggregate angularity in Kano was found to be 87% with one fractured face and 86% with two or more fractured faces, against standards of 80% and 85%, respectively. The average fine aggregate angularity was 47%, with a requirement of 45% minimum. Flat and elongated particles, found to be 10%, meet the maximum criterion of 10%. The sand equivalent was 51%, with a minimum criterion of 45%. Strength tests were also carried out, and the results meet the requirements of the standards: the impact value, aggregate crushing value and aggregate abrasion results are 27.5%, 26.7%, and 13%, respectively, against a maximum criterion of 30%. The specific gravity had an average value of 2.52, against a criterion of 2.6 to 2.9, and water absorption was 1.41%, against a maximum criterion of 0.6%. The test results therefore indicate that most of the aggregate properties meet the requirements of the Superpave design method based on the specifications of ASTM D5821, ASTM D4791, AASHTO T176, AASHTO T33 and BS815.
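
The pass/fail comparison underlying the study can be sketched directly from the reported measurements and criteria; the property names below are ad hoc labels, not standard designations:

```python
# (measured value, comparison, criterion) for a subset of the reported tests
specs = {
    "coarse_angularity_one_face": (87.0, ">=", 80.0),
    "fine_angularity": (47.0, ">=", 45.0),
    "flat_and_elongated": (10.0, "<=", 10.0),
    "sand_equivalent": (51.0, ">=", 45.0),
    "water_absorption": (1.41, "<=", 0.6),
}

def passes(measured, op, limit):
    """Check one measured property against its Superpave criterion."""
    return measured >= limit if op == ">=" else measured <= limit

results = {name: passes(*spec) for name, spec in specs.items()}
```

As the reported numbers show, water absorption is the clearest case where the measured value falls outside the criterion.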

Keywords: Superpave, aggregates, asphalt mix, Kano

Procedia PDF Downloads 375
16474 Multiscale Connected Component Labelling and Applications to Scientific Microscopy Image Processing

Authors: Yayun Hsu, Henry Horng-Shing Lu

Abstract:

In this paper, a new method is proposed for extending connected component labeling from binary images to multi-scale modeling of images. By using adaptive thresholds over multi-scale attributes, the approach minimizes the possibility of missing important components with weak intensities, while its computational cost remains similar to that of typical component labeling. The methodology is then applied to grain boundary detection and Drosophila Brainbow neuron segmentation, demonstrating the feasibility of the proposed approach in the analysis of challenging microscopy images for scientific discovery.
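
A single-scale building block of such a method is plain connected component labeling at a fixed threshold; running it at two thresholds on a toy image shows how a weak component survives only at the lower one (the adaptive threshold selection itself is not reproduced here):

```python
def label_components(grid, threshold):
    """4-connected component labeling of cells with intensity >= threshold.
    Returns (number of components, label grid)."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if grid[i][j] >= threshold and labels[i][j] == 0:
                count += 1
                stack = [(i, j)]           # iterative flood fill
                labels[i][j] = count
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and grid[ny][nx] >= threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            stack.append((ny, nx))
    return count, labels

img = [[9, 9, 0, 2],
       [0, 0, 0, 2],
       [5, 0, 0, 0]]
n_hi, _ = label_components(img, 5)  # high threshold misses the weak component
n_lo, _ = label_components(img, 2)  # lower threshold keeps it
```

The multi-scale method's contribution is choosing such thresholds adaptively per region so that weak but meaningful components are not lost.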

Keywords: microscopic image processing, scientific data mining, multi-scale modeling

Procedia PDF Downloads 419