Search results for: parallel profiles
393 Development of Structural Deterioration Models for Flexible Pavement Using Traffic Speed Deflectometer Data
Authors: Sittampalam Manoharan, Gary Chai, Sanaul Chowdhury, Andrew Golding
Abstract:
The primary objective of this paper is to present a simplified approach to developing a structural deterioration model for flexible pavements using traffic speed deflectometer data. Maintaining assets to meet functional performance alone is neither economical nor sustainable in the long term, and would ultimately demand far greater investment from road agencies and impose extra costs on road users. Performance models must include both structural and functional predictive capabilities in order to assess needs and the time frame of those needs. As such, structural modelling plays a vital role in the prediction of pavement performance. Structural condition is important for predicting the remaining life and overall health of a road network and is also a major influence on the valuation of road pavement. The structural deterioration model is therefore a critical input into a pavement management system for accurately predicting pavement rehabilitation needs. The Traffic Speed Deflectometer (TSD) is a vehicle-mounted Doppler laser system that is capable of continuously measuring the structural bearing capacity of a pavement whilst moving at traffic speeds. The device’s high accuracy, high speed, and continuous deflection profiles are useful for network-level applications such as predicting road rehabilitation needs and remaining structural service life. The methodology adopted in this model utilizes time-series TSD maximum deflection (D0) data in conjunction with rutting, rutting progression, pavement age, subgrade strength and equivalent standard axle (ESA) data. Regression analyses were then undertaken to establish a correlation equation of structural deterioration as a function of rutting, pavement age, seal age and equivalent standard axles (ESA). This study developed a simple structural deterioration model that will enable available TSD structural data to be incorporated into pavement management systems for developing network-level pavement investment strategies. Available funding can therefore be used effectively to minimize the whole-of-life cost of the road asset and also to improve pavement performance. This study will contribute to narrowing the knowledge gap in structural data usage in network-level investment analysis and provide a simple methodology for road agencies to use structural data effectively in the investment decision-making process for managing aging road assets.
Keywords: adjusted structural number (SNP), maximum deflection (D0), equivalent standard axle (ESA), traffic speed deflectometer (TSD)
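To make the regression step concrete, the following is a minimal sketch, not the authors' actual model: it fits maximum deflection D0 as a linear function of rutting, pavement age, seal age and ESA by ordinary least squares, with all data synthetic and the coefficient values assumed purely for illustration.

```python
import numpy as np

# Synthetic example data (assumed, for illustration only):
# columns are rutting (mm), pavement age (yr), seal age (yr), cumulative ESA (millions)
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(2, 20, n),    # rutting
    rng.uniform(0, 40, n),    # pavement age
    rng.uniform(0, 15, n),    # seal age
    rng.uniform(0.1, 5, n),   # ESA
])
# Assumed "true" relation used to generate D0 (microns), plus noise
d0 = 300 + 8.0*X[:, 0] + 2.5*X[:, 1] + 4.0*X[:, 2] + 30.0*X[:, 3] + rng.normal(0, 25, n)

# Ordinary least squares: D0 = b0 + b1*rut + b2*pave_age + b3*seal_age + b4*esa
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, d0, rcond=None)

# R^2 of the fitted correlation equation
resid = d0 - A @ coef
r2 = 1 - resid.var() / d0.var()
print("coefficients:", np.round(coef, 2), "R^2:", round(r2, 3))
```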
392 The Maps of Meaning (MoM) Consciousness Theory
Authors: Scott Andersen
Abstract:
Perhaps simply and rather unadornedly, consciousness is having multiple goals for action and the continuous adjudication of such goals to implement action, referred to as the Maps of Meaning (MoM) Consciousness Theory. The MoM theory triangulates through three parallel corollaries: action (behavior), mechanism (morphology/pathophysiology), and goals (teleology). (1) An organism’s consciousness contains a fluid, nested set of goals. These goals are not intentionality but intersectionality: embodiment meeting the world, i.e., Darwinian inclusive fitness, or randomization followed by survival of the fittest. These goals form via gradual descent under inclusive fitness, the goals being the abstraction of a ‘match’ between the evolutionary environment and the organism. Human consciousness implements the brain efficiency hypothesis: genetics, epigenetics, and experience crystallize efficiencies, necessitating not what is best or objective but what is fit, i.e., perceived efficiency based on one’s adaptive environment. These efficiencies are objectively arbitrary, but they determine the operation and level of one’s consciousness, termed extreme thrownness. Since inclusive fitness drives efficiencies in physiologic mechanism, morphology and behavior (action) and originates one’s goals, embodiment is necessarily entangled with human consciousness, as it is the intersection of mechanism or action (both necessitating embodiment) occurring in the world that determines fitness. Perception is the operant process of consciousness and is the consciousness’s de facto goal adjudication process. Goal operationalization is fundamentally efficiency-based via one’s unique neuronal mapping, a byproduct of genetics, epigenetics, and experience. Perception involves information intake and information discrimination, equally underpinned by efficiencies of inclusive fitness via extreme thrownness. Perception isn’t a ‘frame rate’ but Bayesian priors of efficiency based on one’s extreme thrownness. Consciousness, including human consciousness, is modular (i.e., a scalar level of richness, which builds up like building blocks) and dimensionalized (i.e., cognitive abilities become possibilities as emergent phenomena at various modularities, like stratified factors in factor analysis). The meta-dimensions of human consciousness seemingly include intelligence quotient, personality (five-factor model), richness of perception intake, and richness of perception discrimination, among other potentialities. Future consciousness research should utilize factor analysis to parse the modularities and dimensions of human consciousness, alongside animal models.
Keywords: consciousness, perception, prospection, embodiment
391 Development of Market Penetration for High Energy Efficiency Technologies in Alberta’s Residential Sector
Authors: Saeidreza Radpour, Md. Alam Mondal, Amit Kumar
Abstract:
Market penetration of high-energy-efficiency technologies has a key impact on energy consumption and GHG mitigation. It is also useful for managing the policies formulated by public or private organizations to achieve energy or environmental targets. Energy intensity in Alberta’s residential sector was 148.8 GJ per household in 2012, 39% more than the Canadian average of 106.6 GJ and the highest per-household energy consumption among the provinces. Appliance energy intensity in Alberta was 15.3 GJ per household in 2012, 14% higher than the average of the other provinces and territories in Canada. In this research, a framework has been developed to analyze the market penetration and market share of high-energy-efficiency technologies in the residential sector. The overall methodology was based on the development of data-intensive models estimating the market penetration of appliances in the residential sector over a time period. The developed models were a function of a number of macroeconomic and technical parameters. The mathematical equations were developed from twenty-two years of historical data (1990-2011), and the models were analyzed through a series of statistical tests. The market shares of high-efficiency appliances were estimated based on related variables such as capital and operating costs, discount rate, appliance lifetime, annual interest rate, incentives and maximum achievable efficiency over the period 2015 to 2050. Results show that the market penetration of refrigerators is higher than that of other appliances. The stock of refrigerators per household is anticipated to increase from 1.28 in 2012 to 1.314 and 1.328 in 2030 and 2050, respectively. Modelling results show that the market penetration rate of stand-alone freezers will decrease between 2012 and 2050; freezer stock per household will decline from 0.634 in 2012 to 0.556 and 0.515 in 2030 and 2050, respectively. The stock of dishwashers per household is expected to increase from 0.761 in 2012 to 0.865 and 0.960 in 2030 and 2050, respectively. The increase in the market penetration rate of clothes washers and clothes dryers is nearly parallel: the stock of clothes washers and clothes dryers per household is expected to rise from 0.893 and 0.979 in 2012 to 0.960 and 1.0 in 2050, respectively. The presentation will include a detailed discussion of the modelling methodology and results.
Keywords: appliances efficiency improvement, energy star, market penetration, residential sector
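As a hedged illustration of how market shares can be tied to capital and operating costs, discount rate and lifetime, the sketch below (an assumption, not the paper's actual model) annualizes each appliance's life-cycle cost with a capital recovery factor and converts the costs into shares with a logit choice rule; all numbers are invented.

```python
import numpy as np

# Hypothetical life-cycle-cost (LCC) based market-share logit: the share of a
# high-efficiency appliance falls out of a comparison of annualized costs.
def annualized_cost(capital, operating, rate, lifetime):
    # capital recovery factor converts capital cost to an equivalent annual cost
    crf = rate * (1 + rate)**lifetime / ((1 + rate)**lifetime - 1)
    return capital * crf + operating

def market_share(lcc_options, sensitivity=3.0):
    # logit choice model: a cheaper annualized cost earns a larger share
    u = -sensitivity * np.log(np.asarray(lcc_options))
    return np.exp(u) / np.exp(u).sum()

# Assumed numbers for a standard vs. a high-efficiency refrigerator
std = annualized_cost(capital=900.0, operating=80.0, rate=0.05, lifetime=15)
eff = annualized_cost(capital=1100.0, operating=45.0, rate=0.05, lifetime=15)
print("shares (standard, efficient):", market_share([std, eff]).round(3))
```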
Procedia PDF Downloads 285390 Metaphysics of the Unified Field of the Universe
Authors: Santosh Kaware, Dnyandeo Patil, Moninder Modgil, Hemant Bhoir, Debendra Behera
Abstract:
The Unified Field Theory has been an area of intensive research for many decades. This paper focuses on the philosophy and metaphysics of unified field theory at the Planck scale and its relationship with superstring theory and Quantum Vacuum Dynamic Physics. We examine the epistemology of questions such as: (1) What is the Unified Field of the universe? (2) Can it actually (a) permeate the complete universe, (b) be localized in bound regions of the universe, (c) extend into the extra dimensions, or (d) live only in extra dimensions? (3) What should be the emergent ontological properties of the Unified Field? (4) How does the universe manifest through its Quantum Vacuum energies? (5) How is the space-time metric coupled to the Unified Field? We present a number of ansätze, which we outline below. It is proposed that the unified field possesses consciousness as well as a memory, a recording of past history, analogous to the ‘Consistent Histories’ interpretation of quantum mechanics. We propose a Planck-scale geometry of the Unified Field with circle-like topology, having 32 energy points on its periphery connected to each other by 10-dimensional meta-strings, which are the sources for the manifestation of the different fundamental forces and particles of the universe through its Quantum Vacuum energies. It is also proposed that the sub-energy levels of the ‘Conscious Unified Field’ are used for the processes of creation, preservation and rejuvenation of the universe over a period of time by means of negentropy. These epochs can be for the complete universe or for localized regions such as galaxies or clusters of galaxies. It is proposed that the Unified Field operates through geometric patterns of its Quantum Vacuum energies, manifesting as various elementary particles by giving spins to zero-point energy elements. The epistemological relationship between unified field theory and superstring theories is examined. The properties of ‘consciousness’ and ‘memory’ cascade from the universe into macroscopic objects, and further onto the elementary particles, via a fractal pattern. Other properties of fundamental particles, such as mass, charge, spin and isospin, also spill out of such a cascade. The manifestations of the unified field can reach into parallel universes, or the ‘multiverse’, and essentially have an existence independent of space-time. It is proposed that the mass, length and time scales of the unified theory are smaller than even the Planck scale, at a level which we call ‘Super Quantum Gravity’ (SQG).
Keywords: super string theory, Planck scale geometry, negentropy, super quantum gravity
389 Creatine Associated with Resistance Training Increases Muscle Mass in the Elderly
Authors: Camila Lemos Pinto, Juliana Alves Carneiro, Patrícia Borges Botelho, João Felipe Mota
Abstract:
Sarcopenia, a syndrome characterized by progressive and generalized loss of skeletal muscle mass and strength, currently affects over 50 million people and increases the risk of adverse outcomes such as physical disability, poor quality of life and death. The aim of this study was to examine the efficacy of creatine supplementation associated with resistance training on muscle mass in the elderly. A 12-week, double-blind, randomized, parallel-group, placebo-controlled trial was conducted. Participants were randomly allocated into one of the following groups: placebo with resistance training (PL+RT, n=14) and creatine supplementation with resistance training (CR+RT, n=13). The subjects in the CR+RT group received 5 g/day of creatine monohydrate, and the subjects in the PL+RT group were given the same dose of maltodextrin. Participants were instructed to ingest the supplement immediately after lunch on non-training days, and immediately after resistance training sessions on training days, dissolved in a lemon-flavored beverage comprising 100 g of maltodextrin. Participants of both groups undertook a supervised exercise training program for 12 weeks (3 times per week). The subjects were assessed at baseline and after 12 weeks. The primary outcome was muscle mass, assessed by dual-energy X-ray absorptiometry (DXA). The secondary outcome was the classification of participants into one of the three stages of sarcopenia (presarcopenia, sarcopenia and severe sarcopenia) based on skeletal muscle mass index (SMI), handgrip strength and gait speed. The CR+RT group had a significant increase in SMI and muscle mass (p<0.0001), a significant decrease in android and gynoid fat (p=0.028 and p=0.035, respectively) and a tendency toward decreasing body fat (p=0.053) after the intervention. The PL+RT group only had a significant increase in SMI (p=0.007). The main finding of this clinical trial was that creatine supplementation combined with resistance training increased muscle mass in our elderly cohort (p=0.02). In addition, the number of subjects diagnosed with one of the three stages of sarcopenia at baseline decreased in the creatine-supplemented group in comparison with the placebo group (CR+RT, n=-3; PL+RT, n=0). In summary, 12 weeks of creatine supplementation associated with resistance training resulted in increases in muscle mass. This is the first study in elderly participants of both sexes to show such an increase in muscle mass with a smaller quantity of creatine supplementation over a short period. Future long-term research should investigate the effects of these interventions in sarcopenic elderly.
Keywords: creatine, dietetic supplement, elderly, resistance training
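A minimal sketch of the kind of statistics such a parallel-group trial involves is given below; the data are synthetic and the effect sizes assumed, so it only illustrates the within-group and between-group comparisons, not the study's actual analysis.

```python
import numpy as np
from scipy import stats

# Assumed 12-week changes in skeletal muscle mass index (kg/m^2), for illustration
rng = np.random.default_rng(1)
delta_cr = rng.normal(0.35, 0.30, 13)  # CR+RT group, n=13
delta_pl = rng.normal(0.10, 0.30, 14)  # PL+RT group, n=14

# Within-group test: did SMI increase significantly from baseline?
print(stats.ttest_1samp(delta_cr, 0.0))
# Between-group test: is the CR+RT gain larger than the PL+RT gain?
print(stats.ttest_ind(delta_cr, delta_pl))
```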
388 An Experimental Study of Scalar Implicature Processing in Chinese
Authors: Liu Si, Wang Chunmei, Liu Huangmei
Abstract:
A prominent component of the semantics versus pragmatics debate, scalar implicature (SI) has been gaining great attention ever since it was proposed by Horn. The ongoing debate is between the structural and the pragmatic approach. The former claims that the generation of SI is costless, automatic, and dependent mostly on the structural properties of sentences, whereas the latter advocates both that such generation is largely dependent upon context and that the process is costly. Many experiments, among which Katsos’s text comprehension experiments are influential, have been designed and conducted in order to verify these views, but the results are not conclusive. Besides, most of the experiments were conducted with English-language materials. Katsos conducted one off-line and three on-line text comprehension experiments, in which the previous shortcomings were addressed to a certain extent and the conclusion favored the pragmatic approach. We intend to test the results of Katsos’s experiments on Chinese scalar implicature. Four experiments in both off-line and on-line conditions will be conducted to examine the generation and response time of SI for Chinese "yixie" (some) and "quanbu (dou)" (all), in order to find out whether the structural or the pragmatic approach can be sustained. The study mainly aims to answer the following questions: (1) Can SI be generated in the upper- and lower-bound contexts, as Katsos confirmed, when Chinese language materials are used in the experiment? (2) Can SI be first generated and then cancelled, as the default view claims, or can it not be generated at all in a neutral context when Chinese language materials are used? (3) Is SI generation costless or costly in terms of processing resources? (4) In line with the SI generation process, what conclusion can be made about the cognitive processing model of language meaning? Is it a parallel model or a linear model? Or is it a dynamic and hierarchical model? Based on previous theoretical debates and experimental conflicts, it can be presumed that SI in Chinese might be generated in upper-bound contexts. Response times might be faster in upper-bound than in lower-bound contexts, and SI generation in a neutral context might be the slowest. Finally, it is expected that the processing model of SI cannot be verified by either an absolute structural or an absolute pragmatic approach; it is, rather, a dynamic and complex processing mechanism in which the interaction of language forms, ad hoc context, mental context, background knowledge, speakers’ interaction, etc. are involved.
Keywords: cognitive linguistics, pragmatics, scalar implicature, experimental study, Chinese language
387 Emoji, the Language of the Future: An Analysis of the Usage and Understanding of Emoji across User-Groups
Authors: Sakshi Bhalla
Abstract:
On the one hand, given their seemingly simplistic, near-universal usage and understanding, emoji are dismissed as a potential step back in the evolution of communication. On the other, their effectiveness, pervasiveness, and adaptability across and within contexts are undeniable. In this study, the responses of 40 people (categorized by age) were recorded based on a uniform two-part questionnaire in which they were required to a) identify the meaning of 15 emoji placed in isolation, and b) interpret the meaning of the same 15 emoji placed in a context-defining posting on Twitter. Their responses were studied on the basis of deviation from their responses identifying the emoji in isolation, as well as from the originally intended meaning ascribed to each emoji. An analysis of these results showed that each of the five age categories uses, understands and perceives emoji differently, which could be attributed to the degree of exposure each has undergone. For example, the youngest category (aged < 20) was the least accurate at correctly identifying emoji in isolation (~55%). Further, their proclivity to change their response with respect to context was also the lowest (~31%). However, an analysis of their individual responses showed that these first-borns of social media seem to have reached a point where emoji no longer suggest their most literal meanings to them: the meaning and implication of these emoji have evolved to their context-derived meanings, even when placed in isolation. These trends carry forward meaningfully for the other groups as well. In the case of the oldest category (aged > 35), however, the trends indicated inaccuracy and therefore a higher incidence of a proclivity to change their responses. When studied as a continuum, the responses indicate that slowly and steadily, emoji are evolving from pictograms to ideograms. That is to suggest that they do not just indicate a one-to-one relation between a singular form and a singular meaning; in fact, they communicate increasingly complicated ideas. This is much like the evolution of ancient hieroglyphics on papyrus reed or cuneiform on Sumerian clay tablets, which evolved from simple pictograms to progressively more complex ideograms. This evolution within communication is parallel to and contingent on the simultaneous evolution of communication. What is astounding is the capacity of humans to leverage different platforms to facilitate such changes. Twitterese, as it is now called, is one instance where language is adapting to the demands of the digital world. That it has no spoken component or ostensible grammar and lacks standardization of use and meaning, as some might suggest, may seem like impediments to qualifying it as the 'language' of the digital world. However, that kind of declarative remains a function of time, and time alone.
Keywords: communication, emoji, language, Twitter
386 Maternal Exposure to Bisphenol A and Its Association with Birth Outcomes
Authors: Yi-Ting Chen, Yu-Fang Huang, Pei-Wei Wang, Hai-Wei Liang, Chun-Hao Lai, Mei-Lien Chen
Abstract:
Background: Bisphenol A (BPA) is commonly used in consumer products, such as the inner coatings of cans and polycarbonate bottles. BPA is considered to be an endocrine-disrupting substance (EDS) that affects normal human hormones and may cause adverse effects on human health. Pregnant women and fetuses are groups susceptible to endocrine-disrupting substances. Prenatal exposure to BPA has been shown to affect the fetus through the placenta. Therefore, it is important to evaluate the potential health risk of fetal exposure to BPA during pregnancy. The aims of this study were (1) to determine the urinary concentration of BPA in pregnant women, and (2) to investigate the association between BPA exposure during pregnancy and birth outcomes. Methods: This study recruited 117 pregnant women and their fetuses from 2012 to 2014 from the Taiwan Maternal-Infant Cohort Study (TMICS). Maternal urine samples were collected in the third trimester, and questionnaires were used to collect the socio-demographic characteristics, eating habits and medical conditions of the participants. Information about the birth outcomes of the fetus was obtained from medical records. For chemical analysis, BPA concentrations in urine were determined by off-line solid-phase extraction and ultra-performance liquid chromatography coupled with a Q-Tof mass spectrometer. The urinary concentrations were adjusted for creatinine. The association between maternal concentrations of BPA and birth outcomes was estimated using a logistic regression model. Results: The detection rate of BPA is 99%, with concentrations ranging from 0.16 to 46.90 μg/g. The mean (SD) BPA level is 5.37 (6.42) μg/g creatinine. The mean ± SD of body weight, body length, head circumference, chest circumference and gestational age at birth are 3105.18 ± 339.53 g, 49.33 ± 1.90 cm, 34.16 ± 1.06 cm, 32.34 ± 1.37 cm and 38.58 ± 1.37 weeks, respectively. After stratifying the exposure levels into two groups by the median, pregnant women in the higher exposure group tended to have newborns with lower body weight (OR=0.57, 95%CI=0.271-1.193), smaller chest circumference (OR=0.70, 95%CI=0.335-1.47) and shorter gestational age at birth (OR=0.46, 95%CI=0.191-1.114). However, none of the associations between BPA concentration and birth outcomes reached statistical significance (p < 0.05). Conclusions: This study presents prenatal BPA exposure profiles of pregnant women and infants in northern Taiwan. Women with higher BPA concentrations tend to give birth to newborns with lower body weight, smaller chest circumference or shorter gestational age. More data will be included to verify the results. This report will also present the predictors of BPA concentrations for pregnant women.
Keywords: bisphenol A, birth outcomes, biomonitoring, prenatal exposure
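The logistic regression step can be sketched as follows; the data are synthetic, the median split and outcome coding are assumptions, and the printed odds ratio and confidence interval merely illustrate the OR (95% CI) figures reported above.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative sketch (synthetic data, not the study's): odds ratio of low
# birth weight for above- vs. below-median maternal urinary BPA.
rng = np.random.default_rng(2)
n = 117
high_bpa = rng.integers(0, 2, n)                  # 1 = above-median BPA
p = 1 / (1 + np.exp(-(-1.0 - 0.56 * high_bpa)))   # assumed "true" model
low_weight = rng.binomial(1, p)                   # 1 = low birth weight

X = sm.add_constant(high_bpa.astype(float))
fit = sm.Logit(low_weight, X).fit(disp=0)
or_est = np.exp(fit.params[1])                    # exponentiated coefficient = OR
ci = np.exp(fit.conf_int()[1])                    # 95% CI for the odds ratio
print(f"OR={or_est:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
```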
385 Storm-Runoff Simulation Approaches for External Natural Catchments of Urban Sewer Systems
Authors: Joachim F. Sartor
Abstract:
According to German guidelines, external natural catchments are larger sub-catchments without significant portions of impervious area that possess a surface drainage system and empty into a sewer network. Basically, such catchments should be disconnected from sewer networks, particularly from combined systems. If this is not possible due to local conditions, their flow hydrographs have to be considered in the design of sewer systems, because their impact may be significant. Since sufficient measurements of storm-runoff events for such catchments, and hence verified simulation methods to analyze their design flows, are lacking, German standards give only general advice and demand special consideration in such cases. Compared to urban sub-catchments, external natural catchments exhibit greatly different flow characteristics. With increasing area, their hydrological behavior approximates that of rural catchments; e.g., sub-surface flow may prevail and lag times are comparably long. Only a few observed peak flow values and simple (mostly empirical) approaches are offered in the literature for Central Europe. Most of them are at least helpful for cross-checking results achieved by simulation without calibration. Using storm-runoff data from five monitored rural watersheds in the west of Germany with catchment areas between 0.33 and 1.07 km², the author investigated, by simulating multiple events, three different approaches to determining the rainfall excess: the modified SCS variable runoff coefficient methods by Lutz and Zaiß, and the soil moisture model by Ostrowski. Selection criteria for storm events from continuous precipitation data were taken from the recommendations of M 165, and the runoff concentration method (parallel cascades of linear reservoirs) from a DWA working report to which the author had contributed. In general, the two runoff coefficient methods showed results of sufficient accuracy for most practical purposes. The soil moisture model showed no significantly better results, at least not to such a degree that would justify the additional data collection its parameter determination requires. In particular, typical convective summer events after long dry periods, which are often decisive for sewer networks (less so for rivers), showed discrepancies between simulated and measured flow hydrographs.
Keywords: external natural catchments, sewer network design, storm-runoff modelling, urban drainage
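To illustrate the runoff concentration method named above, here is a minimal sketch of parallel cascades of linear reservoirs (Nash cascades); the cascade parameters and the rainfall-excess block are assumptions, not values from the monitored watersheds.

```python
import numpy as np
from math import factorial

# Parallel cascades of linear reservoirs: each cascade's unit hydrograph is
# a gamma density h(t) = t^(n-1) exp(-t/k) / (k^n (n-1)!), and the outflow
# is the convolution of rainfall excess with a weighted sum of the cascades.
def nash_uh(t, n, k):
    return t**(n - 1) * np.exp(-t / k) / (k**n * factorial(n - 1))

dt = 0.25                                  # time step (h)
t = np.arange(0, 48, dt)
# Assumed parameters: a fast surface cascade plus a slow subsurface cascade
uh = 0.6 * nash_uh(t, n=2, k=1.5) + 0.4 * nash_uh(t, n=3, k=6.0)

excess = np.zeros_like(t)
excess[4:12] = 5.0                         # assumed rainfall-excess block (mm/h)
outflow = np.convolve(excess, uh)[:len(t)] * dt
print("peak outflow:", outflow.max().round(2), "at t =", t[outflow.argmax()], "h")
```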
384 Development of an Integrated Reaction Design for the Enzymatic Production of Lactulose
Authors: Natan C. G. Silva, Carlos A. C. Girao Neto, Marcele M. S. Vasconcelos, Luciana R. B. Goncalves, Maria Valderez P. Rocha
Abstract:
Galactooligosaccharides (GOS) are sugars with prebiotic function that can be synthesized chemically or enzymatically, the latter route promoted by the action of β-galactosidases. In addition to favoring the transgalactosylation reaction that forms GOS, these enzymes can also catalyze the hydrolysis of lactose. A highly studied type of GOS is lactulose, because it presents therapeutic properties and is a health promoter. Among the different raw materials that can be used to produce lactulose, whey stands out as the main by-product of cheese manufacturing, and its disposal is harmful to the environment due to the residual lactose present. Its use is therefore a promising alternative for solving this environmental problem. Lactose from whey is hydrolyzed into glucose and galactose by β-galactosidases. However, in order to favor the transgalactosylation reaction, the medium must contain fructose, because this sugar reacts with galactose to produce lactulose. The glucose-isomerase enzyme can be used for this purpose, since it promotes the isomerization of glucose into fructose. In this scenario, the aim of the present work was first to develop β-galactosidase biocatalysts from Kluyveromyces lactis and then to apply them in the integrated reactions of hydrolysis, isomerization (with the glucose-isomerase from Streptomyces murinus) and transgalactosylation, using whey as substrate. The immobilization of β-galactosidase on chitosan previously functionalized with 0.8% glutaraldehyde was evaluated using different enzymatic loads (2, 5, 7, 10, and 12 mg/g). Subsequently, the hydrolysis and transgalactosylation reactions were studied and conducted at 50°C and 120 RPM for 20 minutes. In parallel, the isomerization of glucose into fructose was evaluated at 70°C and 750 RPM for 90 min. Afterwards, the integration of the three processes for the production of lactulose was investigated. Among the evaluated loads, 7 mg/g was chosen because it gave the best derivative activity (44.3 U/g), this parameter being determinant for the reaction stages. The other immobilization parameters, yield (87.58%) and recovered activity (46.47%), were also satisfactory compared with the other conditions. Regarding the integrated process, 94.96% of the lactose was converted, yielding 37.56 g/L of glucose and 37.97 g/L of galactose. In the isomerization step, a conversion of 38.40% of the glucose was observed, giving a fructose concentration of 12.47 g/L. The transgalactosylation reaction produced 13.15 g/L of lactulose after 5 min. However, in the integrated process there was no formation of lactulose; other GOS were produced instead. The high galactose concentration in the medium probably favored the synthesis of these other GOS. The integrated process therefore proved feasible for the production of prebiotics. In addition, this process can be economically viable due to the use of an industrial residue as substrate, but a more detailed investigation of the transgalactosylation reaction is necessary.
Keywords: beta-galactosidase, glucose-isomerase, galactooligosaccharides, lactulose, whey
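The immobilization bookkeeping can be illustrated with the usual definitions of immobilization yield (activity disappearing from the supernatant relative to activity offered) and recovered activity (activity expressed by the derivative relative to activity immobilized); the input numbers below are assumptions chosen only to echo the magnitudes reported above.

```python
# Minimal sketch of the immobilization calculations (illustrative formulas;
# the input numbers are assumptions, not the study's raw measurements).
offered_activity    = 100.0   # U of beta-galactosidase offered per g of chitosan
residual_activity   = 12.4    # U remaining in the supernatant afterwards
derivative_activity = 44.3    # U/g measured on the immobilized derivative

immobilized = offered_activity - residual_activity
yield_pct = 100 * immobilized / offered_activity          # immobilization yield
recovered_pct = 100 * derivative_activity / immobilized   # recovered activity
print(f"yield = {yield_pct:.2f}%, recovered activity = {recovered_pct:.2f}%")
```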
383 Photovoltaic Modules Fault Diagnosis Using Low-Cost Integrated Sensors
Authors: Marjila Burhanzoi, Kenta Onohara, Tomoaki Ikegami
Abstract:
Faults in photovoltaic (PV) modules should be detected as early and as comprehensively as possible. For that purpose, conventional fault detection methods such as electrical characterization, visual inspection, infrared (IR) imaging, ultraviolet fluorescence and electroluminescence (EL) imaging are used, but they either fail to detect the location or category of the fault, or they require expensive equipment and are not convenient for on-site application. Hence, these methods are not convenient for monitoring small-scale PV systems, and low-cost, efficient inspection techniques that can be applied on site are indispensable for PV modules. In this study, in order to establish an efficient inspection technique, the correlation between faults and the magnetic flux density on the surface of crystalline PV modules is investigated. The magnetic flux on the surface of normal and faulted PV modules is measured under short-circuit and illuminated conditions using two different sensor devices. One device is made of small integrated sensors, namely a 9-axis motion-tracking sensor with an embedded 3-axis electronic compass, an IR temperature sensor, an optical laser position sensor and a microcontroller. This device measures the X, Y and Z components of the magnetic flux density (Bx, By and Bz) a few mm above the surface of a PV module and outputs the data as line graphs in a LabVIEW program. The second device is made of a laser optical sensor and two magnetic line sensor modules consisting of 16 magnetic sensors. This device scans the magnetic field on the surface of a PV module and outputs the data as a 3D surface plot of the magnetic flux intensity in a LabVIEW program. A PC equipped with LabVIEW software is used for data acquisition and analysis for both devices. To show the effectiveness of this method, measured results are compared to those of a normal reference module and to the corresponding EL images. The experiments confirmed that the magnetic field in the faulted areas has distinct profiles which can be clearly identified in the measured plots. The measurement results showed a perfect correlation with the EL images, and using the position sensors the exact location of the faults was identified. The method was applied to different modules and various faults were detected with it. The proposed method offers on-site measurement and real-time diagnosis. Since simple sensors are used to make the device, it is low-cost and convenient for use by owners of small-scale or residential PV systems.
Keywords: fault diagnosis, fault location, integrated sensors, PV modules
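A minimal sketch of how such scan data might be screened is shown below; the data layout, threshold rule and all numbers are assumptions for illustration, not the authors' LabVIEW processing.

```python
import numpy as np

# Flag faulted regions from a surface scan of magnetic flux density:
# positions where the measured field deviates strongly from a normal
# reference module are marked as fault candidates.
def flag_anomalies(b_scan, b_ref, n_sigma=3.0):
    """b_scan, b_ref: arrays of shape (points, 3) holding Bx, By, Bz in uT."""
    diff = np.linalg.norm(b_scan - b_ref, axis=1)   # per-point deviation magnitude
    thresh = diff.mean() + n_sigma * diff.std()
    return np.flatnonzero(diff > thresh)            # indices of suspect positions

rng = np.random.default_rng(3)
ref = rng.normal(50, 0.5, (500, 3))                 # reference-module scan
scan = ref + rng.normal(0, 0.5, (500, 3))           # scan of the test module
scan[200:210] += 8.0                                # injected fault-like deviation
print("suspect scan positions:", flag_anomalies(scan, ref))
```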
382 Consumer Over-Indebtedness in Germany: An Investigation of Key Determinants
Authors: Xiaojing Wang, Ann-Marie Ward, Tony Wall
Abstract:
The problem of over-indebtedness has grown since the deregulation of the banking industry in the 1980s, and it has now become a major problem for most countries in Europe, including Germany. Consumer debt issues have attracted not only the attention of academics but also that of governments and debt counselling institutions. Overall, this research aims to address the knowledge gap regarding the causes of consumer over-indebtedness in Germany and to develop predictive models for assessing over-indebtedness risk at the consumer level. The situation of consumer over-indebtedness in Germany is serious. The relatively high level of social welfare support in Germany suggests that consumer debt problems are caused by factors other than just over-spending and income volatility. Prior literature suggests that the overall stability of the economy and the level of welfare support for individuals in the structural environment contribute to consumers’ debt problems. In terms of cultural influence, the theory of conspicuous consumption in consumer behaviour suggests that consumers spend beyond their means in order to appear similar to consumers in a higher socio-economic class. This results in consumers taking on more debt than they can afford and eventually becoming over-indebted. Studies have also shown that financial literacy is negatively related to consumer over-indebtedness risk. Whilst prior literature has examined structural and cultural influences separately, no study has taken a collective approach. To address this gap, a model is developed to investigate the association between consumer over-indebtedness and proxies for influences from the structural and cultural environment, based on the above theories. The model also controls for consumer demographic characteristics identified as influential in prior literature, such as gender and age, and for adverse shocks, such as divorce or bereavement in the household. Benefiting from SOEP regional data, this study is able to conduct a quantitative empirical analysis testing both structural and cultural influences at a localised level. Using German Socio-Economic Panel (SOEP) data from 2006 to 2016, this study finds that social benefits, financial literacy and the existence of conspicuous consumption all contribute to being over-indebted. Generally speaking, the risk of becoming over-indebted is high when consumers are in a low-welfare community, have little awareness of their own financial situation and habitually over-spend. In order to tackle the problem of over-indebtedness, countermeasures can be taken, for example increasing consumers’ financial awareness and the level of welfare support. By analysing the causes of consumer over-indebtedness in Germany, this study also provides new insights into the nature and underlying causes of consumer debt issues in Europe.
Keywords: consumer, debt, financial literacy, socio-economic
381 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods
Authors: Sohyoung Won, Heebal Kim, Dajeong Lim
Abstract:
Genomic prediction is an effective way to measure the breeding abilities of livestock based on genomic estimated breeding values, values statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability of a quantitative trait locus being in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes efficiently in genomic prediction, optimal ways of defining haplotypes need to be found. In this study, 770K SNP chip data were collected from a Hanwoo (Korean cattle) population consisting of 2506 cattle. Haplotypes were defined in three different ways using the 770K SNP chip data: based on 1) haplotype length (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, haplotypes defined by all methods were set to comparable sizes; for each method, haplotypes defined to contain an average of 5, 10, 20 or 50 SNPs were tested. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets. Carcass weight was used as the phenotype for testing. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP, with few differences in reliability between the haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes of around 20 SNPs may be optimal markers for genomic prediction. When the numbers of alleles generated by the haplotype-defining methods were compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performance. This suggests that defining haplotypes based on LD can reduce computational costs and allow efficient prediction. Finding optimal ways to define haplotypes and using haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium
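A minimal GBLUP sketch on synthetic data is given below; the marker counts, variance components and VanRaden's genomic relationship matrix are standard choices assumed here for illustration, not the study's pipeline.

```python
import numpy as np

# GBLUP sketch: markers (individual SNPs or haplotype-allele dosages) build
# a genomic relationship matrix G, and breeding values are predicted from it.
rng = np.random.default_rng(4)
n, m = 300, 1000                        # animals, markers
freq = rng.uniform(0.1, 0.9, m)
Z = rng.binomial(2, freq, (n, m)).astype(float)

# VanRaden's G = WW' / (2 * sum p(1-p)), with W the centred genotype matrix
W = Z - 2 * freq
G = W @ W.T / (2 * (freq * (1 - freq)).sum())

# Simulate phenotypes: y = g + e with heritability around 0.4
g = W @ rng.normal(0, np.sqrt(0.4 / m), m)
y = g + rng.normal(0, np.sqrt(0.6), n)

# GBLUP: ghat = G (G + lambda I)^-1 (y - mean), lambda = sigma_e^2 / sigma_g^2
lam = 0.6 / 0.4
ghat = G @ np.linalg.solve(G + lam * np.eye(n), y - y.mean())
print("prediction reliability (corr^2):", round(np.corrcoef(ghat, g)[0, 1]**2, 3))
```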
380 Factory Communication System for Customer-Based Production Execution: An Empirical Study on the Manufacturing System Entropy
Authors: Nyashadzashe Chiraga, Anthony Walker, Glen Bright
Abstract:
The manufacturing industry is currently experiencing a paradigm shift into the Fourth Industrial Revolution, in which customers are increasingly at the epicentre of production. The high degree of production customization and personalization requires a flexible manufacturing system that will rapidly respond to dynamic and volatile changes driven by the market. There is a gap in technology that allows for the optimal flow of information and optimal manufacturing operations on the shop floor regardless of rapid changes in fixture and part demands. Information is the reduction of uncertainty; it gives meaning and context to the state of each cell. The amount of information needed to describe cellular manufacturing systems is investigated through two measures: structural entropy and operational entropy. Structural entropy is the expected amount of information needed to describe the scheduled states of a manufacturing system, while operational entropy is the amount of information describing the states that actually occur during the manufacturing operation. Using the AnyLogic simulator, a typical manufacturing job shop was set up with a cellular manufacturing configuration. The cells in the configuration included a material handling cell, a 3D printer cell, an assembly cell, a manufacturing cell and a quality control cell. The factory shop provides manufactured parts to a number of clients; there are substantial variations in the part configurations, and new part designs are continually introduced to the system. Based on the normal expected production schedule, schedule adherence was calculated from the structural entropy and the operational entropy while varying the amount of information communicated in simulated runs. Structural entropy denotes a system that is in control: the necessary real-time information is readily available to the decision maker at any point in time. For contrastive analysis, different out-of-control scenarios were run, in which changes in the manufacturing environment were not effectively communicated, resulting in deviations from the original predetermined schedule. The operational entropy was calculated from the actual operations. The results of the empirical study show that increasing the efficiency of a factory communication system increases the degree of adherence of a job to the expected schedule: the performance of the downstream production flow, fed by the parallel upstream flow of information on the factory state, was increased.
Keywords: information entropy, communication in manufacturing, mass customisation, scheduling
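The two entropy measures can be sketched with Shannon's formula; the state sets and probabilities below are assumptions for illustration, not outputs of the AnyLogic model.

```python
import numpy as np

# Shannon entropy H = -sum p_i log2(p_i), computed over scheduled states
# (structural entropy) and over the states actually observed in operation
# (operational entropy); their gap indicates loss of control on the floor.
def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

# Assumed probabilities of a cell's states under the schedule...
scheduled = [0.70, 0.20, 0.10]           # e.g., busy / idle / blocked
# ...and as actually observed when changes are poorly communicated
observed  = [0.45, 0.25, 0.20, 0.10]     # extra unscheduled "waiting" state

H_struct = shannon_entropy(scheduled)    # structural entropy (bits)
H_oper = shannon_entropy(observed)       # operational entropy (bits)
print(f"structural {H_struct:.3f} bits, operational {H_oper:.3f} bits")
```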
379 Rapid Fetal MRI Using SSFSE, FIESTA and FSPGR Techniques
Authors: Chen-Chang Lee, Po-Chou Chen, Jo-Chi Jao, Chun-Chung Lui, Leung-Chit Tsang, Lain-Chyr Hwang
Abstract:
Fetal Magnetic Resonance Imaging (MRI) is a challenging task because fetal movements can cause motion artifacts in MR images. The remedy for this problem is to use fast scanning pulse sequences. The Single-Shot Fast Spin-Echo (SSFSE) T2-weighted imaging technique is routinely performed and often used as a gold standard in clinical examinations. Fast spoiled gradient-echo (FSPGR) T1-Weighted Imaging (T1WI) is often used to identify fat, calcification and hemorrhage. Fast Imaging Employing Steady-State Acquisition (FIESTA) is commonly used to identify fetal structures as well as the heart and vessels. The contrast of FIESTA images is related to T1/T2 and differs from that of SSFSE. The advantages and disadvantages of these two scanning sequences for fetal imaging have not yet been clearly demonstrated. This study aimed to compare these three rapid MRI techniques (SSFSE, FIESTA, and FSPGR) for fetal MRI examinations, exploring their image qualities and influencing factors. A 1.5T GE Discovery 450 clinical MR scanner with an eight-channel high-resolution abdominal coil was used. Twenty-five pregnant women were recruited to undergo fetal MRI examination with SSFSE, FIESTA and FSPGR scanning, and multi-oriented, multi-slice images were acquired. Afterwards, the MR images were interpreted and scored by two senior radiologists. The results showed that both SSFSE and T2W-FIESTA provide good image quality among the three rapid imaging techniques. Vessel signals on FIESTA images are higher than those on SSFSE images. The Specific Absorption Rate (SAR) of FIESTA is lower than that of the other two techniques, but FIESTA is prone to banding artifacts. FSPGR-T1WI renders a lower Signal-to-Noise Ratio (SNR) because it suffers severely from the impact of maternal and fetal movements. The scan times for the three sequences were 25 sec (T2W-SSFSE), 20 sec (FIESTA) and 18 sec (FSPGR). In conclusion, all three rapid MR scanning sequences can produce high-contrast, high-spatial-resolution images. The scan time can be shortened by incorporating parallel imaging techniques, so that the motion artifacts caused by fetal movements can be reduced. A good understanding of the characteristics of these three rapid MRI techniques helps technologists obtain reproducible, high-quality fetal anatomy images for prenatal diagnosis.
Keywords: fetal MRI, FIESTA, FSPGR, motion artifact, SSFSE
378 The Geometrical Cosmology: The Projective Cast of the Collective Subjectivity of the Chinese Traditional Architectural Drawings
Authors: Lina Sun
Abstract:
Chinese traditional drawings related to buildings and construction apply a unique geometry, different from Western Euclidean geometry, and embrace a collection of special terminologies under the category of tu (the Chinese character for drawing). This paper will, on one side, etymologically analyse the terminologies of Chinese traditional architectural drawing and, on the other, geometrically deconstruct the composition of tu and locate the visual narrative language of tu in the pictorial tradition. The geometrical analysis will center on selected series of Yang-shi-lei tu of the construction of emperors’ mausoleums in the Qing Dynasty (1636-1912), and will also draw on earlier architectural drawings and architectural paintings, such as jiehua and paintings on religious and tomb frescoes, for comparison. By doing so, this research will reveal that the terminologies corresponding to different geometrical forms indicate associations between architectural drawing and the philosophy of Chinese cosmology, and that the arrangement of the geometrical forms in the picture plane facilitates the expression of the concepts of space and position in the geometrical cosmology. These associations and expressions are the collective intentions of architectural drawing, evolving within an unbroken tradition of thousands of years and independent of individual authorship. Moreover, the architectural tu itself, as an entity, not only functions as a representation of buildings but also expresses intentions and strengthens them by using the unique Chinese geometrical language flexibly and intentionally. These collective cosmological-spatial intentions and the corresponding geometrical words and languages reveal that Chinese traditional architectural drawing functions as a unique architectural site with subjectivity, one that exists in parallel with buildings and expresses intentions and meanings by itself. The methodology and findings of this research will therefore challenge previous research that treats architectural drawings merely as representations of buildings, and will understand the drawings as more than just evidence for reconstructing information about buildings. Furthermore, this research will situate architectural drawing between the study of Chinese technological tu and that of artistic painting, bridging two academic areas that have usually treated partial features of architectural drawing separately. Beyond this research, the collective subjectivity of Chinese traditional drawings will help reveal the transitional experience from tradition to drawing modernity, where the individual subjective identities and intentions of architects arise. This research will support an understanding of both the ambivalence and the affinity of drawing modernity as it encounters tradition.
Keywords: Chinese traditional architectural drawing (tu), etymology of tu, collective subjectivity of tu, geometrical cosmology in tu, geometry and composition of tu, Yang-shi-lei tu
377 Subjective Temporal Resources: On the Relationship Between Time Perspective and Chronic Time Pressure to Burnout
Authors: Diamant Irene, Dar Tamar
Abstract:
Burnout, conceptualized within the framework of stress research, is to a large extent the result of a threat to time resources or a feeling of time shortage. In reaction to numerous tasks, deadlines, high output and the management of different duties encompassing work-home conflicts, many individuals experience ‘time pressure’. Time pressure is characterized as the perception of a lack of available time relative to the amount of workload. It can be a result of local objective constraints, but it can also be a chronic attribute of coping with life. As such, time pressure is associated in the literature with the general experience of stress and can therefore be a direct, contributory factor in burnout. The present study examines the relation of chronic time pressure (the feeling of time shortage and of being rushed) to another central aspect of subjective temporal experience: time perspective. Time perspective is a stable personal disposition capturing the extent to which people subjectively remember the past, live the present and/or anticipate the future. Based on Hobfoll’s Conservation of Resources Theory, it was hypothesized that individuals with chronic time pressure would experience a permanent threat to their time resources, resulting in relatively increased burnout. In addition, it was hypothesized that different time perspective profiles, based on Zimbardo’s typology of five dimensions (Past Positive, Past Negative, Present Hedonistic, Present Fatalistic, and Future), would be related to different magnitudes of chronic time pressure and of burnout. We expected that individuals with ‘Past Negative’ or ‘Present Fatalistic’ time perspectives would experience more burnout, with chronic time pressure being a moderator variable. Conversely, individuals with a ‘Present Hedonistic’ perspective, showing little concern for the future consequences of actions, would experience less chronic time pressure and less burnout. Another angle of temporal experience examined in this study is the gap between the actual distribution of time (in a typical day) and the desired distribution of time (how one would optimally distribute it during a day). It was hypothesized that this gap would correlate positively with both chronic time pressure and burnout. Data were collected through an online self-report survey distributed on social networks, with 240 participants (aged 21-65) recruited through convenience and snowball sampling from various organizational sectors. The results support the hypotheses and constitute a basis for future debate regarding the elements of burnout in the modern work environment, with an emphasis on subjective temporal experience. Our findings point to the importance of chronic and stable temporal experiences, such as time pressure and time perspective, in occupational experience. The findings are also discussed with a view to developing practical methods of burnout prevention.
Keywords: conservation of resources, burnout, time pressure, time perspective
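The moderation hypothesis can be sketched as an interaction term in a regression; the sketch below uses synthetic data and assumed effect sizes, so it illustrates the test, not the study's actual results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Moderation sketch: chronic time pressure moderating the link between a
# Past Negative time perspective and burnout, via an interaction term.
rng = np.random.default_rng(7)
n = 240
df = pd.DataFrame({
    "past_negative": rng.normal(0, 1, n),
    "time_pressure": rng.normal(0, 1, n),
})
# Assumed "true" model: the effect of Past Negative grows with time pressure
df["burnout"] = (0.3 * df.past_negative + 0.4 * df.time_pressure
                 + 0.25 * df.past_negative * df.time_pressure
                 + rng.normal(0, 1, n))

fit = smf.ols("burnout ~ past_negative * time_pressure", data=df).fit()
print(fit.params.round(3))   # the interaction term carries the moderation effect
```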
376 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables
Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner
Abstract:
High-frequency cables commonly connect modern devices and sensors. Interestingly, the proportion of electric components is rising fast in an attempt to achieve lighter and greener devices. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line (TL) and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables. We have derived a generalisation of the vertex scattering matrix to include non-uniform networks, i.e., networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behavior, at the vertices, of the propagating electric signal. We have successfully compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl's law. We investigate the nearest-neighbour level-spacing distribution of the resonances and compare our results with the predictions of Random Matrix Theory (RMT). To achieve this, we compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of communication capacity across cable networks, whose eventual aim is to enable detailed laboratory testing of information transfer rates using software-defined radio. We specialise this analysis in particular to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model against both the TL model and laboratory measurements. The growth of the eigenvalues compares well with Weyl's law, and the level-spacing distribution agrees well with RMT predictions. The results achieved in the MIMO application compare favourably with the predictions of parallel ongoing research (sponsored by NEMF21).
Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line
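The level-spacing comparison can be sketched as follows; as an assumption for illustration, a GOE random matrix stands in for the graph's resonance spectrum, and a crude unfolding (rescaling bulk spacings to unit mean) is used before comparing with the Wigner surmise.

```python
import numpy as np

# Nearest-neighbour level spacings of a GOE matrix vs. the Wigner surmise
# p(s) = (pi/2) s exp(-pi s^2 / 4), the RMT prediction for chaotic spectra.
rng = np.random.default_rng(5)
N = 1000
A = rng.normal(0, 1, (N, N))
H = (A + A.T) / np.sqrt(2 * N)                     # GOE ensemble member
ev = np.sort(np.linalg.eigvalsh(H))

s = np.diff(ev[N // 4: 3 * N // 4])                # spacings from the bulk
s /= s.mean()                                      # crude unfolding to unit mean

hist, edges = np.histogram(s, bins=20, range=(0, 3), density=True)
centres = (edges[:-1] + edges[1:]) / 2
wigner = (np.pi / 2) * centres * np.exp(-np.pi * centres**2 / 4)
print("max |empirical - Wigner surmise|:", np.abs(hist - wigner).max().round(3))
```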
375 Multi-Agent Searching Adaptation Using Levy Flight and Inferential Reasoning
Authors: Sagir M. Yusuf, Chris Baber
Abstract:
In this paper, we describe how to achieve knowledge understanding and prediction (Situation Awareness, SA) for multiple agents conducting a searching activity, using Bayesian inferential reasoning and learning. A Bayesian Belief Network was used to monitor the agents' knowledge about their environment, and cases were recorded for network training using the expectation-maximisation or gradient descent algorithm. The well-trained network is then used for decision making and environmental situation prediction. Forest fire searching by multiple UAVs was the use case: UAVs are tasked to explore a forest and find a fire for urgent action by the fire wardens. The paper focuses on two problems: (i) an effective path planning strategy for the agents and (ii) knowledge understanding and prediction (SA). The path planning approach, inspired by the animal mode of foraging and based on a Lévy distribution augmented with Bayesian reasoning, is fully described in this paper. Results show that the Lévy flight strategy performs better than previous fixed-pattern approaches (e.g., parallel sweeps) in terms of energy and time utilisation. We also introduce a waypoint assessment strategy called k-previous-waypoints assessment. It improves the performance of the ordinary Lévy flight by saving the agents' resources and mission time through avoiding redundant search. The agents (UAVs) report their mission knowledge to a central server for interpretation and prediction purposes. Bayesian reasoning and learning were used for SA, and the results demonstrate effectiveness in different environment scenarios in terms of prediction and knowledge representation. Prediction accuracy was measured using learning error rate, logarithmic loss, and Brier score, and the results show that even a small agent mission can be used for prediction within the same or a different environment. Finally, we describe a situation-based knowledge visualization and prediction technique for heterogeneous multi-UAV missions. While this paper demonstrates the linkage of Bayesian reasoning and learning with SA and an effective searching strategy, future work will focus on simplifying the architecture.
Keywords: Levy flight, distributed constraint optimization problem, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence
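A minimal sketch of generating a Lévy-flight search path is given below; it uses Mantegna's algorithm for heavy-tailed step lengths with an assumed exponent beta = 1.5 and uniform headings, and is an illustration rather than the authors' UAV planner.

```python
import numpy as np
from math import gamma, pi, sin

# Lévy-flight path: step lengths follow a heavy-tailed power law (drawn with
# Mantegna's algorithm for exponent beta), and headings are uniform at random.
def levy_steps(n, beta=1.5, rng=None):
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2**((beta - 1) / 2)))**(1 / beta)
    u = rng.normal(0, sigma_u, n)
    v = rng.normal(0, 1, n)
    return np.abs(u) / np.abs(v)**(1 / beta)   # heavy-tailed step lengths

rng = np.random.default_rng(6)
steps = levy_steps(500, beta=1.5, rng=rng)
angles = rng.uniform(0, 2 * np.pi, 500)
path = np.cumsum(np.column_stack([steps * np.cos(angles),
                                  steps * np.sin(angles)]), axis=0)
print("area roughly covered (bounding box):", np.ptp(path, axis=0).round(1))
```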
374 User Expectations and Opinions Related to Campus Wayfinding and Signage Design: A Case Study of Kastamonu University
Authors: Güllü Yakar, Adnan Tepecik
Abstract:
A university campus resembles an independent city spread over a wide area. Campuses that take in thousands of new domestic and international users at the beginning of every academic period also host scientific, cultural and sporting events, in addition to accommodating regular users such as students and staff. Wayfinding and signage systems are necessary for the regulation of vehicular traffic, and they enable users to navigate without losing time or feeling anxious. When designing such a system or testing its functionality, the opinions of existing users or the likely behaviors of typical user profiles (personas) provide designers with insight. The purpose of this study is to identify the wayfinding attitudes and expectations of users of the Kastamonu University Kuzeykent Campus. The study applies a mixed method in which a questionnaire developed by the researcher constitutes the quantitative phase. The survey was carried out with 850 participants who completed a questionnaire form tested for construct validity using Exploratory Factor Analysis. In interpreting the data, Chi-square, t-test and ANOVA analyses were applied, as well as descriptive analyses such as frequency (f) and percentage (%) values. The results of this survey, conducted while the campus lacked systematic wayfinding signs, reveal the participants' expectations for the addition of floor plans and wayfinding signs indoors, maps outdoors, and symbols and color codes on the existing signs, and for the adequate arrangement of these for use by visually impaired people. The directly proportional relation between length of institutional membership and wayfinding competency within the campus leads to the conclusion that newcomers in particular need wayfinding signs. To determine the effectiveness of the campus-wide wayfinding system implemented after the survey, and to identify users' further expectations in this respect, a semi-structured interview form developed by the researcher and the assessments of 20 participants were compiled. Subjected to content analysis, these data constitute the qualitative dimension of the study. The results indicate that despite the presence of the signs, participants experienced either inability or stress while finding their way, showed a tendency to seek help from others, and needed outdoor maps and signs as well as larger text.
Keywords: environmental graphic design, environmental perception, wayfinding and signage design, wayfinding system
Procedia PDF Downloads 237373 Integrated Geophysical Surveys for Sinkhole and Subsidence Vulnerability Assessment, in the West Rand Area of Johannesburg
Authors: Ramoshweu Melvin Sethobya, Emmanuel Chirenje, Mihlali Hobo, Simon Sebothoma
Abstract:
The recent surge in residential infrastructure development around the metropolitan areas of South Africa has created conditions under which thorough geotechnical assessments must be conducted prior to site development to ensure human and infrastructure safety. This paper appraises the successful application of multi-method geophysical techniques for the delineation of sinkhole vulnerability in a residential landscape. ERT, MASW, VES, magnetic and gravity surveys were conducted to assist in mapping sinkhole vulnerability, using an existing sinkhole as a constraint, at Venterspost town, west of the city of Johannesburg. Combining different geophysical techniques and integrating their results proved useful in delineating the lithologic succession around the sinkhole locality and in determining the geotechnical characteristics of each layer and its contribution to the development of sinkholes, subsidence and cavities in the vicinity of the site. The study results also assisted in determining the possible depth extension of the currently existing sinkhole and the locations of sites where other similar karstic features and sinkholes could form. Results of the ERT, VES and MASW surveys uncovered dolomitic bedrock at varying depths around the sites, exhibiting high resistivity values in the range of 2500-8000 ohm.m and correspondingly high velocities in the range of 1000-2400 m/s. The dolomite layer was found to be overlain by a weathered chert-poor dolomite layer, with resistivities in the range of 250-2400 ohm.m and velocities ranging from 500-600 m/s, within which the large sinkhole has collapsed/caved in. A compiled 2.5D high-resolution shear wave velocity (Vs) map of the study area was created from 2D profiles of MASW data, offering insight into the prevailing lithological setup conducive to the formation of various types of karstic features around the site. 3D magnetic models of the site highlighted regions of possible subsurface interconnection between the currently existing large sinkhole and the other subsidence feature at the site, and a number of depth slices were used to detail the conditions near the sinkhole as depth increases. Gravity survey results mapped the possible formational pathways for the development of new karstic features around the site. The combination and correlation of different geophysical techniques proved useful in delineating the site's geotechnical characteristics and in mapping the possible depth extent of the currently existing sinkhole.Keywords: resistivity, magnetics, sinkhole, gravity, karst, delineation, VES
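To make the quoted layer signatures concrete, the short Python sketch below flags a measurement point according to the resistivity and shear wave velocity ranges reported in the abstract; the thresholds come directly from those ranges, while the function itself and its sample inputs are purely illustrative.

    def classify_layer(resistivity_ohm_m, vs_m_s):
        # Rough lithology flag based on the ranges quoted in the abstract:
        #   dolomite bedrock:              2500-8000 ohm.m, Vs 1000-2400 m/s
        #   weathered chert-poor dolomite:  250-2400 ohm.m, Vs  500-600  m/s
        # Anything else is left unclassified.
        if 2500 <= resistivity_ohm_m <= 8000 and 1000 <= vs_m_s <= 2400:
            return "dolomite bedrock"
        if 250 <= resistivity_ohm_m <= 2400 and 500 <= vs_m_s <= 600:
            return "weathered chert-poor dolomite (sinkhole-prone)"
        return "unclassified"

    print(classify_layer(4200, 1500))  # -> dolomite bedrock
    print(classify_layer(900, 550))    # -> weathered chert-poor dolomite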
Procedia PDF Downloads 80372 The Influence of the Variety and Harvesting Date on Haskap Composition and Anti-Diabetic Properties
Authors: Aruma Baduge Kithma Hansanee De Silva
Abstract:
Haskap (Lonicera caerulea L.), also known as blue honeysuckle, is a recently commercialized berry crop in Canada. Haskap berries are rich in polyphenols, including anthocyanins, which are known for potential health-promoting effects. Cyanidin-3-O-glucoside (C3G) is the most prominent anthocyanin of haskap berries. Recent literature reveals the efficacy of C3G in reducing the risk of type 2 diabetes (T2D), which has become an increasingly common health issue around the world. T2D is characterized as a metabolic disorder of hyperglycemia and insulin resistance. It has been demonstrated that C3G has anti-diabetic effects in various ways, including improvement in insulin sensitivity and inhibition of the activities of carbohydrate-hydrolyzing enzymes, including alpha-amylase and alpha-glucosidase. The goal of this study was to investigate the influence of variety and harvesting date on haskap composition, biological properties, and anti-diabetic properties. The polyphenolic compounds present in four commercially grown haskap cultivars, Aurora, Rebecca, Larissa and Evie, across five harvesting stages (H1-H5), were extracted separately in 80% ethanol and analyzed to characterize their phenolic profiles. The haskap berries contain different types of polyphenols, including flavonoids and phenolic acids. Anthocyanin is the major type of flavonoid, and C3G is the most prominent anthocyanin, accounting for 79% of total anthocyanin in all extracts. The cultivar Larissa at H5 contained the highest average C3G content (1212.3±63.9 mg/100 g FW), while Evie at H1 contained the lowest (96.9±40.4 mg/100 g FW). The average C3G content of Larissa from H1 to H5 varied from 208 to 1212 mg/100 g FW. Quercetin-3-rutinoside (Q3Rut) is the major flavonol, observed at its highest level in Rebecca at H4 (47.81 mg/100 g FW). The haskap berries also contained phenolic acids, but approximately 95% of the phenolic acids consisted of chlorogenic acid. The cultivar Larissa has a higher level of anthocyanin than the other three cultivars. The highest total phenolic content was observed in Evie at H5 (2.97±1.03 mg/g DW) and the lowest in Rebecca at H1 (1.47±0.96 mg/g DW). The antioxidant capacity of Evie at H5 was the highest among the cultivars (14.40±2.21 µmol TE/g DW), and the lowest was observed in Aurora at H3 (5.69±0.34 µmol TE/g DW). Furthermore, Larissa at H5 showed the greatest inhibition of carbohydrate-hydrolyzing enzymes, including alpha-glucosidase and alpha-amylase. In conclusion, Larissa at H5 demonstrated the highest polyphenol composition and anti-diabetic properties.Keywords: anthocyanin, cyanidin-3-O-glucoside, haskap, type 2 diabetes
Procedia PDF Downloads 459371 Usability Evaluation of Four Big e-Commerce Websites in Indonesia
Authors: Harry B. Santoso, Lia Sadita, Firlia Sandyta, Musa Alfatih, Nove Spalo, Nu'man Naufal, Nuryahya P. Utomo, Putu A. Paramatha, Rezka Aufar Leonandya, Tommy Anugrah, Aulia Chairunisa, M. Fadly Uzzaki, Riandy D. Banimahendra
Abstract:
The number of active Internet users in Indonesia exceeds 88.1 million, of whom 48% are daily active users. Given these numbers, this is an excellent opportunity for IT companies to grow their business, especially e-Commerce. In fact, the growth of e-Commerce companies in Indonesia is proportional to the number of daily active Internet users. This phenomenon shows that competition among e-Commerce companies is intense, which pushes many of them to improve their services. The authors hypothesized that one of the best ways to improve the services is to improve their usability, and therefore conducted a study to evaluate the usability of these e-Commerce websites and to find ways to improve it. The authors chose four e-Commerce websites, each with a different business focus and profile, labeled A, B, C, and D. Company A is a fashion-based e-Commerce service with two million desktop visits in Indonesia. Company B is an international online shopping mall for everyday appliances with 48.3 million desktop visits in Indonesia. Company C is a localized online shopping mall with 3.2 million desktop visits in Indonesia. Company D is an online shopping mall with one million desktop visits in Indonesia. The authors used a popular web traffic analytics platform to obtain these numbers. There are several approaches to evaluating the usability of e-Commerce websites; in this study, the authors used the usability testing method supported by the User Experience Questionnaire. This method involves users interacting directly with the services provided by the e-Commerce company. The study was conducted within two months, including preparation, data collection, data analysis, and reporting, using a pair of computers, a screen-capture video application named Smartboard, and the User Experience Questionnaire. The team built to conduct this study consisted of one supervisor, two assistants, four facilitators and four observers. For each e-Commerce website, three users aged 17-25 were invited to perform five task scenarios. Data collected in this study included demographic information about the users, usability testing results, and users' responses to the questionnaire. Several findings emerged from the usability testing and the questionnaire. Compared to the other three companies, Company D had the lowest experience scores. One of the most painful issues identified in the evaluation was that most users reported feeling confused by the user interfaces of these e-Commerce websites. We believe that this study will help e-Commerce companies to improve their services and business in the future.Keywords: e-commerce, evaluation, usability testing, user experience
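Since the abstract relies on the User Experience Questionnaire without detailing how its scores are aggregated, the sketch below shows one common way of turning raw 1-7 item ratings into per-scale means on the -3..+3 range used in UEQ reporting. The item-to-scale mapping, the number of items, and the toy responses are assumptions for illustration (and reversal of negatively worded items is omitted for brevity); they are not taken from the study.

    import numpy as np

    # Hypothetical grouping of questionnaire items into UEQ-style scales;
    # the real UEQ assigns its 26 items to six scales, but this mapping
    # is illustrative only.
    SCALES = {
        "attractiveness": [0, 1, 2, 3],
        "perspicuity":    [4, 5, 6, 7],
        "efficiency":     [8, 9, 10, 11],
    }

    def ueq_scale_means(responses):
        # responses: (participants x items) array of 1..7 ratings.
        # Ratings are recoded to the -3..+3 range, then averaged per
        # scale across items and participants.
        recoded = np.asarray(responses, dtype=float) - 4.0
        return {name: recoded[:, items].mean() for name, items in SCALES.items()}

    # Three participants, twelve items (toy data).
    rng = np.random.default_rng(42)
    print(ueq_scale_means(rng.integers(1, 8, size=(3, 12))))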
Procedia PDF Downloads 317370 Experimental Study on Heat and Mass Transfer of Humidifier for Fuel Cell
Authors: You-Kai Jhang, Yang-Cheng Lu
Abstract:
The major contributions of this study are threefold: the design of a new planar-membrane humidifier for the Proton Exchange Membrane Fuel Cell (PEMFC), an index to measure the effectiveness (εT) of that humidifier, and an air compressor system for replicating related planar-membrane humidifier experiments. The PEMFC, as a clean energy technology, has become more and more important in recent years due to its reliability and durability. To maintain the efficiency of the fuel cell, the membrane of a PEMFC needs to be kept well hydrated, and maintaining proper membrane humidity is one of the key issues in optimizing a PEMFC. We developed a new humidifier to recycle water vapor from the cathode air outlet so as to maintain the moisture content of the cathode air inlet of a PEMFC. By measuring parameters such as the dry-side air outlet dew point temperature, dry-side air inlet temperature and humidity, wet-side air inlet temperature and humidity, and the differential pressure between the dry and wet sides, we calculated indices comprising the dew point approach temperature (DPAT), water flux (J), water recovery ratio (WRR), effectiveness (εT), and differential pressure (ΔP). Using these indices, we discussed six topics: the sealing effect, flow rate effect, flow direction effect, channel effect, temperature effect, and humidity effect. Gas cylinders are used as air supplies in many humidifier studies, but a gas cylinder depletes quickly during experiments at a 1 kW air flow rate, which makes replication difficult. In order to ensure highly stable air quality and better replication of experimental data, this study designed an air supply system to overcome this difficulty. The experimental results show that the best rate of pressure loss of the humidifier is 0.133×10³ Pa(g)/min at a torque of 25 N.m, and that the best humidifier performance occurs at air flow rates of 30-40 LPM. The counter-flow configured humidifier moisturizes the dry-side inlet air more effectively than the parallel-flow humidifier. Performance measurements of channel plates with various rib widths show that the narrower the rib width, the better the humidifier performance. Increasing the channel width at the same hydraulic diameter (Dh) yields higher εT and lower ΔP. Moreover, increasing the dry-side air inlet temperature or humidity leads to lower εT, and when the dry-side air inlet temperature exceeds 50°C, this effect becomes even more pronounced.Keywords: PEM fuel cell, water management, membrane humidifier, heat and mass transfer, humidifier performance
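Because the abstract names DPAT, J, WRR, and εT without spelling out their formulas, the following Python sketch encodes definitions commonly used in the membrane-humidifier literature; these are assumed forms for illustration and may differ in detail from the index definitions actually used in the paper (humidity ratios w are in kg water per kg dry air).

    def dpat(t_dew_wet_in_c, t_dew_dry_out_c):
        # Dew point approach temperature: gap between the wet-side inlet
        # dew point and the dry-side outlet dew point (smaller is better).
        return t_dew_wet_in_c - t_dew_dry_out_c

    def water_flux(m_dot_dry, w_dry_out, w_dry_in, area_m2):
        # Water flux J (kg/s.m^2): humidity-ratio gain of the dry-side
        # stream per unit membrane area; m_dot_dry is dry-air flow (kg/s).
        return m_dot_dry * (w_dry_out - w_dry_in) / area_m2

    def water_recovery_ratio(m_dot_dry, w_dry_out, w_dry_in,
                             m_dot_wet, w_wet_in):
        # WRR: fraction of the wet-side inlet water vapor recovered
        # into the dry-side stream.
        return (m_dot_dry * (w_dry_out - w_dry_in)) / (m_dot_wet * w_wet_in)

    def effectiveness(w_dry_out, w_dry_in, w_wet_in):
        # Effectiveness: actual humidity gain over the maximum possible
        # gain (dry inlet raised to the wet-side inlet condition).
        return (w_dry_out - w_dry_in) / (w_wet_in - w_dry_in)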
Procedia PDF Downloads 176369 CFD Modeling of Stripper Ash Cooler of Circulating Fluidized Bed
Authors: Ravi Inder Singh
Abstract:
Owing to their high heat transfer rates, high carbon utilization efficiency, fuel flexibility, and other advantages, numerous circulating fluidized bed (CFB) boilers have been built in India in the last decade. Many companies, such as BHEL, ISGEC, Thermax, Cethar Limited, and Enmas GB Power Systems Projects Limited, are making CFBC boilers and installing units throughout India. Because of their complexity, many problems exist in CFBC units, and only a few have been reported. Agglomeration, i.e., clinker formation in the riser, loop seal leg, and stripper ash coolers, is one problem the industry is facing, and proper documentation is rarely found in the literature. CFB boiler bottom ash contains large amounts of physical heat. When a boiler burns low-calorie fuel, the ash content is normally more than 40%, and the physical heat loss is approximately 3% if the bottom ash is discharged without cooling. In addition, red-hot bottom ash is unsuitable for mechanized handling and transportation, as the upper temperature limit of ash handling machinery is 200 °C. Therefore, a bottom ash cooler (BAC) is often used to treat the high-temperature bottom ash, reclaiming heat and making the ash easy to handle and transport. As a key auxiliary device of CFB boilers, the BAC has a direct influence on the secure and economic operation of the boiler. With the continuous development and improvement of CFB boilers, many kinds of BACs have been fitted to large-scale CFB boilers, including the water-cooled ash-cooling screw, the rolling-cylinder ash cooler (RAC), and the fluidized bed ash cooler (FBAC). In this study, a prototype of a novel stripper ash cooler is investigated. The circulating fluidized bed ash cooler (CFBAC) combines the major technical features of the spouted bed and the bubbling bed and can achieve selective discharge of the bottom ash. The novel stripper ash cooler is a bubbling bed studied here in a transparent cold test rig; a cold test was chosen because high temperatures are difficult to create and maintain at the laboratory scale. The aim of the study is to understand the flow pattern inside the stripper ash cooler. The cold rig prototype is similar to the stripper ash coolers used in industry and was built by scaling down certain parameters. The performance of a fluidized bed ash cooler is studied using this cold experimental bench, with the air flow rate, particle size of the solids, and air distributor type considered the key operating parameters of a fluidized bed ash cooler (FBAC).Keywords: CFD, Eulerian-Eulerian, Eulerian-Lagrangian model, parallel simulations
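The key operating parameters listed above (air flow rate and particle size) are usually related through the minimum fluidization velocity of the bed. As a hedged illustration, the Python sketch below applies the standard Wen & Yu correlation; the choice of correlation and the particle properties are assumptions for demonstration, not values from the study.

    import math

    def u_mf_wen_yu(d_p, rho_p, rho_g=1.2, mu=1.8e-5, g=9.81):
        # Minimum fluidization velocity (m/s) via the Wen & Yu (1966)
        # correlation: Re_mf = sqrt(33.7^2 + 0.0408*Ar) - 33.7.
        #   d_p   : particle diameter (m)
        #   rho_p : particle density (kg/m^3)
        #   rho_g : gas density (kg/m^3), ambient air by default
        #   mu    : gas dynamic viscosity (Pa.s)
        ar = rho_g * (rho_p - rho_g) * g * d_p**3 / mu**2  # Archimedes number
        re_mf = math.sqrt(33.7**2 + 0.0408 * ar) - 33.7
        return re_mf * mu / (rho_g * d_p)

    # 300-micron ash-like particles of density 2500 kg/m^3 (illustrative):
    print(f"u_mf = {u_mf_wen_yu(300e-6, 2500.0):.4f} m/s")  # ~0.07 m/s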
Procedia PDF Downloads 510368 Utilization of Chicken Skin Based Products as Fat Replacers for Improving the Nutritional Quality, Physico-Chemical Characteristics and Sensory Attributes of Beef Fresh Sausage
Authors: Hussein M. H. Mohamed, Hamdy M. B. Zaki
Abstract:
Fresh sausage is one of the cheapest and most delicious meat products and is gaining popularity all over the world. It is considered a practice of adding value to low-value meat cuts with high fat and connective tissue contents. One of the most important characteristics of fresh sausage is the distinctive marbling appearance between lean and fatty portions, which can be achieved by using animal fat; to achieve this marbling appearance, a large amount of fat needs to be used. The use of animal fat may, however, represent a health concern due to its content of saturated fatty acids and trans-fats, which increase the risk of heart disease. There is thus a need to reduce the fat content of fresh sausage to obtain a healthier product. However, fat is responsible for the texture, flavor, and juiciness of the product, so developing reduced-fat products is a challenging process. The main objectives of the current study were to incorporate chicken skin based products (chicken skin emulsion, gelatinized chicken skin, and gelatinized chicken skin emulsion) as fat replacers during the formulation of fresh sausage and to study the effect of these products on the nutritional quality, physico-chemical properties, and sensory attributes of the processed product. Three fresh sausage formulae were prepared using the chicken skin based fat replacers, alongside one control formula prepared using mesenteric beef fat. The proximate composition, fatty acid profiles, physico-chemical characteristics, and sensory attributes of all formulae were assessed. The results revealed that the use of chicken skin based fat replacers resulted in a significant (P < 0.05) reduction of fat content from 17.67% in the beef mesenteric fat formulated sausage to 5.77%, 8.05% and 8.46% in the chicken skin emulsion, gelatinized chicken skin, and gelatinized chicken skin emulsion formulated sausages, respectively. A significant reduction in saturated fatty acid content and a significant increase in mono-unsaturated, poly-unsaturated, and omega-3 fatty acids were observed in all formulae processed with chicken skin based fat replacers. Moreover, significant improvements in the physico-chemical characteristics and non-significant changes in the sensory attributes were obtained. From these results, it can be concluded that chicken skin based products can be used safely to improve the nutritional quality and physico-chemical properties of beef fresh sausages without changing the sensory attributes of the product. This study may encourage meat processors to utilize chicken skin based fat replacers for the production of high-quality and healthy beef fresh sausages.Keywords: chicken skin emulsion, fresh sausage, gelatinized chicken skin, gelatinized chicken skin emulsion
Procedia PDF Downloads 130367 Transformative Measures in Chemical and Petrochemical Industry Through Agile Principles and Industry 4.0 Technologies
Authors: Bahman Ghorashi
Abstract:
The immense awareness of global climate change has compelled traditional fossil fuel companies to develop strategies to reduce their carbon footprint and simultaneously to consider producing various sources of clean energy in order to mitigate the environmental impact of their operations. Similarly, supply chain issues, the scarcity of certain raw materials, energy costs, market needs, and changing consumer expectations have forced the traditional chemical industry to reexamine its time-honored modes of operation. This study examines how such transformative change might occur through the application of agile principles and Industry 4.0 technologies. Clearly, such a transformation is complex and costly, and it requires total commitment on the part of the top leadership and the entire management structure. Factors that need to be considered include the organization's speed of change; a restructuring that lends itself to collaboration and to selling solutions to customers' problems rather than just products; integrating 'along' as well as 'across' value chains; mastering change and uncertainty; recognizing the importance of concept-to-cash time, i.e., the velocity of introducing new products to market; and leveraging people and information. At the same time, parallel to implementing such major shifts in the ethos and fabric of the organization, change leaders should remain mindful of the company's DNA while incorporating the necessary DNA-defying shifts. Furthermore, such strategic maneuvers should inevitably incorporate managing the upstream and downstream operations, harnessing future opportunities, preparing and training the workforce, implementing faster decision making and quick adaptation to change, managing accelerated response times, and forming autonomous, cross-functional teams. Moreover, leaders should establish the balance between high-value solutions and high-margin products, fully digitize operations and, when appropriate, incorporate the latest relevant technologies, such as AI, IIoT, ML, and immersive technologies. This study presents a summary of the agile principles and the relevant technologies, and it draws lessons from some of the best practices already implemented within the chemical industry in order to establish a roadmap to agility. Finally, the critical role of educational institutions in preparing the future workforce for Industry 4.0 is addressed.Keywords: agile principles, immersive technologies, industry 4.0, workforce preparation
Procedia PDF Downloads 106366 An Energy and Economic Comparison of Solar Thermal Collectors for Domestic Hot Water Applications
Authors: F. Ghani, T. S. O’Donovan
Abstract:
Today, the global solar thermal market is dominated by two collector types: the flat plate collector and the evacuated tube collector. With regard to the number of installations worldwide, the evacuated tube collector is the dominant variant, primarily due to the Chinese market, but the flat plate collector dominates both the Australian and European markets. The market share of the evacuated tube collector is, however, growing in Australia due to a common belief that this collector type is 'more efficient' and, therefore, the better choice for hot water applications. In this study, we investigate this issue to assess the validity of that belief. This was achieved by methodically comparing the performance and economics of several solar thermal systems comprising a low-performance flat plate collector, a high-performance flat plate collector, and an evacuated tube collector, each coupled with a storage tank and pump. All systems were simulated using the commercial software package Polysun for four climate zones in Australia, to take different weather profiles into account, and were subjected to a thermal load equivalent to a household of four people. Our study revealed that energy savings and payback periods varied significantly for systems operating under specific environmental conditions: solar fractions ranged between 58 and 100 per cent, while payback periods ranged between 3.8 and 10.1 years. Although the evacuated tube collector was found to operate with a marginally higher thermal efficiency than the selective surface flat plate collector due to reduced ambient heat loss, the high-performance flat plate collector outperformed the evacuated tube collector on thermal yield, because the flat plate collector possesses a significantly higher absorber-to-gross collector area ratio. Furthermore, it was found that for Australian regions with high average solar radiation intensity and ambient temperature, the lower performance collector is the preferred choice due to favorable economics and reduced stagnation temperature. Our study has provided additional insight into the thermal performance and economics of the two prevalent solar thermal collectors currently available, through a computational investigation carried out specifically for the Australian climate, chosen for its geographic size and significant variation in weather. For domestic hot water applications where fluid temperatures between 50 and 60 degrees Celsius are sought, the flat plate collector is both technically and economically favorable over the evacuated tube collector. This research will be useful to system design engineers, solar thermal manufacturers, and those involved in policy to encourage the implementation of solar thermal systems in the hot water market.Keywords: solar thermal, energy analysis, flat plate, evacuated tube, collector performance
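As a rough illustration of the quantities compared in the study, the Python sketch below evaluates a standard quadratic (EN 12975-type) collector efficiency curve together with simple solar fraction and undiscounted payback calculations; all coefficients, costs, and tariffs are invented for demonstration and are not the Polysun inputs or results of the paper.

    def collector_efficiency(eta0, a1, a2, t_mean_c, t_amb_c, g_w_m2):
        # Steady-state efficiency from the quadratic collector model:
        # eta = eta0 - a1*dT/G - a2*dT^2/G, with dT = T_mean - T_ambient.
        dt = t_mean_c - t_amb_c
        return eta0 - a1 * dt / g_w_m2 - a2 * dt**2 / g_w_m2

    def solar_fraction(solar_yield_kwh, total_demand_kwh):
        # Fraction of the hot water demand met by solar energy.
        return min(solar_yield_kwh / total_demand_kwh, 1.0)

    def simple_payback_years(capital_cost, annual_saved_kwh, tariff_per_kwh):
        # Simple (undiscounted) payback period.
        return capital_cost / (annual_saved_kwh * tariff_per_kwh)

    # Illustrative flat plate at 55 C mean, 25 C ambient, 900 W/m^2:
    eta = collector_efficiency(0.78, 3.5, 0.015, 55.0, 25.0, 900.0)
    print(f"efficiency ~ {eta:.2f}")                                  # ~0.65
    print(f"solar fraction = {solar_fraction(2600.0, 3200.0):.2f}")   # 0.81
    print(f"payback = {simple_payback_years(4000.0, 2600.0, 0.30):.1f} years")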
Procedia PDF Downloads 210365 Electrodeposition of Silicon Nanoparticles Using Ionic Liquid for Energy Storage Application
Authors: Anjali Vanpariya, Priyanka Marathey, Sakshum Khanna, Roma Patel, Indrajit Mukhopadhyay
Abstract:
Silicon (Si) is a promising negative electrode material for lithium-ion batteries (LiBs) due to its low cost, non-toxicity, and high theoretical capacity of 4200 mAhg⁻¹. The primary challenge in the application of Si-based LiBs is the large volume expansion (~300%) during the charge-discharge process. Incorporation of graphene or carbon nanotubes (CNTs), morphological control, and the use of nanoparticles have been employed as effective strategies to tackle the volume expansion issue. Molten salt methods can also resolve it, but their high-temperature requirement limits their application; for a sustainable and practical approach, room temperature (RT) methods are essential. The use of ionic liquids (ILs) as a greener medium for the electrodeposition of Si nanostructures can potentially resolve the temperature issue. In this work, electrodeposition of Si nanoparticles on a gold substrate was successfully carried out at room temperature in the IL medium 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (BMImTf₂N). Cyclic voltammetry (CV) suggests the sequential reduction of Si⁴⁺ to Si²⁺ and then to Si nanoparticles (SiNs). The structure and morphology of the electrodeposited SiNs were investigated by FE-SEM, which showed interconnected Si nanoparticles with an average particle size of ~100-200 nm. XRD and XPS data confirm the deposition of Si on Au (111). The first discharge and charge capacities of the Si anode material were found to be 1857 and 422 mAhg⁻¹, respectively, at a current density of 7.8 Ag⁻¹. The irreversible capacity of the first discharge-charge process can be attributed to solid electrolyte interface (SEI) formation via electrolyte decomposition and to trapped Li⁺ inserted into the inner pores of the Si. Pulverization of the SiNs creates new active sites, which facilitate the formation of new SEI in subsequent cycles, leading to fading of the specific capacity. After 20 cycles, the charge-discharge profiles stabilized, and a reversible capacity of 150 mAhg⁻¹ was retained. Electrochemical impedance spectroscopy (EIS) data show a decrease in the Rct value from 94.7 to 47.6 kΩ after 50 charge-discharge cycles, demonstrating improved interfacial charge transfer kinetics. The decrease in the Warburg impedance after 50 charge-discharge cycles indicates facile diffusion in the fragmented, smaller Si nanoparticles. In summary, Si nanoparticles were deposited on a gold substrate using an IL medium, characterized with different analytical techniques, and successfully utilized for a LiB application, as supported by the CV and EIS data.Keywords: silicon nanoparticles, ionic liquid, electrodeposition, cyclic voltammetry, Li-ion battery
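For readers who want to connect the reported numbers, the short Python sketch below computes gravimetric capacity from current, time, and active mass, along with the first-cycle coulombic efficiency implied by the abstract's 1857 and 422 mAhg⁻¹ values; the 1 mg mass and 0.238 h discharge time are back-calculated assumptions for illustration only.

    def specific_capacity_mah_g(current_a, time_h, active_mass_g):
        # Gravimetric capacity Q = I * t / m, in mAh/g.
        return current_a * 1000.0 * time_h / active_mass_g

    def coulombic_efficiency(charge_mah_g, discharge_mah_g):
        # First-cycle coulombic efficiency (charge/discharge ratio).
        return charge_mah_g / discharge_mah_g

    # First-cycle values reported in the abstract:
    print(f"CE_1 = {coulombic_efficiency(422.0, 1857.0):.1%}")  # ~22.7%

    # Example: 1 mg of Si cycled at 7.8 A/g (7.8 mA) for 0.238 h
    # reproduces the reported first discharge capacity:
    print(f"Q = {specific_capacity_mah_g(7.8e-3, 0.238, 1e-3):.0f} mAh/g")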
Procedia PDF Downloads 125364 The Emergence of Memory at the Nanoscale
Authors: Victor Lopez-Richard, Rafael Schio Wengenroth Silva, Fabian Hartmann
Abstract:
Memcomputing is a computational paradigm that combines information processing and storage on the same physical platform. Key elements in this field are devices with inherent memory, such as memristors, memcapacitors, and meminductors. Despite the widespread emergence of memory effects in various solid systems, achieving a clear understanding of the basic microscopic mechanisms that trigger them remains a puzzling task. We report basic ingredients of the theory of solid-state transport, intrinsic to a wide range of mechanisms, as sufficient conditions for a memristive response, pointing to the natural emergence of memory. This emergence should be discernible under an adequate set of driving inputs, as highlighted by our theoretical predictions. General common trends can thus be listed that become the rule rather than the exception, with contrasting signatures according to symmetry constraints, either built-in or induced by external factors at the microscopic level. Explicit analytical figures of merit for the memory modulation of the conductance are presented, unveiling concise and accessible correlations of general intrinsic microscopic parameters, such as relaxation times, activation energies, and efficiencies (encountered throughout various fields of physics), with external drives such as voltage pulses, temperature, and illumination. These building blocks of memory can be extended to a vast universe of materials and devices, with combinations of parallel and independent transport channels, providing an efficient and unified physical explanation for a wide class of resistive memory devices that have emerged in recent years. The simplicity and practicality of the approach have also allowed direct correlation with reported experimental observations, with the potential to point out optimal driving configurations. The main methodological tools combine three quantum transport approaches, namely a Drude-like model, the Landauer-Büttiker formalism, and field-effect transistor emulators, with the microscopic characterization of nonequilibrium dynamics. Both qualitative and quantitative agreement with available experimental responses is provided to validate the main hypothesis. This analysis also sheds light on the basic universality of the complex natural impedances of systems out of equilibrium and might help pave the way for new trends in the area of memory formation as well as in its technological applications.Keywords: memories, memdevices, memristors, nonequilibrium states
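To make the idea of conductance memory concrete, here is a minimal generic Python sketch of a memristive response: a slow internal state relaxes toward a voltage-dependent target, so sweeping the voltage traces a pinched hysteresis loop in the I-V plane. The sigmoid target, relaxation time, and conductance bounds are toy assumptions, not the analytical figures of merit derived in the paper.

    import numpy as np

    # Generic memristive toy model: conductance G depends on a slow
    # internal state x that relaxes toward a voltage-dependent target
    # with relaxation time TAU.
    G_OFF, G_ON, TAU = 1e-6, 1e-4, 0.05  # siemens, siemens, seconds

    def x_target(v, v0=0.3):
        # Voltage-dependent equilibrium occupation (sigmoid).
        return 1.0 / (1.0 + np.exp(-v / v0))

    def simulate(v_of_t, dt=1e-4):
        # Forward-Euler integration of dx/dt = (x_target(V) - x) / TAU;
        # returns the current I = G(x) * V at each time step.
        x, currents = 0.0, []
        for v in v_of_t:
            x += dt * (x_target(v) - x) / TAU
            g = G_OFF + x * (G_ON - G_OFF)
            currents.append(g * v)
        return np.array(currents)

    t = np.arange(0.0, 0.4, 1e-4)
    v = np.sin(2.0 * np.pi * 10.0 * t)  # 10 Hz sinusoidal drive
    i = simulate(v)
    # Plotting i against v traces the pinched hysteresis loop that is
    # the fingerprint of a memristive response: I = 0 whenever V = 0,
    # yet the up- and down-sweeps follow different branches because the
    # state x lags the drive on the scale of TAU.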
Procedia PDF Downloads 97