Search results for: maximum power point tracking
1189 Sustainable Development Change within Our Environs
Authors: Akinwale Adeyinka
Abstract:
Critical natural resources such as clean ground water, fertile topsoil, and biodiversity are diminishing at an exponential rate, orders of magnitude above that at which they can be regenerated. World population records show over 6 billion people on earth, with almost a quarter million added each day, so the scale of human activity and environmental impact is unprecedented. Soaring human population growth over the past century has created a visible challenge to earth's life support systems. In addition, the world faces an onslaught of other environmental threats including global climate change, global warming, intensified acid rain, stratospheric ozone depletion and health-threatening pollution. Overpopulation and the use of deleterious technologies combine to increase the scale of human activities to a level that underlies all of these problems. These intensifying trends cannot continue indefinitely; hopefully, through increased understanding and valuation of ecosystems and their services, earth's basic life-support system will be protected for the future. In fact, human civilization is now the dominant cause of change in the global environment. Now that our relationship to the earth has changed so utterly, we have to see that change and understand its implications. There are two aspects to this challenge. The first is to realize that our power to harm the earth can indeed have global and even permanent effects. The second is to realize that the only way to understand our new role as a co-architect of nature is to see ourselves as part of a complex system that does not operate according to the simple rules of cause and effect we are used to. Understanding the physical and biological dimensions of the earth system is therefore an important precondition for making sensible policy to protect our environment. 
We believe sustainable development is a matter of reconciling respect for the environment, social equity and economic profitability. We also strongly believe that environmental protection is naturally about reducing air and water pollution, but it also includes improving the environmental performance of existing processes. That is why we should always keep at the heart of our business the idea that the environmental problem is not so much our effect on the environment as our relationship with the environment. We should always strive to be environmentally friendly in our operations.
Keywords: stratospheric ozone depletion, climate change, global warming, social equity, economic profitability
Procedia PDF Downloads 340
1188 Endometrial Biopsy Curettage vs Endometrial Aspiration: Better Modality in Female Genital Tuberculosis
Authors: Rupali Bhatia, Deepthi Nair, Geetika Khanna, Seema Singhal
Abstract:
Introduction: Genital tract tuberculosis is a chronic disease (caused by reactivation of organisms from systemic distribution of Mycobacterium tuberculosis) that often presents with low-grade symptoms and non-specific complaints. Patients with genital tuberculosis are usually young women seeking workup and treatment for infertility. Infertility is the commonest presentation due to involvement of the fallopian tubes and endometrium and ovarian damage with poor ovarian volume and reserve. The diagnosis of genital tuberculosis is difficult because it is a silent invader of the genital tract. Since tissue cannot be obtained from the fallopian tubes, the diagnosis is made by isolation of bacilli from endometrial tissue obtained by endometrial biopsy curettage and/or aspiration. Problems are associated with the sampling technique as well as the diagnostic modality due to lack of adequate sample volumes and the segregation of the sample for various diagnostic tests, resulting in non-uniform distribution of microorganisms. Moreover, the lack of an efficient sampling technique universally applicable to all specific diagnostic tests contributes to the diagnostic challenges. Endometrial sampling plays a key role in accurate diagnosis of female genital tuberculosis. It may be done by two methods, viz. endometrial curettage and endometrial aspiration. Both have their own limitations, as curettage picks up a strip of the endometrium from one of the walls of the uterine cavity including the tubal ostial areas, whereas aspiration obtains total tissue with exfoliated cells present in the secretory fluid of the endometrial cavity. Further, sparse and uneven distribution of the bacilli remains a major factor contributing to the limitations of the techniques. The sample obtained by either technique is subjected to histopathological examination, AFB staining, culture and PCR. Aim: Comparison of the sampling techniques viz. 
endometrial biopsy curettage and endometrial aspiration using different laboratory methods of histopathology, cytology, microbiology and molecular biology. Method: In a hospital-based observational study, 75 Indian females suspected of genital tuberculosis were selected on the basis of inclusion criteria. The women underwent endometrial tissue sampling using a Novak's biopsy curette and a Karman cannula. One part of the specimen obtained was sent in formalin solution for histopathological testing and another part was sent in normal saline for acid fast bacilli smear, culture and polymerase chain reaction. The results so obtained were correlated using the coefficient of correlation and the chi-square test. Result: Concordance of results showed moderate agreement between the two sampling techniques. Among HPE, AFB and PCR, maximum sensitivity was observed for PCR, though its specificity was not as high as that of the other techniques. Conclusion: Statistically, no significant difference was observed between the results obtained by the two sampling techniques. Therefore, one may use either EA or EB to obtain endometrial samples and avoid multiple sampling, as both techniques are equally efficient in diagnosing genital tuberculosis by HPE, AFB, culture or PCR.
Keywords: acid fast bacilli (AFB), histopathology examination (HPE), polymerase chain reaction (PCR), endometrial biopsy curettage
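The "moderate agreement" reported between the two sampling techniques is conventionally quantified with Cohen's kappa on a paired 2x2 table. A minimal sketch (the counts below are hypothetical, chosen only for illustration, not the study's data):

```python
# Illustrative agreement statistics for two paired diagnostic techniques
# (EB curettage vs. EA aspiration) on a binary TB-positive/negative call.

def cohens_kappa(a, b, c, d):
    """2x2 paired table: a = both positive, b = EB+/EA-, c = EB-/EA+, d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                                       # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

def sensitivity_specificity(tp, fn, tn, fp):
    """Classical sensitivity and specificity against a reference standard."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for 75 patients (for illustration only)
kappa = cohens_kappa(a=15, b=9, c=8, d=43)
print(f"kappa = {kappa:.2f}")  # 0.41-0.60 is conventionally 'moderate agreement'
```

Values of kappa between 0.41 and 0.60 are conventionally labeled "moderate agreement", matching the wording of the abstract.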
Procedia PDF Downloads 329
1187 Tailoring Quantum Oscillations of Excitonic Schrodinger’s Cats as Qubits
Authors: Amit Bhunia, Mohit Kumar Singh, Maryam Al Huwayz, Mohamed Henini, Shouvik Datta
Abstract:
We report [https://arxiv.org/abs/2107.13518] experimental detection and control of a Schrodinger's Cat-like, macroscopically large, quantum coherent state of a two-component Bose-Einstein condensate of spatially indirect electron-hole pairs, or excitons, using a resonant tunneling diode of III-V semiconductors. This provides access to millions of excitons as qubits to allow efficient, fault-tolerant quantum computation. In this work, we measure phase-coherent periodic oscillations in photo-generated capacitance as a function of applied voltage bias and light intensity over a macroscopically large area. Periodic presence and absence of splitting of excitonic peaks in the optical spectra measured by photocapacitance point towards tunneling-induced variations in capacitive coupling between the quantum well and quantum dots. Observation of negative 'quantum capacitance' due to screening of charge carriers by the quantum well indicates Coulomb correlations of interacting excitons in the plane of the sample. We also establish that coherent resonant tunneling in this well-dot heterostructure restricts the available momentum space of the charge carriers within the quantum well. Consequently, the electric polarization vectors of the associated indirect excitons collectively orient along the direction of applied bias, and these excitons undergo Bose-Einstein condensation below ~100 K. Generation of interference beats in photocapacitance oscillations even with incoherent white light further confirms the presence of stable, long-range spatial correlation among these indirect excitons. We finally demonstrate collective Rabi oscillations of these macroscopically large, 'multipartite', two-level, coupled and uncoupled quantum states of the excitonic condensate as qubits. 
Therefore, our study not only brings the physics and technology of Bose-Einstein condensation within the reach of semiconductor chips but also opens up experimental investigations of the fundamentals of quantum physics using similar techniques. Operational temperatures of such two-component excitonic BECs can be raised further with a more densely packed, ordered array of QDs and/or using materials having larger excitonic binding energies. However, fabrication of single crystals of 0D-2D heterostructures using 2D materials (e.g. transition metal di-chalcogenides, oxides, perovskites, etc.) having higher excitonic binding energies is still an open challenge for semiconductor optoelectronics. As of now, these 0D-2D heterostructures can already be scaled up for mass production of miniaturized, portable quantum optoelectronic devices using existing III-V and/or nitride-based semiconductor fabrication technologies.
Keywords: exciton, Bose-Einstein condensation, quantum computation, heterostructures, semiconductor physics, quantum fluids, Schrodinger's Cat
Procedia PDF Downloads 184
1186 Recognition of Spelling Problems during the Text in Progress: A Case Study on the Comments Made by Portuguese Students Newly Literate
Authors: E. Calil, L. A. Pereira
Abstract:
The acquisition of orthography is a complex process, involving both lexical and grammatical questions. This learning occurs simultaneously with the mastery of multiple textual aspects (e.g., graphs, punctuation, etc.). However, most of the research on orthographic acquisition approaches it from an autonomous point of view, separated from the process of textual production. This means that the object of analysis is the production of words selected by the researcher, or sentences requested in an experimental and controlled setting. In addition, the Spelling Problems (SP) are identified by the researcher on the sheet of paper. Adopting the perspective of Textual Genetics, from an enunciative approach, this study discusses the SPs recognized by dyads of newly literate students while they are writing a text collaboratively. Six textual production proposals, requested by a 2nd-year teacher of a Portuguese primary school between January and March 2015, were recorded. In our case study, we discuss the SPs recognized by the dyad B and L (7 years old). We adopted the Ramos System of audiovisual recording as a methodological tool. This system allows real-time capture of the text in progress and of the face-to-face dialogue between both students and their teacher, and also captures the body movements and facial expressions of the participants during the textual production proposals in the classroom. In these ecological conditions of multimodal registration of collaborative writing, we could identify the emergence of SPs in two dimensions: i. in the product (finished text): identification of SPs without recursive graphic marks (without erasures) and identification of SPs with erasures, indicating the recognition of the SP by the student; ii. in the process (text in progress): identification of comments made by students about recognized SPs. Given this, we analyzed the comments on SPs identified during the text in progress. 
These comments characterize a type of reformulation referred to as Commented Oral Erasure (COE). The COE has two enunciative forms: the Simple Comment (SC), such as ' 'X' is written with 'Y' '; and the Unfolded Comment (UC), such as ' 'X' is written with 'Y' because...'. The spelling COE may also occur before or during the writing of the SP (Early Spelling Recognition - ESR) or after the SP has been written (Later Spelling Recognition - LSR). There were 631 words written in the 6 stories produced by the B-L dyad, 145 of them containing some type of SP. During the text in progress, the students orally recognized 174 SPs, 46 of which were identified in advance (ESRs) and 128 of which were identified later (LSRs). If we consider that the 88 erased SPs in the product indicate some form of SP recognition, we can observe that there were twice as many SPs recognized orally. The ESR was characterized by the SC, when students asked their colleague or teacher how to spell a given word. The LSR presented predominantly the UC, verbalizing meta-orthographic arguments, mostly made by L. These results indicate that writing in dyads is an important didactic strategy for the promotion of metalinguistic reflection, favoring the learning of spelling.
Keywords: collaborative writing, erasure, learning, metalinguistic awareness, spelling, text production
Procedia PDF Downloads 166
1185 Influence of Glass Plates Different Boundary Conditions on Human Impact Resistance
Authors: Alberto Sanchidrián, José A. Parra, Jesús Alonso, Julián Pecharromán, Antonia Pacios, Consuelo Huerta
Abstract:
Glass is a commonly used material in building; there is not a unique design solution, as plates with different numbers of layers and interlayers may be used. In most façades, security glazing has to be used according to its performance in the impact pendulum test. The European Standard EN 12600 establishes an impact test procedure for classification, from the point of view of human security, of flat plates with different thicknesses, using a pendulum of two tires and 50 kg mass that impacts against the plate from different heights. However, this test does not replicate the actual dimensions and border conditions used in building configurations, and so the real stress distribution is not determined with this test. The influence of different boundary conditions, such as the ones employed on construction sites, is not well taken into account when testing the behaviour of safety glazing, and there is no detailed procedure and criteria to determine the glass resistance against human impact. To reproduce the actual boundary conditions on site, when needed, the pendulum test is arranged to be used 'in situ', with no account taken of load control or stiffness, and without a standard procedure. The fracture stress of small and large glass plates fits a Weibull distribution with quite a big dispersion, so conservative values are adopted for the admissible fracture stress under static loads. In fact, tests performed for human impact give a fracture strength two or three times higher, and many times without a total fracture of the glass plate. Newer standards, for example DIN 18008-4, allow an admissible fracture stress 2.5 times higher than the ones used for static and wind loads. Two working areas are now open: a) to define a standard for the 'in situ' test; b) to prepare a laboratory procedure that allows testing with a more realistic stress distribution. 
To work on both research lines, a laboratory that allows testing medium-size specimens with different border conditions has been developed. A special steel frame allows reproducing the stiffness of the glass support substructure, including a rigid condition used as reference. The dynamic behaviour of the glass plate and its support substructure has been characterized with finite element models updated with modal test results. In addition, a new portable impact machine is being used to get sufficient force and direction control during the impact test. An impact energy of 100 J is used. To avoid problems with broken glass plates, the tests have been done using an aluminium plate of 1000 mm x 700 mm size and 10 mm thickness supported on four sides; three different substructure stiffness conditions are used. A detailed control of the dynamic stiffness and the behaviour of the plate is done with modal tests. Repeatability of the test and reproducibility of the results prove that a procedure to control both the stiffness of the plate and the impact level is necessary.
Keywords: glass plates, human impact test, modal test, plate boundary conditions
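The Weibull treatment of fracture stress mentioned above is commonly fitted with the standard probability-plot linearization. A minimal sketch, using synthetic data and illustrative parameter values rather than the authors' measurements:

```python
import math

def weibull_fit(stresses):
    """Estimate Weibull modulus m and characteristic strength s0 from a
    sample of fracture stresses via the probability-plot linearization
    ln(-ln(1 - F_i)) = m*ln(sigma_i) - m*ln(s0), with F_i = (i + 0.5)/n."""
    xs = sorted(stresses)
    n = len(xs)
    X = [math.log(s) for s in xs]
    Y = [math.log(-math.log(1 - (i + 0.5) / n)) for i in range(n)]
    mx, my = sum(X) / n, sum(Y) / n
    m = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / sum((x - mx) ** 2 for x in X)
    s0 = math.exp(mx - my / m)  # intercept -m*ln(s0) recovered from the fit
    return m, s0

# Synthetic sample drawn at the plotting positions of a Weibull law with
# m = 6 and s0 = 50 MPa (hypothetical values, not measured glass data)
n = 200
sample = [50.0 * (-math.log(1 - (i + 0.5) / n)) ** (1 / 6) for i in range(n)]
m_est, s0_est = weibull_fit(sample)
# A low modulus m means large dispersion, which is why conservative
# admissible stresses are adopted for static design.
```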
Procedia PDF Downloads 312
1184 Deflagration and Detonation Simulation in Hydrogen-Air Mixtures
Authors: Belyayev P. E., Makeyeva I. R., Mastyuk D. A., Pigasov E. E.
Abstract:
Previously, the phrase "hydrogen safety" was mostly used in the context of NPP safety. Due to the rising interest in "green" and, particularly, hydrogen power engineering, the problem of hydrogen safety at industrial facilities has become ever more urgent. In Russia, industrial production of hydrogen is meant to be performed by placing a chemical engineering plant near an NPP, which supplies the plant with the necessary energy. In this approach, the production of hydrogen involves a wide range of combustible gases, such as methane, carbon monoxide, and hydrogen itself. Considering probable incidents, a sudden combustible gas outburst into open space with subsequent ignition is less dangerous by itself than ignition of the combustible mixture in the presence of many pipelines, reactor vessels, and fitting frames. Even ignition of 2100 cubic meters of hydrogen-air mixture in open space gives velocities and pressures that are much lower than the velocity and pressure in the Chapman-Jouguet condition, not exceeding 80 m/s and 6 kPa, respectively. However, space blockage, significant changes of channel diameter along the path of flame propagation, and the presence of gas suspensions lead to significant deflagration acceleration and to its transition into detonation or quasi-detonation. At the same time, process parameters acquired from experiments at specific experimental facilities are not general, and their application to different facilities can only have a conventional and qualitative character. Yet, conducting deflagration and detonation experimental investigations for each specific industrial facility project in order to determine safe infrastructure unit placement does not seem feasible due to its high cost and hazard, while the conduction of numerical experiments is significantly cheaper and safer. Hence, the development of a numerical method that allows the description of reacting flows in domains with complex geometry seems promising. 
The basis for this method is a modification of the Kuropatenko method for calculating shock waves, recently developed by the authors, which allows using it in Eulerian coordinates. The current work contains the results of the development process. In addition, a comparison of numerical simulation results with experimental series on flame propagation in shock tubes with orifice plates is presented.
Keywords: CFD, reacting flow, DDT, gas explosion
Procedia PDF Downloads 94
1183 Signal Processing Techniques for Adaptive Beamforming with Robustness
Authors: Ju-Hong Lee, Ching-Wei Liao
Abstract:
Adaptive beamforming using an antenna array of sensors is useful for adaptively detecting and preserving the presence of the desired signal while suppressing the interference and the background noise. Conventional adaptive array beamforming requires prior information on either the impinging direction or the waveform of the desired signal to adapt the weights. The adaptive weights of an antenna array beamformer under a steered-beam constraint are calculated by minimizing the output power of the beamformer subject to the constraint that forces the beamformer to maintain a constant response in the steering direction. Hence, the performance of the beamformer is very sensitive to the accuracy of the steering operation. In the literature, it is well known that the performance of an adaptive beamformer deteriorates with any steering angle error encountered in many practical applications, e.g., wireless communication systems with massive antennas deployed at the base station and user equipment. Hence, developing effective signal processing techniques to deal with the problem of steering angle error in array beamforming systems has become an important research topic. In this paper, we present an effective signal processing technique for constructing an adaptive beamformer that is robust against steering angle error. The proposed array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. Based on the presumed steering vector and a preset angle range for steering mismatch tolerance, we first create a matrix related to the direction vectors of the signal sources. Two projection matrices are generated from this matrix. The projection matrix associated with the desired signal information, together with the received array data, is utilized to iteratively estimate the actual direction vector of the desired signal. 
The estimated direction vector of the desired signal is then used to appropriately determine the quiescent weight vector. The other projection matrix is set to be the signal blocking matrix required for performing adaptive beamforming. Accordingly, the proposed beamformer consists of adaptive quiescent weights and partially adaptive weights. Several computer simulation examples are provided for evaluating and comparing the proposed technique with existing robust techniques.
Keywords: adaptive beamforming, robustness, signal blocking, steering angle error
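A minimal numerical sketch of this kind of projection-based robust beamformer follows. The array geometry, mismatch range, subspace rank, and noise level below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def steer(theta_deg, M, d=0.5):
    """Steering vector of an M-element uniform linear array, spacing d wavelengths."""
    phase = 2 * np.pi * d * np.sin(np.radians(theta_deg)) * np.arange(M)
    return np.exp(1j * phase)

M, N = 10, 2000
true_doa, presumed_doa = 8.0, 5.0        # 3-degree steering angle error
rng = np.random.default_rng(0)
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 0.3 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
X = np.outer(steer(true_doa, M), s) + noise   # received snapshots
R = X @ X.conj().T / N                        # sample covariance matrix

# Matrix of candidate steering vectors over a preset mismatch tolerance range
grid = np.linspace(presumed_doa - 6.0, presumed_doa + 6.0, 61)
C = np.stack([steer(t, M) for t in grid], axis=1)
U, _, _ = np.linalg.svd(C, full_matrices=False)
P = U[:, :4] @ U[:, :4].conj().T              # projection onto the candidate subspace

# Estimate the actual steering vector as the dominant eigenvector of P R P
_, V = np.linalg.eigh(P @ R @ P)              # eigenvalues ascending; last is dominant
a_hat = V[:, -1] * np.sqrt(M)
est_doa = grid[np.argmax(np.abs(C.conj().T @ a_hat))]

# Distortionless (MVDR-type) quiescent weights toward the estimated direction
w = np.linalg.solve(R, a_hat)
w = w / (a_hat.conj() @ w)                    # enforce unit response at a_hat
```

Despite the 3-degree pointing error, the estimated direction lands near the true one, so the distortionless constraint protects the desired signal rather than the mismatched presumed direction.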
Procedia PDF Downloads 128
1182 The Lighthouse Project: Recent Initiatives to Navigate Australian Families Safely Through Parental Separation
Authors: Kathryn McMillan
Abstract:
A recent study of 8500 adult Australians aged 16 and over revealed that 62% had experienced childhood maltreatment. In response to multiple recommendations by bodies such as the Australian Law Reform Commission, parliamentary reports and stakeholder input, a number of key initiatives have been developed to grapple with the difficulties of a federal-state system and to screen and triage high-risk families navigating their way through the court system. The Lighthouse Project (LHP) is a world-first initiative of the Federal Circuit and Family Court of Australia (FCFCOA) to screen family law litigants for major risk factors, including family violence, child abuse, alcohol or substance abuse and mental ill-health, at the point of filing in all applications that seek parenting orders. It commenced on 7 December 2020 on a pilot basis but has now been expanded to 15 registries across the country. A specialist risk screen, Family DOORS Triage, has been developed, focused on improving the safety and wellbeing of families involved in the family law system through safety planning and service referral, and differentiated case management based on risk level, with the Evatt List specifically designed to manage the highest-risk cases. Early signs are that this approach is meeting the needs of families with multiple risks moving through the court system. Before the LHP, there was no data available about the prevalence of risk factors experienced by litigants entering the family courts, and it was often assumed that it was the litigation process that was fueling family violence and other risks such as suicidality. 
Data from the 2022 FCFCOA annual report indicated that in parenting proceedings, 70% of matters alleged a child had been abused or was at risk of abuse, 80% alleged a party had experienced family violence, 74% alleged children had been exposed to family violence, 53% alleged that substance misuse by a party had caused or placed children at risk of harm, and 58% alleged that mental health issues of a party had caused or placed a child at risk of harm. Those figures reveal the significant overlap between child protection and family violence, both of which are under the responsibility of state and territory governments. Since 2020, a further key initiative has been the co-location of child protection and police officials in a number of registries of the FCFCOA. The ability to access, in a time-effective way, details of family violence or child protection orders, weapons licenses, criminal convictions or proceedings is key to managing issues across the state and federal divide. It ensures a more cohesive and effective response across the family law, family violence and child protection systems.
Keywords: child protection, family violence, parenting, risk screening, triage
Procedia PDF Downloads 83
1181 Pharmacokinetics and Safety of Pacritinib in Patients with Hepatic Impairment and Healthy Volunteers
Authors: Suliman Al-Fayoumi, Sherri Amberg, Huafeng Zhou, Jack W. Singer, James P. Dean
Abstract:
Pacritinib is an oral kinase inhibitor with specificity for JAK2, FLT3, IRAK1, and CSF1R. In clinical studies, pacritinib was well tolerated with clinical activity in patients with myelofibrosis. The most frequent adverse events (AEs) observed with pacritinib are gastrointestinal (diarrhea, nausea, and vomiting; mostly grade 1-2 in severity) and typically resolve within 2 weeks. A human ADME mass balance study demonstrated that pacritinib is predominantly cleared via hepatic metabolism and biliary excretion (>85% of administered dose). The major hepatic metabolite identified, M1, is not thought to materially contribute to the pharmacological activity of pacritinib. Hepatic diseases are known to impair hepatic blood flow, drug-metabolizing enzymes, and biliary transport systems and may affect drug absorption, disposition, efficacy, and toxicity. This phase 1 study evaluated the pharmacokinetics (PK) and safety of pacritinib and the M1 metabolite in study subjects with mild, moderate, or severe hepatic impairment (HI) and matched healthy subjects with normal liver function to determine if pacritinib dosage adjustments are necessary for patients with varying degrees of hepatic insufficiency. Study participants (aged 18-85 y) were enrolled into 4 groups based on their degree of HI as defined by Child-Pugh Clinical Assessment Score: mild (n=8), moderate (n=8), severe (n=4), and healthy volunteers (n=8) matched for age, BMI, and sex. Individuals with concomitant renal dysfunction or progressive liver disease were excluded. A single 400 mg dose of pacritinib was administered to all participants. Blood samples were obtained for PK evaluation predose and at multiple time points postdose through 168 h. 
Key PK parameters evaluated included maximum plasma concentration (Cmax), time to Cmax (Tmax), area under the plasma concentration-time curve (AUC) from hour zero to the last measurable concentration (AUC0-t), AUC extrapolated to infinity (AUC0-∞), and apparent terminal elimination half-life (t1/2). Following treatment, pacritinib was quantifiable in all study participants from 1 h through 168 h postdose. Systemic pacritinib exposure was similar between healthy volunteers and individuals with mild HI. However, there was a significant difference between those with moderate and severe HI and healthy volunteers with respect to peak concentration (Cmax) and plasma exposure (AUC0-t, AUC0-∞). Mean Cmax decreased by 47% and 57%, respectively, in participants with moderate and severe HI vs matched healthy volunteers. Similarly, mean AUC0-t decreased by 36% and 45%, and mean AUC0-∞ decreased by 46% and 48%, respectively, in individuals with moderate and severe HI vs healthy volunteers. Mean t1/2 ranged from 51.5 to 74.9 h across all groups. The variability in exposure ranged from 17.8% to 51.8% across all groups. Systemic exposure of M1 was also significantly decreased in study participants with moderate or severe HI vs. healthy participants and individuals with mild HI. These changes were not significantly dissimilar from the inter-patient variability in these parameters observed in healthy volunteers. All AEs were grade 1-2 in severity. Diarrhea and headache were the only AEs reported in >1 participant (n=4 each). Based on these observations, it is unlikely that dosage adjustments would be warranted in patients with mild, moderate, or severe HI treated with pacritinib.
Keywords: pacritinib, myelofibrosis, hepatic impairment, pharmacokinetics
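The parameters named above follow from standard noncompartmental calculations. A small sketch with made-up concentration data (not the study's), using the linear trapezoidal rule for AUC0-t and a log-linear fit of the terminal points for t1/2 and the AUC0-∞ extrapolation:

```python
import math

def nca(times, conc, n_terminal=3):
    """Basic noncompartmental PK: Cmax/Tmax, trapezoidal AUC0-t,
    terminal rate constant lambda_z from a log-linear fit of the last
    n_terminal points, t1/2 = ln(2)/lambda_z, and AUC0-inf extrapolation."""
    cmax = max(conc)
    tmax = times[conc.index(cmax)]
    auc_t = sum((conc[i] + conc[i + 1]) / 2 * (times[i + 1] - times[i])
                for i in range(len(times) - 1))
    xs = times[-n_terminal:]
    ys = [math.log(c) for c in conc[-n_terminal:]]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    lam = -slope                          # terminal elimination rate constant
    t_half = math.log(2) / lam
    auc_inf = auc_t + conc[-1] / lam      # standard Clast/lambda_z extrapolation
    return cmax, tmax, auc_t, auc_inf, t_half

# Hypothetical single-dose profile (hours, ng/mL) for illustration only
t = [0, 0.5, 1, 2, 4, 8, 24, 48, 72, 96, 168]
c = [0, 40, 90, 150, 180, 160, 110, 70, 45, 29, 7.8]
cmax, tmax, auc_t, auc_inf, t_half = nca(t, c)
```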
Procedia PDF Downloads 300
1180 Evaluation of a Remanufacturing for Lithium Ion Batteries from Electric Cars
Authors: Achim Kampker, Heiner H. Heimes, Mathias Ordung, Christoph Lienemann, Ansgar Hollah, Nemanja Sarovic
Abstract:
Electric cars, with their fast innovation cycles and their disruptive character, offer a high degree of freedom for innovative design for remanufacturing. Remanufacturing increases not only resource efficiency but also economic efficiency through a prolonged product lifetime. The reduced power train wear of electric cars combined with the high manufacturing costs of batteries allows new business models and even second-life applications. Modular and intermountable battery pack designs enable the replacement of defective or outdated battery cells, allow additional cost savings and prolong lifetime. This paper discusses opportunities for future remanufacturing value chains of electric cars and their battery components and how to address their potential with elaborate designs. Based on a brief overview of implemented remanufacturing structures in different industries, opportunities for transferability are evaluated. In addition to an analysis of current and upcoming challenges, promising perspectives for a sustainable electric car circular economy enabled by design for remanufacturing are deduced. Two mathematical models describe the feasibility of pursuing a circular economy of lithium ion batteries and evaluate remanufacturing in terms of sustainability and economic efficiency. Taking into consideration not only labor and material costs but also capital costs for equipment and factory facilities to support the remanufacturing process, the cost-benefit analysis indicates that a remanufactured battery can be produced more cost-efficiently. The ecological benefits were calculated on a broad database from different research projects focusing on the recycling, second use and assembly of lithium ion batteries. The results of these calculations show a significant improvement through remanufacturing in all relevant factors, especially in the consumption of resources and global warming potential. 
Exemplary design guidelines for future remanufacturable lithium ion batteries, which consider modularity, interfaces and disassembly, are used to illustrate the findings. For one guideline, potential cost improvements were calculated and upcoming challenges are pointed out.
Keywords: circular economy, electric mobility, lithium ion batteries, remanufacturing
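The shape of such a cost-benefit comparison, covering labor, material, and amortized capital costs per pack, can be sketched as follows. All figures are hypothetical assumptions for illustration, not the paper's data or models:

```python
# Purely illustrative per-pack cost model: new build vs. remanufacturing.
# Every number below is a made-up assumption, not a result from the paper.

def pack_cost(material, labor_hours, labor_rate, capital_invest,
              packs_per_year, years):
    """Per-pack cost = material + labor + straight-line amortized capital."""
    capital_per_pack = capital_invest / (packs_per_year * years)
    return material + labor_hours * labor_rate + capital_per_pack

new_cost = pack_cost(material=6000, labor_hours=4, labor_rate=45,
                     capital_invest=20e6, packs_per_year=10000, years=10)
reman_cost = pack_cost(material=1500, labor_hours=9, labor_rate=45,
                       capital_invest=8e6, packs_per_year=10000, years=10)
# Remanufacturing trades higher disassembly/inspection labor for much lower
# material cost (only defective cells are replaced) and lighter capital equipment.
```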
Procedia PDF Downloads 362
1179 Model Tests on Geogrid-Reinforced Sand-Filled Embankments with a Cover Layer under Cyclic Loading
Authors: Ma Yuan, Zhang Mengxi, Akbar Javadi, Chen Longqing
Abstract:
In the structure of a sand-filled embankment with a cover layer, the outside of the packing is treated with tipping clay modified with lime, and a geotextile is placed between the stuffing and the clay. The packing is usually river sand, and the improved clay protects the sand core against rainwater erosion. The sand-filled embankment with a cover layer has practical problems such as high embankment filling, construction restrictions, and steep slopes. Reinforcement can be applied to the sand-filled embankment with a cover layer to solve complicated problems such as irregular settlement caused by poor stability of the embankment. At present, research on the sand-filled embankment with a cover layer mainly focuses on sand properties, construction technology, and slope stability; there are few experimental studies, and the deformation characteristics and stability of the reinforced sand-filled embankment need further study. In addition, experimental research that considers cyclic load is relatively rare. A subgrade structure of geogrid-reinforced sand-filled embankment with a cover layer was proposed. The mechanical characteristics, deformation properties, reinforcement behavior and ultimate bearing capacity of the embankment structure under cyclic loading were studied. In this structure, the geogrids in the sand and the tipping soil pass through the geotextile, which is arranged continuously in sections so that the geogrids can cross horizontally. The Unsaturated/Saturated Soil Triaxial Test System of Geotechnical Consulting and Testing Systems (GCTS), USA, was modified to form the loading device of this test, and a strain collector was used to measure the deformation and earth pressure of the embankment. 
A series of cyclic loading model tests were conducted on the geogrid-reinforced sand-filled embankment with a cover layer under different numbers of reinforcement layers, lengths of reinforcement, and thicknesses of the cover layer. The settlement of the embankment, the normal cumulative deformation of the slope, and the earth pressure were studied under these different conditions. Besides the cyclic loading model tests, model experiments on embankments subjected to cyclic-static loading were carried out to analyze the ultimate bearing capacity under different loadings. The experimental results showed that the vertical cumulative settlement under long-term cyclic loading increases with a decreasing number of reinforcement layers, reinforcement length, and tipping-soil thickness. These three factors likewise influence the reduction of the normal deformation of the embankment slope. The earth pressure around the loading point is significantly affected by placing geogrid in the model embankment. After cyclic loading, the decline in ultimate bearing capacity of the reinforced embankment is effectively reduced, in contrast to the unreinforced embankment.
Keywords: cyclic load, geogrid, reinforcement behavior, cumulative deformation, earth pressure
Procedia PDF Downloads 124
1178 Comparison of Bioelectric and Biomechanical Electromyography Normalization Techniques in Disparate Populations
Authors: Drew Commandeur, Ryan Brodie, Sandra Hundza, Marc Klimstra
Abstract:
The amplitude of raw electromyography (EMG) is affected by recording conditions and often requires normalization to make meaningful comparisons. Bioelectric methods normalize with an EMG signal recorded during a standardized task or from the experimental protocol itself, while biomechanical methods often involve measurements with an additional sensor such as a force transducer. Common bioelectric normalization techniques for treadmill walking include maximum voluntary isometric contraction (MVIC), dynamic EMG peak (EMGPeak) or dynamic EMG mean (EMGMean). There are several concerns with using MVICs to normalize EMG, including poor reliability and potential discomfort. A limitation of bioelectric normalization techniques is that they could result in a misrepresentation of the absolute magnitude of force generated by the muscle and impact the interpretation of EMG between functionally disparate groups. Additionally, methods that normalize to EMG recorded during the task may eliminate some real inter-individual variability due to biological variation. This study compared biomechanical and bioelectric EMG normalization techniques during treadmill walking to assess the impact of the normalization method on the functional interpretation of EMG data. For the biomechanical method, we normalized EMG to a target torque (EMGTS) and the bioelectric methods used were normalization to the mean and peak of the signal during the walking task (EMGMean and EMGPeak). The effect of normalization on muscle activation pattern, EMG amplitude, and inter-individual variability were compared between disparate cohorts of OLD (76.6 yrs N=11) and YOUNG (26.6 yrs N=11) adults. Participants walked on a treadmill at a self-selected pace while EMG was recorded from the right lower limb. 
EMG data from the soleus (SOL), medial gastrocnemius (MG), tibialis anterior (TA), vastus lateralis (VL), and biceps femoris (BF) were phase-averaged into 16 bins (phases) representing the gait cycle, with bins 1-10 associated with right stance and bins 11-16 with right swing. Pearson's correlations showed that activation patterns across the gait cycle were similar between all methods, ranging from r = 0.86 to r = 1.00 with p < 0.05. This indicates that each method can characterize the muscle activation pattern during walking. Repeated-measures ANOVA showed a main effect of age in MG for EMGPeak, but no other main effects were observed. Age-by-phase interactions in EMG amplitude between YOUNG and OLD differed between methods, leading to different statistical interpretations. EMGTS normalization characterized the fewest differences (four phases across all five muscles), while EMGMean (11 phases) and EMGPeak (19 phases) showed considerably more differences between cohorts. The second notable finding was that the coefficient of variation, which represents inter-individual variability, was greatest for EMGTS and lowest for EMGMean, while EMGPeak was slightly higher than EMGMean for all muscles. This finding supports our expectation that EMGTS normalization would retain inter-individual variability, which may be desirable; however, it also suggests that even when large differences are expected, a larger sample size may be required to observe them. Our findings clearly indicate that the interpretation of EMG is highly dependent on the normalization method used, and it is essential to consider the strengths and limitations of each method when drawing conclusions.
Keywords: electromyography, EMG normalization, functional EMG, older adults
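As a rough illustration of the bioelectric methods compared above (EMGPeak, EMGMean) and of phase-averaging into 16 gait-cycle bins, a minimal sketch is given below. The function names, and the use of a pre-computed linear envelope as input, are assumptions for illustration and are not the authors' code.

```python
import numpy as np

def normalize_emg(envelope, method="peak"):
    """Bioelectric normalization of an EMG linear envelope.

    method: 'peak' (EMGPeak) or 'mean' (EMGMean), with the reference
    value taken from the walking task itself."""
    if method == "peak":
        ref = np.max(envelope)
    elif method == "mean":
        ref = np.mean(envelope)
    else:
        raise ValueError("unknown normalization method")
    return envelope / ref

def phase_average(envelope, n_bins=16):
    """Average one gait cycle's envelope into n_bins phases
    (bins 1-10 ~ right stance, 11-16 ~ right swing in the study's
    convention)."""
    bins = np.array_split(envelope, n_bins)
    return np.array([b.mean() for b in bins])
```

Peak normalization bounds the profile at 1 by construction, which is one reason inter-individual variability shrinks relative to a biomechanical (target-torque) reference.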
Procedia PDF Downloads 95
1177 Spexin and Fetuin A in Morbid Obese Children
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Spexin, expressed in the central nervous system, has attracted much interest for its roles in feeding behavior, obesity, diabetes, energy metabolism, and cardiovascular function. Fetuin A is known as a negative acute-phase reactant synthesized in the liver. It has been a major focus of many studies in numerous clinical states. The relationships between the concentrations of spexin as well as fetuin A and the risk for cardiovascular diseases (CVDs) have also been investigated. Eosinophils, suggested to be associated with the development of CVDs, have been introduced as early indicators of cardiometabolic complications. Patients with an elevated platelet count, associated with a hypercoagulable state in the body, are also more liable to CVDs. The aim of this study is to examine the profiles of spexin and fetuin A alongside the variations detected in eosinophil and platelet counts in morbidly obese children. Thirty-four children with normal body mass index (N-BMI) and fifty-one morbidly obese (MO) children participated in the study. Written informed consent forms were obtained prior to the study. The institutional ethics committee approved the study protocol. Age- and sex-adjusted BMI percentile tables prepared by the World Health Organization were used to classify healthy and obese children. The mean ages ± SEM of the children were 9.3 ± 0.6 years and 10.7 ± 0.5 years in the N-BMI and MO groups, respectively. Anthropometric measurements of the children were taken. Body mass index values were calculated from weight and height. Blood samples were obtained after an overnight fast. Routine hematologic and biochemical tests were performed. Within this context, fasting blood glucose (FBG), insulin (INS), triglyceride (TRG), and high density lipoprotein-cholesterol (HDL-C) concentrations were measured. Homeostatic model assessment for insulin resistance (HOMA-IR) values were calculated. Spexin and fetuin A levels were determined by enzyme-linked immunosorbent assay.
Data were evaluated from the statistical point of view. Statistically significant differences were found between groups in terms of BMI, fat mass index, INS, HOMA-IR, and HDL-C. In the MO group, all of these parameters increased while HDL-C decreased. Elevated counts were detected in the MO group for eosinophils (p < 0.05) and platelets (p > 0.05). Fetuin A levels decreased in the MO group (p > 0.05); the decrease in spexin levels in this group, however, was statistically significant (p < 0.05). In conclusion, these results suggest that increases in eosinophils and platelets behave as cardiovascular risk factors. Decreased fetuin A behaved as a risk factor consistent with the increased risk of cardiovascular problems associated with the severity of obesity. Along with increased eosinophils, increased platelets, and decreased fetuin A, decreased spexin was the parameter that best reflects its possible participation in the early development of CVD risk in MO children.
Keywords: cardiovascular diseases, eosinophils, fetuin A, pediatric morbid obesity, platelets, spexin
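The HOMA-IR values mentioned above are conventionally computed from fasting glucose and insulin. Assuming the standard formulation (the abstract does not state which variant was used), a minimal sketch:

```python
def homa_ir(fbg_mg_dl, insulin_uU_ml):
    """Homeostatic model assessment for insulin resistance (HOMA-IR),
    standard formulation: fasting glucose (mg/dL) x fasting insulin
    (microU/mL) / 405; equivalent to glucose (mmol/L) x insulin / 22.5."""
    return fbg_mg_dl * insulin_uU_ml / 405.0
```

For example, a fasting glucose of 90 mg/dL with insulin of 10 µU/mL gives a HOMA-IR of about 2.2.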
Procedia PDF Downloads 196
1176 Hygro-Thermal Modelling of Timber Decks
Authors: Stefania Fortino, Petr Hradil, Timo Avikainen
Abstract:
Timber bridges have an excellent environmental performance, are economical, relatively easy to build and can have a long service life. However, the durability of these bridges is the main problem because of their exposure to outdoor climate conditions. The moisture content accumulated in wood for long periods, in combination with certain temperatures, may cause conditions suitable for timber decay. In addition, moisture content variations affect the structural integrity, serviceability and loading capacity of timber bridges. Therefore, the monitoring of the moisture content in wood is important for the durability of the material but also for the whole superstructure. The measurements obtained by the usual sensor-based techniques provide hygro-thermal data only in specific locations of the wood components. In this context, the monitoring can be assisted by numerical modelling to get more information on the hygro-thermal response of the bridges. This work presents a hygro-thermal model based on a multi-phase moisture transport theory to predict the distribution of moisture content, relative humidity and temperature in wood. Below the fibre saturation point, the multi-phase theory simulates three phenomena in cellular wood during moisture transfer, i.e., the diffusion of water vapour in the pores, the sorption of bound water and the diffusion of bound water in the cell walls. In the multi-phase model, the two water phases are separated, and the coupling between them is defined through a sorption rate. Furthermore, an average between the temperature-dependent adsorption and desorption isotherms is used. In previous works by some of the authors, this approach was found very suitable to study the moisture transport in uncoated and coated stress-laminated timber decks. 
Compared to previous works, the hygro-thermal fluxes on the external surfaces include the influence of the absorbed solar radiation over time; consequently, the temperatures on the surfaces exposed to the sun are higher. This affects the whole hygro-thermal response of the timber component. The multi-phase model, implemented in a user subroutine of the Abaqus FEM code, provides the distribution of the moisture content, the temperature, and the relative humidity in a volume of the timber deck. As a case study, hygro-thermal data in wood are collected from the ongoing monitoring of the stress-laminated timber deck of Tapiola Bridge in Finland, based on integrated humidity-temperature sensors, and the numerical results are found to be in good agreement with the measurements. The proposed model, used to assist the monitoring, can contribute to reducing the maintenance costs of bridges, as well as the cost of instrumentation, and to increasing safety.
Keywords: moisture content, multi-phase models, solar radiation, timber decks, FEM
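Below the fibre saturation point, the three transport phenomena listed above can be sketched, under standard multi-Fickian assumptions, as two coupled diffusion equations linked by a sorption rate. The symbols below are illustrative and not taken from the paper:

```latex
\frac{\partial c_v}{\partial t} = \nabla\cdot\left(D_v \nabla c_v\right) - \dot{m},
\qquad
\frac{\partial c_b}{\partial t} = \nabla\cdot\left(D_b \nabla c_b\right) + \dot{m},
\qquad
\dot{m} = h_c\left(c_{b,\mathrm{eq}}(h,T) - c_b\right),
```

where $c_v$ and $c_b$ are the water-vapour and bound-water concentrations, $D_v$ and $D_b$ their diffusivities, and the sorption rate $\dot{m}$ drives the bound-water phase toward the equilibrium concentration $c_{b,\mathrm{eq}}$ given by the temperature-dependent average of the adsorption and desorption isotherms mentioned in the abstract.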
Procedia PDF Downloads 177
1175 Seismic Assessment of Non-Structural Component Using Floor Design Spectrum
Authors: Amin Asgarian, Ghyslaine McClure
Abstract:
Experience in past earthquakes has clearly demonstrated the necessity of seismic design and assessment of Non-Structural Components (NSCs), particularly in post-disaster structures such as hospitals and power plants, as they have to remain functional and operational at all times. Meeting this objective is contingent upon proper seismic performance of both structural and non-structural components. Proper seismic design, analysis, and assessment of NSCs can be attained through the generation of a Floor Design Spectrum (FDS), in a similar fashion to the target spectrum for structural components. This paper presents a methodology to generate the FDS directly from the corresponding Uniform Hazard Spectrum (UHS) (i.e., the design spectrum for structural components). The methodology is based on the experimental and numerical analysis of a database of 27 real Reinforced Concrete (RC) buildings located in Montreal, Canada. The buildings were tested by Ambient Vibration Measurements (AVM), and their dynamic properties were extracted and used as part of the approach. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, mostly designated as post-disaster/emergency shelters by the city of Montreal. The buildings are subjected to 20 seismic records compatible with the UHS of Montreal, and Floor Response Spectra (FRS) are developed for every floor in two horizontal directions, considering four different damping ratios of NSCs (2, 5, 10, and 20% viscous damping). The generated FRS (approximately 132,000 curves) are statistically studied, and a methodology is proposed to generate the FDS directly from the corresponding UHS. The approach is capable of generating the FDS for any selection of floor level and NSC damping ratio. It captures the effects of dynamic interaction between the primary (structural) and secondary (NSC) systems, as well as of higher and torsional modes of the primary structure.
These are important improvements of this approach compared to conventional methods and code recommendations. The application of the proposed approach is demonstrated here through two real case-study buildings: one low-rise and one medium-rise. The proposed approach can be used as a practical and robust tool for the seismic assessment and design of NSCs, especially in existing post-disaster structures.
Keywords: earthquake engineering, operational and functional components, operational modal analysis, seismic assessment and design
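A floor response spectrum of the kind generated above is typically computed by sweeping linear single-degree-of-freedom oscillators (representing NSCs of different frequencies and damping ratios) over a floor acceleration history. The sketch below is a generic illustration, not the authors' implementation; the Newmark average-acceleration scheme and the function names are assumptions.

```python
import numpy as np

def sdof_peak_accel(ag, dt, f_n, zeta):
    """Peak absolute acceleration of a linear SDOF oscillator with
    natural frequency f_n (Hz) and damping ratio zeta, subjected to the
    base acceleration history ag sampled at dt, using the Newmark
    average-acceleration method (unit mass)."""
    wn = 2.0 * np.pi * f_n
    m, c, k = 1.0, 2.0 * zeta * wn, wn ** 2
    kh = k + 2.0 * c / dt + 4.0 * m / dt ** 2   # effective stiffness
    u, v = 0.0, 0.0
    a = -ag[0]                                   # at rest: m*a0 = -m*ag0
    peak = 0.0
    for agi in ag[1:]:
        # effective load for the next step (gamma=1/2, beta=1/4)
        p = -m * agi + m * (4.0 * u / dt ** 2 + 4.0 * v / dt + a) \
            + c * (2.0 * u / dt + v)
        un = p / kh
        vn = 2.0 * (un - u) / dt - v
        an = 4.0 * (un - u) / dt ** 2 - 4.0 * v / dt - a
        u, v, a = un, vn, an
        peak = max(peak, abs(a + agi))           # absolute acceleration
    return peak

def floor_response_spectrum(floor_acc, dt, freqs, zeta=0.05):
    """FRS ordinate per NSC frequency: peak absolute acceleration of an
    oscillator mounted on a floor whose acceleration history is floor_acc."""
    return np.array([sdof_peak_accel(floor_acc, dt, f, zeta) for f in freqs])
```

At resonance with 5% damping, the computed ordinate approaches the familiar 1/(2ζ) ≈ 10 amplification of the floor motion, which is the dynamic-interaction effect the FDS methodology is designed to capture.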
Procedia PDF Downloads 215
1174 Understanding the Origins of Pesticides Metabolites in Natural Waters through the Land Use, Hydroclimatic Conditions and Water Quality
Authors: Alexis Grandcoin, Stephanie Piel, Estelle Baures
Abstract:
Brittany (France) is an agricultural region where emerging pollutants are at high risk of reaching water bodies. Among them, pesticide metabolites are frequently detected in surface waters. The Vilaine watershed (11,000 km²) is of great interest, as a large drinking water treatment plant (100,000 m³/day) is located at its extreme downstream end. This study aims to provide an evaluation of pesticide metabolite pollution in the Vilaine watershed and an understanding of their availability, in order to protect the water resource. The hydroclimatic conditions, land use, and water quality parameters controlling metabolite availability are emphasized. Later, this knowledge will be used to understand the conditions favoring metabolite export towards surface water. 19 sampling points were strategically chosen along the 220 km of the Vilaine river and its three main tributaries. Furthermore, the intakes of two drinking water plants were sampled: one located at the extreme downstream end of the Vilaine river, the other drawing on the riparian groundwater under the Vilaine river. Five sampling campaigns under various hydroclimatic conditions were carried out. Water quality parameters and hydroclimatic conditions were measured. 15 environmentally relevant pesticides and metabolites were analyzed; these compounds were selected because they are recalcitrant to conventional water treatment. An evaluation of the watershed contamination was done in 2016-2017. First observations showed that aminomethylphosphonic acid (AMPA) and metolachlor ethanesulfonic acid (MESA) are the most frequently detected compounds in surface water samples, with detection frequencies of 100% and 98%, respectively. They are the main pollutants of the watershed regardless of the hydroclimatic conditions. The AMPA concentration in the river strongly increases downstream of the Rennes agglomeration (220k inhabitants) and reaches a maximum of 2.3 µg/L in low-water conditions.
Groundwater contains mainly MESA, diuron, and metazachlor ESA at concentrations close to the limits of quantification (LOQ) (0.02 µg/L). Metolachlor, metazachlor, and alachlor, owing to their fast degradation in soils, were found in small amounts (LOQ - 0.2 µg/L). Conversely, glyphosate was regularly found during warm and sunny periods, up to 0.6 µg/L. Land uses (agricultural crop types, urban areas, forests, wastewater treatment plant locations), water quality parameters, and hydroclimatic conditions were correlated to pesticide and metabolite concentrations in waters. Statistical treatments showed that chloroacetamide metabolites and AMPA behave differently regardless of the hydroclimatic conditions. Chloroacetamides are correlated to each other, to agricultural areas, and to typical agricultural tracers such as nitrates. They are present in waters throughout the year, especially during rainy periods, suggesting important stocks in soils. Chloroacetamides are also negatively correlated with AMPA, the different forms of phosphorus, and organic matter. AMPA is ubiquitous but strongly correlated with urban areas, despite the recent French regulation restricting glyphosate to agricultural and private uses. This work helps to predict and understand the metabolites present in the water resource used to produce drinking water. As the studied metabolites are difficult to remove, this project will be completed by a water treatment part.
Keywords: agricultural watershed, AMPA, metolachlor-ESA, water resource
Procedia PDF Downloads 163
1173 A Fast Method for Graphene-Supported Pd-Co Nanostructures as Catalyst toward Ethanol Oxidation in Alkaline Media
Authors: Amir Shafiee Kisomi, Mehrdad Mofidi
Abstract:
Nowadays, fuel cells as a promising alternative power source have been widely studied owing to their safety, high energy density, low operation temperatures, renewable capability, and low emission of environmental pollutants. Core-shell nanoparticles can be broadly described as a combination of a shell (outer layer material) and a core (inner material), and their characteristics depend strongly on the dimensions and composition of the core and shell. In addition, changing the constituting materials or the core-to-shell ratio can give rise to distinctive properties. In this study, a fast technique for the fabrication of a Pd-Co/G/GCE modified electrode is presented. The thermal decomposition reaction of cobalt(II) formate salt over the surface of a graphene/glassy carbon electrode (G/GCE) is utilized for the synthesis of Co nanoparticles. The Pd-Co nanoparticles decorating the graphene are created by the following method: (1) thermal decomposition of the cobalt(II) formate salt and (2) galvanic replacement of Co by Pd2+. The physical and electrochemical performances of the as-prepared Pd-Co/G electrocatalyst are studied by Field Emission Scanning Electron Microscopy (FESEM), Energy Dispersive X-ray Spectroscopy (EDS), Cyclic Voltammetry (CV), and Chronoamperometry (CHA). Galvanic replacement is utilized as a facile and spontaneous approach for the growth of Pd nanostructures. The Pd-Co/G is used as an anode catalyst for ethanol oxidation in alkaline media. The Pd-Co/G not only delivered a much higher current density (262.3 mA cm-2) than the Pd/C catalyst (32.1 mA cm-2), but also demonstrated a negative shift of the onset oxidation potential (-0.480 V vs. -0.460 V) in the forward sweep.
Moreover, the novel Pd-Co/G electrocatalyst exhibits a large electrochemically active surface area (ECSA), a lower apparent activation energy (Ea), and higher levels of durability and poisoning tolerance compared to the Pd/C catalyst. The paper demonstrates that the catalytic activity and stability of the Pd-Co/G electrocatalyst are higher than those of the Pd/C electrocatalyst toward ethanol oxidation in alkaline media.
Keywords: thermal decomposition, nanostructures, galvanic replacement, electrocatalyst, ethanol oxidation, alkaline media
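The spontaneity of the galvanic replacement step can be rationalized from standard reduction potentials (textbook values, not reported in the abstract):

```latex
\mathrm{Co_{(s)}} + \mathrm{Pd^{2+}_{(aq)}} \longrightarrow \mathrm{Co^{2+}_{(aq)}} + \mathrm{Pd_{(s)}},
\qquad
E^{\circ}_{\mathrm{cell}} \approx E^{\circ}_{\mathrm{Pd^{2+}/Pd}} - E^{\circ}_{\mathrm{Co^{2+}/Co}}
\approx 0.95\ \mathrm{V} - (-0.28\ \mathrm{V}) = 1.23\ \mathrm{V} > 0.
```

A positive cell potential means Pd2+ is reduced onto the Co surface while Co dissolves, which is why the replacement proceeds without an external power source.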
Procedia PDF Downloads 156
1172 The Use of Artificial Intelligence in Diagnosis of Mastitis in Cows
Authors: Djeddi Khaled, Houssou Hind, Miloudi Abdellatif, Rabah Siham
Abstract:
In the field of veterinary medicine, there is a growing application of artificial intelligence (AI) for diagnosing bovine mastitis, a prevalent inflammatory disease in dairy cattle. AI technologies, such as automated milking systems, have streamlined the assessment of key metrics crucial for managing cow health during milking and identifying prevalent diseases, including mastitis. These automated milking systems empower farmers to implement automatic mastitis detection by analyzing indicators like milk yield, electrical conductivity, fat, protein, lactose, blood content in the milk, and milk flow rate. Furthermore, reports highlight the integration of somatic cell count (SCC), thermal infrared thermography, and diverse systems utilizing statistical models and machine learning techniques, including artificial neural networks, to enhance the overall efficiency and accuracy of mastitis detection. According to a review of 15 publications, machine learning technology can predict the risk and detect mastitis in cattle with an accuracy ranging from 87.62% to 98.10% and sensitivity and specificity ranging from 84.62% to 99.4% and 81.25% to 98.8%, respectively. Additionally, machine learning algorithms and microarray meta-analysis are utilized to identify mastitis genes in dairy cattle, providing insights into the underlying functional modules of mastitis disease. Moreover, AI applications can assist in developing predictive models that anticipate the likelihood of mastitis outbreaks based on factors such as environmental conditions, herd management practices, and animal health history. This proactive approach supports farmers in implementing preventive measures and optimizing herd health. By harnessing the power of artificial intelligence, the diagnosis of bovine mastitis can be significantly improved, enabling more effective management strategies and ultimately enhancing the health and productivity of dairy cattle. 
The integration of artificial intelligence presents valuable opportunities for the precise and early detection of mastitis, providing substantial benefits to the dairy industry.
Keywords: artificial intelligence, automatic milking system, cattle, machine learning, mastitis
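The accuracy, sensitivity, and specificity figures quoted in the review above are all derived from a classifier's confusion matrix. A minimal sketch of these generic definitions (not tied to any particular study in the review):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), and specificity from the 2x2
    confusion matrix of a binary mastitis classifier."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # diseased cows correctly flagged
    specificity = tn / (tn + fp)   # healthy cows correctly cleared
    return accuracy, sensitivity, specificity
```

For example, 90 true positives, 5 false positives, 95 true negatives, and 10 false negatives give 92.5% accuracy, 90% sensitivity, and 95% specificity, figures in the middle of the ranges reported above.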
Procedia PDF Downloads 70
1171 Effect of Supplementation of Hay with Noug Seed Cake (Guizotia abyssinica), Wheat Bran and Their Mixtures on Feed Utilization, Digestiblity and Live Weight Change in Farta Sheep
Authors: Fentie Bishaw Wagayie
Abstract:
This study was carried out with the objective of studying the response of Farta sheep, in feed intake and live weight change, to hay supplemented with noug seed cake (NSC), wheat bran (WB), and their mixtures. A 7-day digestibility trial and a 90-day feeding trial were conducted using 25 intact male Farta sheep with a mean initial live weight of 16.83 ± 0.169 kg. The experimental animals were arranged randomly into five blocks based on initial live weight, and the five treatments were assigned randomly to the animals in each block. The five dietary treatments comprised grass hay fed ad libitum (T1), grass hay ad libitum + 300 g DM WB (T2), grass hay ad libitum + 300 g DM (67% WB: 33% NSC mixture) (T3), grass hay ad libitum + 300 g DM (67% NSC: 33% WB) (T4), and 300 g DM/head/day NSC (T5). Common salt and water were offered ad libitum. The supplements were offered twice daily, at 0800 and 1600 hours. The experimental sheep were kept in individual pens. Supplementation with NSC, WB, and their mixtures significantly increased total dry matter (DM) intake (665.84-788 g/head/day; p < 0.01) and crude protein (CP) intake (p < 0.001). Unsupplemented sheep consumed significantly more (p < 0.01) grass hay DM (540.5 g/head/day) than the supplemented treatments (365.8-488 g/head/day), except T2. Among supplemented sheep, T5 had a significantly higher (p < 0.001) CP intake (99.98 g/head/day) than the others (85.52-90.2 g/head/day). Supplementation significantly improved (p < 0.001) the digestibility of CP (66.61-78.9%), but there was no significant difference (p > 0.05) in DM, OM, NDF, and ADF digestibility between supplemented and control treatments. The very low CP digestibility (11.55%) observed for the basal diet (grass hay) indicated that feeding grass hay alone could not provide nutrients even for the maintenance requirement of growing sheep.
Significant final live weights and daily gains (p < 0.001), in the range of 70.11-82.44 g/head/day, were observed in supplemented Farta sheep, whereas unsupplemented sheep lost weight at 9.11 g/head/day. Numerically, among the supplemented treatments, sheep supplemented with the higher proportion of NSC in T4 (201 g NSC + 99 g WB) gained more weight than the rest, though the difference was not statistically significant (p > 0.05). The absence of a statistical difference in daily body weight gain among all supplemented sheep indicated that supplementation with NSC, WB, and their mixtures had a similar potential to provide nutrients. Generally, supplementation of NSC, WB, and their mixtures to the basal grass hay diet improved the feed conversion ratio, total DM intake, CP intake, and CP digestibility, and it also improved growth performance, with a similar trend across all supplemented Farta sheep relative to the control group. Therefore, from a biological point of view, to attain the required slaughter body weight within a short growing period, sheep producers can use any of the supplement types depending on their local availability, but in the order of priority T4, T5, T3, and T2. However, based on partial budget analysis, supplementation with 300 g DM/head/day NSC (T5) can be recommended as profitable for producers with no capital limitation, whereas T4 supplementation (201 g NSC + 99 g WB DM/day) is recommended when capital is scarce.
Keywords: weight gain, supplement, Farta sheep, hay as basal diet
Procedia PDF Downloads 65
1170 Real-Time Monitoring of Complex Multiphase Behavior in a High Pressure and High Temperature Microfluidic Chip
Authors: Renée M. Ripken, Johannes G. E. Gardeniers, Séverine Le Gac
Abstract:
Controlling the multiphase behavior of aqueous biomass mixtures is essential when working in the biomass conversion industry. Here, the vapor/liquid equilibria (VLE) of ethylene glycol, glycerol, and xylitol were studied for temperatures between 25 and 200 °C and pressures of 1 to 10 bar. These experiments were performed in a microfluidic platform, which exhibits excellent heat transfer properties so that equilibrium is reached quickly. Firstly, the saturated vapor pressure as a function of temperature and substrate mole fraction was calculated using AspenPlus with a Redlich-Kwong-Soave Boston-Mathias (RKS-BM) model. Secondly, we developed a high-pressure and high-temperature microfluidic set-up for experimental validation. Furthermore, we studied the multiphase flow pattern that occurs once the saturation temperature is reached. A glass-silicon microfluidic device containing a 0.4 or 0.2 m long meandering channel with a depth of 250 μm and a width of 250 or 500 μm was fabricated using standard microfabrication techniques. This device was placed in a dedicated chip holder, which includes a ceramic heater on the silicon side. The temperature was controlled and monitored by three K-type thermocouples: two were located between the heater and the silicon substrate, one to set the temperature and one to measure it, and the third was placed in a 300 μm wide and 450 μm deep groove on the glass side to determine the heat loss over the silicon. An adjustable back-pressure regulator and a pressure meter were added to control and evaluate the pressure during the experiment. Aqueous biomass solutions (10 wt%) were pumped at a flow rate of 10 μL/min using a syringe pump, and the temperature was slowly increased until the theoretical saturation temperature for the pre-set pressure was reached. Surprisingly, a significant difference was observed between the theoretical saturation temperatures and the experimental results.
The experimental values were tens of degrees higher than the calculated ones and, in some cases, saturation could not be achieved. This discrepancy can be explained in different ways. Firstly, the pressure in the microchannel is locally higher due to both the thermal expansion of the liquid and the Laplace pressure that has to be overcome before a gas bubble can be formed. Secondly, superheating effects are likely to be present. Next, once saturation was reached, the flow pattern of the gas/liquid multiphase system was recorded. In our device, the point of nucleation can be controlled by taking advantage of the pressure drop across the channel and the accurate control of the temperature. Specifically, a higher temperature resulted in nucleation further upstream in the channel. As the void fraction increases downstream, the flow regime changes along the channel from bubbly flow to Taylor flow and later to annular flow. All three flow regimes were observed simultaneously. The findings of this study are key for the development and optimization of a microreactor for hydrogen production from biomass.
Keywords: biomass conversion, high pressure and high temperature microfluidics, multiphase, phase diagrams, superheating
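The Laplace-pressure argument above can be quantified: the excess pressure needed to open a spherical vapour bubble of radius r is 2σ/r, so micron-scale nuclei in a 250 μm channel require bar-level overpressures. The surface tension value used below is an assumed, illustrative figure for hot water, not a measurement from the study.

```python
def laplace_pressure(sigma, r):
    """Excess pressure (Pa) inside a spherical vapour bubble of radius
    r (m) in a liquid of surface tension sigma (N/m): dP = 2*sigma/r."""
    return 2.0 * sigma / r
```

For σ ≈ 0.059 N/m, a 1 μm nucleus adds roughly 1.2 bar of excess pressure, which raises the local saturation temperature and is consistent with the apparent superheating observed in the channel.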
Procedia PDF Downloads 220
1169 Cloning and Expression a Gene of β-Glucosidase from Penicillium echinulatum in Pichia pastoris
Authors: Amanda Gregorim Fernandes, Lorena Cardoso Cintra, Rosalia Santos Amorim Jesuino, Fabricia Paula De Faria, Marcio José Poças Fonseca
Abstract:
Bioethanol is one of the most promising biofuels: it can replace fossil fuels, reduce their various environmental impacts, and be generated from various agro-industrial wastes. Brazil is a leading bioethanol producer, being the largest producer of sugarcane. Sugarcane bagasse (SCB) contains lignocellulose, which is composed of three major components: cellulose, hemicellulose, and lignin. Cellulose is a homopolymer of glucose units connected by glycosidic linkages. Among all species of Penicillium, Penicillium echinulatum has been the focus of attention because it produces high quantities of cellulase, and the mutant strain 9A02S1 produces higher enzyme levels than the wild type. Among the cellulases, the cellobiohydrolases are the main components of the cellulolytic system of fungi and are responsible for most of the hydrolytic potential of enzyme cocktails for the industrial processing of plant biomass; several Penicillium cellobiohydrolases show higher specific activity against cellulose than CBH I from Trichoderma reesei. This makes the genus an interesting source of enzymes for higher yields in enzymatic hydrolysis, and cellobiohydrolases are also important in the hydrolysis of the crystalline regions of cellulose. Therefore, finding new and more active enzymes becomes necessary. Meanwhile, β-glucosidases act on soluble substrates and are highly dependent on the action of cellobiohydrolases and endoglucanases to provide their substrate during biomass hydrolysis, while the cellobiohydrolases and endoglucanases are in turn highly dependent on β-glucosidases to maintain efficient hydrolysis. Thus, there is a need to understand the structure-function relationships that govern the catalytic activity of cellulolytic enzymes, to elucidate their mechanisms of action and optimize their potential as industrial biocatalysts.
To evaluate the β-glucosidase of Penicillium echinulatum (PeBGL1), the gene was synthesized from the assembled sequence from a library under induction conditions; the PeBGL1 gene was then cloned into the vector pPICZαA and transformed into P. pastoris GS115. After transformation, the PeBGL1 producers were analyzed for enzyme activity and protein profile, where a band of approximately 100 kDa was observed. A zymogram was also carried out. In a partial characterization, an optimum temperature of 50 °C and an optimum pH of 6.5 were determined. In addition, to increase secreted recombinant PeBGL1 production by Pichia pastoris, three parameters of the P. pastoris culture medium were analyzed: methanol concentration, nitrogen source concentration, and inoculum size. A 2³ factorial design was effective in achieving the optimum condition. Altogether, these results point to the potential application of this P. echinulatum β-glucosidase in the hydrolysis of cellulose for the production of bioethanol.
Keywords: bioethanol, biotechnology, beta-glucosidase, penicillium echinulatum
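The 2³ factorial optimization above can be sketched generically: with each factor (methanol concentration, nitrogen source concentration, inoculum size) coded at levels ±1, the main effect of a factor is the mean response at its high level minus the mean at its low level. The data, names, and three-factor restriction below are illustrative, not the study's measurements.

```python
import numpy as np

def main_effects(responses):
    """Main effects of a 2^3 full factorial design.

    responses maps a tuple (a, b, c) of coded levels in {-1, +1}
    (e.g. methanol, nitrogen source, inoculum size) to the measured
    response (e.g. secreted PeBGL1 activity). The main effect of a
    factor is the mean response at its high level minus the mean
    response at its low level."""
    effects = {}
    for factor in range(3):
        hi = np.mean([y for lv, y in responses.items() if lv[factor] == +1])
        lo = np.mean([y for lv, y in responses.items() if lv[factor] == -1])
        effects[factor] = float(hi - lo)
    return effects
```

Ranking the absolute effects identifies which medium parameter dominates secreted production, which is the information a 2³ screening design is meant to deliver with only eight runs.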
Procedia PDF Downloads 246
1168 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression
Authors: Anne M. Denton, Rahul Gomes, David W. Franzen
Abstract:
High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are instead computed through regression performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point; the number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance: any doubling of window size in each direction takes only a single pass over the data, corresponding to logarithmic scaling of the resulting algorithm as a function of window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope.
The relevant length scale is taken to be half of the window size over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm: the resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within each region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression
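The additive aggregation and minimum-variance scale selection described above can be sketched in Python. This is a simplified illustration, not the authors' implementation: the plane-fit parameterization, the number of levels, and the use of non-overlapping windows at every level are assumptions.

```python
import numpy as np

def init_stats(Z):
    """Per-pixel sufficient statistics for the plane fit z = a + b*x + c*y.
    All statistics are additive, so larger windows are built by summation."""
    H, W = Z.shape
    Y, X = np.mgrid[0:H, 0:W].astype(float)
    Z = Z.astype(float)
    return {"n": np.ones((H, W)), "sx": X, "sy": Y, "sz": Z,
            "sxx": X * X, "syy": Y * Y, "sxy": X * Y,
            "sxz": X * Z, "syz": Y * Z, "szz": Z * Z}

def aggregate(s):
    """Merge non-overlapping 2x2 blocks: one pass per doubling of window size."""
    return {k: v[0::2, 0::2] + v[0::2, 1::2] + v[1::2, 0::2] + v[1::2, 1::2]
            for k, v in s.items()}

def solve_windows(s):
    """Solve the 3x3 normal equations in every window; return the slope
    magnitude and the residual variance of the fitted plane."""
    A = np.stack([np.stack([s["n"], s["sx"], s["sy"]], axis=-1),
                  np.stack([s["sx"], s["sxx"], s["sxy"]], axis=-1),
                  np.stack([s["sy"], s["sxy"], s["syy"]], axis=-1)], axis=-2)
    b = np.stack([s["sz"], s["sxz"], s["syz"]], axis=-1)
    coef = np.linalg.solve(A, b[..., None])[..., 0]          # (a, b, c) per window
    rss = np.maximum(s["szz"] - np.einsum("...i,...i->...", coef, b), 0.0)
    return np.hypot(coef[..., 1], coef[..., 2]), rss / s["n"]

def scale_adaptive_slope(Z, levels=2):
    """For each finest (2x2) window, report the slope at the scale that
    minimizes the residual variance."""
    s = aggregate(init_stats(Z))                 # start with 2x2 windows
    best_slope, best_var = solve_windows(s)
    for level in range(1, levels):
        s = aggregate(s)                         # window size doubles
        slope, var = solve_windows(s)
        f = 2 ** level                           # replicate back onto the 2x2 grid
        slope = np.repeat(np.repeat(slope, f, axis=0), f, axis=1)
        var = np.repeat(np.repeat(var, f, axis=0), f, axis=1)
        better = var < best_var
        best_slope = np.where(better, slope, best_slope)
        best_var = np.where(better, var, best_var)
    return best_slope, best_var
```

On a noise-free inclined plane, every window at every scale recovers the same gradient, so the reported slope equals the true gradient magnitude regardless of which scale wins the variance comparison.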
Procedia PDF Downloads 132
1167 A Statistical-Algorithmic Approach for the Design and Evaluation of a Fresnel Solar Concentrator-Receiver System
Authors: Hassan Qandil
Abstract:
Using a statistical algorithm implemented in MATLAB, four types of non-imaging Fresnel lenses are designed: spot-flat, linear-flat, dome-shaped, and semi-cylindrical. The optimization employs a statistical ray-tracing methodology for the incident light, mainly considering the effects of chromatic aberration, varying focal lengths, solar inclination and azimuth angles, lens and receiver apertures, and the optimum number of prism grooves. While adopting an equal-groove-width assumption for the poly-methyl-methacrylate (PMMA) prisms, the main target is to maximize the ray intensity on the receiver’s aperture and thereby achieve higher values of heat flux. The algorithm outputs prism angles and 2D sketches. 3D drawings are then generated via AutoCAD and linked to the COMSOL Multiphysics software to simulate the lenses under solar ray conditions, which provides optical and thermal analysis at both the lens’ and the receiver’s apertures, with conditions set per the Dallas, TX weather data. Once the lenses’ characterization is finalized, receivers are designed based on the optimized aperture size. Several cavity shapes, including triangular, arc-shaped, and trapezoidal, are tested while coupled with a variety of receiver materials, working fluids, heat transfer mechanisms, and enclosure designs. A vacuum-reflective enclosure is also simulated for enhanced thermal absorption efficiency. Each receiver type is simulated via COMSOL while coupled with the optimized lens. A lab-scale prototype of the optimum lens-receiver configuration is then fabricated for experimental evaluation. Application-based testing is also performed for the selected configuration, including a photovoltaic-thermal cogeneration system and a solar furnace system.
Finally, some future research work is pointed out, including the coupling of the collector-receiver system with an end-user power generator, and the use of a multi-layered genetic algorithm for comparative studies.
Keywords: COMSOL, concentrator, energy, fresnel, optics, renewable, solar
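As an illustration of the groove-angle calculation underlying such a design, the following sketch (in Python rather than the authors' MATLAB, and assuming a flat spot lens, normally incident monochromatic light, a single refractive index for PMMA, and grooves on the exit face) derives each facet tilt from Snell's law and traces a ray to the focal plane:

```python
import math

N_PMMA = 1.49  # approximate refractive index of PMMA (assumed, single wavelength)

def prism_angle(x, focal_length, n=N_PMMA):
    """Facet tilt for the groove at distance x from the optical axis so that a
    normally incident ray is deviated toward the focal point. From Snell's law
    at the grooved exit face, n*sin(a) = sin(a + theta), which rearranges to
    tan(a) = sin(theta) / (n - cos(theta))."""
    theta = math.atan2(x, focal_length)  # deviation needed for this groove
    return math.atan2(math.sin(theta), n - math.cos(theta))

def landing_offset(x, focal_length, n=N_PMMA):
    """Trace one vertical ray through the groove at x and return its distance
    from the optical axis on the focal plane (0 means a perfect hit)."""
    a = prism_angle(x, focal_length, n)
    exit_angle = math.asin(min(1.0, n * math.sin(a)))  # refraction leaving the facet
    deviation = exit_angle - a                          # net bend toward the axis
    return x - focal_length * math.tan(deviation)
```

A statistical ray-trace in the spirit of the abstract would sample many incidence points and wavelengths and histogram the landing offsets on the receiver aperture; in this idealized monochromatic case every ray lands exactly on the axis.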
Procedia PDF Downloads 157
1166 Fiberoptic Intubation Skills Training Improves Emergency Medicine Resident Comfort Using Modality
Authors: Nicholus M. Warstadt, Andres D. Mallipudi, Oluwadamilola Idowu, Joshua Rodriguez, Madison M. Hunt, Soma Pathak, Laura P. Weber
Abstract:
Endotracheal intubation is a core procedure performed by emergency physicians. The procedure is high risk, and failure results in substantial morbidity and mortality. Fiberoptic intubation (FOI) is the standard of care in difficult airway protocols, yet no widespread practice exists for training emergency medicine (EM) residents in the technical acquisition of FOI skills. Simulation on mannequins is commonly utilized to teach advanced airway techniques. As part of a program to introduce FOI into our ED, residents received hands-on FOI training during our weekly resident education conference. We hypothesized that prior to the hands-on training, residents had little experience with FOI and were uncomfortable using fiberoptic as a modality. We further hypothesized that resident comfort with FOI would increase following the training. The educational intervention consisted of two hours of focused airway teaching and skills acquisition for PGY 1-4 residents. One hour was dedicated to four case-based learning stations focusing on standard, pediatric, facial trauma, and burn airways. Direct, video, and fiberoptic airway equipment was available to use at the residents’ discretion to intubate mannequins at each station. The second hour involved direct instructor supervision and immediate feedback during deliberate practice of FOI on a mannequin. Prior to the hands-on training, a pre-survey was sent via email to all EM residents at NYU Grossman School of Medicine. The pre-survey asked how many FOIs residents had performed in the ED, the OR, and on a mannequin. The pre-survey and a post-survey asked residents to rate their comfort with FOI on a 5-point Likert scale ("extremely uncomfortable", "somewhat uncomfortable", "neither comfortable nor uncomfortable", "somewhat comfortable", and "extremely comfortable"). The post-survey was administered on site immediately following the training.
A two-sample chi-square test of independence was calculated comparing self-reported resident comfort on the pre- and post-surveys (α = 0.05). Thirty-six of a total of 70 residents (51.4%) completed the pre-survey. Of pre-survey respondents, 34 residents (94.4%) had performed 0, 1 resident (2.8%) had performed 1, and 1 resident (2.8%) had performed 2 FOIs in the ED. Twenty-five residents (69.4%) had performed 0, 6 residents (16.7%) had performed 1, 2 residents (5.6%) had performed 2, 1 resident (2.8%) had performed 3, and 2 residents (5.6%) had performed 4 FOIs in the OR. Seven residents (19.4%) had performed 0, and 16 residents (44.4%) had performed 5 or more FOIs on a mannequin. Twenty-nine residents (41.4%) attended the hands-on training, and 27 of the 29 (93.1%) completed the post-survey. Self-reported resident comfort with FOI significantly increased in post-survey compared to pre-survey questionnaire responses (p = 0.00034). Twenty-one of 27 residents (77.8%) reported being “somewhat comfortable” or “extremely comfortable” with FOI on the post-survey, compared to 9 of 35 residents (25.8%) on the pre-survey. We show that dedicated FOI training is associated with increased learner comfort with such techniques. Further directions include studying technical competency, skill retention, translation to direct patient care, and the optimal frequency and methodology of future FOI education.
Keywords: airway, emergency medicine, fiberoptic intubation, medical simulation, skill acquisition
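The chi-square test of independence used here can be sketched as follows. The counts are taken from the abstract but collapsed into a 2x2 comfortable/not-comfortable table for illustration, so the statistic differs from the reported p = 0.00034, which was presumably computed over the full response table:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic of independence for a 2x2 contingency
    table (no continuity correction)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = [a + b, c + d], [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = rows[i] * cols[j] / n
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Comfort collapsed to comfortable vs. not comfortable, using counts from the text.
pre = [9, 26]    # 9 of 35 pre-survey respondents comfortable
post = [21, 6]   # 21 of 27 post-survey respondents comfortable
stat = chi_square_2x2([pre, post])
critical_value = 3.841  # chi-square critical value, df = 1, alpha = 0.05
```

Even on this collapsed table the statistic far exceeds the df = 1 critical value, consistent with the significant increase the authors report.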
Procedia PDF Downloads 180
1165 Innovation Management in E-Health Care: The Implementation of New Technologies for Health Care in Europe and the USA
Authors: Dariusz M. Trzmielak, William Bradley Zehner, Elin Oftedal, Ilona Lipka-Matusiak
Abstract:
The use of new technologies should create new value for all stakeholders in the healthcare system. The article focuses on demonstrating that technologies or products typically enable new functionality, a higher standard of service, or a higher level of knowledge and competence for clinicians. It also highlights the key benefits that can be achieved through the use of artificial intelligence, such as relieving clinicians of many tasks and enabling the expansion and greater specialisation of healthcare services. The comparative analysis allowed the authors to create a classification of new technologies in e-health according to health needs and benefits for patients, doctors, and healthcare systems, i.e., the main stakeholders in the implementation of new technologies and products in healthcare. The added value of the development of new technologies in healthcare is diagnosed. The work is both theoretical and practical in nature. The primary research methods are bibliographic analysis and analysis of research data and the market potential of new solutions for healthcare organisations. The bibliographic analysis is complemented by the author's case studies of implemented technologies, mostly based on artificial intelligence or telemedicine. In the past, patients were often passive recipients, the end point of the service delivery system, rather than stakeholders in the system. One of the dangers of powerful new technologies is that patients may become even more marginalised: healthcare will be provided and delivered in an increasingly administrative, programmed way, and the doctor may also become a robot, carrying out programmed activities using 'non-human services'. An alternative approach is to put the patient at the centre, using technologies, products, and services that allow them to design and control technologies based on their own needs.
An important contribution to the discussion is to open up the different dimensions of the user (carer and patient) and to make healthcare units implementing new technologies aware of them. The authors of this article outline the importance of three types of patients in the successful implementation of new medical solutions. The impact of implemented technologies is analysed based on: 1) "Informed users", who are able to use the technology based on a better understanding of it; 2) "Engaged users", who play an active role in the broader healthcare system as a result of the technology; 3) "Innovative users", who bring their own ideas to the table based on a deeper understanding of healthcare issues. The authors' research hypothesis is that the distinction between informed, engaged, and innovative users has an impact on the perceived and actual quality of healthcare services. The analysis is based on case studies of new solutions implemented in different medical centres. In addition, based on the observations of the Polish author, who is a manager at the largest medical research institute in Poland, with analytical input from American and Norwegian partners, the added value of the implementations for patients, clinicians, and the healthcare system is demonstrated.
Keywords: innovation, management, medicine, e-health, artificial intelligence
Procedia PDF Downloads 24
1164 Implementation of a PDMS Microdevice for the Improved Purification of Circulating MicroRNAs
Authors: G. C. Santini, C. Potrich, L. Lunelli, L. Vanzetti, S. Marasso, M. Cocuzza, C. Pederzolli
Abstract:
The relevance of circulating miRNAs as non-invasive biomarkers for several pathologies is nowadays undoubtedly clear, as they have been found to have both diagnostic and prognostic value, adding fundamental information to patients’ clinical picture. The availability of these data, however, relies on a time-consuming process spanning from sample collection and processing to data analysis. In light of this, strategies able to ease this procedure are in high demand, and considerable effort has been made in developing Lab-on-a-chip (LOC) devices able to speed up and standardise the bench work. In this context, a very promising polydimethylsiloxane (PDMS)-based microdevice, which integrates the processing of the biological sample, i.e. purification of extracellular miRNAs, and reverse transcription, was previously developed in our lab. In this study, we aimed to improve the miRNA extraction performance of this microdevice by increasing the ability of its surface to adsorb extracellular miRNAs from biological samples. For this purpose, we focused on the modulation of two properties of the material: roughness and charge. PDMS surface roughness was modulated by casting with several templates (terminated with silicon oxide coated by a thin anti-adhesion aluminum layer), followed by a panel of curing conditions. Atomic force microscopy (AFM) was employed to estimate changes at the nanometric scale. To introduce modifications in surface charge, we functionalized PDMS with different mixes of positively charged 3-aminopropyltrimethoxysilane (APTMS) and neutral poly(ethylene glycol) silane (PEG). The surface chemical composition was characterized by X-ray photoelectron spectroscopy (XPS), and the number of exposed primary amines was quantified with the reagent sulfosuccinimidyl-4-o-(4,4-dimethoxytrityl) butyrate (s-SDTB).
As our final end point, the adsorption rate under all these different conditions was assessed by fluorescence microscopy by incubating a synthetic fluorescently-labeled miRNA. Our preliminary analysis identified casting on thermally grown silicon oxide, followed by a curing step at 85°C for 1 hour, as the most efficient technique to obtain a PDMS surface roughness in the nanometric scale able to trap miRNA. In addition, functionalisation with 0.1% APTMS and 0.9% PEG was found to be a necessary step to significantly increase the amount of microRNA adsorbed on the surface and therefore available for further steps such as on-chip reverse transcription. These findings show a substantial improvement in the extraction efficiency of our PDMS microdevice, ultimately leading to an important step forward in the development of an innovative, easy-to-use and integrated system for the direct purification of less abundant circulating microRNAs.
Keywords: circulating miRNAs, diagnostics, Lab-on-a-chip, polydimethylsiloxane (PDMS)
Procedia PDF Downloads 320
1163 Selection of Strategic Suppliers for Partnership: A Model with Two Stages Approach
Authors: Safak Isik, Ozalp Vayvay
Abstract:
Strategic partnerships with suppliers play a vital role in the long-term, value-based supply chain. This strategic collaboration remains one of the top priorities of many business organizations seeking to create additional value: benefiting mainly from suppliers' specialization, capacity, and innovative power; securing supply; and better managing costs and quality. However, many organizations encounter difficulties in initiating, developing, and managing such partnerships, and many attempts result in failures. One of the reasons for such failure is the incompatibility of the members of the partnership, or, in other words, wrong supplier selection, which emphasizes the significance of the selection process as the beginning stage. An effective selection process for strategic suppliers is critical to the success of the partnership. Although there are several research studies on supplier selection in the literature, only a few of them are related to strategic supplier selection for long-term partnership. The purpose of this study is to propose a conceptual model for the selection of strategic partnership suppliers. A two-stage approach is used in the proposed model, incorporating segmentation first and selection second. In the first stage, considering the fact that not all suppliers are strategically equal, Kraljic’s purchasing portfolio matrix can be used for segmentation instead of a long list of potential suppliers. This supplier segmentation is the process of categorizing suppliers based on a defined set of criteria in order to identify types of suppliers and determine potential suppliers for strategic partnership. In the second stage, from the pool of potential suppliers defined in the first phase, a comprehensive evaluation and selection can be performed to finally define strategic suppliers, considering various tangible and intangible criteria.
Since a long-term relationship with strategic suppliers is anticipated, the criteria should consider both the current and future status of the supplier. Based on an extensive literature review, strategic, operational, and organizational criteria have been determined and elaborated. The result of the selection can also be used to identify suppliers who are not yet ready for a partnership but could be developed toward one. Since the model is based on multiple criteria at both stages, it provides a framework for further utilization of Multi-Criteria Decision Making (MCDM) techniques. The model may also be applied to a wide range of industries and involve managerial features in business organizations.
Keywords: Kraljic’s matrix, purchasing portfolio, strategic supplier selection, supplier collaboration, supplier partnership, supplier segmentation
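The two-stage logic, Kraljic segmentation followed by multi-criteria scoring, can be sketched as follows. The quadrant threshold, the criteria weights, and the candidate scores are hypothetical illustrations, not values from the study:

```python
def kraljic_segment(profit_impact, supply_risk, threshold=0.5):
    """Place an item in Kraljic's purchasing portfolio matrix.
    Inputs are normalized scores in [0, 1]; the 0.5 threshold is an assumption."""
    if profit_impact >= threshold and supply_risk >= threshold:
        return "strategic"
    if profit_impact >= threshold:
        return "leverage"
    if supply_risk >= threshold:
        return "bottleneck"
    return "non-critical"

def weighted_score(scores, weights):
    """Second stage: a simple weighted-sum MCDM over tangible/intangible criteria."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[c] * weights[c] for c in weights)

# Hypothetical criteria weights (strategic/operational/organizational, as in the text)
# and candidate scores, for illustration only.
weights = {"strategic": 0.4, "operational": 0.35, "organizational": 0.25}
candidates = {
    "supplier_a": {"strategic": 0.9, "operational": 0.6, "organizational": 0.8},
    "supplier_b": {"strategic": 0.7, "operational": 0.8, "organizational": 0.6},
}
ranking = sorted(candidates,
                 key=lambda s: weighted_score(candidates[s], weights),
                 reverse=True)
```

Only items falling in the "strategic" quadrant would proceed to the second-stage scoring; richer MCDM techniques (e.g., AHP or TOPSIS) could replace the weighted sum within the same framework.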
Procedia PDF Downloads 240
1162 Dietary Exposure Assessment of Potentially Toxic Trace Elements in Fruits and Vegetables Grown in Akhtala, Armenia
Authors: Davit Pipoyan, Meline Beglaryan, Nicolò Merendino
Abstract:
The mining industry is one of the priority sectors of the Armenian economy. Along with contributing to socio-economic development, it brings numerous environmental problems, especially toxic element pollution, which largely influences the safety of agricultural products. In addition, the accumulation of toxic elements in agricultural products, mainly in the edible parts of plants, represents a direct pathway for their penetration into the human food chain. In Armenia, the share of plant-origin food in the overall diet is significantly high, so the estimation of dietary intakes of toxic trace elements via consumption of selected fruits and vegetables is of great importance for observing the underlying health risks. Therefore, the present study aimed to assess the dietary exposure to potentially toxic trace elements through the intake of locally grown fruits and vegetables in the Akhtala community (Armenia), where both the mining industry and the cultivation of fruits and vegetables are developed. Moreover, this investigation represents one of the very first attempts to estimate human dietary exposure to potentially toxic trace elements in the study area. Samples of some commonly grown fruits and vegetables (fig, cornel, raspberry, grape, apple, plum, maize, bean, potato, cucumber, onion, greens) were randomly collected from several home gardens located near mining areas in the Akhtala community. The concentrations of Cu, Mo, Ni, Cr, Pb, Zn, Hg, As, and Cd in the samples were determined using an atomic absorption spectrophotometer (AAS). Precision and accuracy of the analyses were guaranteed by repeated analysis of samples against NIST Standard Reference Materials. For the diet study, an individual-based approach was used, and the consumption of selected fruits and vegetables was investigated through a food frequency questionnaire (FFQ).
Combining the concentration data with the consumption data, the estimated daily intakes (EDI) and cumulative daily intakes were assessed and compared with health-based guidance values (HBGVs). Based on the determined concentrations of the studied trace elements in fruits and vegetables, it can be stressed that some trace elements (Cu, Ni, Pb, Zn) exceeded the maximum allowable limits set by international organizations in the majority of samples, while others (Cr, Hg, As, Cd, Mo) either did not exceed these limits or do not yet have established allowable limits. The obtained results indicated that only for Cu did the EDI values exceed the dietary reference intake (0.01 mg/kg bw/day) for some of the investigated fruits and vegetables, in decreasing order of potato > grape > bean > raspberry > fig > greens. In contrast, for the combined consumption of selected fruits and vegetables, the estimated cumulative daily intakes exceeded the reference doses in the following sequence: Zn > Cu > Ni > Mo > Pb. It may be concluded that habitual and combined consumption of the above-mentioned fruits and vegetables can pose a health risk to the local population. Hence, further detailed studies are needed for an overall assessment of potential health implications, taking into consideration the adverse health effects posed by more than one toxic trace element.
Keywords: daily intake, dietary exposure, fruits, trace elements, vegetables
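The exposure calculation behind the EDI and cumulative-intake comparisons can be sketched as follows. The copper reference dose of 0.01 mg/kg bw/day comes from the abstract, while the concentrations, consumption rates, and body weight below are hypothetical placeholders:

```python
# EDI (mg/kg bw/day) = element concentration (mg/kg) x daily intake (kg/day) / body weight (kg).
# The hazard quotient (HQ) compares the EDI with a health-based reference dose;
# summing HQs across foods gives a cumulative (hazard index) view of combined consumption.

def estimated_daily_intake(conc_mg_per_kg, intake_kg_per_day, body_weight_kg):
    """Per-food estimated daily intake of a trace element, in mg/kg bw/day."""
    return conc_mg_per_kg * intake_kg_per_day / body_weight_kg

def hazard_quotient(edi, reference_dose):
    """HQ > 1 indicates intake above the health-based guidance value."""
    return edi / reference_dose

CU_RFD = 0.01      # mg/kg bw/day, the copper dietary reference intake cited in the text
BODY_WEIGHT = 70.0  # kg, assumed adult body weight

# Hypothetical Cu concentrations (mg/kg) and daily consumption (kg/day), for illustration.
foods = {"potato": (8.0, 0.15), "grape": (6.0, 0.10), "bean": (9.0, 0.05)}

hq_total = sum(
    hazard_quotient(estimated_daily_intake(conc, intake, BODY_WEIGHT), CU_RFD)
    for conc, intake in foods.values()
)
```

With these made-up inputs, each individual food can stay near or above its reference dose while the cumulative hazard index across foods clearly exceeds 1, mirroring the abstract's finding that combined consumption drives the exceedances.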
Procedia PDF Downloads 304
1161 Development of Peaceful Wellbeing in Executive Practitioners through Mindfulness-Based Practices
Authors: Narumon Jiwattanasuk, Phrakrupalad Pannavoravat, Pataraporn Sirikanchana
Abstract:
Mindfulness has become a widely adopted approach to positive wellbeing. The aims of this paper are to analyze the problems of executive meditation practitioners at the Buddhamahametta Foundation in Thailand and to provide recommendations on a process to develop peaceful wellbeing in executive meditation practitioners by applying the principles of the four foundations of mindfulness. This study focuses particularly on executives because there is little research on the wellbeing development of executives, and the researcher recognizes that executives can be examples within their organizations, significantly influencing their employees and their families to take an interest in practicing mindfulness. This improvement would then grow from the individual to the surrounding community, such as family, workplace, society, and the nation, leading to happiness at the national level, which is the expectation of this research. The paper highlights mindfulness practices that can be performed on a daily basis. This study is qualitative research with 10 key participants who are executives from various sectors such as hospitality, healthcare, retail, power energy, and so on. Three mindfulness-based courses were conducted over a period of 8 months, and in-depth interviews were conducted before the first course as well as at the end of every course; in total, four in-depth interviews were conducted. The information collected from the interviews was analyzed in order to create the process to develop peaceful wellbeing. Focus group discussions with mindfulness specialists were conducted to help develop the mindfulness program as well. As a result of this research, it was found that the executives faced the following problems: stress, negative thinking loops, losing their temper, seeking acceptance, worrying about uncontrollable external factors, being unable to control their words, and weight gain.
The cultivation of the four foundations of mindfulness can develop peaceful wellbeing. The results showed that after the key informant executives attended the mindfulness courses and practiced mindfulness regularly, they developed peaceful wellbeing in all aspects, namely physical, psychological, behavioral, and intellectual, by applying 12 mindfulness-based activities. The development of wellbeing, in the conclusion of this study, also includes various tools to support continued practice, including a handout of guided mindfulness practice, video clips about mindfulness practice, an online dhamma channel, and mobile applications to support regular mindfulness-based practice.
Keywords: executive, mindfulness activities, stress, wellbeing
Procedia PDF Downloads 125
1160 Understanding National Soccer Jersey Design from a Material Culture Perspective: A Content Analysis and Wardrobe Interviews with Canadian Consumers
Authors: Olivia Garcia, Sandra Tullio-Pow
Abstract:
The purpose of this study was to understand what design attributes make the most ideal (wearable and memorable) national soccer jersey. The research probed Canadian soccer enthusiasts to better understand their jersey-purchasing rationale. The research questions framing this study were: how do consumers feel about their jerseys? How do these feelings influence their choices? There has been limited research on soccer jerseys from a material culture perspective, and what exists does not cover national soccer jerseys. The results of this study may be used by product developers and advertisers who are looking to better understand the consumer base for national soccer jersey design. A mixed methods approach informed the research. To begin, a content analysis of all the home jerseys from the 2018 World Cup was conducted. Information was noted on size range, main colour, fibre content, brand, collar details, availability, sleeve length, place of manufacturing, pattern, price, fabric as reported by the company, neckline, availability on the company website, jersey inspiration, and badge/crest details. Following the content analysis, wardrobe interviews were conducted with six consumers/fans. Participants brought two or more jerseys to the interviews, where the jerseys acted as clothing probes to recount information. Interview questions were semi-structured and focused on the participants’ relationship with the sport, their personal background, who they cheered for, why they bought the jerseys, and fit preferences. The goal of the inquiry was to draw out how participants feel about their jerseys and why. Finally, an interview with an industry professional was conducted. This interview was semi-structured, focusing on basic questions regarding sportswear design, sales, the popularity of soccer, and the manufacturing and marketing process. The findings showed that national soccer jerseys are an integral part of material culture.
Women liked more fitted jerseys, and men liked more comfortable jerseys. Jerseys should be made with a cooling, comfortable fabric, and their crests and prints should not peel. The symbols on jerseys convey a team’s history and are most typically placed on the left chest. Jerseys should always represent the flag and/or the country’s colours and should use designs that are both fashionable and innovative. Jersey design should always consider the opinions of consumers to help inform the design process. Jerseys should always draw on culture, as consumers feel connected to jerseys that represent the culture and/or family they have grown up with. Jerseys should draw on a team’s history, as well as the nostalgia associated with the team, as consumers prefer jerseys that reflect important moments in soccer. Jerseys must also sit at a reasonable price point for consumers, with an experience to go along with the jersey purchase. In conclusion, national soccer jerseys are sites of attachment and memories and play an integral part in the study of material culture.
Keywords: design, fashion, material culture, sport
Procedia PDF Downloads 106