Search results for: simple coacervation
2779 The 'Plain Style' in the Theory and Practice of Project Design: Contributions to the Shaping of an Urban Image on the Waterfront Prior to the 1755 Earthquake
Authors: Armenio Lopes, Carlos Ferreira
Abstract:
In the specific context of the Iberian Union between 1580 and 1640, characteristics emerged in Portuguese architecture that stood out from the main architectural production of the period. Aspects that had begun making their appearance decades before (from around 1521) became significantly more marked during the Habsburg-Spanish occupation. Distinct even from the imperial language of Spain, this trend would endure after the restoration of independence in 1640, arguably continuing through to the start of the age of absolutism. This trend, identified by Kubler as the Plain Style and associated with a certain scarcity of resources, involved a certain formal and decorative simplification, as well as a particular set of conventions that would subsequently mark the landscape. This expression can also be read as a means of asserting a certain spirit of independence as the Iberian Union breathed its last. The image of a simple, bare-bones architecture with purer design lines is associated by various authors (most notably Kubler) with the narratives of modernism, to whose principles it is similar, in a context specific to the period. There is a contrast with some of the exuberance of the baroque or its expression in the Manueline period, much as modernism responded to nineteenth-century eclecticism. This assertion and practice of simple architecture, drafted from the interpretation of the treatises and highlighting a certain classical inspiration, was to become a benchmark in the theory of architecture, spanning Mannerism and the Baroque, until achieving contemporary recognition for a certain originality and modernity.
At a time when the baroque and its scenography were becoming very widespread, it is important also to recognise the role played by plain style architecture in the construction of a rather complex and contradictory waterfront landscape, featuring promises of exuberance alongside more discreet practices.
Keywords: Carlos Mardel, Lisbon's waterfront, plain style, urban image on the waterfront
Procedia PDF Downloads 140
2778 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru
Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar
Abstract:
Nowadays, heritage building information modeling (HBIM) is considered an efficient tool to represent and manage information on cultural heritage (CH). The basis of this tool relies on a 3D model generally obtained through a cloud-to-BIM procedure. There are different methods to create an HBIM model, ranging from manual modeling based on the point cloud to the automatic detection of shapes and creation of objects. The selection among these methods depends on the desired level of development (LOD), level of information (LOI), and grade of generation (GOG), as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using Recap Pro, Revit, and the Dynamo interface, following a three-step methodology. The first step consists of the manual modeling of simple structural elements (e.g., regular walls, columns, floors, and wall openings) and architectural elements (e.g., cornices, moldings, and other minor details) using the point cloud as reference. Then, Dynamo is used for generative modeling of complex structural elements such as vaults, infills, and domes. Finally, semantic information (e.g., materials, typology, and state of conservation) and pathologies are added to the HBIM model as text parameters and generic model families, respectively. The application of this methodology allows the documentation of CH through a relatively simple process that ensures adequate LOD, LOI, and GOG levels. In addition, the easy implementation of the method, and the fact that a single BIM software package with its respective plugin covers the whole scan-to-BIM modeling process, mean that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources, since the BIM software used has a free student license.
Keywords: cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit
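The semantic-enrichment step (step three) can be sketched as a plain data model. The class and parameter names below are hypothetical and only mimic how text parameters and pathology records are attached to HBIM elements; this is not the actual Revit/Dynamo API:

```python
from dataclasses import dataclass, field

@dataclass
class HBIMElement:
    # One modeled element (wall, vault, cornice, ...) in the HBIM model.
    name: str
    category: str                                     # "structural" or "architectural"
    parameters: dict = field(default_factory=dict)    # semantic text parameters
    pathologies: list = field(default_factory=list)   # attached generic-model records

    def set_parameter(self, key, value):
        # Mirrors adding a text parameter (material, typology, ...) to an element.
        self.parameters[key] = value

# Steps 1 and 2 produce the geometry; step 3 enriches it with semantics.
vault = HBIMElement("main_vault", "structural")
vault.set_parameter("material", "stone masonry")
vault.set_parameter("state_of_conservation", "fair")
vault.pathologies.append("moisture_stain_north_face")
```

In the actual workflow these assignments would be driven through Dynamo against the Revit model rather than plain Python objects.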
Procedia PDF Downloads 145
2777 Detection of JC Virus DNA and T-Ag Expression in a Subpopulation of Tunisian Colorectal Carcinomas
Authors: Wafa Toumi, Alessandro Ripalti, Luigi Ricciardiello, Dalila Gargouri, Jamel Kharrat, Abderraouf Cherif, Ahmed Bouhafa, Slim Jarboui, Mohamed Zili, Ridha Khelifa
Abstract:
Background & aims: Colorectal cancer (CRC) is one of the most common malignancies throughout the world. Several risk factors, both genetic and environmental, including viral infections, have been linked to colorectal carcinogenesis. A few studies report the detection of human polyomavirus JC (JCV) DNA and transformation antigen (T-Ag) in a fraction of the colorectal tumors studied and suggest an association of this virus with CRC. In order to investigate whether such an association of JCV with CRC holds in a different epidemiological setting, we looked for the presence of JCV DNA and T-Ag expression in a group of Tunisian CRC patients. Methods: Fresh colorectal mucosa biopsies were obtained from 17 healthy volunteers and from both colorectal tumors and adjacent normal tissues of 47 CRC patients. DNA was extracted from fresh biopsies or from formalin-fixed, paraffin-embedded tissue sections using the Invitrogen Purelink Genomic DNA mini Kit. A simple PCR and a nested PCR were used to amplify a region of the T-Ag gene, yielding 154 bp and 98 bp products, respectively. Specificity was confirmed by sequencing of the PCR products. T-Ag expression was determined by immunohistochemical staining using a mouse monoclonal antibody (clone PAb416) directed against SV40 T-Ag that cross-reacts with JCV T-Ag. Results: JCV DNA was found in 12 (25%) and 22 (46%) of the CRC tumors by simple PCR and by nested PCR, respectively. All paired adjacent normal mucosa biopsies were negative for viral DNA. Sequencing of the DNA amplicons confirmed the authenticity of the T-Ag sequences. Immunohistochemical staining showed nuclear T-Ag expression in all 22 JCV DNA-positive samples and in 3 additional tumor samples that appeared DNA-negative by PCR. Conclusions: These results suggest an association of JCV with a subpopulation of Tunisian colorectal tumors.
Keywords: colorectal cancer, immunohistochemistry, Polyomavirus JC, PCR
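The detection rates quoted in the results follow directly from the counts; a quick check (truncating to whole percent, as the abstract does):

```python
# Detection counts among the 47 CRC tumors reported in the abstract.
n_tumors = 47
simple_pcr_positive = 12
nested_pcr_positive = 22
ihc_positive = nested_pcr_positive + 3   # IHC found 3 extra T-Ag positives

rate_simple = int(100 * simple_pcr_positive / n_tumors)  # truncated percent
rate_nested = int(100 * nested_pcr_positive / n_tumors)
```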
Procedia PDF Downloads 363
2776 A Comparative Legal Enquiry on the Concept of Invention
Authors: Giovanna Carugno
Abstract:
The concept of invention is rarely scrutinized by legal scholars, since it is a slippery one, full of nuances and difficult to define. When does an idea become relevant for patent law? When is it even possible to say what an invention is? It is the first question to be answered in order to obtain a patent, but it is sometimes neglected by treatises or reduced to very simple, automatically recited definitions. Perhaps this is because it is more a transnational and cultural concept than a mere institution of law. Tautology is used to avoid the challenge (in United States patent regulation, the inventor is the one who contributed to a patentable invention); in other cases, a clear definition is surprisingly not even provided (see, e.g., the European Patent Convention). In Europe, the issue is still more complicated because there are several different solutions, elaborated inorganically by national court systems, varying from one to the other only with the aim of solving particular IP cases. A neighboring domain, copyright law, does not assist in the inquiry either, since an author in that field is entitled to protection insofar as he produces something new. Novelty is not enough in patent law. A simple distinction between a mere improvement that can be achieved by a person skilled in the art (a sort of reasonable man of other sectors) and a non-obvious change rising to the dignity of protection does not go far enough: it still does not define the concept, and it is rigid and unfruitful. So, setting aside for the moment the issue of defining the invention/inventor, our proposal is to scrutinize the possible self-sufficiency of a system in which the inventor or improver would be awarded royalties or similar compensation according to the economic improvement he was able to bring.
The law, in this case, lies in the penumbra of misleading concepts, divided between facts that are obscure and technical and do not necessarily involve legal issues. The aim of this paper is to identify a single definition (or, at least, the minimum elements common to the different legal systems) of what an invention (legally) is, and the practical hints for identifying an authentic invention. In conclusion, it proposes an alternative system in which the invention is no longer considered as such and the only things that matter are the revenues generated by the technological improvement brought about by the worker's activity.
Keywords: comparative law, intellectual property, invention, patents
Procedia PDF Downloads 184
2775 Gas Flow, Time, Distance Dynamic Modelling
Authors: A. Abdul-Ameer
Abstract:
The equations governing the distance and pressure-volume flow relationships for the pipeline transportation of gaseous mixtures are considered. A derivation based on differential calculus, for an element of this system model, is addressed. Solutions yielding the input-output response following pressure changes are reviewed. The technical problems associated with these analytical results are identified, and procedures resolving these difficulties, thereby providing an attractive and simple analysis route, are outlined. Computed responses, validating the calculated predictions, are presented.
Keywords: pressure, distance, flow, dissipation, models
Procedia PDF Downloads 475
2774 Y-Y’ Calculus in Physical Sciences and Engineering with Particular Reference to Fundamentals of Soil Consolidation
Authors: Sudhir Kumar Tewatia, Kanishck Tewatia, Anttriksh Tewatia
Abstract:
Advancements in soil consolidation are discussed, and further improvements are proposed with particular reference to Tewatia’s Y-Y’ Approach, which is called the Settlement versus Rate of Settlement Approach in consolidation. A branch of calculus named Y-Y' (or y versus dy/dx) is suggested (as compared to the common X-Y', x versus dy/dx, dy/dx versus x, or Newton-Leibniz branch) that solves some complicated or unsolved theoretical and practical problems in the physical sciences (physics, chemistry, mathematics, biology, and allied sciences) and engineering in a remarkably simple and short manner, particularly when the independent variable X is unknown and the X-Y' Approach cannot be used. Complicated theoretical and practical problems in 1D, 2D, and 3D primary and secondary consolidation with non-uniform gradual loading and irregularly shaped clays are solved with the elementary-level Y-Y' Approach, and it is interesting to note that in the X-Y' Approach, equations become more difficult as we move from one to three dimensions, whereas in the Y-Y' Approach even the 2D/3D equations are very simple to derive, solve, and use, and indeed sometimes easier. This branch of calculus could have a far-reaching impact on understanding and solving problems in different fields of the physical sciences and engineering that were hitherto unsolved or difficult to solve by normal calculus, numerical, or computer methods. Some particular cases from soil consolidation, which are basically creep and diffusion equations in isolation and in combination with each other, are taken for comparison with heat transfer. The Y-Y’ Approach can similarly be applied to wave equations and other fields wherever normal calculus works or fails. Soil mechanics uses mathematical analogies from other fields of physical sciences and engineering to solve theoretical and practical problems; for example, consolidation theory is a replica of the heat equation from thermodynamics with the addition of the effective stress principle.
An attempt is made to give them mathematical analogies.
Keywords: calculus, clay, consolidation, creep, diffusion, heat, settlement
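A minimal illustration of the Y-Y' idea, under an assumed exponential creep law (not the authors' derivation): when the independent variable t is unrecorded, plotting the settlement rate against the settlement itself still recovers the rate constant.

```python
import math

# Hypothetical creep-type settlement S(t) = S_ult * (1 - exp(-k t)),
# so dS/dt = k * (S_ult - S): the rate is a linear function of S itself.
S_ult, k = 100.0, 0.3

# Sample (S, dS/dt) pairs WITHOUT recording t, as in a Y-Y' analysis.
samples = [(S_ult * (1 - math.exp(-k * t)),
            k * S_ult * math.exp(-k * t)) for t in (1.0, 2.0, 5.0)]

# The ratio rate / (S_ult - S) recovers k from every sample,
# with no knowledge of the independent variable.
k_est = [rate / (S_ult - S) for S, rate in samples]
```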
Procedia PDF Downloads 96
2773 Bulk Modification of Poly(Dimethylsiloxane) for Biomedical Applications
Authors: A. Aslihan Gokaltun, Martin L. Yarmush, Ayse Asatekin, O. Berk Usta
Abstract:
In the last decade, microfabrication processes, including rapid prototyping techniques, have advanced rapidly and reached a fairly mature stage. These advances have encouraged and enabled the use of microfluidic devices by a wider range of users, with applications in biological separations and cell and organoid cultures. Accordingly, a significant current challenge in the field is controlling biomolecular interactions at interfaces and developing novel biomaterials to satisfy the unique needs of biomedical applications. Poly(dimethylsiloxane) (PDMS) is by far the most preferred material in the fabrication of microfluidic devices. This can be attributed to its favorable properties, including: (1) simple fabrication by replica molding, (2) good mechanical properties, (3) excellent optical transparency from 240 to 1100 nm, (4) biocompatibility and non-toxicity, and (5) high gas permeability. However, the high hydrophobicity (water contact angle ~108°±7°) of PDMS often limits its applications where solutions containing biological samples are concerned. In our study, we created a simple method for modifying the surface chemistry of PDMS microfluidic devices through the addition of surface-segregating additives during manufacture. In this method, a surface-segregating copolymer is added to the silicone precursors and the desired device is manufactured following the usual methods. When the device surface is in contact with an aqueous solution, the copolymer self-organizes to expose its hydrophilic segments at the surface, making the surface of the silicone device more hydrophilic. This can lead to several improved performance criteria, including lower fouling, lower non-specific adsorption, and better wettability. Specifically, this approach is expected to be useful for the manufacture of microfluidic devices.
It is also likely to be useful for manufacturing silicone tubing and other materials, biomaterial applications, and surface coatings.
Keywords: microfluidics, non-specific protein adsorption, PDMS, PEG, copolymer
Procedia PDF Downloads 267
2772 Easy Way of Optimal Process-Storage Network Design
Authors: Gyeongbeom Yi
Abstract:
The purpose of this study is to introduce the analytic solution for determining the optimal capacity (lot size) of a multiproduct, multistage production and inventory system to meet the finished product demand. Reasonable decision-making about the capacity of processes and storage units is an important subject for industry. The common industrial solution is the classical economic lot sizing method, the EOQ/EPQ (Economic Order Quantity/Economic Production Quantity) model, combined with practical experience. However, the unrealistic material flow assumption of the EOQ/EPQ model is not suitable for chemical plant design with highly interlinked processes and storage units. This study overcomes the limitation of the classical lot sizing method, which was developed on the basis of the single-product, single-stage assumption. The superstructure of the plant considered consists of a network of processes and storage units interlinked in series and/or in parallel. The processes involve chemical reactions with multiple feedstock materials and multiple products, as well as mixing, splitting, or transportation of materials. The objective function for optimization is minimizing the total cost, composed of setup and inventory holding costs as well as the capital costs of constructing processes and storage units. A novel production and inventory analysis method, the PSW (Periodic Square Wave) model, is applied. The advantage of the PSW model comes from the fact that it provides a set of simple analytic solutions in spite of a realistic description of the material flow between processes and storage units. The resulting simple analytic solutions can greatly enhance proper and quick investment decisions for the plant design and operation problems confronted in diverse economic situations.
Keywords: analytic solution, optimal design, process-storage network
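For contrast, the classical EOQ/EPQ lot sizes that the PSW model is positioned against can be computed directly; the figures below are illustrative, not the study's data:

```python
import math

def eoq(demand, setup_cost, holding_cost):
    # Classical Economic Order Quantity: Q* = sqrt(2 D S / H).
    return math.sqrt(2 * demand * setup_cost / holding_cost)

def epq(demand, setup_cost, holding_cost, production_rate):
    # Economic Production Quantity: EOQ corrected for a finite
    # production rate p, since stock accumulates while producing.
    return math.sqrt(2 * demand * setup_cost /
                     (holding_cost * (1 - demand / production_rate)))

q_eoq = eoq(demand=1200, setup_cost=100, holding_cost=6)           # units/order
q_epq = epq(demand=1200, setup_cost=100, holding_cost=6,
            production_rate=4800)                                   # units/run
```

Both formulas assume a single product and a single stage, which is exactly the limitation the abstract's network model removes.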
Procedia PDF Downloads 331
2771 Using Hyperspectral Sensor and Machine Learning to Predict Water Potentials of Wild Blueberries during Drought Treatment
Authors: Yongjiang Zhang, Kallol Barai, Umesh R. Hodeghatta, Trang Tran, Vikas Dhiman
Abstract:
Detecting water stress in crops early and accurately is crucial to minimizing its impact. This study aims to measure water stress in wild blueberry crops non-destructively by analyzing proximal hyperspectral data. The data collection took place in the summer growing season of 2022. A drought experiment was conducted on wild blueberries in a randomized block design in the greenhouse, incorporating various genotypes and irrigation treatments. Hyperspectral data (spectral range: 400-1000 nm) using a handheld spectroradiometer and leaf water potential data using a pressure chamber were collected from wild blueberry plants. Machine learning techniques, including multiple regression analysis and random forest models, were employed to predict leaf water potential (MPa). We explored the optimal wavelength bands for simple differences (RY1 - RY2), simple ratios (RY1/RY2), and normalized differences ((RY1 - RY2)/(RY1 + RY2)). NDWI ((R857 - R1241)/(R857 + R1241)), SD (R2188 - R2245), and SR (R1752/R1756) emerged as the top predictors of leaf water potential, contributing to the highest model performance. The base learner models achieved an R-squared value of approximately 0.81, indicating their capacity to explain 81% of the variance. Research is underway to develop a neural vegetation index (NVI) that automates the process of index development by searching for specific wavelengths in the space of ratios of linear functions of reflectance. The NVI framework could work across species and predict different physiological parameters.
Keywords: hyperspectral reflectance, water potential, spectral indices, machine learning, wild blueberries, optimal bands
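The three reported indices are simple band arithmetic; a sketch of how they would be computed from a reflectance spectrum (the reflectance values below are hypothetical, not the study's data):

```python
def ndwi(r857, r1241):
    # Normalized difference water index from the two bands in the abstract.
    return (r857 - r1241) / (r857 + r1241)

def simple_difference(r2188, r2245):
    return r2188 - r2245

def simple_ratio(r1752, r1756):
    return r1752 / r1756

# Illustrative reflectance values keyed by wavelength in nm.
refl = {857: 0.45, 1241: 0.30, 2188: 0.21, 2245: 0.18, 1752: 0.28, 1756: 0.25}
features = [ndwi(refl[857], refl[1241]),
            simple_difference(refl[2188], refl[2245]),
            simple_ratio(refl[1752], refl[1756])]
# These three features would then feed the regression / random forest models.
```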
Procedia PDF Downloads 67
2770 Defuzzification of Periodic Membership Function on Circular Coordinates
Authors: Takashi Mitsuishi, Koji Saigusa
Abstract:
This paper presents a circular polar coordinate transformation of periodic fuzzy membership functions. The purpose is identification of the domain of periodic membership functions in the consequent part of IF-THEN rules. The proposed methods are applied to a simple color construction system.
Keywords: periodic membership function, polar coordinates transformation, defuzzification, circular coordinates
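One common way to defuzzify on circular coordinates, offered here as a hedged sketch rather than the paper's exact method, is a centroid taken over unit vectors, so that membership mass near 0°/360° is handled without wrap-around error:

```python
import math

def defuzzify_circular(membership, n=360):
    # Centroid defuzzification on the unit circle: average the unit
    # vectors weighted by membership, then take the resulting angle.
    sx = sum(membership(2 * math.pi * i / n) * math.cos(2 * math.pi * i / n)
             for i in range(n))
    sy = sum(membership(2 * math.pi * i / n) * math.sin(2 * math.pi * i / n)
             for i in range(n))
    return math.atan2(sy, sx) % (2 * math.pi)

# Periodic membership function peaked at 350 degrees (near the 0/360 seam,
# where a plain linear centroid would fail badly).
peak = math.radians(350)
mu = lambda t: max(0.0, math.cos(t - peak))
angle = math.degrees(defuzzify_circular(mu))   # close to 350
```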
Procedia PDF Downloads 311
2769 Decolorization of Rhodamine-B Dye by Pseudomonas putida on Activated Carbon
Authors: U. K. Ghosh, A. Ullhyan
Abstract:
Activated carbon prepared from mustard stalk was applied to decolorize Rhodamine-B dye-bearing synthetic wastewater by simple adsorption and by simultaneous adsorption and biodegradation (SAB) using Pseudomonas putida MTCC 1194. Results showed that Rhodamine-B dye removal was 82% for adsorption and 99.3% for SAB at pH 6.5, an adsorbent dose of 10 g/L, and a temperature of 32ºC.
Keywords: activated carbon, mustard stalk, Rhodamine-B, adsorption, SAB, Pseudomonas putida
Procedia PDF Downloads 360
2768 Upper Bounds on the Paired Domination Number of Cubic Graphs
Authors: Bin Sheng, Changhong Lu
Abstract:
Let G be a simple undirected graph with no isolated vertex. A paired dominating set of G is a dominating set which induces a subgraph that has a perfect matching. The paired domination number of G, denoted by γₚᵣ(G), is the size of its smallest paired dominating set. Goddard and Henning conjectured that γₚᵣ(G) ≤ 4n/7 holds for every connected graph G of order n with δ(G) ≥ 3, except the Petersen graph. In this paper, we prove this conjecture for cubic graphs.
Keywords: paired dominating set, upper bound, cubic graphs, weight function
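A paired dominating set can be verified mechanically: check domination, then check for a perfect matching in the induced subgraph. A small sketch on the 3-cube, which is a cubic graph:

```python
def is_paired_dominating_set(adj, s):
    s = set(s)
    # Domination: every vertex outside s must have a neighbour in s.
    if any(v not in s and not (adj[v] & s) for v in adj):
        return False
    # Paired condition: the induced subgraph on s has a perfect matching
    # (brute force, fine for small s).
    def has_matching(rest):
        if not rest:
            return True
        v = min(rest)
        return any(has_matching(rest - {v, u}) for u in adj[v] & rest)
    return len(s) % 2 == 0 and has_matching(s)

# 3-cube Q3: 3-regular on n = 8 vertices, so the conjectured bound
# gamma_pr(G) <= 4n/7 (about 4.57) promises a set of size at most 4.
adj = {v: {v ^ (1 << b) for b in range(3)} for v in range(8)}
ok = is_paired_dominating_set(adj, {0, 1, 6, 7})   # True: 0-1 and 6-7 pair up
```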
Procedia PDF Downloads 242
2767 Effect of Baffles on the Cooling of Electronic Components
Authors: O. Bendermel, C. Seladji, M. Khaouani
Abstract:
In this work, we present a numerical study of the thermal and dynamic behaviour of air in a horizontal channel containing electronic components, and discuss the influence of baffles on the velocity and temperature profiles. The finite volume method and the SIMPLE algorithm are used to solve the equations of conservation of mass, momentum, and energy. The results show that baffles improve heat transfer between the cooling air and the electronic components, with the velocity increasing to about three times its initial value.
Keywords: electronic components, baffles, cooling, fluids engineering
Procedia PDF Downloads 297
2766 Identification and Quantification of Lisinopril from Pure, Formulated and Urine Samples by Micellar Thin Layer Chromatography
Authors: Sudhanshu Sharma
Abstract:
Lisinopril, 1-[N²-[(S)-1-carboxy-3-phenylpropyl]-L-lysyl]-L-proline dihydrate, is a lysine analog of enalaprilat, the active metabolite of enalapril. It is a long-acting, non-sulfhydryl angiotensin-converting enzyme (ACE) inhibitor used for the treatment of hypertension and congestive heart failure at a daily dosage of 10-80 mg. The pharmacological activity of lisinopril has been proved in various experimental and clinical studies. Owing to its importance and widespread use, efforts have been made towards the development of simple and reliable analytical methods. As per our literature survey, lisinopril in pharmaceutical formulations has been determined by various analytical methodologies, including polarography, potentiometry, and spectrophotometry, but most of these methods are not well suited to the identification of lisinopril in clinical samples because of the interference caused by amino acids and amino-group-containing metabolites present in biological samples. This report is an attempt in the direction of developing a simple and reliable method for on-plate identification and quantification of lisinopril in pharmaceutical formulations as well as in human urine samples, using silica gel H layers developed with a new mobile phase comprising micellar solutions of N-cetyl-N,N,N-trimethylammonium bromide (CTAB). Micellar solutions have found numerous practical applications in many areas of separation science. Micellar liquid chromatography (MLC) has gained immense popularity and wide applicability due to its operational simplicity, cost-effectiveness, relative non-toxicity, enhanced separation efficiency, and low aggressiveness. The incorporation of aqueous micellar solutions as mobile phases was pioneered by Armstrong and Terrill, who accentuated the importance of TLC where simultaneous separation of ionic or non-ionic species in a variety of matrices is required.
A peculiarity of micellar mobile phases (MMPs) is that they have no macroscopic analogues; as a result, separations that are difficult with aqueous-organic mobile phases can often be achieved easily with MMPs. Previously, MMPs were successfully employed in TLC-based critical separations of aromatic hydrocarbons, nucleotides, vitamins K1 and K5, o-, m-, and p-aminophenol, amino acids, and penicillins. Human urine analysis for the identification of selected drugs and their metabolites has emerged as an important investigative tool in forensic drug analysis. Among the available chromatographic methods, only thin layer chromatography (TLC) enables a simple, fast, and effective separation of the complex mixtures present in various biological samples and is recommended as an approved test for forensic drug analysis by federal law. TLC has proved its applicability in the successful separation of bioactive amines, carbohydrates, enzymes, porphyrins and their precursors, alkaloids, and drugs from urine samples.
Keywords: lisinopril, surfactant, chromatography, micellar solutions
Procedia PDF Downloads 367
2765 Climate Change and Urban Flooding: The Need to Rethink Urban Flood Management through Resilience
Authors: Suresh Hettiarachchi, Conrad Wasko, Ashish Sharma
Abstract:
The ever-changing and expanding urban landscape increases the stress on urban systems to support and maintain safe and functional living spaces. Flooding presents one of the more serious threats to this safety, putting a large number of people in harm’s way in congested urban settings. Climate change is adding to this stress by creating a dichotomy in the urban flood response: on the one hand, storms are intensifying, resulting in more destructive, rarer floods, while on the other, longer dry periods are decreasing the severity of more frequent, less intense floods. This variability creates a need to be more agile and innovative in how we design for and manage urban flooding. Here, we argue that to cope with the challenge climate change brings, we need to move towards urban flood management through resilience rather than flood prevention. We also argue that dealing with the larger variation in flood response under climate change means looking at flooding from all aspects, rather than the single-dimensional focus on flood depths and extents. In essence, we need to rethink how we manage flooding in the urban space. This change in our thought process and approach to flood management requires a practical way to assess and quantify the resilience built into the urban landscape, so that informed decision-making can support the required changes in planning and infrastructure design. Towards that end, we propose a Simple Urban Flood Resilience Index (SUFRI), based on a robust definition of resilience, as a tool to assess flood resilience. The application of a simple resilience index such as the SUFRI provides a practical tool that considers urban flood management in a multi-dimensional way and can present solutions that were not previously considered.
When such an index is grounded in a clear and relevant definition of resilience, it can be a reliable and defensible way to assess, and to assist the process of adapting to, the increasing challenges of urban flood management under climate change.
Keywords: urban flood resilience, climate change, flood management, flood modelling
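The abstract does not publish the SUFRI formula, so the following is purely an illustrative sketch of the weighted-composite pattern such indices commonly follow; all indicator names, values, and weights are hypothetical:

```python
# Illustrative composite resilience index: a weighted sum of indicators
# normalized to [0, 1], where 1.0 means most resilient.
def resilience_index(indicators, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9   # weights must sum to 1
    return sum(weights[k] * indicators[k] for k in weights)

indicators = {
    "drainage_capacity": 0.6,
    "recovery_time": 0.4,    # inverse-scaled: shorter recovery scores higher
    "redundancy": 0.5,       # alternative flow paths / backup systems
}
weights = {"drainage_capacity": 0.4, "recovery_time": 0.4, "redundancy": 0.2}
index = resilience_index(indicators, weights)   # 0.0 (fragile) .. 1.0
```

A multi-dimensional index of this kind captures more than flood depth and extent alone, which is the shift in focus the abstract argues for.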
Procedia PDF Downloads 50
2764 Establishing a Surrogate Approach to Assess the Exposure Concentrations during Coating Process
Authors: Shan-Hong Ying, Ying-Fang Wang
Abstract:
A surrogate approach was deployed for assessing exposures to multiple chemicals in a selected working area of coating processes, and applied to assess the exposure concentrations of similarly exposed groups using the same chemicals but different formula ratios. In the selected area, 6 to 12 portable photoionization detectors (PIDs) were placed uniformly in the workplace to measure total VOC concentrations (CT-VOCs) for 6 randomly selected work shifts. Simultaneously, one sampling train was placed beside one of these portable PIDs, and the collected air sample was analyzed for the individual concentrations (CVOCi) of 5 VOCs (xylene, butanone, toluene, butyl acetate, and dimethylformamide). Predictive models were established by relating the CT-VOCs to the CVOCi of each individual compound via simple regression analysis. The established predictive models were employed to predict each CVOCi based on the CT-VOC measured with the same portable PID in each similar working area. Results show that the predictive models obtained from simple linear regression analyses had R2 = 0.83~0.99, indicating that CT-VOCs were adequate for predicting CVOCi. In order to verify the validity of the exposure prediction models, sampling analysis of the above chemical substances was further carried out and the correlation between the measured value (Cm) and the predicted value (Cp) was analyzed. A good correlation was found between the predicted and measured values of each measured chemical substance (R2 = 0.83~0.98). Therefore, the surrogate approach can be used to assess the exposure concentrations of similarly exposed groups using the same chemicals but different formula ratios.
However, it is recommended to establish the prediction model between the chemical substances associated with each coater and the direct-reading PID, as this is more representative of the real exposure situation and estimates operators' long-term exposure concentrations more accurately.
Keywords: exposure assessment, exposure prediction model, surrogate approach, TVOC
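The surrogate idea reduces to fitting CVOCi = a·CT-VOC + b on paired measurements and then predicting from the PID reading alone; a sketch with illustrative numbers, not the study's data:

```python
def fit_line(x, y):
    # Ordinary least squares for y = a*x + b.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    return a, my - a * mx

ct_voc = [2.0, 4.0, 6.0, 8.0]       # total VOC readings from the PID (ppm)
c_toluene = [0.5, 1.1, 1.4, 2.0]    # lab-analyzed toluene concentrations (ppm)
a, b = fit_line(ct_voc, c_toluene)

# Predict the individual toluene concentration from a new PID reading alone.
predicted = a * 5.0 + b
```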
Procedia PDF Downloads 152
2763 Simplified Measurement of Occupational Energy Expenditure
Authors: J. Wicks
Abstract:
Aim: To develop a simple methodology to allow heart rate (HR) data collected from inexpensive wearable devices to be expressed in a suitable format (METs) to quantitate occupational (and recreational) activity. Introduction: Assessment of occupational activity is commonly done by utilizing questionnaires in combination with prescribed MET levels for a vast range of previously measured activities. However, for any individual, the intensity of performing a specific activity can vary significantly; objective measurement of individual activity is therefore preferred. Though there is a wide range of HR recording devices, there is a distinct lack of methodology for processing the collected data to quantitate energy expenditure (EE). The HR index equation expresses METs in relation to relative HR, i.e., the ratio of activity HR to resting HR. The use of this equation provides a simple utility for objective measurement of EE. Methods: During a typical occupational work period of approximately 8 hours, HR data was recorded using a Polar RS 400 wrist monitor. Recorded data was downloaded to a Windows PC and non-HR data was stripped from the ASCII file using 'Notepad'. The HR data was exported to a spreadsheet program and sorted by HR range into a histogram format. Three HRs were determined: a resting HR (the HR delimiting the lowest 30 minutes of recorded data), a mean HR, and a peak HR (the HR delimiting the highest 30 minutes of recorded data). HR indices were calculated (mean index equals mean HR/rest HR; peak index equals peak HR/rest HR), with mean and peak indices being converted to METs using the HR index equation. Conclusion: Inexpensive HR recording devices can be utilized to make reasonable estimates of occupational (or recreational) EE, suitable for large-scale demographic screening, by utilizing the HR index equation.
The intrinsic value of the HR index equation is that it is independent of factors that influence absolute HR, namely fitness, smoking, and beta-blockade.
Keywords: energy expenditure, heart rate histograms, heart rate index, occupational activity
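The workflow can be sketched in a few lines; the MET conversion below assumes the HR index equation reported in the literature, METs = 6 × (HR/HRrest) − 5, and a stylized shift of one reading per minute:

```python
def hr_summary(hr_samples, window=30):
    # hr_samples: one HR reading per minute over the shift.
    # Rest HR delimits the lowest `window` minutes, peak HR the highest.
    s = sorted(hr_samples)
    rest = s[window - 1]
    peak = s[-window]
    mean = sum(hr_samples) / len(hr_samples)
    return rest, mean, peak

def mets_from_hr_index(hr, rest_hr):
    # HR index equation: METs = 6 * (HR / HRrest) - 5.
    return 6 * hr / rest_hr - 5

# Stylized 7-hour shift: 1 h near rest, 5 h moderate work, 1 h heavy work.
hrs = [60] * 60 + [90] * 300 + [120] * 60
rest, mean, peak = hr_summary(hrs)
mean_mets = mets_from_hr_index(mean, rest)
```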
Procedia PDF Downloads 296
2762 Analysis of Bridge-Pile Foundation System in Multi-layered Non-Linear Soil Strata Using Energy-Based Method
Authors: Arvan Prakash Ankitha, Madasamy Arockiasamy
Abstract:
The increasing demand for adopting pile foundations in bridges has pointed towards the need to constantly improve the existing analytical techniques for a better understanding of the behavior of such foundation systems. This study presents a simplified approach using the energy-based method to assess the displacement responses of piles subjected to general loading conditions: an axial load, a lateral load, and a bending moment. The governing differential equations and the boundary conditions for a bridge pile embedded in multi-layered soil strata subjected to the general loading conditions are obtained using Hamilton's principle, employing variational principles and minimization of energies. The soil non-linearity is incorporated through simple constitutive relationships that account for the degradation of soil moduli with increasing strain. A simple power law based on published literature is used, in which the soil is assumed to be nonlinear-elastic and perfectly plastic. A Tresca yield surface is assumed to develop the variation of soil stiffness with strain level that defines the non-linearity of the soil strata. This numerical technique is applied to a pile foundation in two-layered soil strata for a pier supporting the bridge and solved using the software MATLAB R2019a. The analysis yields the bridge pile displacements at any depth along the length of the pile. The results of the analysis are in good agreement with published field data and with three-dimensional finite element analysis results obtained using the software ANSYS 2019 R3. The methodology can be extended to study the response of multi-strata soil supporting group piles underneath the bridge piers.
Keywords: pile foundations, deep foundations, multilayer soil strata, energy based method
Procedia PDF Downloads 141
2761 Verification of a Simple Model for Rolling Isolation System Response
Authors: Aarthi Sridhar, Henri Gavin, Karah Kelly
Abstract:
Rolling Isolation Systems (RISs) are a simple and effective means to mitigate earthquake hazards to equipment in critical and precious facilities, such as hospitals, network collocation facilities, supercomputer centers, and museums. The RIS works by isolating components, attenuating the inertial forces felt by the subsystem. The RIS consists of two platforms with counter-facing concave surfaces (dishes) in each corner. Steel balls lie inside the dishes and allow relative motion between the top and bottom platforms. Formerly, a mathematical model for the dynamics of RISs was developed using Lagrange’s equations (LE) and experimentally validated. A new mathematical model was developed using Gauss’s Principle of Least Constraint (GPLC) and verified by comparing impulse response trajectories of the GPLC model and the LE model in terms of the peak displacements and accelerations of the top platform. Mathematical models for the RIS are tedious to derive because of the non-holonomic rolling constraints imposed on the system. However, using Gauss’s Principle of Least Constraint to find the equations of motion removes some of the obscurity and yields a system that can be easily extended. Though the GPLC model requires more state variables, the equations of motion are far simpler. The non-holonomic constraint is enforced in terms of accelerations and therefore requires additional constraint stabilization methods to prevent numerical integration from driving the system unstable. The GPLC model allows the incorporation of more physical aspects of the RIS, such as the contribution of the vertical velocity of the platform to the kinetic energy and the mass of the balls. This mathematical model for the RIS is a tool to predict the motion of the isolation platform. 
The ability to statistically quantify the expected responses of the RIS is critical in the implementation of earthquake hazard mitigation.
Keywords: earthquake hazard mitigation, earthquake isolation, Gauss’s Principle of Least Constraint, nonlinear dynamics, rolling isolation system
Procedia PDF Downloads 252
2760 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices
Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese
Abstract:
Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers’ acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are gaining a foothold in the meat market, promising more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular, and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted into grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular, and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular, and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of coefficient of determination (R²), hypothesis testing, and pattern of residuals. 
Good regression models were obtained, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure led to a good fat segmentation, making the visual approach for the quantification of the different fat fractions in dry-cured ham slices simple, accurate, and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious, and time-consuming chemical determinations. As future perspectives, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. The system will thus be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis
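As a minimal illustration of the final step, expressing a segmented fat region as a percentage of the total slice area, the sketch below applies a plain intensity threshold to a synthetic grey-scale image. The study itself uses the multi-stage Canny pipeline for segmentation; the threshold value and image here are assumptions for illustration only:

```python
import numpy as np

def fat_fraction(gray, fat_threshold=200):
    """Fraction of slice area classified as fat by a simple intensity
    threshold (fat appears bright in a grey-scale scan).
    `gray` is a 2-D uint8 array; the threshold is illustrative."""
    fat_pixels = np.count_nonzero(gray >= fat_threshold)
    return fat_pixels / gray.size

# synthetic 10x10 "slice": a bright fat band over darker muscle tissue
slice_img = np.full((10, 10), 120, dtype=np.uint8)
slice_img[:3, :] = 230          # top 30% of pixels are "fat"
print(fat_fraction(slice_img))  # 0.3
```

In practice the same area-fraction calculation would be applied separately to the total, intermuscular, and intramuscular masks produced by the edge-enhancement segmentation.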
Procedia PDF Downloads 177
2759 Efficient Energy Extraction Circuit for Impact Harvesting from High Impedance Sources
Authors: Sherif Keddis, Mohamed Azzam, Norbert Schwesinger
Abstract:
Harvesting mechanical energy from footsteps or other impacts is one way to enable wireless autonomous sensor nodes. These can be used for highly efficient control of connected devices such as lights, security systems, air conditioning systems, or other smart home applications. They can also be used for accurate location or occupancy monitoring. Converting the mechanical energy into useful electrical energy can be achieved using the piezoelectric effect, which offers simple harvesting setups and low deflections. The challenge facing piezoelectric transducers is the achievable amount of energy per impact, in the lower mJ range, and the management of such low energies. Simple setups for energy extraction, such as a full-wave bridge connected directly to a capacitor, are problematic due to the mismatch between high impedance sources and low impedance storage elements. Efficient energy circuits for piezoelectric harvesters are commonly designed for vibration harvesters and require periodic input energies with predictable frequencies. Due to the sporadic nature of impact harvesters, such circuits are not well suited. This paper presents a self-powered circuit that avoids the impedance mismatch during energy extraction by disconnecting the load until the source reaches its charge peak. The switch is implemented with passive components and works independently of the input frequency; therefore, this circuit is suited for impact harvesting and sporadic inputs. For the same input energy, this circuit stores 150% of the energy stored by a capacitor connected directly to a bridge rectifier. The total efficiency, defined as the ratio of the energy stored on a capacitor to the available energy measured across a matched resistive load, is 63%. 
Although the resulting energy is already sufficient to power certain autonomous applications, further optimization of the circuit is still under investigation in order to improve the overall efficiency.
Keywords: autonomous sensors, circuit design, energy harvesting, energy management, impact harvester, piezoelectricity
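The reported 150% figure is a ratio of capacitor energies, E = ½CV². The sketch below uses a hypothetical capacitance and voltages chosen only so the ratio comes out at the reported value; none of these numbers appear in the abstract:

```python
def stored_energy(capacitance, voltage):
    """Energy stored on a capacitor charged to `voltage`: E = 1/2 * C * V**2."""
    return 0.5 * capacitance * voltage ** 2

# Illustrative numbers (not from the paper): a 100 uF storage capacitor.
# Disconnecting the load until the source hits its charge peak lets the
# capacitor reach a higher voltage than a directly connected rectifier.
E_direct   = stored_energy(100e-6, 4.0)   # bridge rectifier + capacitor
E_switched = stored_energy(100e-6, 4.9)   # peak-detecting switch circuit

improvement = E_switched / E_direct       # ~1.5, i.e. 150% of the baseline
```

Because energy grows with the square of the voltage, even a modest increase in the peak capacitor voltage (4.0 V to 4.9 V here) yields the 50% energy gain the circuit reports.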
Procedia PDF Downloads 155
2758 Prototype of an Interactive Toy from Lego Robotics Kits for Children with Autism
Authors: Ricardo A. Martins, Matheus S. da Silva, Gabriel H. F. Iarossi, Helen C. M. Senefonte, Cinthyan R. S. C. de Barbosa
Abstract:
This paper develops a concept of man/robot interaction, focusing on autistic children, who often have greater difficulty with interaction. It offers a simple yet efficient solution that has been little studied for this public. The concept is based on code deployed through the Lego NXT kit, built for interpretation by the robot, so that this interaction can be created in a constructive way for children with autism.
Keywords: Lego NXT, interaction, BricX, autism, ANN (Artificial Neural Network), MLP back propagation, hidden layers
Procedia PDF Downloads 570
2757 Genetic Analysis of the Endangered Mangrove Species Avicennia Marina in Qatar Detected by Inter-Simple Sequence Repeat DNA Markers
Authors: Talaat Ahmed, Amna Babssail
Abstract:
Mangroves are evergreen trees that grow along the coastal areas of Qatar. The largest and oldest area of mangroves can be found around Al-Thakhira and Al-Khor. Other mangrove areas originate from fairly recent plantings by the government, although unfortunately the picturesque mangrove lake in Al-Wakra has now been uprooted. Avicennia marina is the predominant mangrove species found in the region. Mangroves protect and stabilize low-lying coastal land and provide protection and food sources for estuarine and coastal fishery food chains. They also serve as feeding, breeding, and nursery grounds for a variety of fish, crustaceans, reptiles, birds, and other wildlife. A total of 21 individuals of A. marina, representing seven diverse natural and artificial populations, were sampled throughout its range in Qatar. Leaves from 2-3 randomly selected trees at each location were collected. The locations are as follows: Al-Rawis, Ras-Madpak, Fuwairt, Summaseima, Al-Khor, Al-Mafjar, and Zekreet. Total genomic DNA was extracted using a commercial DNeasy Plant System kit (Qiagen, Inc., Valencia, CA) to be used for genetic diversity analysis. A total of 12 inter-simple sequence repeat (ISSR) primers were used to amplify DNA fragments from the genomic DNA. The 12 ISSR primers amplified polymorphic bands among mangrove samples in different areas, as well as within each area, indicating the existence of variation within each area and among the different areas of mangrove in Qatar. The results could characterize the Avicennia marina populations existing in different areas of Qatar and establish DNA fingerprint documentation for the mangrove populations, to be used in further studies. Moreover, the existence of genetic variation within and among Avicennia marina populations is a strong indication of the ability of these populations to adapt to different environmental conditions in Qatar. 
This study could be a warning to save the mangroves of Qatar, and with them the environment as well.
Keywords: DNA fingerprint, Avicennia marina, genetic analysis, Qatar
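Scoring ISSR gels typically yields binary presence/absence profiles per individual, and diversity analyses start from pairwise similarity between those profiles. A minimal sketch of one common measure (Jaccard similarity) follows; the band patterns and site names are invented for illustration, not data from the study:

```python
def jaccard(bands_a, bands_b):
    """Jaccard similarity between two binary ISSR band profiles
    (1 = band present, 0 = band absent): shared bands divided by
    the number of bands present in at least one profile."""
    shared = sum(1 for a, b in zip(bands_a, bands_b) if a == 1 and b == 1)
    union  = sum(1 for a, b in zip(bands_a, bands_b) if a == 1 or b == 1)
    return shared / union if union else 1.0

# hypothetical band profiles for trees from two sampling sites
al_khor = [1, 1, 0, 1, 0, 1]
zekreet = [1, 0, 0, 1, 1, 1]
print(jaccard(al_khor, zekreet))  # 0.6
```

Repeating this over all 21 sampled individuals would give a similarity matrix from which within- and among-population variation can be summarized.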
Procedia PDF Downloads 406
2756 Experimental Study of an Isobaric Expansion Heat Engine with Hydraulic Power Output for Conversion of Low-Grade-Heat to Electricity
Authors: Maxim Glushenkov, Alexander Kronberg
Abstract:
Isobaric expansion (IE) is an alternative to the conventional gas/vapor expansion accompanied by a pressure decrease that is typical of all state-of-the-art heat engines. The elimination of the expansion stage accompanied by useful work means that the most critical and expensive parts of ORC systems (turbine, screw expander, etc.) are also eliminated. In many cases, IE heat engines can be more efficient than conventional expansion machines. In addition, IE machines have a very simple, reliable, and inexpensive design. They can also perform all the known operations of existing heat engines and provide usable energy in a very convenient hydraulic or pneumatic form. This paper reports measurements made with the engine operating as a heat-to-shaft-power or electricity converter, and a comparison of the experimental results with a thermodynamic model. Experiments were carried out at heat source temperatures in the range of 30–85 °C and a heat sink temperature of around 20 °C; refrigerant R134a was used as the engine working fluid. The pressure difference generated by the engine varied from 2.5 bar at a heat source temperature of 40 °C to 23 bar at a heat source temperature of 85 °C. Using a differential piston, the generated pressure was quadrupled to pump hydraulic oil through a hydraulic motor that generates shaft power and is connected to an alternator. At a frequency of about 0.5 Hz, the engine operates with useful powers of up to 1 kW and an oil pumping flowrate of 7 L/min. Depending on the temperature of the heat source, the obtained efficiency was 3.5–6%. This efficiency is remarkably high considering such a low temperature difference (10–65 °C) and low power (< 1 kW). The engine’s observed performance is in good agreement with the predictions of the model. 
The results are very promising, showing that the engine is a simple and low-cost alternative to ORC plants and other known energy conversion systems, especially in the low temperature (< 100 °C) and low power (< 500 kW) range, where other known technologies are not economic. Thus low-grade solar and geothermal energy, biomass combustion, and waste heat with a temperature above 30 °C can be brought into various energy conversion processes.
Keywords: isobaric expansion, low-grade heat, heat engine, renewable energy, waste heat recovery
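The quoted figures are mutually consistent: hydraulic power is the pressure difference times the volumetric flowrate. A quick check, assuming the full quadrupled pressure acts across the motor at the stated oil flowrate:

```python
def hydraulic_power(delta_p_bar, flow_l_per_min):
    """Hydraulic power P = delta_p * Q, with unit conversions."""
    delta_p = delta_p_bar * 1e5             # bar -> Pa
    q = flow_l_per_min / 1000.0 / 60.0      # L/min -> m^3/s
    return delta_p * q                      # W

# engine at the 85 degC heat source: 23 bar, quadrupled by the
# differential piston, pumping oil at 7 L/min
p_oil = hydraulic_power(4 * 23, 7.0)        # about 1.07 kW
```

The roughly 1.07 kW of gross hydraulic power sits just above the reported "up to 1 kW" useful output, as expected once hydraulic motor and alternator losses are accounted for.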
Procedia PDF Downloads 226
2755 A Simple Approach to Establish Urban Energy Consumption Map Using the Combination of LiDAR and Thermal Image
Authors: Yu-Cheng Chen, Tzu-Ping Lin, Feng-Yi Lin, Chih-Yu Chen
Abstract:
Due to the urban heat island effect caused by the intense development of cities, heat stress has increased rapidly in recent years, resulting in a sharp rise in the energy used in urban areas. Heat stress during summer exacerbates the use of air conditioning and electric equipment, which causes more energy consumption and anthropogenic heat. Therefore, an accurate and simple method to measure the energy used in urban areas can help architects and urban planners develop better energy efficiency goals. This research combines airborne LiDAR data and thermal imagery to provide an innovative method to estimate energy consumption. Owing to the high resolution of the remote sensing data, the current building volume, total floor area, and building surface temperature derived from LiDAR and the thermal imager can be obtained to predict the energy used. In the estimation process, the LiDAR data are divided into four types of land cover: building, road, vegetation, and other obstacles. In this study, the points belonging to buildings were selected and overlaid with land use information; the energy consumption can therefore be estimated precisely with the real value of the total floor area and the energy use index for different building uses. After validation against real energy use data from the government, the results show that taller buildings in highly developed areas, such as commercial districts, present higher energy consumption, caused by the large total floor area and more anthropogenic heat. Furthermore, because surface temperatures can be raised by the electric equipment in use, this study also applies thermal images of buildings to find hot spots of energy use and make the estimation method more complete.
Keywords: urban heat island, urban planning, LiDAR, thermal imager, energy consumption
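The core of the estimate, energy use as total floor area times an energy use index (EUI) per building use, can be sketched as follows. The EUI values and floor area below are placeholders, not figures from the study:

```python
def annual_energy(total_floor_area_m2, land_use, eui_table):
    """Annual energy use = total floor area x energy use index (EUI)
    for the given building use. EUI values are placeholders."""
    return total_floor_area_m2 * eui_table[land_use]

# hypothetical EUI values in kWh/m^2/year by building use
EUI = {"commercial": 250.0, "residential": 90.0}

# same LiDAR-derived floor area, different land use classification
commercial  = annual_energy(12_000, "commercial", EUI)   # 3,000,000 kWh/yr
residential = annual_energy(12_000, "residential", EUI)  # 1,080,000 kWh/yr
```

This toy comparison mirrors the study's finding: for the same built volume, commercial land use implies substantially higher energy consumption than residential use, because the EUI differs by building function.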
Procedia PDF Downloads 239
2754 Implementation and Challenges of Assessment Methods in the Case of Physical Education Class in Some Selected Preparatory Schools of Kirkos Sub-City
Authors: Kibreab Alene Fenite
Abstract:
The purpose of this study is to investigate the implementation and challenges of different assessment methods for physical education class in some selected preparatory schools of Kirkos Sub-City. The participants in this study are teachers, students, department heads, and school principals from 4 selected schools. Of the total of 8 schools offering preparatory education in Kirkos Sub-City, 4 schools (Dandi Boru, Abiyot Kirse, Assay, and Adey Ababa) were selected using simple random sampling techniques, and from these schools all (100%) of the teachers, department heads, and school principals were taken as a sample, as their number is manageable. From the total of 2520 students, 252 (10%) were selected using simple random sampling. Accordingly, 13 teachers, 252 students, 4 department heads, and 4 school principals were purposively taken as a sample from the 4 selected schools. As data gathering tools, a questionnaire and interviews were employed. To analyze the collected data, both quantitative and qualitative methods were used. The result of the study revealed that assessment in physical education is not implemented properly: lack of sufficient materials, inadequate time allotment, large class sizes, lack of collaboration among teachers in assessing the performance of students, absence of guidelines for assessing the physical education subject, and the absence of any assessment method adapted to students with disabilities in line with their special needs were found to be the major challenges in implementing the current assessment methods of physical education. To overcome these problems, the following recommendations have been forwarded. 
These are: the necessary facilities and equipment should be made available; in order to make reliable, accurate, objective, and relevant assessments, physical education teachers should be familiarized with different assessment techniques; physical education assessment guidelines should be prepared and should include different types of assessment methods; qualified teachers should be employed; and dedicated teaching rooms must be built.
Keywords: assessment, challenges, equipment, guidelines, implementation, performance
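The 10% student sample described above is a textbook simple random sample: every student has an equal chance of selection, drawn without replacement. A minimal sketch (the fixed seed is only for reproducibility and is not part of the method):

```python
import random

def simple_random_sample(population, fraction, seed=42):
    """Draw a simple random sample of the given fraction of the
    population, without replacement."""
    k = round(len(population) * fraction)
    rng = random.Random(seed)   # fixed seed so the draw is repeatable
    return rng.sample(population, k)

students = list(range(1, 2521))            # the 2520 students
sample = simple_random_sample(students, 0.10)
print(len(sample))                          # 252
```

The same routine with a different population list would reproduce the school-level draw (4 of 8 schools), while the teachers, department heads, and principals need no sampling since all of them were included.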
Procedia PDF Downloads 282
2753 Robust Numerical Solution for Flow Problems
Authors: Gregor Kosec
Abstract:
A simple and robust numerical approach for solving flow problems is presented, in which the involved physical fields are represented through local approximation functions, i.e., the considered field is approximated over a local support domain. The approximation functions are then used to evaluate the partial differential operators. The type of approximation, the size of the support domain, and the type and number of basis functions can be general. The solution procedure is formulated completely through local computational operations. Besides the local numerical method, the pressure-velocity coupling is also performed locally while retaining the correct temporal transient. The complete locality of the introduced numerical scheme has several beneficial effects. One of the most attractive is its simplicity, since it can be understood as a generalized finite difference method, although much more powerful. The presented methodology offers many possibilities for treating challenging cases, e.g., nodal adaptivity to address regions with sharp discontinuities or p-adaptivity to treat obscure anomalies in the physical field. The trade-off between stability, computational complexity, and accuracy can be regulated by changing the number of support nodes, etc. All these features can be controlled on the fly during the simulation. The presented methodology is relatively simple to understand and implement, which makes it a potentially powerful tool for engineering simulations. Besides simplicity and straightforward implementation, there are many opportunities to fully exploit modern computer architectures through different parallel computing strategies. The performance of the method is demonstrated on the lid-driven cavity problem, the backward-facing step problem, and the de Vahl Davis natural convection test, extended also to low Prandtl number fluids and Darcy porous flow. Results are presented in terms of velocity profiles, convergence plots, and stability analyses. 
Results of all cases are also compared against published data.
Keywords: fluid flow, meshless, low Pr problem, natural convection
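A toy version of the local approximation idea, estimating a derivative from a small support domain by a local polynomial least-squares fit, in the spirit of a generalized finite difference, might look like the sketch below. The monomial basis, support size, and unweighted least squares are simplifications of what the method allows:

```python
import numpy as np

def local_derivative(x_nodes, f_nodes, x0, degree=2):
    """Estimate f'(x0) from scattered support nodes by fitting a local
    polynomial in least-squares sense -- the flavour of local
    approximation a meshless scheme uses to evaluate differential
    operators. Basis type/size are free parameters of the method."""
    # shift coordinates to x0 so f'(x0) is the linear-term coefficient
    A = np.vander(x_nodes - x0, degree + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(A, f_nodes, rcond=None)
    return coeffs[1]

# support domain of 5 nodes around x0 = 0.5 for f(x) = x**2, f'(0.5) = 1
xs = np.array([0.3, 0.4, 0.5, 0.6, 0.7])
slope = local_derivative(xs, xs**2, 0.5)   # ~1.0 (exact for a quadratic)
```

Enlarging the support domain or raising the polynomial degree corresponds directly to the stability/complexity/accuracy trade-offs the abstract describes, and both can be changed node by node at run time.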
Procedia PDF Downloads 234
2752 Role of Financial Institutions in Promoting Micro Service Enterprises with Special Reference to Hairdressing Salons
Authors: Gururaj Bhajantri
Abstract:
The financial sector is the backbone of any economy, and it plays a crucial role in the mobilisation and allocation of resources. One of the main objectives of the financial sector is inclusive growth. The constituents of the financial sector are banks and financial institutions, which mobilise resources from the surplus sector and channel them to the different needful sectors of the economy. The Micro, Small and Medium Enterprises sector in India covers a wide range of economic activities. These enterprises are classified on the basis of investment in equipment. Micro enterprises are divided into the manufacturing and services sectors, and micro service enterprises have an investment limit of up to ten lakhs on equipment. A hairdresser is one who not only cuts and shaves but also provides different types of haircuts, hairstyles, trimming, hair-dye, massage, manicure, pedicure, nail services, colouring, facials, makeup application, waxing, tanning, and other beauty treatments. Hairdressing salons provide these services with the help of equipment and need an investment in equipment of not more than ten lakhs; hence, they can be considered micro service enterprises. Hairdressing salons require more than Rs 2,50,000 to start a moderate salon. Moreover, hairdressers are unable to access organised finance; these individuals still borrow from money lenders at high rates of interest to get by. The socio-economic conditions of hairdressers are not properly known. Hence, the present study sheds light on the role of financial institutions in promoting hairdressing salons. The study also focuses on the socio-economic background of individuals in hairdressing salons and the problems faced by them. The present study is based on primary and secondary data. Primary data were collected among hairdressing salons in Davangere city. Samples were selected with the help of simple random sampling techniques. 
The collected data were analysed and interpreted with the help of simple statistical tools.
Keywords: micro service enterprises, financial institutions, hairdressing salons, financial sector
Procedia PDF Downloads 206
2751 On the Mathematical Modelling of Aggregative Stability of Disperse Systems
Authors: Arnold M. Brener, Lesbek Tashimov, Ablakim S. Muratov
Abstract:
The paper deals with a special model for coagulation kernels which introduces new control parameters into the Smoluchowski equation for binary aggregation. On the basis of this model, a new approach to evaluating the aggregative stability of disperse systems is proposed. With the help of this approach, simple estimates of the aggregative stability of various types of hydrophilic nano-suspensions have been obtained.
Keywords: aggregative stability, coagulation kernels, disperse systems, mathematical model
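For the constant-kernel special case of the Smoluchowski binary-aggregation equation, the total particle number has a closed form, which makes a convenient yardstick for aggregative stability: a smaller effective kernel means the particle number decays more slowly. The kernel values below are illustrative, not parameters from the paper:

```python
import numpy as np

def total_number(t, n0=1.0, kernel=1.0):
    """Total particle number N(t) for binary aggregation with a
    constant coagulation kernel K. Summing the Smoluchowski equation
    over all sizes gives dN/dt = -(K/2) * N**2, hence the classic
    result N(t) = n0 / (1 + n0*K*t/2)."""
    return n0 / (1.0 + 0.5 * n0 * kernel * t)

t = np.linspace(0.0, 10.0, 6)
fast = total_number(t, kernel=1.0)   # unstable: particles aggregate quickly
slow = total_number(t, kernel=0.1)   # more aggregatively stable suspension
```

In this picture, control parameters that reduce the effective coagulation kernel, as in the special kernel model proposed above, directly translate into slower loss of particle number, i.e., higher aggregative stability.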
Procedia PDF Downloads 310
2750 Simple Assessments to Demystify Complementary Feeding: Leveraging a Successful Literacy Initiative Assessment Approach in Gujarat, India
Authors: Smriti Pahwa, Karishma Vats, Aditi Macwan, Jija Dutt, Sumukhi Vaid
Abstract:
Age-appropriate complementary feeding has been stressed as essential for sound young child nutrition and appropriate growth. National Infant and Young Child Feeding (IYCF) guidelines, policies, and programs indicate the cognizance of the issue taken by the country’s government, policy makers, and technical experts. However, it is important that ordinary people, the caregivers of young children, also understand the importance of appropriate feeding. For this, an interface may be required where ordinary people can participate in assessing the gaps in IYCF as a first step towards subsequent action. In this context, an attempt was made to extrapolate a citizen-led learning-level survey that has involved around 25,000 ordinary citizens reaching out to 600,000 children annually for over a decade in India. Based on this philosophy of involving ordinary people in simple assessments to produce understandable, actionable evidence, a rapid diet assessment tool was developed, and data were collected from caregivers of 90 children under 3 years of age from two urban clusters in Ahmedabad and Baroda, Gujarat. The target sample for the pilot was selected after a cluster census. Around half the mothers reported that they had not yet introduced water or other fluids to their babies younger than 6 months; however, about a third were already feeding them food other than mother’s milk. Although complementary feeding was initiated in almost all (95%) children more than 6 months old, its frequency was suboptimal in 60% of cases; in 80% of cases, no measure was taken to improve either the energy or the nutrient density of the food; only 33% were fed protective foods; consumption of green leafy vegetables was negligible (1.4%). Anganwadi food was not consumed. By engaging ordinary people to generate evidence and understand the gaps, such assessments have the potential to generate useful evidence for action at scale as well as locally.
Keywords: citizen led, grass root engagement, IYCF (Infant and Young Child Feeding), rapid diet assessment, under nutrition
Procedia PDF Downloads 173