Search results for: flow measurement techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13342

1072 A Stepwise Approach for Piezoresistive Microcantilever Biosensor Optimization

Authors: Amal E. Ahmed, Levent Trabzon

Abstract:

Due to the low concentration of analytes in biological samples, the use of Biological Microelectromechanical System (Bio-MEMS) biosensors for biomolecule detection results in a minuscule output signal that is not good enough for practical applications. In response to this, a need has arisen for an optimized biosensor capable of giving a high output signal upon the detection of only a few analytes in the sample; the ultimate goal is to be able to convert the attachment of a single biomolecule into a measurable quantity. For this purpose, MEMS microcantilever-based biosensors emerged as a promising sensing solution because they are simple, cheap, very sensitive and, more importantly, do not need optical labeling of the analytes (label-free). Among the different microcantilever transducing techniques, piezoresistive microcantilever biosensors became the most prominent because they work well in liquid environments and have an integrated readout system. However, the design of piezoresistive microcantilevers is not a straightforward problem, due to the coupling between the design parameters, constraints, process conditions, and performance. It was found that the parameters that can be optimized to enhance the sensitivity of piezoresistive microcantilever-based sensors are: cantilever dimensions, cantilever material, cantilever shape, piezoresistor material, piezoresistor doping level, piezoresistor dimensions, piezoresistor position, and Stress Concentration Region (SCR) shape and position. After a systematic analysis of the effect of each design and process parameter on the sensitivity, a step-wise optimization approach was developed in which almost all of these parameters were varied one at each step while fixing the others, so as to reach the maximum possible sensitivity at the end. At each step, the goal was to optimize the parameter so that it maximizes and concentrates the stress in the piezoresistor region for the same applied force, thereby yielding higher sensitivity. Using this approach, an optimized sensor with 73.5 times higher electrical sensitivity (ΔR⁄R) than the starting sensor was obtained. In addition, this piezoresistive microcantilever biosensor is more sensitive than similar sensors previously reported in the open literature. The mechanical sensitivity of the final sensor is -1.5×10⁻⁸ (Ω/Ω)/pN, which means that for each 1 pN (≈10⁻¹⁰ g) of biomolecules attached to this biosensor, the relative resistance will decrease by 1.5×10⁻⁸. Throughout this work, COMSOL Multiphysics 5.0, a commercial Finite Element Analysis (FEA) tool, was used to simulate the sensor performance.
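
The step-wise approach described above is essentially a one-parameter-at-a-time search. A minimal sketch of that loop follows, assuming a toy sensitivity() surrogate; in the study itself every candidate design was scored with COMSOL Multiphysics FEA, and the parameter names and candidate values here are illustrative only.

```python
# One-parameter-at-a-time optimization loop, as a sketch. The parameter names,
# candidate values, and the toy sensitivity() surrogate are hypothetical; the
# study evaluated each candidate design in COMSOL Multiphysics (FEA).

def sensitivity(design):
    # Toy stand-in for the FEA evaluation of electrical sensitivity (dR/R).
    score = 100.0 / design["cantilever_length_um"]
    score *= 1e19 / design["piezoresistor_doping_cm3"]  # lower doping -> larger piezoresistive effect
    score *= {"none": 1.0, "hole": 1.5, "notch": 2.0}[design["scr_shape"]]
    return score

design = {"cantilever_length_um": 200,
          "piezoresistor_doping_cm3": 1e20,
          "scr_shape": "none"}

candidates = {"cantilever_length_um": [50, 100, 200, 400],
              "piezoresistor_doping_cm3": [1e18, 1e19, 1e20],
              "scr_shape": ["none", "hole", "notch"]}

# Vary one parameter per step while fixing the others, keeping the value that
# maximizes sensitivity before moving on to the next parameter.
for name, values in candidates.items():
    design[name] = max(values, key=lambda v: sensitivity({**design, name: v}))

print(design, sensitivity(design))
```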

Keywords: biosensor, microcantilever, piezoresistive, stress concentration region (SCR)

Procedia PDF Downloads 569
1071 Tracing a Timber Breakthrough: A Qualitative Study of the Introduction of Cross-Laminated-Timber to the Student Housing Market in Norway

Authors: Marius Nygaard, Ona Flindall

Abstract:

The Palisaden student housing project was completed in August 2013 and was, with its eight floors, Norway’s tallest timber building at the time of completion. It was the first time cross-laminated timber (CLT) was utilized at this scale in Norway. The project was the result of a concerted effort by a newly formed management company to establish CLT as a sustainable and financially competitive alternative to conventional steel and concrete systems. The introduction of CLT onto the student housing market proved so successful that by 2017 more than 4000 individual student residences will have been built using the same model of development and construction. The aim of this paper is to identify the key factors that enabled this breakthrough for CLT. It is based on an in-depth study of a series of housing projects and of the role of the management company that both instigated and enabled this shift of CLT from the margin to the mainstream. Specifically, it will look at how a new building system was integrated into a marketing strategy that identified a market potential within the existing structure of the construction industry and within the economic restrictions inherent to student housing in Norway. It will show how a key player established a project model that changed both the patterns of cooperation and the information basis for decisions. Based on qualitative semi-structured interviews with managers, contractors and the interdisciplinary teams of consultants (architects, structural engineers, acoustical experts etc.), this paper will trace the introduction, expansion and evolution of CLT-based building systems in the student housing market. It will show how the project management firm’s position in the value chain enabled it to function both as a liaison between contractor and client, and between contractor and producer, a position that allowed it to improve the flow of information. This ensured that CLT was handled on equal terms to other structural solutions in the project specifications, enabling realistic pricing and risk evaluation. Secondly, this paper will describe and discuss how the project management firm established and interacted with a growing network of contractors, architects and engineers to pool expertise and broaden the knowledge base across Norway’s regional markets. Finally, it will examine the role of the client, the building typology, and the industrial and technological factors in achieving this breakthrough for CLT in the construction industry. This paper gives an in-depth view of the progression of a single case rather than a broad description of the state of the art of large-scale timber building in Norway. However, this type of study may offer insights that are important to the understanding not only of specific markets but also of how new technologies should be introduced in big and well-established industries.

Keywords: cross-laminated-timber (CLT), industry breakthrough, student housing, timber market

Procedia PDF Downloads 221
1070 Characterisation of Human Attitudes in Software Requirements Elicitation

Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana

Abstract:

It is evident that there has been progress in the development and innovation of tools, techniques and methods in the development of software. Even so, there are few methodologies that include the human factor from the point of view of motivation, emotions and impact on the work environment; aspects that, when mishandled or not taken into consideration, increase the iterations in the requirements elicitation phase. This generates a broad number of changes in the characteristics of the system during its development process and an overinvestment of resources to obtain a final product that often does not live up to the expectations and needs of the client. Human factors such as emotions or personality traits are naturally associated with the process of developing software. However, most existing work is oriented towards the analysis of the final users of the software and does not take into consideration the emotions and motivations of the members of the development team. Given that, in the industry, the strategies to select requirements engineers and/or analysts do not take said factors into account, it is important to identify and describe the characteristics or personality traits needed to elicit requirements effectively. This research describes the main personality traits associated with requirements elicitation tasks through an analysis of the existing literature on the topic and a compilation of our experiences as software development project managers in the academic and productive sectors, allowing for the characterisation of a suitable profile for this job. Moreover, a psychometric test is used as an information gathering technique and applied to the personnel of some local companies in the software development sector. This information has become an important asset for making a comparative analysis between the degree of effectiveness in the way their software development teams are formed and the proposed profile. The results show that, of the software development companies studied, 53.58% have selected the personnel for the task of requirements elicitation adequately, 35.71% possess some of the characteristics to perform the task, and 10.71% are inadequate. From this information, it is possible to conclude that 46.42% of the requirements engineers selected by the companies could perform other roles more adequately; a change that could improve the performance and competitiveness of the work team and, indirectly, the quality of the product developed. Likewise, the research allowed for the validation of the pertinence and usefulness of the psychometric instrument, as well as the accuracy of the characteristics of the requirements engineer profile proposed as a reference.

Keywords: emotions, human attitudes, personality traits, psychometric tests, requirements engineering

Procedia PDF Downloads 263
1069 Enhanced Recoverable Oil in Northern Afghanistan Kashkari Oil Field by Low-Salinity Water Flooding

Authors: Zabihullah Mahdi, Khwaja Naweed Seddiqi

Abstract:

Afghanistan is located in a tectonically complex and dynamic area, surrounded by rocks that originated on the mother continent of Gondwanaland. The northern Afghanistan basin, which runs along the country's northern border, has potential for petroleum generation and accumulation. The Amu Darya basin has the largest petroleum potential in the region. Sedimentation occurred in the Amu Darya basin from the Jurassic to the Eocene epochs. The Kashkari oil field is located in northern Afghanistan's Amu Darya basin. The field structure consists of a narrow northeast-southwest (NE-SW) anticline with two structural highs, the northwest limb being mild and the southeast limb being steep. The first oil production well in the Kashkari oil field was drilled in 1976, and a total of ten wells were drilled in the area between 1976 and 1979. The amount of original oil in place (OOIP) in the Kashkari oil field, based on the results of surveys and calculations conducted by research institutions, is estimated to be around 140 MMbbls. The objective of this study is to increase the recoverable oil reserves of the Kashkari oil field through the implementation of the low-salinity water flooding (LSWF) enhanced oil recovery (EOR) technique. The LSWF work involved a core flooding laboratory test consisting of four sequential steps with varying salinities. The test commenced with formation water (FW) as the initial salinity, which was subsequently reduced to a salinity level of 0.1%. Afterwards, a numerical simulation model of core-scale oil recovery by LSWF was designed with Computer Modelling Group’s General Equation Modeler (CMG-GEM) software to evaluate the applicability of the technology at the field scale. Next, the Kashkari oil field simulation model was designed, and the LSWF method was applied to it. To obtain reasonable results, the laboratory settings (temperature, pressure, rock, and oil characteristics) were designed as far as possible based on the conditions of the Kashkari oil field, and several injection and production patterns were investigated. The relative permeability of oil and water in this study was obtained using Corey’s equation. In the Kashkari oilfield simulation model, three models were considered for the evaluation of the LSWF effect on oil recovery: 1. a base model (with no water injection), 2. an FW injection model, and 3. an LSW injection model. Based on the results of the LSWF laboratory experiment and computer simulation analysis, the oil recovery increased rapidly after the FW was injected into the core. Subsequently, by injecting 1% salinity water, a gradual increase of 4% in oil recovery can be observed. About 6.4% of the field is produced by the application of the LSWF technique. The results of LSWF (salinity 0.1%) on the Kashkari oil field suggest that this technology can be a successful method for developing Kashkari oil production.
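
For reference, the Corey relative permeability model cited above is commonly written in the following two-phase form; the endpoint values and the exponents are case-specific fitting inputs, not values reported in this abstract:

```latex
% Normalized water saturation, with S_{wc} the connate water saturation and
% S_{or} the residual oil saturation:
S_{wn} = \frac{S_w - S_{wc}}{1 - S_{wc} - S_{or}}
% Corey-type water and oil relative permeability curves, with endpoint
% permeabilities k^0 and fitted exponents n_w, n_o:
k_{rw} = k_{rw}^{0}\, S_{wn}^{\,n_w}, \qquad
k_{ro} = k_{ro}^{0}\,\left(1 - S_{wn}\right)^{n_o}
```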

Keywords: low-salinity water flooding, immiscible displacement, Kashkari oil field, two-phase flow, numerical reservoir simulation model

Procedia PDF Downloads 41
1068 Financial Modeling for Net Present Benefit Analysis of Electric Bus and Diesel Bus and Applications to NYC, LA, and Chicago

Authors: Jollen Dai, Truman You, Xinyun Du, Katrina Liu

Abstract:

Transportation is one of the leading sources of greenhouse gas (GHG) emissions. Thus, to meet the Paris Agreement of 2015, all countries must adopt a different and more sustainable transportation system. From bikes to Maglev, the world is slowly shifting to sustainable transportation. To develop a useful public transit system, a sustainable network of buses must be implemented. As of now, only a handful of cities have adopted a detailed plan to implement a full fleet of e-buses by the 2030s, with Shenzhen in the lead. Every change requires a detailed plan and a focused analysis of the impacts of the change. In this report, the economic and financial implications have been taken into consideration to develop a well-rounded 10-year plan for New York City. We also apply the same financial model to two other cities, LA and Chicago. We picked NYC, Chicago, and LA for the comparative NPB analysis since they are all big metropolitan cities and have complex transportation systems. All three cities have started action plans to achieve a full fleet of e-buses in the coming decades. In addition, their energy carbon footprints and their energy prices are very different, which are the key factors in the benefits of electric buses. Using TCO (Total Cost of Ownership) financial analysis, we developed a model to calculate the NPB (Net Present Benefit) and compare EBS (electric buses) to DBS (diesel buses). We have considered all essential aspects in our model: initial investment, including the cost of a bus, charger, and installation; government funds (federal, state, local); labor cost; energy (electricity or diesel) cost; maintenance cost; insurance cost; health and environmental benefits; and V2G (vehicle-to-grid) benefits. We see about $1,400,000 in benefits over the 12-year lifetime of an EBS compared to a DBS, provided government funds offset 50% of the EBS purchase cost. With the government subsidy, an EBS starts to generate positive cash flow in the 5th year and can pay back its investment in 5 years. Note that our model counts environmental and health benefits of $50,000 per bus per year. Besides health benefits, the most significant benefits come from energy cost savings and maintenance savings, which are about $600,000 and $200,000, respectively, over the 12-year life cycle. Using linear regression, given certain budget limitations, we then designed an optimal three-phase process to electrify all NYC buses in 10 years, i.e., by 2033. The linear regression process minimizes the total cost over the years while keeping the environmental cost lowest. The overall benefit of replacing all DBS with EBS in NYC is over $2.1 billion by the year 2033. For LA and Chicago, the benefits of electrifying the current bus fleets are $1.04 billion and $634 million by 2033. All NPB analyses and the algorithm to optimize the electrification phasing are implemented in Python code and can be shared.
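
A minimal sketch of the discounted NPB comparison is given below. The 50% purchase subsidy and the $50,000/year monetized health benefit per bus come from the abstract, and the yearly energy and maintenance savings are back-calculated from the $600,000 and $200,000 12-year totals quoted above; the vehicle prices and the discount rate are assumptions for illustration, not the study's inputs.

```python
# Sketch of a net-present-benefit (NPB) comparison of an electric bus (EBS)
# versus a diesel bus (DBS) over a 12-year life, in the spirit of the TCO
# model described above. Prices and the discount rate are assumed values.

DISCOUNT_RATE = 0.03          # assumed
LIFETIME_YEARS = 12

ebs_price, dbs_price = 900_000, 500_000   # assumed purchase prices
subsidy = 0.50 * ebs_price                # government fund offsets 50% of EBS cost
annual_energy_saving = 600_000 / 12       # ~$600k energy savings over 12 years
annual_maintenance_saving = 200_000 / 12  # ~$200k maintenance savings over 12 years
annual_health_benefit = 50_000            # monetized health/environment benefit per bus

def present_value(amount, year, rate=DISCOUNT_RATE):
    """Discount a cash flow received in `year` back to year 0."""
    return amount / (1.0 + rate) ** year

# Incremental upfront cost of choosing an EBS over a DBS, net of subsidy.
npb = -(ebs_price - subsidy - dbs_price)

# Discounted stream of yearly benefits of the EBS relative to the DBS.
for year in range(1, LIFETIME_YEARS + 1):
    npb += present_value(annual_energy_saving
                         + annual_maintenance_saving
                         + annual_health_benefit, year)

print(f"Net present benefit of EBS over DBS: ${npb:,.0f}")
```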

Keywords: financial modeling, total cost of ownership, net present benefits, electric bus, diesel bus, NYC, LA, Chicago

Procedia PDF Downloads 49
1067 Structural Property and Mechanical Behavior of Polypropylene–Elemental Sulfur (S8) Composites: Effect of Sulfur Loading

Authors: S. Vijay Kumar, Kishore K. Jena, Saeed M. Alhassan

Abstract:

Elemental sulfur is currently produced at a level of 70 million tons annually by petroleum refining, the majority of which is used in the production of sulfuric acid, fertilizer and other chemicals. Still, over 6 million tons of elemental sulfur are generated in excess, which creates exciting opportunities to develop new chemistry that utilizes sulfur as a feedstock for polymers. The development of new polymer composite materials using sulfur is not widely explored and remains an important challenge in the field. Polymer nanocomposites prepared with carbon nanotubes, graphene, silica and other nanomaterials are well established; however, the utilization of sulfur as a filler in a polymer matrix could be an interesting study. This work presents the possibility of utilizing elemental sulfur as a reinforcing filler in a polymer matrix. In this study, we attempted to prepare polypropylene/sulfur nanocomposites, and the physical, mechanical and morphological properties of the newly developed composites were studied as a function of the sulfur loading. In the sample preparation, four levels of elemental sulfur loading (5, 10, 20 and 30 wt. %) were designed. The composites were prepared by melt mixing in a laboratory-scale mini twin-screw extruder at 180°C for 15 min. The reaction time and temperature were kept constant for all prepared composites. The structure and crystallization behavior of the composites were investigated by Raman, FTIR, XRD and DSC analysis. It was observed that sulfur interferes with the crystalline arrangement of polypropylene and depresses crystallization, which affects the melting point, mechanical properties and thermal stability. In the tensile tests, one test temperature (room temperature) and one crosshead speed (10 mm/min) were used. The tensile strength and tensile modulus of the composites decreased slightly with increasing filler loading; however, the percentage elongation improved by more than 350% compared to neat polypropylene. The effect of sulfur on the morphology of polypropylene was studied with TEM and SEM techniques. Microscopy analysis reveals that sulfur is homogeneously dispersed in the polymer matrix and behaves as a single-phase arrangement in the polymer. The maximum elongation of the polypropylene can be achieved by adjusting the sulfur loading in the polymer. This study reveals the possibility of using elemental sulfur as a solid plasticizer in the polypropylene matrix.

Keywords: crystallization, elemental sulfur, morphology, thermo-mechanical properties, polypropylene, polymer nanocomposites

Procedia PDF Downloads 343
1066 Fabrication and Characteristics of Ni Doped Titania Nanotubes by Electrochemical Anodization

Authors: J. Tirano, H. Zea, C. Luhrs

Abstract:

It is well known that titanium dioxide is a semiconductor with several applications in photocatalytic processes. Its band gap makes it very interesting for the manufacturing of photoelectrodes used in photoelectrochemical cells for hydrogen production, a clean and environmentally friendly fuel. The synthesis of 1D titanium dioxide nanostructures, such as nanotubes, makes it possible to produce more efficient photoelectrodes for solar-energy-to-hydrogen conversion; in essence, this is because it increases the charge transport rate, decreasing recombination. However, its principal constraint is that it is mainly sensitive to the UV range, which represents a very low percentage of the solar radiation that reaches the earth's surface. One of the alternatives for modifying TiO2’s band gap and improving its photoactivity under visible light irradiation is to dope the nanotubes with transition metals. This option requires fabricating efficient nanostructured photoelectrodes with controlled morphology and specific properties, able to offer a suitable surface area for metallic doping. Hence, one of the central challenges in photoelectrochemical cells at present is the construction of nanomaterials with a proper band position for driving the reaction while absorbing energy over the VIS spectrum. This research focuses on the synthesis and characterization of Ni-doped TiO2 nanotubes for improving their photocatalytic activity in solar energy conversion applications. Initially, titanium dioxide nanotubes (TNTs) with controlled morphology were synthesized by two-step potentiostatic anodization of titanium foil. The anodization was carried out at room temperature in an electrolyte composed of ammonium fluoride, deionized water and ethylene glycol. Subsequent thermal annealing of the as-prepared TNTs was conducted in air between 450 °C and 550 °C. Afterwards, the nanotubes were superficially modified by nickel deposition. The morphology and crystalline phase of the samples were characterized by SEM, EDS and XRD analysis before and after nickel deposition. The photoelectrochemical performance of the photoelectrodes was determined using typical electrochemical characterization techniques. Also, the morphological characterization and the associated electrochemical behavior analysis were discussed to establish the effect of the nickel nanoparticle modification on the TiO2 nanotubes. The methodology proposed in this research allows the use of other transition metals for nanotube surface modification.

Keywords: dimensionally stable electrode, nickel nanoparticles, photo-electrode, TiO₂ nanotubes

Procedia PDF Downloads 176
1065 Acetic Acid Adsorption and Decomposition on Pt(111): Comparisons to Ni(111)

Authors: Lotanna Ezeonu, Jason P. Robbins, Ziyu Tang, Xiaofang Yang, Bruce E. Koel, Simon G. Podkolzin

Abstract:

The interaction of organic molecules with metal surfaces is of interest in numerous technological applications, such as catalysis, bone replacement, and biosensors. Acetic acid is one of the main components of bio-oils produced from the pyrolysis of hemicellulosic feedstocks. However, the high oxygen content of these bio-oils makes them unsuitable for use as fuels. Hydrodeoxygenation is a proven technique for the catalytic deoxygenation of bio-oils. An understanding of the energetics and control of the bond-breaking sequences of biomass-derived oxygenates on metal surfaces will enable a guided optimization of existing catalysts and the development of more active/selective processes for biomass transformation to fuels. Such investigations have been carried out with the aid of ultrahigh vacuum and its concomitant techniques. The high catalytic activity of platinum in biomass-derived oxygenate transformations has sparked a lot of interest. We herein exploit infrared reflection absorption spectroscopy (IRAS), temperature-programmed desorption (TPD), and density functional theory (DFT) to study the adsorption and decomposition of acetic acid on a Pt(111) surface, which is then compared with Ni(111), a model non-noble metal. We found that acetic acid adsorbs molecularly on the Pt(111) surface at 90 K, interacting through the lone pair of electrons of one oxygen atom. At 140 K, the molecular form is still predominant, with some dissociative adsorption (in the form of acetate and hydrogen). Annealing to 193 K led to complete dehydrogenation of the molecular acetic acid species, leaving adsorbed acetate. At 440 K, decomposition of the acetate species occurs via decarbonylation and decarboxylation, as evidenced by desorption peaks for H₂, CO, CO₂ and CHₓ fragments (x = 1, 2) in the TPD. The assignments of the experimental IR peaks were made using visualization of the DFT-calculated vibrational modes. The results showed that acetate adsorbs in a bridged bidentate (μ²η²(O,O)) configuration. The coexistence of linear and bridge-bonded CO was also predicted by the DFT results. A similar molecular acid adsorption energy was predicted in the case of Ni(111), whereas a significant difference was found for acetate adsorption.

Keywords: acetic acid, platinum, nickel, infrared reflection absorption spectroscopy, temperature-programmed desorption, density functional theory

Procedia PDF Downloads 105
1064 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models

Authors: Navid Mirzaei Varzeghani, Mahmoud Saffarzadeh, Ali Naderan, Amirhossein Taheri

Abstract:

Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passenger acceptance of an airport depends on the airport's appeal, which includes the routes between the city and the airport as well as the facilities available to reach it. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can be connected to a destination like an airport. In this study, 356 questionnaires were filled out in person over six days. First, the attraction of business and non-business trips was studied using the data and a linear regression model. Lower travel costs, ages greater than 55, and other factors are essential for business trips. Non-business travelers, on the other hand, prioritized using personal vehicles to get to the airport and having convenient access to the airport. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) clearly decreases the attractiveness of public transit. Afterward, based on the mode and purpose of the trip, the locations with the highest trip generation to the airport were identified. The district of Tehran generating the most trips was District 2, with 23 trips, and the most popular mode of transportation from that location was online taxi, with 12 trips. Then, the variables significant for the mode split and the travel behavior of airport access were investigated for all systems. In this scenario, the most crucial factor is the time it takes to get to the airport, followed by the user-friendliness of the mode as a component of passenger preference. It was also demonstrated that improving public transportation travel times reduces the market share of private transportation, including taxicabs. Based on the responses of personal and semi-public vehicle users, the willingness of passengers to reach the airport via public transportation systems was explored in order to enhance present techniques and develop new strategies for providing the most efficient modes of transportation. Using the binary model, it was clear that business travelers and people who had already driven to the airport were the least likely to change.
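
As a sketch of the kind of binary mode-choice model referred to above, the snippet below fits a logistic regression of the public-versus-private choice on a few of the factors the abstract identifies (access time, cost, extra luggage, trip purpose). The variable names and the synthetic data are illustrative assumptions, not the study's survey data.

```python
# Hedged sketch of a binary mode-choice model: probability of choosing public
# transport for airport access. Data are synthetic stand-ins for the survey.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 356

access_time_min = rng.uniform(20, 90, n)   # door-to-airport travel time
cost = rng.uniform(2, 30, n)               # out-of-pocket trip cost
extra_luggage = rng.integers(0, 2, n)      # more than one suitcase per person
business_trip = rng.integers(0, 2, n)

# Synthetic choice rule consistent with the abstract: longer access times,
# higher costs, extra luggage, and business purpose all discourage transit.
utility = 3.0 - 0.04 * access_time_min - 0.05 * cost \
          - 1.2 * extra_luggage - 0.6 * business_trip
choose_public = (rng.random(n) < 1.0 / (1.0 + np.exp(-utility))).astype(int)

X = np.column_stack([access_time_min, cost, extra_luggage, business_trip])
model = LogisticRegression(max_iter=1000).fit(X, choose_public)
print(dict(zip(["time", "cost", "luggage", "business"],
               model.coef_[0].round(3))))
```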

Keywords: multimodal transportation, demand modeling, travel behavior, statistical models

Procedia PDF Downloads 173
1063 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device AnacondaTM (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The AnacondaTM device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that, despite its column stiffness, is flexible enough to be used in very tortuous geometries. For the purposes of this study, an FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell and surface elements; the choice of these building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed the comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set as a limit commonly used by clinicians when working with simulations. The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model gives confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model runs in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure that combines thin scaffolding and fabric has been demonstrated to be feasible, and the capability to predict the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
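
The ring-offset comparison used for validation can be expressed compactly; the sketch below computes the mean and maximum distances between corresponding ring centers and checks them against the 5 mm bound. The coordinates are made-up placeholders, not measured data from the study.

```python
# Validation-metric sketch: mean/max offsets between experimentally deployed
# and FE-predicted stent-ring positions on one imaging plane (placeholder data).

import numpy as np

experimental = np.array([[0.0, 0.0], [1.2, 14.8], [2.5, 30.1], [4.1, 45.3]])  # mm
simulated    = np.array([[0.3, 0.4], [1.0, 15.5], [3.1, 29.2], [4.6, 46.0]])  # mm

offsets = np.linalg.norm(experimental - simulated, axis=1)
print(f"mean ring offset: {offsets.mean():.2f} mm")
print(f"max  ring offset: {offsets.max():.2f} mm")
print("within 5 mm clinical bound:", bool(offsets.max() <= 5.0))
```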

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 190
1062 Hand Movements and the Effect of Using Smart Teaching Aids: Quality of Writing Styles Outcomes of Pupils with Dysgraphia

Authors: Sadeq Al Yaari, Muhammad Alkhunayn, Sajedah Al Yaari, Adham Al Yaari, Ayman Al Yaari, Montaha Al Yaari, Ayah Al Yaari, Fatehi Eissa

Abstract:

Dysgraphia is a neurological disorder of written expression that impairs writing ability and fine motor skills, resulting in problems relating not only to handwriting but also to writing coherence and cohesion. We investigate the properties of smart writing technology to highlight some unique features of its effects on the academic performance of pupils with dysgraphia. In Amis, dysgraphic pupils have difficulty expressing their ideas in writing when ordinary writing aids are the default strategy. The Amis data suggest a possible connection between the available writing aids and pupils’ writing improvement and, therefore, the expression and comprehension of texts. A group of thirteen dysgraphic pupils was placed in a regular primary school classroom, with twenty-one pupils recruited into the study as a control group. To ensure the validity, reliability and accountability of the research, both groups studied writing courses for two semesters, of which the first was equipped with smart writing aids while the second took place in an ordinary classroom. Two pre-tests were undertaken at the beginning of the two semesters, and two post-tests were administered at the end of both semesters. The tests examined pupils’ ability to write coherent, cohesive and expressive texts. The dysgraphic group received the writing course in classes with smart technology in the first semester and produced significantly greater increases in written expression than in an ordinary classroom, and its performance was better than that of the control group in the second semester. The current study concludes that using smart teaching aids is a ‘must’, both for teaching and for learning with dysgraphia. Furthermore, it is demonstrated that for young pupils with dysgraphia, expressive tasks are more challenging than coherence and cohesion tasks. The study, therefore, supports the literature suggesting a role for smart educational aids in writing, and smart writing techniques may be an efficient addition to regular educational practices, notably in special educational institutions and speech-language therapy facilities. However, further research is needed on prompting adults with dysgraphia more often than older adults without dysgraphia in order to get them to complete the other productive and/or written-skills tasks.

Keywords: smart technology, writing aids, pupils with dysgraphia, hand movements

Procedia PDF Downloads 36
1061 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses

Authors: Matthew Baucum

Abstract:

With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can greatly benefit from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analyses, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g., “social”, “reward”) across large corpora of studies. This study builds on this approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their “proximity” to terms of interest. In this technique, a low-dimensional semantic space is built from brain imaging study texts, allowing words in each text to be represented as vectors (where words that frequently appear together are near each other in the semantic space). Consequently, each voxel in a brain mask can be represented as a normalized vector sum of all of the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel’s proximity to a given term of interest (e.g., “vision”, “decision making”) or collection of terms (e.g., “theory of mind”, “social”, “agent”), as measured by the cosine similarity between the voxel’s vector and the term vector (or the average of multiple term vectors). Analysis can also proceed in the opposite direction, allowing word cloud visualizations of the nearest semantic neighbors for a given brain region. This approach allows for continuous, fine-grained metrics of voxel-term associations, and relies on state-of-the-art “open vocabulary” methods that go beyond mere word counts. An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can be used to recover known neural bases for multiple psychological functions, suggesting this method’s utility for efficient, high-level meta-analyses of localized brain function. While automated text analytic methods are no replacement for deliberate, manual meta-analyses, they seem to show promise for the efficient aggregation of large bodies of scientific knowledge, at least on a relatively general level.
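
The voxel-as-vector computation at the heart of this technique is compact; a minimal sketch follows, with toy 3-D embeddings standing in for the semantic space that would be learned from the study texts.

```python
# Sketch: a voxel is the normalized sum of the word vectors of all studies
# reporting activation there; association with a term is cosine similarity.
# The 3-D embeddings are toy values, not a learned semantic space.

import numpy as np

embeddings = {
    "reward": np.array([0.9, 0.1, 0.0]),
    "value":  np.array([0.8, 0.3, 0.1]),
    "vision": np.array([0.0, 0.2, 0.9]),
    "social": np.array([0.1, 0.9, 0.2]),
}

# Words drawn from the texts of studies that showed activation in one voxel.
voxel_words = ["reward", "value", "reward", "social"]

voxel_vec = sum(embeddings[w] for w in voxel_words)
voxel_vec = voxel_vec / np.linalg.norm(voxel_vec)   # normalized vector sum

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

for term in ("reward", "vision"):
    print(term, round(cosine(voxel_vec, embeddings[term]), 3))
```

A term-of-interest map is then just this similarity evaluated at every voxel in the brain mask, averaging several term vectors when scoring a collection of terms.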

Keywords: fMRI, machine learning, meta-analysis, text analysis

Procedia PDF Downloads 448
1060 Role of Lipid-Lowering Treatment in the Monocyte Phenotype and Chemokine Receptor Levels after Acute Myocardial Infarction

Authors: Carolina N. França, Jônatas B. do Amaral, Maria C.O. Izar, Ighor L. Teixeira, Francisco A. Fonseca

Abstract:

Introduction: Atherosclerosis is a progressive disease characterized by the deposition of lipids and fibrotic elements in large-caliber arteries. Conditions related to the development of atherosclerosis, such as dyslipidemia, hypertension, diabetes, and smoking, are associated with endothelial dysfunction. There is frequent recurrence of cardiovascular outcomes after acute myocardial infarction and, in this sense, cycles of mobilization of monocyte subtypes (classical, intermediate and nonclassical) secondary to myocardial infarction may determine the colonization of atherosclerotic plaques at different stages of their development, contributing to the early recurrence of ischemic events. The recruitment of different monocyte subsets during the inflammatory process requires the expression of the chemokine receptors CCR2, CCR5, and CX3CR1 to promote the migration of monocytes to the inflammatory site. The aim of this study was to evaluate the effect of six months of lipid-lowering treatment on the monocyte phenotype and chemokine receptor levels of patients after Acute Myocardial Infarction (AMI). Methods: This is a PROBE (prospective, randomized, open-label trial with blinded endpoints) study (ClinicalTrials.gov Identifier: NCT02428374). Adult patients (n=147) of both genders, aged 18-75 years, were randomized in a 2x2 factorial design to treatment with rosuvastatin 20 mg/day or simvastatin 40 mg/day plus ezetimibe 10 mg/day, as well as ticagrelor 90 mg twice daily or clopidogrel 75 mg/day, in addition to conventional AMI therapy. Blood samples were collected at baseline and after one month and six months of treatment. Monocyte subtypes (classical - inflammatory, intermediate - phagocytic, and nonclassical - anti-inflammatory) were identified, quantified and characterized by flow cytometry, and the expression of the chemokine receptors (CCR2, CCR5 and CX3CR1) was also evaluated in the mononuclear cells. Results: After six months of treatment, there was an increase in the percentage of classical monocytes and a reduction in nonclassical monocytes (p=0.038 and p < 0.0001, Friedman test), without differences for intermediate monocytes. In addition, classical monocytes had higher expression of CCR5 and CX3CR1 after treatment, without differences in CCR2 (p < 0.0001 for CCR5 and CX3CR1; p=0.175 for CCR2). Intermediate monocytes had higher expression of CCR5 and CX3CR1 and lower expression of CCR2 (p = 0.003, p < 0.0001 and p = 0.011, respectively). Nonclassical monocytes had lower expression of CCR2 and CCR5, without differences for CX3CR1 (p < 0.0001, p = 0.009 and p = 0.138, respectively). There were no differences in the comparison between the four treatment arms. Conclusion: The data suggest a time-dependent modulation of classical and nonclassical monocytes and of chemokine receptor levels. The higher percentage of classical monocytes (inflammatory cells) suggests a residual inflammatory risk, even under the treatments recommended for AMI. Indeed, these changes do not seem to be affected by the choice of lipid-lowering strategy.

Keywords: acute myocardial infarction, chemokine receptors, lipid-lowering treatment, monocyte subtypes

Procedia PDF Downloads 119
1059 Oxidovanadium(IV) and Dioxidovanadium(V) Complexes: Efficient Catalyst for Peroxidase Mimetic Activity and Oxidation

Authors: Mannar R. Maurya, Bithika Sarkar, Fernando Avecilla

Abstract:

Peroxidase activity is successfully used in different industrial processes in medicine, the chemical industry, food processing and agriculture. However, natural peroxidases bear some intrinsic drawbacks, associated with denaturation by proteases, their special storage requirements and their cost. Nowadays, artificial enzyme mimics are becoming a research focus because of their significant advantages over conventional enzymes: ease of preparation, low price and good stability in activity, overcoming the drawbacks of natural enzymes, e.g., serine proteases. At present, a large number of artificial enzymes have been synthesized by assimilating a catalytic center into a variety of Schiff base complexes, ligand-anchored systems, supramolecular complexes, hematin, porphyrins and nanoparticles to mimic natural enzymes. In recent years, a number of vanadium complexes have been reported, reflecting a continuing increase of interest in bioinorganic chemistry; to the best of our knowledge, however, the investigation of vanadium complexes as artificial enzyme mimics remains little explored. Recently, our group has reported synthetic vanadium Schiff base complexes capable of mimicking peroxidases. Herein, we have synthesized oxidovanadium(IV) and dioxidovanadium(V) complexes of pyrazolone derivatives (extensively studied on account of their broad range of pharmacological applications). All these complexes were characterized by various spectroscopic techniques, such as FT-IR, UV-Visible, NMR (1H, 13C and 51V), elemental analysis, thermal studies and single-crystal analysis. The peroxidase mimetic activity was studied for the oxidation of pyrogallol to purpurogallin with hydrogen peroxide at pH 7, followed by measurement of the kinetic parameters. The Michaelis-Menten behavior shows an excellent catalytic activity relative to the natural counterparts, e.g., V-HPO and HRP. The obtained kinetic parameters (Vmax, Kcat) were also compared with those of peroxidase and haloperoxidase enzymes, making these complexes promising peroxidase mimics. The catalytic activity was also studied for the oxidation of 1-phenylethanol in the presence of H2O2 as an oxidant. Various parameters, such as the amounts of catalyst and oxidant, reaction time, reaction temperature and solvent, were taken into consideration to maximize the oxidation products of 1-phenylethanol.
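
For reference, the kinetic parameters quoted (Vmax, Kcat) are those of the standard Michaelis-Menten rate law, written out here as an editorial addition, with [S] the substrate (pyrogallol) concentration and [E]₀ the total catalyst concentration:

```latex
v = \frac{V_{\max}\,[S]}{K_m + [S]}, \qquad
k_{\mathrm{cat}} = \frac{V_{\max}}{[E]_0}
```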

Keywords: oxidovanadium(IV)/dioxidovanadium(V) complexes, NMR spectroscopy, crystal structure, peroxidase mimetic activity towards oxidation of pyrogallol, oxidation of 1-phenylethanol

Procedia PDF Downloads 339
1058 “Multi-Sonic Timbre” of the Biula: The Integral Role of Tropical Tonewood in Bajau Sama Dilaut Bowed Lute Acoustics

Authors: Wong Siew Ngan, Lee Chie Tsang, Lee See Ling, Lim Ho Yi

Abstract:

The selection of tonewood is critical in defining the tonal and acoustic qualities of string instruments, yet limited research exists on indigenous instruments utilizing tropical woods. This gap is addressed by analyzing the "multi-sonic timbre" of the Biula (the Bajau Sama Dilaut bowed lute), crafted by rainforest indigenous communities using locally accessible tropical species such as jackfruit and coconut, whose distinctive grain patterns, density and moisture content significantly contribute to the instrument's rich harmonic spectrum and dynamic range. Unlike Western violins, which utilize temperate woods like maple and spruce, the Biula's sound is shaped by the unique acoustic properties of these tropical tonewoods. To investigate the impact of tropical tonewoods on the Biula's acoustics, frequency response tests were conducted on instruments constructed from various local species. Using SPEAR (Sinusoidal Partial Editing Analysis and Resynthesis) software for spectral analysis, measurements were taken of resonance frequencies, harmonic content, and sound decay rates. These analyses reveal that jackfruit wood produces warmer tones with enhanced lower frequencies, while coconut wood contributes to brighter timbres with pronounced higher harmonics. Building upon these findings, the materials and construction methods of Biula bows were also examined. The study found that variations in tropical hardwoods and locally sourced bow hair significantly influence the instrument's responsiveness and articulation, shaping its distinctive "multi-sonic timbre." These findings deepen the understanding of indigenous instrument acoustics, offering valuable insights for modern luthiers interested in tropical tonewoods. By documenting traditional crafting techniques, this research supports the preservation of cultural heritage and promotes appreciation of indigenous craftsmanship.
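
As a sketch of the kind of measurements described (harmonic content and decay rate), the snippet below analyzes a synthetic bowed note; the study itself used SPEAR for sinusoidal partial analysis, and the tone here is generated, not recorded from a Biula.

```python
# Sketch: harmonic strengths from an FFT and a decay rate from a log-linear
# envelope fit, on a synthetic note (f0 and harmonic weights are invented).

import numpy as np

sr = 44_100
t = np.arange(0, 2.0, 1 / sr)
f0 = 220.0

# Synthetic note: three harmonics under an exponential decay envelope.
signal = np.exp(-1.5 * t) * (np.sin(2 * np.pi * f0 * t)
                             + 0.50 * np.sin(2 * np.pi * 2 * f0 * t)
                             + 0.25 * np.sin(2 * np.pi * 3 * f0 * t))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / sr)

# Relative strength of each harmonic: a crude timbre fingerprint.
for k in (1, 2, 3):
    idx = np.argmin(np.abs(freqs - k * f0))
    print(f"harmonic {k}: {spectrum[idx] / spectrum.max():.2f}")

# Decay rate from a log-linear fit to the per-block amplitude envelope.
block = 2048
env = np.abs(signal[: len(signal) // block * block]).reshape(-1, block).max(axis=1)
t_env = (np.arange(len(env)) + 0.5) * block / sr
print(f"fitted decay rate: {np.polyfit(t_env, np.log(env), 1)[0]:.2f} 1/s (true -1.50)")
```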

Keywords: multi-sonic timbre, biula (bajau sama dilaut bowed lute), tropical tonewoods, spectral analysis, indigenous instrument acoustics

Procedia PDF Downloads 7
1057 Effects of Group Cognitive Restructuring and Rational Emotive Behavioral Therapy on Psychological Distress of Awaiting-Trial Inmates in Correctional Centers in North-West, Nigeria

Authors: Muhammad Shafi’U Adamu

Abstract:

This study examined the effects of two Cognitive Behavioral Therapy (CBT) techniques, Cognitive Restructuring (CR) and Rational Emotive Behavioral Therapy (REBT), on the psychological distress of awaiting-trial inmates in correctional centers in North-West Nigeria. The study had four specific objectives, four research questions, and four null hypotheses. The study used a quasi-experimental design involving a pre-test and a post-test. The population comprised all 7,962 awaiting-trial inmates in correctional centers in North-West Nigeria. 131 awaiting-trial inmates from three intact correctional centers were selected using the census technique. The respondents were randomly assigned to three groups (CR, REBT and control). The Kessler Psychological Distress Scale (K10) was adapted for data collection in the study. The instrument was validated by experts and subjected to a pilot study, yielding a Cronbach's alpha reliability coefficient of 0.772. Each group received treatment for 8 consecutive weeks (60 minutes/week). Data collected from the field were subjected to descriptive statistics (mean, standard deviation and mean difference) to answer the research questions. Inferential statistics (ANOVA and the independent-samples t-test) were used to test the null hypotheses at the p ≤ 0.05 level of significance. The results revealed no significant difference among the pre-treatment mean scores of the experimental and control groups. Statistical evidence showed a significant difference among the post-treatment mean scores of the three groups, and the results of the post hoc multiple-comparison test indicated a post-treatment reduction of psychological distress in the awaiting-trial inmates. The results also showed a significant difference between the post-treatment psychological distress mean scores of male and female awaiting-trial inmates, but no such difference among those exposed to REBT. The research recommends that a standardized, structured CBT counseling treatment be designed for correctional centers across Nigeria and that CBT counseling techniques be used in the treatment of psychological distress in both correctional and clinical settings.

Keywords: awaiting-trial inmates, cognitive restructuring, correctional centers, rational emotive behavioral therapy

Procedia PDF Downloads 74
1056 Concepts of Creation and Destruction as Cognitive Instruments in World View Study

Authors: Perizat Balkhimbekova

Abstract:

Evolutionary changes in the cognitive world view taking place in the last decades are followed by changes in the perception of the key concepts related to a certain lingua-cultural sphere. Such concepts also reflect a person's attitude to essential processes in the sphere of concepts, e.g., the opposite operations of creation and destruction. These changes in people's life and thinking are displayed in the language world view. In order to uncover the content of mental structures and concepts, we should use language means as observable results of people's cognitive activity. The semantics of words, free phrases and idioms should be considered an authoritative source of information concerning concepts. The regularized set of concepts in people's consciousness forms the sphere of concepts. Cognitive linguistics widely discusses the sphere of concepts as its crucial category, defining it as the field of knowledge that is made of concepts; it is considered that a sphere of concepts comprises various types of association and forms conceptual fields. As material for the given research, data from the Russian National Corpus (RNC) and the British National Corpus (BNC) were used. It is necessary to point out that the data provided by computational studies are intrinsic and verifiable, which is why we have used them in order to obtain reliable results. The procedure of the study was based on such techniques as extracting the contexts containing the concepts of creation/destruction from the RNC and the BNC; analyzing and interpreting those contexts on the basis of a cognitive approach; and finding the correspondence between the given concepts in the Russian and English world views. The key problem of our study is to find the correspondence between the elements of the world view represented by opposite concepts such as creation and destruction. Findings: The concept of "destruction" indicates a process that leads to the full or partial destruction of an object, in other words, a loss of the object's primary essence: its structure, properties, distinctive signs and initial integrity. The concept of "creation", on the contrary, comprises positive characteristics and represents activity aimed at the improvement of a certain object, at the creation of ideal models of the world. On the other hand, destruction is represented much more widely in the RNC than creation (1254 cases of the first concept in comparison to 192 cases of the second). Our hypothesis consists in the antinomy represented by the aforementioned concepts: being opposite both in semantics and pragmatics and from the point of view of axiology, they are at the same time complementary and interrelated concepts.

Keywords: creation, destruction, concept, world view

Procedia PDF Downloads 343
1054 Process Modeling in an Aeronautics Context

Authors: Sophie Lemoussu, Jean-Charles Chaudemar, Robertus A. Vingerhoeds

Abstract:

Many innovative projects exist in the field of aeronautics, each addressing a specific area, e.g., reducing weight, increasing autonomy, or reducing CO2 emissions. In many cases, such innovative developments are carried out by very small enterprises (VSE's) or small and medium-sized enterprises (SME's). A good example concerns airships, which are being studied as a real alternative for passenger and cargo transportation. Today, no international regulations propose a precise and sufficiently detailed framework for the development and certification of airships. The absence of such a regulatory framework requires very close contact with regulatory instances. However, VSE's/SME's do not always have sufficient resources and internal knowledge to handle this complexity and to discuss these issues. This poses an additional challenge for those VSE's/SME's, in particular those that have system integration responsibilities and must provide all the necessary evidence to demonstrate their ability to design, produce, and operate airships with the expected level of safety and reliability. The main objective of this research is to provide a methodological framework enabling VSE's/SME's with limited resources to organize the development of airships while taking into account the constraints of safety, cost, time and performance. This paper proposes a contribution to this problem based on a Model-Based Systems Engineering approach. Through a comprehensive process modeling approach applied to the development processes, the regulatory constraints, existing best practices, etc., a clear picture can be obtained of the process landscape that may influence the development of airships. To this effect, not only is the necessary regulatory information taken on board, but other international standards and norms on systems engineering and project management are also modeled and taken into account. In a next step, the model can be used to analyze the specific situation of a given development, derive critical paths for the development, identify eventual conflicts between the norms, standards, and regulatory expectations, and identify areas where not enough information is available. Once the critical paths are known, optimization approaches and decision support techniques can be applied to better support VSE's/SME's in their innovative developments. This paper reports on the adopted modeling approach, the retained modeling languages, and how they all fit together.
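
Once the activities and their dependencies are captured in such a process model, deriving a critical path reduces to a longest-path computation over the activity graph. The sketch below illustrates this on an invented miniature development/certification network; the activity names and durations are placeholders, not content of the study.

```python
# Critical-path sketch over a DAG of development/certification activities.
# Activities, dependencies, and durations (months) are invented placeholders.

from functools import lru_cache

durations = {"design": 6, "structural_tests": 4, "safety_case": 5,
             "authority_review": 3, "flight_tests": 2}
predecessors = {"design": [], "structural_tests": ["design"],
                "safety_case": ["design"],
                "authority_review": ["safety_case"],
                "flight_tests": ["structural_tests", "authority_review"]}

@lru_cache(maxsize=None)
def earliest_finish(activity):
    # Length of the longest dependency chain ending with this activity.
    start = max((earliest_finish(p) for p in predecessors[activity]), default=0)
    return start + durations[activity]

final = max(durations, key=earliest_finish)
print("project duration:", earliest_finish(final), "months")

# Walk the critical path backwards from the final activity.
path = [final]
while predecessors[path[-1]]:
    path.append(max(predecessors[path[-1]], key=earliest_finish))
print("critical path:", " -> ".join(reversed(path)))
```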

Keywords: aeronautics, certification, process modeling, project management, regulation, SME, systems engineering, VSE

Procedia PDF Downloads 160
1053 Melt-Electrospun Polypropylene Fabrics Functionalized with TiO2 Nanoparticles for Effective Photocatalytic Decolorization

Authors: Z. Karahaliloğlu, C. Hacker, M. Demirbilek, G. Seide, E. B. Denkbaş, T. Gries

Abstract:

Currently, the textile industry plays an important role in the world's economy, especially in developing countries. The dyes and pigments used in the textile industry are significant pollutants. Most of them are azo dyes, which have a chromophore (-N=N-) in their structure. There are many methods for the removal of dyes from wastewater, such as chemical coagulation, flocculation, precipitation and ozonation. But these methods have numerous disadvantages, and alternative methods are needed for wastewater decolorization. Titanium-mediated photodegradation has been widely used due to the non-toxic, insoluble, inexpensive, and highly reactive properties of the titanium dioxide semiconductor (TiO2). Melt electrospinning is an attractive manufacturing process for thin fiber production from polypropylene (PP). PP fibers have been widely used in filtration due to their unique properties, such as hydrophobicity, good mechanical strength, chemical resistance and low-cost production. In this study, we aimed to investigate the effect of titanium nanoparticle localization and amine modification on dye degradation, and the applicability of the prepared chemically activated composite and pristine fabrics for a novel treatment of dyeing wastewater was evaluated. A photocatalyzer material was prepared from nTi (titanium dioxide nanoparticles) and PP by a melt-electrospinning technique. The electrospinning parameters of the pristine PP and PP/nTi nanocomposite fabrics were optimized. Before functionalization with nTi, the surface of the fabrics was activated by a technique using glutaraldehyde (GA) and polyethyleneimine to promote dye degradation. The pristine PP and PP/nTi nanocomposite melt-electrospun fabrics were characterized using scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS). Methyl orange (MO) was used as a model compound for the decolorization experiments. The photocatalytic performance of the nTi-loaded pristine and nanocomposite melt-electrospun filters was investigated by varying the initial dye concentration (10, 20, 40 mg/L). The nTi-PP composite fabrics were successfully processed into a uniform, fibrous network of beadless fibers with diameters of 800±0.4 nm. The process parameters were determined as a voltage of 30 kV, a working distance of 5 cm, thermocouple and hot-coil temperatures of 260–300 ºC, and a flow rate of 0.07 mL/h. SEM results indicated that the TiO2 nanoparticles were deposited uniformly on the nanofibers, and XPS results confirmed the presence of titanium nanoparticles and the generation of amine groups after modification. According to the photocatalytic decolorization test results, the nTi-loaded GA-treated pristine and nTi-PP nanocomposite fabric filters have superior properties, reaching over 90% decolorization efficiency. In this work, melt-electrospun PP fabrics surface-functionalized with nTi were prepared as a photocatalyzer for wastewater treatment. The results show that melt-electrospun, nTi-loaded, GA-treated composite or pristine PP fabrics have great potential for use as photocatalytic filters for the decolorization of wastewater and thus warrant further investigation.
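
The decolorization efficiency figures quoted above (e.g., over 90%) are conventionally computed from the initial and residual dye concentrations; the standard definition, added here for reference (the abstract does not state its exact formula), is:

```latex
% Decolorization efficiency from initial (C_0) and residual (C_t) methyl
% orange concentrations, typically obtained from absorbance at \lambda_max:
\eta\,(\%) = \frac{C_0 - C_t}{C_0} \times 100
```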

Keywords: titanium oxide nanoparticles, polypropylene, melt-electrospinning

Procedia PDF Downloads 266
1052 Impact of Integrated Watershed Management Programme Based on Four Waters Concept: A Case Study of Sali Village, Rajasthan State of India

Authors: Garima Sharma, R. N. Sharma

Abstract:

An integrated watershed management programme based on the 'Four Waters Concept' was implemented in Sali village, Jaipur District, Rajasthan State of India; the geocoordinates of Sali are latitude 26.7234486° N and longitude 75.023876° E. The 'Four Waters Concept' evolved by integrating the four waters, viz. rain water, soil moisture, ground water and surface water. The methodology involves various water harvesting techniques to prevent the runoff of water through treatment of the catchment, proper utilization of the available water harvesting structures, renovation of non-functional water harvesting structures and creation of new ones. The case study included a questionnaire survey of farmers and a continuous study of the village over two years. The total project area is 6,153 ha, and the project cost is Rs. 92.25 million; the sanctioned area of the Sali micro-watershed is 2,228 ha with an outlay of Rs. 10.52 million. Watershed treatment activities such as water absorption trenches, continuous contour trenches, field bunding and check dams were undertaken on agricultural lands for soil and water conservation. These measures have helped prevent runoff and increased the perennial availability of water in wells: according to the survey, the water level in open wells in the area has risen by approximately 5 metres since the introduction of the water harvesting structures. The continuous availability of water in wells has increased the area under irrigation and helped in crop diversification. Watershed management activities have changed cropping patterns and crop productivity, and helped transform 567 ha of culturable waste land into arable land in the village. The farmers of the village have earned additional income from the increased crop production, and the programme has also assured the availability of water during peak summer for the day-to-day activities of villagers. The outcomes indicate that watershed management practices have a positive impact on the water resource potential as well as on the crop production of the area, suggesting that persistent efforts in this direction may lead to sustainability of the watershed.

Keywords: four water concept, groundwater potential, irrigation potential, watershed management

Procedia PDF Downloads 355
1051 Lactate Biostimulation for Remediation of Aquifers Affected by Recalcitrant Sources of Chloromethanes

Authors: Diana Puigserver Cuerda, Jofre Herrero Ferran, José M. Carmona Perez

Abstract:

In the transition zone between aquifers and basal aquitards, DNAPL pools of chlorinated solvents are more recalcitrant than at other depths in the aquifer. Although degradation of carbon tetrachloride (CT) and chloroform (CF) occurs in this zone, it is a slow process, which is why an adequate remediation strategy is necessary. The working hypothesis of this study is that biostimulation of the transition zone of an aquifer contaminated by CT and CF can be an effective remediation strategy. This hypothesis was tested at a site on an unconfined aquifer in which the major contaminants were CT and CF of industrial origin and where the hydrochemical background was rich in other compounds that can hinder the natural attenuation of chloromethanes. Field studies and five laboratory microcosm experiments were carried out on groundwater and sediments to identify: i) the degradation processes of CT and CF; ii) the structure of the microbial communities; and iii) the microorganisms implicated in this degradation. For this, concentrations of contaminants and co-contaminants (nitrate and sulfate), Compound Specific Isotope Analysis (CSIA), molecular techniques (Denaturing Gradient Gel Electrophoresis) and clone library analysis were used. The main results were: i) degradation of CT and CF occurred in the groundwater and in the less conductive sediments; ii) sulfate-reducing conditions in the transition zone were strong and similar to those in the source of contamination; iii) two microorganisms (Azospira suillum and a bacterium of the order Clostridiales) were identified in the transition zone in both the field and the laboratory experiments, compatible with a role in the reductive dechlorination of CT, CF and their degradation products (dichloromethane and chloromethane); iv) these two microorganisms were present at the high starting concentrations of the microcosm experiments (similar to those in the DNAPL source) and remained present until the last day of the lactate biostimulation; and v) lactate biostimulation gave rise to the fastest and highest degradation rates and promoted the elimination of other electron acceptors (e.g. nitrate and sulfate). These results are evidence that lactate biostimulation can be effective in remediating both the source and the plume, especially in the transition zone, and they highlight the environmental relevance of treating contaminated transition zones in industrial contexts similar to the one studied.
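As general background (the abstract does not spell out its data treatment), CSIA-based degradation estimates conventionally rest on the standard Rayleigh fractionation model:

```latex
% Rayleigh model relating carbon isotope ratios to the remaining substrate fraction f
\delta^{13}\mathrm{C}_t \;\approx\; \delta^{13}\mathrm{C}_0 + \varepsilon \,\ln f,
\qquad
B \;=\; 1 - f \;=\; 1 - \left( \frac{\delta^{13}\mathrm{C}_t/1000 + 1}
                                    {\delta^{13}\mathrm{C}_0/1000 + 1} \right)^{1000/\varepsilon}
```

where ε is the compound-specific isotopic enrichment factor and B the biodegraded fraction; progressively enriched (less negative) δ13C values along the flow path indicate in situ degradation of CT and CF rather than mere dilution.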

Keywords: Azospira suillum, lactate biostimulation of carbon tetrachloride and chloroform, reductive dechlorination, transition zone between aquifer and aquitard

Procedia PDF Downloads 175
1050 Use of Activated Carbon from Olive Stone for CO₂ Capture in Porous Mortars

Authors: A. González-Caro, A. M. Merino-Lechuga, D. Suescum-Morales, E. Fernández-Ledesma, J. R. Jiménez, J. M. Fernández-Rodríguez

Abstract:

Climate change is one of the most significant issues today. Since the 19th century, the rise in temperature has been driven not only by natural variability but above all by human activities, mainly the burning of fossil fuels such as coal, oil and gas. The boom in the construction sector in recent years has also made it a major contributor to CO₂ emissions: for every tonne of cement produced, roughly one tonne of CO₂ is emitted into the atmosphere. Most research in this sector therefore focuses on reducing the large environmental impact generated during the manufacture of building materials. Specifically, this research focuses on the recovery of waste from olive oil mills. Spain is the world's largest producer of olive oil, and this sector generates a large amount of waste and by-products such as olive stones, 'alpechín' and 'alpeorujo'. Olive stones can be converted into activated carbon by pyrolysis, a process that develops an extensive network of internal pores in the carbon. This study is based on the manufacture of porous mortars with Portland cement and natural limestone sand, with additions of 5% and 10% activated carbon. Two curing environments were used: i) a dry chamber, with a humidity of 65 ± 10%, a temperature of 21 ± 2 ºC and an atmospheric CO₂ concentration (approximately 0.04%); and ii) an accelerated carbonation chamber, with a humidity of 65 ± 10%, a temperature of 21 ± 2 ºC and a CO₂ concentration of 5%. In addition to valorizing an industrial waste, the aim of this study is to reduce atmospheric CO₂. For this purpose, a physicochemical and mineralogical characterization of all raw materials was first carried out using techniques such as X-ray fluorescence and X-ray diffraction, and the particle size and specific surface area of the activated carbon were determined. Subsequently, tests were carried out on the hardened mortar, including thermogravimetric analysis (to determine the percentage of CO₂ capture) as well as mechanical properties, density, porosity and water absorption. It was concluded that the activated carbon acts as a sink for CO₂, trapping it inside the voids: adding 10% activated carbon increased CO₂ capture by 300% at 7 days of curing, and the compressive strength after 7 days of curing with 10% activated carbon was 17.5% higher in the CO₂ chamber than in the dry chamber.
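CO₂ uptake is typically quantified from the thermogravimetric mass loss in the carbonate decomposition window (roughly 550–800 ºC, where CaCO₃ decomposes to CaO and CO₂). A minimal sketch of that calculation follows; the temperature window is the common convention and the sample masses are hypothetical, not data from the study.

```python
# Estimate CO2 captured (bound as CaCO3) from TGA mass loss in the
# decarbonation window, assuming all mass lost between ~550 and 800 C is CO2
# released by CaCO3 -> CaO + CO2. Masses below are hypothetical.

def co2_capture_percent(m_550: float, m_800: float, m_initial: float) -> float:
    """CO2 captured, expressed as percent of the initial sample mass."""
    return (m_550 - m_800) / m_initial * 100.0

m_initial = 50.00            # mg, sample mass at the start of the TGA run
m_550, m_800 = 47.80, 46.95  # mg, residual mass at 550 C and at 800 C

print(f"CO2 captured: {co2_capture_percent(m_550, m_800, m_initial):.2f}% "
      f"of sample mass")
```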

Keywords: olive stone, activated carbon, porous mortar, CO₂ capture, circular economy

Procedia PDF Downloads 60
1049 Microalgae Hydrothermal Liquefaction Process Optimization and Comprehension to Produce High Quality Biofuel

Authors: Lucie Matricon, Anne Roubaud, Geert Haarlemmer, Christophe Geantet

Abstract:

Introduction: This case discusses the management of two floor-of-mouth (FOM) squamous cell carcinomas (SCCs) not identified upon initial biopsy. Case Report: A 51-year-old male presented with right FOM erythroleukoplakia. Relevant medical history included alcohol dependence syndrome and alcoholic liver disease; relevant drug therapy encompassed acamprosate, folic acid, hydroxocobalamin and thiamine. The patient had a 55.5 pack-year smoking history and alcohol dependence from age 14, drinking 16 units/day. FOM incisional biopsy and histopathological analysis diagnosed carcinoma in situ. Treatment involved wide local excision. Specimen analysis revealed two separate foci of pT1 moderately differentiated SCC. Carcinoma staging scans revealed no pathological lymphadenopathy and no local invasion or metastasis. The SCCs had been excised completely but with narrow margins. MDT discussion concluded that, in view of the field changes, it would be difficult to identify specific areas needing further excision, although techniques such as Lugol's iodine were considered. Further surgical resection, surgical neck management and sentinel lymph node biopsy were offered. The patient declined intervention; primary management involved close monitoring alongside referral for alcohol and smoking cessation. Discussion: Narrow excision margins can increase the risk of carcinoma recurrence. Biopsy failed to identify the SCCs despite sampling an area of clinical concern; for gross field change, multiple incisional biopsies should be considered to increase the chance of accurate diagnosis and appropriate treatment. The coupling of tobacco and alcohol has a synergistic effect, exponentially increasing the relative risk of oral carcinoma development. Tobacco and alcohol control is fundamental in reducing treatment-related side effects, recurrence risk and the development of second primary cancers.

Keywords: microalgae, biofuels, hydrothermal liquefaction, biomass

Procedia PDF Downloads 131
1048 Efficient Reuse of Exome Sequencing Data for Copy Number Variation Callings

Authors: Chen Wang, Jared Evans, Yan Asmann

Abstract:

With the rapid evolution of next-generation sequencing techniques, whole-exome and exome-panel data have become a cost-effective way to detect small exonic mutations, but there is a growing desire to accurately detect copy number variations (CNVs) as well. To address these research and clinical needs, we developed a sequencing-coverage-pattern-based method for copy number detection, data integrity checks, CNV calling and visualization reporting. The methodology is fully automated to increase usability and includes genome content-coverage bias correction, CNV segmentation, data quality reports and publication-quality images. Poor-quality outlier samples are identified and removed automatically, and multiple experimental batches are routinely detected and reduced to a clean subset of samples before analysis. Algorithm improvements were also made to improve both somatic CNV detection and germline CNV detection in trio families. Additionally, a set of utilities is included to help users produce CNV plots for genes of interest. We demonstrate the somatic CNV enhancements by accurately detecting CNVs in exome-wide data from The Cancer Genome Atlas samples and in a lymphoma case study with paired tumor and normal samples. We also show efficient reuse of existing exome sequencing data for improved germline CNV calling in a trio family from the 1000 Genomes Project phase III study, detecting CNVs with various modes of inheritance. The performance of the developed method is evaluated by comparing its CNV calls with results from orthogonal copy number platforms. Through these case studies, reusing exome sequencing data to call CNVs provides several notable benefits, including better quality control of exome sequencing data, improved joint analysis with single nucleotide variant calls, and novel genomic discovery from under-utilized existing whole-exome and custom exome-panel data.
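The core of coverage-pattern-based CNV detection is the per-exon depth ratio between matched samples. The sketch below shows the idea in miniature under simplifying assumptions; it is not the authors' pipeline (which adds bias correction, segmentation and QC), and the depths and log2-ratio thresholds are hypothetical illustrations of common heuristics.

```python
# Miniature coverage-based CNV calling: median-normalize per-exon read depth
# in each sample, take log2(tumor/normal) ratios, and flag exons past
# gain/loss thresholds. Depths and thresholds are hypothetical.
import numpy as np

def log2_ratios(tumor_depth: np.ndarray, normal_depth: np.ndarray) -> np.ndarray:
    """Median-normalize each sample, then compute per-exon log2 ratios."""
    t = tumor_depth / np.median(tumor_depth)
    n = normal_depth / np.median(normal_depth)
    return np.log2((t + 1e-9) / (n + 1e-9))  # small epsilon avoids log(0)

# Hypothetical per-exon depths; exon 3 is amplified, exon 6 deleted in tumor.
tumor = np.array([102, 95, 310, 99, 101, 12, 97, 104], dtype=float)
normal = np.array([100, 98, 101, 97, 103, 99, 100, 102], dtype=float)

for i, r in enumerate(log2_ratios(tumor, normal)):
    call = "gain" if r > 0.58 else "loss" if r < -0.8 else "neutral"
    print(f"exon {i + 1}: log2 ratio {r:+.2f} ({call})")
```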

Keywords: bioinformatics, computational genetics, copy number variations, data reuse, exome sequencing, next generation sequencing

Procedia PDF Downloads 255
1047 Hydrological-Economic Modeling of Two Hydrographic Basins of the Coast of Peru

Authors: Julio Jesus Salazar, Manuel Andres Jesus De Lama

Abstract:

There are very few models that analyze the use of water in the socio-economic process. On the supply side, the conjunctive use of groundwater has been considered in addition to simple limits on the availability of surface water, and waterlogging and its effects on water quality (mainly salinity) have also been addressed. In this paper, a 'complex' water economy is examined: one in which demands grow differentially not only within but also between sectors, and in which there are limited opportunities to increase consumptive use. In particular, the growth of high-value irrigated crop production within the case-study basins, together with rapidly growing urban areas, provides a rich context in which to examine the general problem of water management at the basin level. At the same time, long-term natural aridity has made the eco-environment of the basins located on the coast of Peru very vulnerable, and the exploitation and immediate use of water resources have further deteriorated the situation. The methodology presented is optimization with embedded simulation: basin-wide simulation of flows, water balances and crop growth is embedded within the optimization of water allocation, reservoir operation and irrigation scheduling. The modeling framework is built on a network of river basins that includes multiple source nodes (reservoirs, aquifers, water courses, etc.) and multiple demand sites along the river, including places of consumptive use for agricultural, municipal and industrial purposes, and uses of running water, on the coast of Peru. The economic benefits associated with water use are evaluated for different demand management instruments, including water rights, based on the production and benefit functions of water use in the agricultural, urban and industrial sectors. This work represents a new effort to analyze water use at the regional level and to evaluate the modernization of integrated water resources management and socio-economic territorial development in Peru; it will also support policies to improve the implementation of integrated water resources management and development. Input-output analysis is essential for presenting a theory of the production process based on a particular type of production function. This work also presents a Computable General Equilibrium (CGE) version of the economic model for water resource policy analysis, specifically designed for analyzing large-scale water management. As the platform for CGE simulation, GEMPACK, a flexible system for solving CGE models, is used to formulate and solve the CGE model through the percentage-change approach; GEMPACK automates the process of translating the model specification into a model solution program.
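At its simplest, the allocation step inside such a framework is an optimization over deliveries to competing demand sites. The toy linear program below conveys that structure only; the paper's model embeds full basin simulation and a CGE economy, and every coefficient, capacity and site name here is a hypothetical placeholder.

```python
# Toy water-allocation LP: deliver limited basin supply to agricultural,
# municipal and industrial sites to maximize total economic benefit.
# All numbers are hypothetical illustrations, not model data.
from scipy.optimize import linprog

# Decision variables: deliveries (hm^3) to [agriculture, municipal, industry].
benefit = [0.05, 0.30, 0.15]            # net benefit per hm^3 (million $)
c = [-b for b in benefit]               # linprog minimizes, so negate

A_ub = [[1, 1, 1]]                      # total deliveries limited by supply
b_ub = [120]                            # hm^3 available (surface + groundwater)
bounds = [(20, 90), (15, 40), (5, 30)]  # min/max demand at each site

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
for name, x in zip(["agriculture", "municipal", "industry"], res.x):
    print(f"{name}: {x:.1f} hm^3")
print(f"total benefit: {-res.fun:.2f} million $")
```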

Keywords: water economy, simulation, modeling, integration

Procedia PDF Downloads 155
1046 Continuous and Discontinuous Modeling of Wellbore Instability in Anisotropic Rocks

Authors: C. Deangeli, P. Obentaku Obenebot, O. Omwanghe

Abstract:

The study focuses on the analysis of wellbore instability in rock masses affected by weakness planes. In such rocks, failure can occur in the rock matrix and/or along the weakness planes, depending on the mud weight gradient, so the simple Kirsch solution coupled with a failure criterion cannot by itself supply a suitable scenario for borehole instabilities. Two different numerical approaches have been used to investigate the onset of local failure at the wall of a borehole, and for each approach the influence of the inclination of the weakness planes has been investigated by considering joint sets at 0°, 35° and 90° to the horizontal. The first set of models was carried out with FLAC 2D (Fast Lagrangian Analysis of Continua), treating the rock material as a continuous medium with a Mohr-Coulomb criterion for the rock matrix and using the ubiquitous joint model to account for the presence of the weakness planes. In this model, yield may occur in the solid, along the weak plane, or in both, depending on the stress state, the orientation of the weak plane and the material properties of the solid and the weak plane. The second set of models was performed with PFC2D (Particle Flow Code). This code is based on the Discrete Element Method and treats the rock material as an assembly of grains bonded by cement-like material, with pore spaces; the presence of weakness planes is simulated by degrading the bonds between grains along given directions. In general, the results of the two approaches agree. However, the discrete approach seems to capture more complex phenomena related to local failure, in the form of grain detachment at the wall of the borehole: the presence of weakness planes in the discontinuous medium leads to local instability along the weak planes even in conditions not predicted by the continuous solution. In general, slip failure locations and directions do not follow the conventional wellbore breakout direction but depend upon the internal friction angle and the orientation of the bedding planes. When the weakness planes are at 0° or 90°, the behaviour is similar to that of a continuous rock material, but borehole instability is more severe when the weakness planes are inclined at an angle between 0° and 90° to the horizontal. In conclusion, the results of the numerical simulations show that the prediction of local failure at the wall of the wellbore cannot disregard the presence of weakness planes, and hence the higher mud weight required for stability at any specific inclination of the joints. Although the discrete approach can only simulate smaller areas, because of the large number of particles required to generate the rock material, it seems to investigate more correctly the occurrence of failure at the microscale and, eventually, the propagation of the failed zone to a larger portion of rock around the wellbore.
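For reference, the continuum baseline the abstract starts from is the Kirsch solution for stresses at the wall of a vertical borehole, coupled with a failure criterion. The sketch below evaluates the hoop stress around the hole and a simple Mohr-Coulomb check of the rock matrix; it ignores weakness planes (which is exactly the limitation the paper addresses), and all stress and strength values are hypothetical.

```python
# Kirsch solution at the borehole wall: total hoop stress around the hole for
# given far-field horizontal stresses and mud pressure, plus a Mohr-Coulomb
# breakout check on the intact matrix. Values are hypothetical.
import math

def hoop_stress(sH, sh, pw, theta_deg):
    """Total hoop stress at the wall; theta measured from the sH direction."""
    th = math.radians(theta_deg)
    return sH + sh - 2.0 * (sH - sh) * math.cos(2.0 * th) - pw

sH, sh, pw, pp = 30.0, 20.0, 12.0, 10.0  # MPa: far-field stresses, mud, pore
c, phi = 5.0, math.radians(30)           # cohesion (MPa), friction angle

k = math.tan(math.pi / 4 + phi / 2) ** 2  # Mohr-Coulomb slope in s1-s3 space
ucs = 2.0 * c * math.sqrt(k)              # unconfined compressive strength

for theta in (0, 30, 60, 90):
    s_theta = hoop_stress(sH, sh, pw, theta) - pp  # effective hoop stress
    s_r = pw - pp                                  # effective radial stress
    fails = s_theta >= ucs + s_r * k               # shear failure at the wall
    print(f"theta={theta:3d} deg: sigma_theta'={s_theta:6.2f} MPa, "
          f"{'breakout' if fails else 'stable'}")
```

With these numbers, breakout initiates at theta = 90° (perpendicular to the maximum horizontal stress), the conventional breakout direction that, as the abstract notes, inclined weakness planes can override.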

Keywords: continuous-discontinuous, numerical modelling, weakness planes, wellbore, FLAC 2D

Procedia PDF Downloads 497
1045 Efficient Field-Oriented Motor Control on Resource-Constrained Microcontrollers for Optimal Performance without Specialized Hardware

Authors: Nishita Jaiswal, Apoorv Mohan Satpute

Abstract:

The increasing demand for efficient, cost-effective motor control systems in the automotive industry has driven the need for advanced, highly optimized control algorithms. Field-Oriented Control (FOC) has established itself as the leading approach for motor control, offering precise and dynamic regulation of torque, speed and position. However, as energy efficiency becomes more critical in modern applications, implementing FOC on low-power, cost-sensitive microcontrollers poses significant challenges due to the limited availability of computational and hardware resources. Currently, most solutions rely on high-performance 32-bit microcontrollers or Application-Specific Integrated Circuits (ASICs) equipped with Floating Point Units (FPUs) and Hardware Accelerated Units (HAUs). These advanced platforms enable rapid computation and simplify the execution of complex control algorithms like FOC, but the benefits come at the expense of higher cost, increased power consumption and added system complexity. These drawbacks limit their suitability for embedded systems with strict power and budget constraints, where achieving energy and execution efficiency without compromising performance is essential. In this paper, we present an alternative approach that uses optimized data representation and computation techniques on a 16-bit microcontroller without FPUs or HAUs. By carefully choosing data formats and employing fixed-point arithmetic, we demonstrate how the precision and computational efficiency required for FOC can be maintained in resource-constrained environments. This approach eliminates the performance overhead associated with floating-point operations and hardware acceleration, providing a solution that is more practical in terms of cost and scalability, improves execution-time efficiency, and allows faster response in motor control applications. Furthermore, it enhances system design flexibility, making it particularly well suited to applications that demand stringent control over power consumption and cost.
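To make the fixed-point idea concrete: a common representation on FPU-less 16-bit parts is Q15, a signed 16-bit integer where 0x7FFF represents just under +1.0, so multiplies become integer multiplies plus a shift. The sketch below (in Python, for readability) demonstrates a Q15 multiply and a fixed-point Park transform, one of the rotations at the heart of FOC; it is an illustration of the technique, not the authors' implementation.

```python
# Q15 fixed-point arithmetic and a Park transform step from FOC.
# Q15: signed 16-bit integers scaled by 2^15, so values span [-1, 1).
import math

Q = 15
ONE = 1 << Q  # 32768

def to_q15(x: float) -> int:
    """Encode a float in [-1, 1) as a Q15 integer (saturating)."""
    return max(-ONE, min(ONE - 1, int(round(x * ONE))))

def q15_mul(a: int, b: int) -> int:
    """Q15 * Q15 -> Q15: take the 32-bit product, shift back down by 15."""
    return (a * b) >> Q

def park(i_alpha: int, i_beta: int, sin_t: int, cos_t: int):
    """Park transform in Q15: rotate stator currents into the rotor frame."""
    i_d = q15_mul(i_alpha, cos_t) + q15_mul(i_beta, sin_t)
    i_q = q15_mul(i_beta, cos_t) - q15_mul(i_alpha, sin_t)
    return i_d, i_q

theta = math.radians(30)  # on a real MCU, sin/cos come from a lookup table
i_d, i_q = park(to_q15(0.50), to_q15(0.25),
                to_q15(math.sin(theta)), to_q15(math.cos(theta)))
print(f"i_d = {i_d / ONE:+.4f}, i_q = {i_q / ONE:+.4f}")
# Float reference: i_d = 0.5*cos30 + 0.25*sin30 = +0.5580,
#                  i_q = 0.25*cos30 - 0.5*sin30 = -0.0335
```

On target hardware the same operations map to single-cycle 16x16 multiplies and shifts, which is what removes the floating-point emulation overhead the paper discusses.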

Keywords: field-oriented control, fixed-point arithmetic, floating point unit, hardware accelerator unit, motor control systems

Procedia PDF Downloads 13
1044 Mathematics as the Foundation for the STEM Disciplines: Different Pedagogical Strategies Addressed

Authors: Marion G. Ben-Jacob, David Wang

Abstract:

There is a mathematics requirement for entry-level college and university students, especially those who plan to study STEM (Science, Technology, Engineering and Mathematics). Most of them take College Algebra, and to continue their studies they need to succeed in this course. Different pedagogical strategies are employed to promote student success. There is, of course, the traditional method of teaching: lecture, examples, and problems for students to solve. The Emporium Model, another pedagogical approach, replaces traditional lectures with a learning resource center model featuring interactive software and on-demand personalized assistance. This presentation will compare these two pedagogical methods and report the results of a study of this comparison. Math is the foundation for science, technology and engineering; it is generally used in STEM to find patterns in data, and these patterns can be used to test relationships, draw general conclusions about data and model the real world. In STEM, solutions to problems are analyzed, reasoned about and interpreted using math abilities in an assortment of real-world scenarios. This presentation will examine specific examples of how math is used in the different STEM disciplines. Math becomes practical in science when it is used to model natural and artificial experiments to identify a problem and develop a solution for it; as we analyze data, we use math to find the statistical correlation between cause and effect. Professionals who use math in this way include data scientists, biologists and geologists. Without math, most technology would not be possible: math is the basis of binary, and without programming you just have the hardware. Addition, subtraction, multiplication and division are used in almost every program written, and mathematical algorithms are inherent in software as well. Mechanical engineers analyze scientific data to design robots by applying math and using software; electrical engineers use math to help design and test electrical equipment, as well as when creating computer simulations and designing new products. Chemical engineers often use mathematics in the lab, where advanced computer software aids their research and production processes by modeling theoretical synthesis techniques and properties of chemical compounds. Mathematics mastery is crucial for success in the STEM disciplines, and pedagogical research on formative strategies and the topics that must be covered is essential.

Keywords: emporium model, mathematics, pedagogy, STEM

Procedia PDF Downloads 75
1043 The Analyzer: Clustering Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human Computer Interaction

Authors: Dona Shaini Abhilasha Nanayakkara, Kurugamage Jude Pravinda Gregory Perera

Abstract:

E-commerce platforms have revolutionized the shopping experience, offering convenient ways for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, it is essential for businesses to understand user behavior, preferences and needs on these platforms. This paper focuses on helping businesses customize interactions with users based on their behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals of usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user profiles and improving human-computer interaction. TheAnalyzer seamlessly integrates with business applications, collecting relevant data points from users' natural interactions without additional burdens such as questionnaires or surveys. It defines five key user analytics as features for its dataset, all easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on these five key analytics. With the assistance of domain experts, customized business rules can be attached to each group, enabling TheAnalyzer to influence business applications and provide an enhanced, personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that TheAnalyzer's capabilities can optimize business outcomes, enhance customer satisfaction and drive sustainable growth. The findings contribute to the advancement of personalized interactions in e-commerce platforms: by leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction and loyalty and, ultimately, drive sales.
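The keywords name the pipeline: standardization, dimensionality reduction, clustering. A minimal sketch of that pipeline follows; the five features listed are hypothetical stand-ins (the abstract does not enumerate its analytics), and the data is synthetic.

```python
# User segmentation in the spirit of TheAnalyzer: standardize five behavioral
# features, reduce dimensionality, and cluster users into groups to which
# business rules can be attached. Feature names and data are hypothetical.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Rows: users. Columns (assumed): session length, pages per session,
# cart adds, purchase rate, days since last visit.
X = rng.normal(size=(200, 5))

X_std = StandardScaler().fit_transform(X)        # zero mean, unit variance
X_2d = PCA(n_components=2).fit_transform(X_std)  # dimensionality reduction
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_2d)

for k in range(5):
    # In a deployment, each group would carry expert-defined business rules.
    print(f"group {k}: {np.sum(labels == k)} users")
```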

Keywords: data clustering, data standardization, dimensionality reduction, human computer interaction, user profiling

Procedia PDF Downloads 72