Search results for: optimal digital signal processing
919 Minding the Gap: Consumer Contracts in the Age of Online Information Flow
Authors: Samuel I. Becher, Tal Z. Zarsky
Abstract:
The digital world has become part of our DNA. The ways in which e-commerce, human behavior, and law interact and affect one another are changing rapidly and significantly. Among other things, the internet equips consumers with a variety of platforms to share information in a volume that was previously unimaginable. As part of this development, online information flows allow consumers to learn about businesses and their contracts quickly and efficiently. Consumers can become informed by the impressions that other, experienced consumers share and spread. In other words, consumers may familiarize themselves with the contents of contracts through the experiences of other consumers. Online and offline, the relationships between consumers and businesses are most frequently governed by consumer standard form contracts. For decades, such contracts have been assumed to be one-sided and biased against consumers. Consumer law seeks to alleviate this bias and empower consumers. Legislatures, consumer organizations, scholars, and judges are constantly looking for clever ways to protect consumers from unscrupulous firms and unfair behaviors. While consumer-business relationships are theoretically administered by standardized contracts, firms do not always follow these contracts in practice. At times, there is a significant disparity between what the written contract stipulates and what consumers experience de facto. That is, there is a crucial gap (“the Gap”) between how firms draft their contracts on the one hand, and how firms actually treat consumers on the other. Interestingly, the Gap frequently manifests as deviation from the written contract in favor of consumers. In other words, firms often exercise a lenient approach in spite of the stringent written contracts they draft. This essay examines whether, counter-intuitively, policy makers should add firms’ leniency to the growing list of suspicious firm behaviors. 
At first glance, firms should be allowed, if not encouraged, to exercise leniency. Many legal regimes are looking for ways to cope with unfair terms in consumer contracts. Naturally, therefore, consumer law should enable, if not encourage, firms’ lenient practices. Firms’ willingness to deviate from their strict contracts in order to benefit consumers seems like a sensible approach that should not be second-guessed. However, at times online tools, firms’ behaviors, and human psychology result in a toxic mix. Beneficial and helpful online information should be treated with due respect, as it may occasionally have surprising and harmful qualities. In this essay, we illustrate that technological changes turn the Gap into a key component in consumers’ understanding, or misunderstanding, of consumer contracts. In short, a Gap may distort consumers’ perception and undermine rational decision-making. Consequently, this essay explores whether, counter-intuitively, consumer law should sanction firms that create a Gap and use it. It examines when firms’ leniency should be considered manipulative or exercised in bad faith. It then investigates whether firms should be allowed to enforce the written contract even if they deliberately and consistently deviated from it.
Keywords: consumer contracts, consumer protection, information flow, law and economics, law and technology, paper deal v firms' behavior
Procedia PDF Downloads 199
918 Mitigating Nitrous Oxide Production from Nitritation/Denitritation: Treatment of Centrate from Pig Manure Co-Digestion as a Model
Authors: Lai Peng, Cristina Pintucci, Dries Seuntjens, José Carvajal-Arroyo, Siegfried Vlaeminck
Abstract:
Economic incentives drive the implementation of short-cut nitrogen removal processes such as nitritation/denitritation (Nit/DNit) to manage nitrogen in waste streams devoid of biodegradable organic carbon. However, as with any biological nitrogen removal process, the potent greenhouse gas nitrous oxide (N2O) can be emitted from Nit/DNit. Challenges remain in understanding the fundamental mechanisms and in developing engineered mitigation strategies for N2O production. To provide answers, this work focuses on manure, the biggest wasted nitrogen mass flow through our economies, as a model. A sequencing batch reactor (SBR; 4.5 L) was used to treat the centrate (centrifuge supernatant; 2.0 ± 0.11 g N/L of ammonium) from an anaerobic digester processing mainly pig manure supplemented with a co-substrate. Glycerin, a by-product of vegetable oil production, was used as the external carbon source. Out-selection of nitrite oxidizing bacteria (NOB) was targeted using a combination of low dissolved oxygen (DO) levels (down to 0.5 mg O2/L), high temperature (35ºC) and relatively high free ammonia (FA) (initially 10 mg NH3-N/L). After reaching steady state, the process was able to remove 100% of the ammonium with minimal nitrite and nitrate in the effluent, at a reasonably high nitrogen loading rate (0.4 g N/L/d). Substantial N2O emissions (over 15% of the nitrogen loading) were observed at the baseline operational condition, and these increased further under nitrite accumulation and a low organic carbon to nitrogen ratio. Yet, higher DO (~2.2 mg O2/L) lowered aerobic N2O emissions and weakened the dependency of N2O on nitrite concentration, suggesting a shift in the N2O production pathway at elevated DO levels. Greenhouse gas emissions from such a system could be substantially reduced by increasing the external carbon dosage (a cost factor), but also through the implementation of an intermittent aeration and feeding strategy. 
Promising steps forward have been presented in this abstract, and insights from ongoing experiments will also be shared at the conference.
Keywords: mitigation, nitrous oxide, nitritation/denitritation, pig manure
Procedia PDF Downloads 249
917 Coupling Random Demand and Route Selection in the Transportation Network Design Problem
Authors: Shabnam Najafi, Metin Turkay
Abstract:
The network design problem (NDP) is used to determine the set of optimal values for certain pre-specified decision variables, such as capacity expansion of nodes and links, by optimizing various system performance measures including safety, congestion, and accessibility. The designed transportation network should improve the objective functions defined for the system while also taking into account the route choice behaviors of network users. NDP studies have mostly investigated random demand and route selection constraints separately due to computational challenges. In this work, we consider both random demand and route selection constraints simultaneously. This work presents a nonlinear stochastic model for the land use and road network design problem to address the development of different functional zones in urban areas by considering both a cost function and air pollution. The model minimizes cost and air pollution simultaneously, with random demand and a stochastic route selection constraint, and aims to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used to determine the travel time on each link. We consider a city with origin and destination nodes, which can be residential, employment, or both, and a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine road capacity and origin zones simultaneously. Travel and expansion costs of routes and origin zones are minimized on one side, and CO emissions on the other, at the same time. In this work, demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically a logit route choice model. Treating both demand and route choice as random makes the model more applicable to the design of urban network programs. 
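The BPR link impedance function mentioned above has a standard closed form: travel time grows with a power of the volume-to-capacity ratio. A minimal Python sketch, assuming the commonly used default parameters α = 0.15 and β = 4 (the abstract does not state which values were used):

```python
def bpr_travel_time(t0, volume, capacity, alpha=0.15, beta=4.0):
    """Bureau of Public Roads (BPR) link impedance function:
    t = t0 * (1 + alpha * (volume / capacity) ** beta),
    where t0 is the free-flow travel time on the link."""
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

# At volume equal to capacity, travel time is inflated by a factor of
# 1 + alpha: a 10-minute free-flow link takes 11.5 minutes.
```

Because β is large, congestion penalties are negligible at low flows and rise steeply once volume approaches capacity, which is what makes the function suitable for equilibrium assignment.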
The epsilon-constraint method can solve both linear and nonlinear multi-objective problems, and it is used to solve the problem in this work. The problem was solved by keeping the first objective (the cost function) as the objective function and turning the second objective into a constraint bounded above by an epsilon, where epsilon is an upper bound on the emission function. The value of epsilon is varied from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with 2 origin zones, 2 destination zones, and 7 links was solved in GAMS, and the set of Pareto points was obtained: 15 efficient solutions. Across these solutions, as the cost function value increases, the emission function value decreases, and vice versa.
Keywords: epsilon-constraint, multi-objective, network design, stochastic
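The epsilon sweep described above can be illustrated on a toy discrete feasible set. This is a hedged Python sketch of the method's logic, not the GAMS model from the study; the (cost, emission) pairs and the number of sweep steps are invented for the example:

```python
def epsilon_constraint(solutions, n_steps=5):
    """Sweep the epsilon-constraint method over a discrete feasible set.

    solutions: list of (cost, emission) tuples. Cost stays the objective;
    emission <= eps becomes a constraint, with eps swept from the worst
    to the best emission value. Returns the Pareto points found."""
    emissions = [e for _, e in solutions]
    worst, best = max(emissions), min(emissions)
    pareto = []
    for i in range(n_steps):
        eps = worst - (worst - best) * i / (n_steps - 1)
        feasible = [s for s in solutions if s[1] <= eps]
        if not feasible:
            continue
        # Tuples compare lexicographically: minimum cost, then emission.
        opt = min(feasible)
        if opt not in pareto:
            pareto.append(opt)
    return pareto
```

Sweeping eps from loose to tight reproduces the trade-off the abstract reports: each tightening of the emission bound forces a higher-cost solution, so cost and emission move in opposite directions along the Pareto set.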
Procedia PDF Downloads 648
916 A Systematic Review of Whole-Body Cryotherapy versus Control Interventions for Recovery of Muscle Function and Perceptions of Muscle Soreness Following Exercise-Induced Muscle Damage in Runners
Authors: Michael Nolte, Iwona Kasior, Kala Flagg, Spiro Karavatas
Abstract:
Background: Cryotherapy has been used as a post-exercise recovery modality for decades. Whole-body cryotherapy (WBC) is an intervention which involves brief exposures to extremely cold air in order to induce therapeutic effects. It is currently being investigated for its effectiveness in treating certain exercise-induced impairments. Purpose: The purpose of this systematic review was to determine whether WBC as a recovery intervention is more, less, or equally as effective as other interventions at reducing perceived levels of muscle soreness and promoting recovery of muscle function after exercise-induced muscle damage (EIMD) from running. Methods: A systematic review of the current literature was performed utilizing the following MeSH terms: cryotherapy, whole-body cryotherapy, exercise-induced muscle damage, muscle soreness, muscle recovery, and running. The databases utilized were PubMed, CINAHL, EBSCO Host, and Google Scholar. Articles were included if they were published within the last ten years, had a CEBM level of evidence of IIb or higher, had a PEDro scale score of 5 or higher, studied runners as primary subjects, and utilized both perceived levels of muscle soreness and recovery of muscle function as dependent variables. Articles were excluded if subjects did not include runners, if the interventions included PBC instead of WBC, and if both muscle performance and perceived muscle soreness were not assessed within the study. Results: Two of the four articles revealed that WBC was significantly more effective than treatment interventions such as far-infrared radiation and passive recovery at reducing perceived levels of muscle soreness and restoring muscle power and endurance following simulated trail runs and high-intensity interval running, respectively. One of the four articles revealed no significant difference between WBC and passive recovery in terms of reducing perceived muscle soreness and restoring muscle power following sprint intervals. 
One of the four articles revealed that WBC had a harmful effect compared to cold-water immersion (CWI) and passive recovery on both perceived muscle soreness and recovery of muscle strength and power following a marathon. Discussion/Conclusion: Though there was no consensus on WBC’s effectiveness at treating exercise-induced muscle damage from running compared to other interventions, WBC may at least have a time-dependent positive effect on muscle soreness and recovery following high-intensity interval runs and endurance running, marathons excluded. More research needs to be conducted to determine the most effective way to implement WBC as a recovery method for exercise-induced muscle damage, including the optimal temperature, timing, duration, and frequency of treatment.
Keywords: cryotherapy, physical therapy intervention, physical therapy, whole body cryotherapy
Procedia PDF Downloads 241
915 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language
Authors: Wenjun Hou, Marek Perkowski
Abstract:
The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q#, developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating Grover's algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost over Hamiltonian cycles. N log₂ K qubits are put into an equiprobabilistic superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, all created using only quantum Toffoli gates, including their special forms, the Feynman (CNOT) and Pauli X gates. 
The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. The algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language
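The Grover iteration that wraps the oracle above can be illustrated with a small classical statevector simulation. This is a generic Python sketch, not the authors' Q# circuit: the oracle here just marks a set of basis-state indices, and the iteration count uses the near-optimal ⌊(π/4)√(N/M)⌋ for M marked items among N:

```python
import math

def grover_search(n_items, marked, iterations=None):
    """Toy statevector simulation of Grover search over n_items basis
    states with a set of marked indices. Returns the probability of
    measuring a marked state after the given number of iterations
    (default: the near-optimal floor(pi/4 * sqrt(N/M)))."""
    if iterations is None:
        iterations = int(math.pi / 4 * math.sqrt(n_items / len(marked)))
    amp = [1.0 / math.sqrt(n_items)] * n_items  # Hadamard superposition
    for _ in range(iterations):
        # Oracle: flip the sign of the marked states' amplitudes.
        for m in marked:
            amp[m] = -amp[m]
        # Diffusion operator: inversion about the mean amplitude.
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]
    return sum(amp[m] ** 2 for m in marked)
```

With N = 16 and one marked item, the default of 3 iterations already pushes the success probability above 0.9, which is the "optimal number of times" the abstract refers to.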
Procedia PDF Downloads 192
914 The Effect of Music on Consumer Behavior
Authors: Lara Ann Türeli, Özlem Bozkurt
Abstract:
There is a biochemical component to listening to music. The type of music listened to can lead to different levels of neurotransmitter and biochemical activity within the brain, resulting in brain stimulation and different moods. Therefore, music plays an important role in neuromarketing and consumer behavior. The quality of a commercial can be measured by the effect its music has on the audience; thus, understanding how music affects the brain can inform better marketing strategies for all businesses. The type of music used plays an important role in how a person responds to certain experiences. In the context of marketing and consumer behavior, music can determine whether a person will be intrigued enough to buy something. Depending on the type of music listened to, the brain may release pleasurable neurotransmitters such as dopamine. Dopamine is a neurotransmitter that plays an important role in the brain's reward pathways: when an individual experiences a pleasurable activity, increased levels of dopamine are produced, eventually leading to the formation of new reward pathways. Consequently, the increased dopamine activity triggered by music can result in the formation of new reward pathways in the brain. Selecting pleasurable music for commercials can therefore produce long-term brain stimulation, increasing consumerism. The effect of music on consumerism should be considered not only in commercials but also in the atmosphere it creates within stores. The type of music played in a store can affect consumer behavior and intention. Specifically, the rhythm, pitch, and pace of music contribute to the mood of a song, and the background music in a store can shape the consumer’s emotional presence and consequently affect their intentions. 
In conclusion, understanding the physiological, psychological, and neurochemical basis of music's effect on brain stimulation is essential to understanding consumer behavior. The role of dopamine in the formation of reward pathways in response to music directly contributes to consumer behavior and to the tendency of a commercial or store to leave a long-term impression on the consumer. Careful consideration of the pitch, pace, and rhythm of a song in the selection of music can help companies not only predict consumer behavior but also shape it.
Keywords: sensory processing, neuropsychology, dopamine, neuromarketing
Procedia PDF Downloads 81
913 Modeling of Tsunami Propagation and Impact on West Vancouver Island, Canada
Authors: S. Chowdhury, A. Corlett
Abstract:
Large tsunamis strike the British Columbia coast every few hundred years. The Cascadia Subduction Zone, which extends along the Pacific coast from Vancouver Island to Northern California, is one of the most seismically active regions in Canada. Significant earthquakes have occurred in this region, including the 1700 Cascadia earthquake with an estimated magnitude of 9.2. Based on geological records, experts have predicted that a 'great earthquake' of similar magnitude may happen in this region at any time. Such an earthquake is expected to generate a large tsunami that could impact the coastal communities of Vancouver Island. Since many of these communities are in remote locations, they are particularly vulnerable, as post-earthquake relief efforts would be hampered by damage to critical road infrastructure. To assess the coastal vulnerability of these communities, a hydrodynamic model was developed using MIKE-21 software, based on a 500-year probabilistic earthquake design criterion that includes subsidence. The bathymetry information was collected from the Canadian Hydrographic Service (CHS) and the National Oceanic and Atmospheric Administration (NOAA). An aerial survey of the communities was conducted using a Cessna 172 aircraft, and the information was converted into a topographic digital elevation map. Both surveys were incorporated into the model, whose domain was about 1000 km x 1300 km. The model was calibrated against the tsunami that occurred off the west coast of Moresby Island on October 28, 2012. The water levels from the model were compared with two tide gauge stations close to Vancouver Island, and the model output shows satisfactory agreement. For this study, the design water level was taken as the high water level plus the projected sea level rise to the year 2100. 
Hourly wind speeds from eight directions were collected from different wind stations, and a 200-year return period wind speed was used in the model for storm events. The regional model was set up for a 12-hour simulation period, which takes more than 16 hours to complete using a dual Xeon E7 CPU computer with a K80 GPU. The boundary information for the local model was generated from the regional model. The local model was developed using a high-resolution mesh to estimate coastal flooding for the communities. This study found that many communities will be affected by a Cascadia tsunami, and inundation maps were developed for them. The infrastructure inside the coastal inundation areas was identified. Coastal vulnerability planning and resilient design solutions will be implemented to significantly reduce the risk.
Keywords: tsunami, coastal flooding, coastal vulnerability, earthquake, Vancouver, wave propagation
Procedia PDF Downloads 132
912 Anthelmintic Property of Pomegranate Peel Aqueous Extraction Against Ascaris Suum: An In-vitro Analysis
Authors: Edison Ramos, John Peter V. Dacanay, Milwida Josefa Villanueva
Abstract:
Soil-transmitted helminth (STH) infections are among the most prevalent neglected tropical diseases (NTDs). They are commonly found in warm, humid regions and developing countries, particularly in rural areas with poor hygiene. Occasionally, human hosts exposed to pig manure may harbor Ascaris suum parasites without experiencing any symptoms. To address the significant issue of helminth infections, an effective anthelmintic is necessary; however, the effectiveness of various anthelmintic medications can be reduced by mutations. In recent years, there has been growing interest in using plants as a source of medicine due to their natural origin, accessibility, affordability, and potentially lower rate of complications. Herbal medicine has been advocated as an alternative treatment for helminth infections, especially in underdeveloped countries, given the numerous adverse effects and drug resistance associated with commercially available anthelmintics. Medicinal plants are considered suitable replacements for current anthelmintics due to their historical use in treating helminth infections. The objective of this research was to investigate the in vitro anthelmintic effect of an aqueous extract of pomegranate peel (Punica granatum L.) on female Ascaris suum. The in vitro assay involved observing the motility of Ascaris suum in different concentrations (25%, 50%, 75%, and 100%) of the pomegranate peel aqueous extract, with mebendazole as a positive control. The results indicated that as the concentration of the extract increased, the time required to paralyze the worms decreased. At 25% concentration, the average time to paralysis was 362.0 minutes, which decreased to 181.0 minutes at 50%, 122.7 minutes at 75%, and 90.0 minutes at 100% concentration. The time of death of the worms was likewise inversely related to the concentration of the pomegranate peel extract. 
Death was observed at an average time of 240.7 minutes at 75% concentration and 147.7 minutes at 100% concentration. The findings suggest that as the concentration of pomegranate peel extract increases, the time required for paralysis and death of Ascaris suum decreases. This indicates a concentration-dependent relationship, with higher concentrations of the extract more effective at inducing paralysis and causing the death of the worms. These results emphasize the potential anthelmintic properties of pomegranate peel extract and its ability to combat Ascaris suum infestations. There was no significant difference in anthelmintic effectiveness between the pomegranate peel extract and mebendazole. These findings highlight the potential of pomegranate peel extract as an alternative anthelmintic treatment for Ascaris suum infections. The researchers recommend determining the optimal dose and administration route to maximize the effectiveness of pomegranate peel as an anthelmintic therapeutic against Ascaris suum.
Keywords: pomegranate peel, aqueous extract, anthelmintic, in vitro
Procedia PDF Downloads 115
911 The Effect of Positional Release Technique versus Kinesio Tape on Iliocostalis Lumborum in Back Myofascial Pain Syndrome
Authors: Shams Khaled Abdelrahman Abdallah Elbaz, Alaa Aldeen Abd Al Hakeem Balbaa
Abstract:
Purpose: The purpose of this study was to compare the effects of the positional release technique versus Kinesio tape on pain level, pressure pain threshold, and functional disability in patients with back myofascial pain syndrome of the iliocostalis lumborum. Background/significance: Myofascial pain syndrome is a common muscular pain syndrome that arises from trigger points, which are hyperirritable, painful, and tender points within a taut band of skeletal muscle. In more recent literature, about 75% of patients with musculoskeletal pain presenting to community medical centres suffer from myofascial pain syndrome. The iliocostalis lumborum is among the muscles most likely to develop active trigger points. Subjects: Thirty patients diagnosed with back myofascial pain syndrome, with active trigger points in the iliocostalis lumborum muscle bilaterally, participated in this study. Methods and materials: Patients were randomly distributed into two groups. The first group consisted of 15 patients (8 males, 7 females) with a mean age of 30.6 (±3.08) years; they received the positional release technique, applied 3 times per session, 3 sessions per week every other day for 2 weeks. The second group consisted of 15 patients (5 males, 10 females) with a mean age of 30.4 (±3.35) years; they received Kinesio tape, which was applied and changed every 3 days with one day off, for a total of 3 applications in 2 weeks. Both techniques were applied over the trigger points of the iliocostalis lumborum bilaterally. Patients were evaluated before and after the treatment program for pain intensity (visual analogue scale), pressure pain threshold (digital pressure algometry), and functional disability (the Oswestry Disability Index). Analyses: Repeated measures MANOVA was used to detect differences within and between groups pre and post treatment. Univariate ANOVA tests were then conducted for each dependent variable within and between groups. All statistical analyses were done using SPSS, with the significance level set at p < 0.05 throughout. Results: The results revealed no significant difference between the positional release technique and the Kinesio tape technique in pain level, pressure pain threshold, or functional activities (p > 0.05). Both groups of patients showed significant improvement in all measured variables (p < 0.05), evident in a significant reduction of both pain intensity and functional disability as well as a significant increase in pressure pain threshold. Conclusions: Both the positional release technique and the Kinesio taping technique are effective in reducing pain, improving pressure pain threshold, and improving function in patients suffering from back myofascial pain syndrome of the iliocostalis lumborum, and no statistically significant difference was found between them.
Keywords: positional release technique, kinesio tape, myofascial pain syndrome, iliocostalis lumborum
Procedia PDF Downloads 232
910 Studies of Single Nucleotide Polymorphisms of the Proteasomal Gene Complex and Their Association with HBV Infection Risk in India
Authors: Jasbir Singh, Devender Kumar, Davender Redhu, Surender Kumar, Vandana Bhardwaj
Abstract:
Single nucleotide polymorphisms (SNPs) of the proteasomal gene complex are involved in the pathogenesis of hepatitis B virus (HBV) infection. Components of this complex include the large multifunctional proteins (LMP) and the transporters associated with antigen presentation, both of which are involved in the intracellular processing and presentation of viral antigens in association with Major Histocompatibility Complex (MHC) class I molecules. A total of one hundred hepatitis B virus infected samples and one hundred control samples from northern India were studied. Genomic DNA was extracted from all samples, and the PCR-RFLP method was used for genotyping at different positions of the LMP genes. Genotypes at a given position were inferred from the pattern of bands, and genotype and haplotype frequencies were calculated. The homozygous SNP {A>C} at codon 145 of the LMP7 gene had a protective role against HBV, as there was a statistically significant higher distribution of this SNP among controls than cases. The heterozygous SNP {A>C} at codon 145 of the LMP7 gene made individuals more susceptible to HBV infection, as there was a statistically significant higher distribution of this SNP among cases than controls. An SNP {T>C} was observed at codon 60 of the LMP2 gene, but no statistically significant differences were observed between controls and cases. For codon 145 of LMP7 and codon 60 of LMP2, four haplotypes were constructed. Haplotype I (LMP2 ‘C’ and LMP7 ‘A’) made carriers more susceptible to HBV infection, as there was a statistically significant higher distribution of this haplotype among cases than controls. Haplotype II (LMP2 ‘C’ and LMP7 ‘C’) made carriers more resistant to HBV infection, as there was a statistically significant higher distribution of this haplotype among controls than cases. 
Thus it can be concluded that the homozygous SNP {A>C} at codon 145 of LMP7 and Haplotype II (LMP2 ‘C’ and LMP7 ‘C’) have a protective role against HBV infection, whereas the heterozygous SNP {A>C} at codon 145 of LMP7 and Haplotype I (LMP2 ‘C’ and LMP7 ‘A’) make individuals more susceptible to HBV infection.
Keywords: hepatitis B virus, single nucleotide polymorphism, low molecular weight proteins, transporters associated with antigen presentation
Procedia PDF Downloads 308
909 Language Choice and Language Maintenance of Northeastern Thai Staff in Suan Sunandha Rajabhat University
Authors: Napasri Suwanajote
Abstract:
The purposes of this research were to analyze and evaluate the success factors of the OTOP production process for developing a learning center on the OTOP production process, based on the Sufficiency Economy Philosophy, for sustainable quality of life. The research was designed as a qualitative study gathering information from 30 OTOP producers in Bangkontee District, Samudsongkram Province. All were interviewed on 3 main parts. Part 1 concerned the production process, including 1) production, 2) product development, 3) community strength, 4) marketing possibility, and 5) product quality. Part 2 evaluated the relevant success factors, including 1) analysis of the success factors, 2) evaluation of the strategy based on the Sufficiency Economy Philosophy, and 3) the model of a learning center on the OTOP production process based on the Sufficiency Economy Philosophy for sustainable quality of life. The results showed that production did not affect the environment and had the potential to continue standard-quality production, using raw materials sourced within the country. On the aspect of product and community strength over the past year, it was found that there was no appropriate packaging showing product identity according to global market standards; training on packaging was needed, especially for food and drink products. On the aspect of product quality and specification, the products were certified by the local OTOP standard, and a responsible organization should help uncertified producers pass the standard. However, there was a problem of food contamination hazardous to consumers; producers should cooperate with the government sector or educational institutes involved in food processing to reach the FDA standard. The results from a small group discussion showed that the community expected higher education and a better standard of living. Problems reported by the community included informal debt and drugs in the community. 
There were eight steps in developing the model of the learning center on the OTOP production process based on the Sufficiency Economy Philosophy for sustainable quality of life. Keywords: production process, OTOP, sufficiency economic philosophy, language choice
Procedia PDF Downloads 238
908 Identification of Candidate Congenital Heart Defects Biomarkers by Applying a Random Forest Approach on DNA Methylation Data
Authors: Kan Yu, Khui Hung Lee, Eben Afrifa-Yamoah, Jing Guo, Katrina Harrison, Jack Goldblatt, Nicholas Pachter, Jitian Xiao, Guicheng Brad Zhang
Abstract:
Background and Significance of the Study: Congenital heart defects (CHDs) are the most common malformation at birth and one of the leading causes of infant death. Although the exact etiology remains a significant challenge, epigenetic modifications such as DNA methylation are thought to contribute to the pathogenesis of CHDs. At present, no DNA methylation biomarkers are in use for early detection of CHDs, and existing diagnostic techniques are time-consuming, costly, and can only diagnose CHDs after birth. The present study employed a machine learning technique to analyse genome-wide methylation data in children with and without CHDs, with the aim of finding methylation biomarkers for CHDs. Methods: The Illumina Human Methylation EPIC BeadChip was used to screen the genome-wide DNA methylation profiles of 24 infants diagnosed with CHDs and 24 healthy infants without CHDs. Primary pre-processing was conducted using the RnBeads and limma packages. The methylation levels of the top 600 genes with the lowest p-values were selected and further investigated using a random forest approach. ROC curves were used to analyse the sensitivity and specificity of each biomarker in both the training and test sample sets. The functions of selected genes with high sensitivity and specificity were then assessed in molecular processes. Major Findings of the Study: Three genes (MIR663, FGF3, and FAM64A) were identified from both training and validation data by random forests, with an average sensitivity of 85% and specificity of 95%. GO analyses of the top 600 genes showed that these putative differentially methylated genes were primarily associated with regulation of lipid metabolic processes, protein-containing complex localization, and the Notch signalling pathway.
The present findings highlight that aberrant DNA methylation may play a significant role in the pathogenesis of congenital heart defects. Keywords: biomarker, congenital heart defects, DNA methylation, random forest
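The random-forest screening step described above can be sketched in a few lines. This is a minimal illustration on synthetic methylation data, not the study's actual pipeline: the cohort size (24 cases, 24 controls), the 600 pre-filtered sites, and the train/test split mirror the abstract, but the data, random seed, and scikit-learn settings are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic methylation beta-values: 48 infants x 600 pre-filtered CpG sites.
# The first 3 sites are made informative to mimic candidate biomarkers.
X = rng.uniform(0.0, 1.0, size=(48, 600))
y = np.array([0] * 24 + [1] * 24)   # 24 healthy controls, 24 CHD cases
X[y == 1, :3] += 0.4                # shift methylation in cases at 3 sites

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# ROC-based evaluation on the held-out set, as in the abstract
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])

# Candidate biomarker sites = highest random-forest feature importances
top = np.argsort(rf.feature_importances_)[::-1][:3]
```

In the study the equivalent of `top` would map back to gene identifiers (here MIR663, FGF3, FAM64A), with per-biomarker ROC curves computed on both training and test sets.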
Procedia PDF Downloads 159
907 Investigating the Process Kinetics and Nitrogen Gas Production in Anammox Hybrid Reactor with Special Emphasis on the Role of Filter Media
Authors: Swati Tomar, Sunil Kumar Gupta
Abstract:
Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates direct oxidation of ammoniacal nitrogen under anaerobic conditions, with nitrite as the electron acceptor, without the addition of external carbon sources. The present study investigated the feasibility of an anammox hybrid reactor (AHR) combining the dual advantages of suspended and attached growth media for biodegradation of ammoniacal nitrogen in wastewater. The experimental unit consisted of four 5 L AHRs inoculated with a mixed seed culture containing anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH4-N and NO2-N in a 1:1 ratio at a hydraulic retention time (HRT) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations until they attained pseudo-steady-state removal at a total nitrogen concentration of 1200 mg/l. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25 to 3.0 d, with nitrogen loading rates (NLR) increasing from 0.4 to 4.8 kg N/m3·d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. Filter media in the AHR contributed an additional 27.2% ammonium removal, together with a 72% reduction in the sludge washout rate. This may be attributed to the filter media acting as a mechanical sieve, reducing the sludge washout rate manyfold and enhancing the biomass retention capacity of the reactor by 25%, a key parameter for the successful operation of high-rate bioreactors. The effluent nitrate concentration, one of the bottlenecks of the anammox process, was also minimised significantly (42.3-52.3 mg/L). Process kinetics was evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d-1.
Model validation revealed that the Grau second-order model was more precise, predicting effluent nitrogen concentration with the least error (1.84±10%). A new mathematical model based on mass balance was developed to predict N2 gas production in the AHR. The mass balance model derived from total nitrogen showed a significantly higher correlation (R2=0.986) and predicted N2 gas with the least error of precision (0.12±8.49%). SEM study of the biomass indicated a heterogeneous population of cocci and rod-shaped bacteria with average diameters of 1.2-1.5 µm. Owing to its enhanced nitrogen removal efficiency, meagre effluent nitrate production, and ability to retain high biomass, the AHR proved to be a highly competitive reactor configuration for treating nitrogen-laden wastewater. Keywords: anammox, filter media, kinetics, nitrogen removal
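The first-order fit reported above can be reproduced schematically. The sketch below assumes the linearised first-order form ln(S0/Se) = k·HRT and estimates k by least squares through the origin; the data are synthetic, generated with the reported k = 13.0 d⁻¹ so the fit can be checked against a known answer.

```python
import numpy as np

def fit_first_order_k(hrt_d, s0, se):
    """Fit first-order removal constant k from ln(S0/Se) = k * HRT
    by least squares through the origin (slope-only regression)."""
    y = np.log(s0 / se)
    return float(np.dot(hrt_d, y) / np.dot(hrt_d, hrt_d))

# Synthetic check at HRTs spanning the study's 0.25-3.0 d range,
# influent total nitrogen S0 = 1200 mg/l as in the abstract
hrt = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0])
s0 = 1200.0
se = s0 * np.exp(-13.0 * hrt)   # effluent concentrations for k = 13 per day
k_fit = fit_first_order_k(hrt, s0, se)
```

A Grau second-order fit would replace the linearised form with its own substrate-balance expression and could be compared term by term on real effluent data.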
Procedia PDF Downloads 382
906 Rendering Cognition Based Learning in Coherence with Development within the Context of PostgreSQL
Authors: Manuela Nayantara Jeyaraj, Senuri Sucharitharathna, Chathurika Senarath, Yasanthy Kanagaraj, Indraka Udayakumara
Abstract:
PostgreSQL is an object-relational database management system (ORDBMS) that has been in existence for decades. Despite the superior features it offers for managing databases and data, the database community has not fully realized the importance and advantages of PostgreSQL. Hence, this research focuses on providing a better development environment for PostgreSQL in order to promote its adoption and elucidate its importance. PostgreSQL is widely regarded as the world’s most advanced SQL-compliant open source ORDBMS. Yet users have not turned to PostgreSQL, partly because of the complexity of its persistently textual environment for an introductory user. Simply stated, there is a clear need for an easy way to help users comprehend the procedures and standards by which databases are created, tables and the relationships among them are defined, and queries are built and conditioned in PostgreSQL, so that the community adopts PostgreSQL at a greater rate. Hence, this research first identifies the dominant features provided by PostgreSQL over its competitors. Following the identified merits, it analyses why the database community hesitates to migrate to PostgreSQL’s environment. These findings are then modulated and tailored to the scope and the constraints discovered. The research proposes a system that serves both as a design platform and as a learning tool, providing an interactive method of learning via a visual editor mode while incorporating a textual editor for well-versed users. The study is based on developing viable solutions that analyze a user’s cognitive perception in comprehending human-computer interfaces and the behavioural processing of design elements.
By providing a visually draggable and manipulable environment for working with PostgreSQL databases and table queries, the system is expected to highlight the elementary features offered by PostgreSQL over existing systems, so that a hesitant user can grasp the importance and simplicity it offers. Keywords: cognition, database, PostgreSQL, text-editor, visual-editor
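As a flavour of the textual workflow such a visual editor would wrap, the sketch below creates two related tables and runs a conditional join query. For portability it uses Python's built-in sqlite3 module rather than a live PostgreSQL server; this SQL subset is equally valid in PostgreSQL (e.g. via psycopg2), and the table and column names are invented for the example.

```python
import sqlite3

# In-memory database standing in for a PostgreSQL instance
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create two tables with a relationship between them
cur.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE book (
        id INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        author_id INTEGER REFERENCES author(id)  -- relationship to author
    );
""")
cur.execute("INSERT INTO author VALUES (1, 'Ada')")
cur.execute("INSERT INTO book VALUES (1, 'Notes', 1)")

# A query whose flow is based on a condition, joining the related tables
row = cur.execute("""
    SELECT a.name, b.title
    FROM book b JOIN author a ON a.id = b.author_id
    WHERE a.name = 'Ada'
""").fetchone()
```

The proposed visual editor would generate statements of exactly this kind behind a drag-and-drop interface, while the textual editor exposes them directly to well-versed users.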
Procedia PDF Downloads 284
905 Spatial Analysis of Survival Pattern and Treatment Outcomes of Multi-Drug Resistant Tuberculosis (MDR-TB) Patients in Lagos, Nigeria
Authors: Akinsola Oluwatosin, Udofia Samuel, Odofin Mayowa
Abstract:
The study assesses Geographic Information System (GIS)-based spatial analysis of survival patterns and treatment outcomes of multi-drug resistant tuberculosis (MDR-TB) cases in Lagos, Nigeria, with the objective of informing priority areas for public health planning and resource allocation. MDR-TB develops due to problems such as irregular drug supply, poor drug quality, inappropriate prescription, and poor adherence to treatment. The shapefiles for this study were already georeferenced to the Minna datum. Patient information was acquired in MS Excel from various hospitals and later converted to a .CSV file for processing in ArcMap. To superimpose the patient information on the spatial data, the addresses were geocoded to generate the longitude and latitude of the patients. The database was used for SQL queries on the various patterns of treatment. To show the pattern of disease spread, spatial autocorrelation analysis was used, with the result displayed graphically showing areas of dispersed, random, and clustered patients in the study area. Hot and cold spot analysis was performed to show high-density areas, and buffer analysis was used to examine the distance between patients and the closest health facility. The results show that 22% of the points were successfully matched, while 15% were tied. A greater percentage remained unmatched; this reflects the fact that most streets within the State are unnamed and that many patients likely supplied wrong addresses. MDR-TB patients of all age groups are concentrated within Lagos-Mainland, Shomolu, Mushin, Surulere, Oshodi-Isolo, and Ifelodun LGAs. The 30-47 years age group had the highest number of MDR-TB patients, at about 184.
The outcome of patients on ART treatment revealed that a high number of patients (300) were not on ART treatment, while only 45 patients were. The Z-score of the distribution is greater than 2.58, which means the distribution is highly clustered at a significance level of 0.01. Keywords: tuberculosis, patients, treatment, GIS, MDR-TB
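The nearest-facility step of the buffer analysis can be sketched in a few lines of plain Python. The coordinates and the 5 km buffer radius below are invented for illustration; the study's actual geocoded points and ArcMap buffer settings are not reproduced here.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two WGS84 points."""
    rlat1, rlon1, rlat2, rlon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((rlat2 - rlat1) / 2) ** 2
         + cos(rlat1) * cos(rlat2) * sin((rlon2 - rlon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical geocoded (lat, lon) points within Lagos
facilities = {"clinic_a": (6.50, 3.35), "clinic_b": (6.55, 3.40)}
patient = (6.52, 3.36)

# For each patient: distance to every facility, nearest one,
# and whether it falls inside a 5 km buffer around the patient
dists = {name: haversine_km(*patient, *loc) for name, loc in facilities.items()}
nearest = min(dists, key=dists.get)
within_buffer = dists[nearest] <= 5.0
```

A GIS package performs the same computation against the full geocoded patient layer and facility layer, typically in a projected coordinate system rather than with spherical distances.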
Procedia PDF Downloads 154
904 Tale of Massive Distressed Migration from Rural to Urban Areas: A Study of Mumbai City
Authors: Vidya Yadav
Abstract:
Migration is the demographic process that links rural to urban areas, generating or spurring the growth of cities. Evidence shows the role of the city as a centre of production, a centre of power, and a centre of change. It has been observed that not only professionals want to settle down in urban areas; rural labourers also come to cities for employment. These are people compelled to migrate to metropolises because of the lack of employment opportunities in their places of residence. However, the cities also fail to provide adequate employment because of limited job creation and capital-intensive industrialization, so these masses of incoming migrants are forced to take up whatever employment is available to them, particularly in urban informal activities. Ultimately, with these informal jobs, they are compelled to stay in slum areas, another form of deprived housing colony. The paper examines the evidence of poverty-induced migration from rural to urban areas (particularly into the urban agglomeration). It utilizes the rich census migration data (D-Series) for 1991-2001. Results show that Mumbai remains the most attractive destination for migrants, who come mainly from the major states of Uttar Pradesh, Bihar, West Bengal, Jharkhand, Odisha, and Rajasthan. Male-dominated migration is related mostly to employment, and female migration mostly to marriage. The occupational absorption of migrants who moved for employment, cross-classified with educational status, shows that illiterate males are primarily engaged in low-grade production and processing work. Illiterate females are engaged in service sectors, but these are actually very low-grade services in India's urban informal sector, such as maid servants, domestic help, hawkers, vendors, and vegetable sellers.
Among those with higher educational levels, a small percentage of males and females were absorbed into professional or clerical work, though this percentage increased over the period 1991-2001. Keywords: informal, job, migration, urban
Procedia PDF Downloads 285
903 Modelling of Recovery and Application of Low-Grade Thermal Resources in the Mining and Mineral Processing Industry
Authors: S. McLean, J. A. Scott
Abstract:
This research focuses on improving sustainable operation through recovery and reuse of waste heat in process water streams, an area in the mining industry that is often overlooked and that offers significant economic and environmental benefits. The smelting process in the mining industry presents an opportunity to recover waste heat and apply it to alternative uses, thereby enhancing the overall process. This applied research has been conducted at the Sudbury Integrated Nickel Operations smelter site, in particular on the water cooling towers. The aim was to determine and optimize methods for recovering and subsequently upgrading thermally low-grade heat lost from the water cooling towers so that it becomes useful for repurposing in applications such as an acid plant. This would be valuable to mining companies as an opportunity to reduce process costs as well as environmental impact and primary fuel usage. The waste heat from the cooling towers needs to be upgraded before it can be beneficially applied, as lower temperatures reduce the number of potential applications. Temperature and flow rate data were collected from the water cooling towers at an acid plant over two years. The research includes process control strategies and the development of a model capable of determining whether the proposed heat recovery technique is economically viable, as well as assessing the environmental impact of the reduction in net energy consumed by the process. Comprehensive cost and impact analyses are therefore carried out to determine the best application for the recovered waste heat. This method will allow engineers to easily identify the value of the thermal resources available to them and determine whether a full feasibility study should be carried out.
The rapid scoping model developed will be applicable to any site that generates large amounts of waste heat. Results show that heat pumps are an economically viable solution for this application, allowing for reduced cost and CO₂ emissions. Keywords: environment, heat recovery, mining engineering, sustainability
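The kind of rapid economic scoping described above reduces, at its simplest, to comparing the value of the heat displaced against the electricity a heat pump consumes to upgrade it. All numbers below (recovered heat, coefficient of performance, energy prices, capital cost) are illustrative assumptions, not figures from the study.

```python
# Illustrative annual figures for a heat-pump upgrade of low-grade waste heat
q_recovered_kwh = 1_000_000   # low-grade heat upgraded and reused per year
cop = 4.0                     # heat delivered per unit of electricity consumed
gas_price = 0.04              # $/kWh of primary-fuel heat displaced
elec_price = 0.10             # $/kWh of electricity driving the heat pump
capex = 150_000               # installed heat-pump cost, $

# Electricity needed to deliver the recovered heat at the assumed COP
elec_used_kwh = q_recovered_kwh / cop

# Net annual savings: displaced fuel minus heat-pump electricity
annual_savings = q_recovered_kwh * gas_price - elec_used_kwh * elec_price

# Simple payback, the first screen in a rapid scoping model
simple_payback_years = capex / annual_savings
```

With these assumptions the savings are $15,000 per year and the simple payback is 10 years; a full feasibility study would add avoided CO₂ emissions, maintenance, and the temperature lift actually achievable from the cooling-tower stream.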
Procedia PDF Downloads 111
902 Physico-Chemical and Microbial Changes of Organic Fertilizers after Composting Processes under Arid Conditions
Authors: Oustani Mabrouka, Halilat Med Tahar
Abstract:
The physico-chemical properties of poultry droppings indicate that this waste can be an excellent means of enriching soils of low fertility, as is the case for arid soils (low organic matter content), but the concentrations of certain microbial and chemical components make it a potentially dangerous and toxic contaminant if used directly in the fresh state. On the other hand, the accumulation of plant residues in crop areas can become a source of plant disease and affect the quality of the environment. The biotechnological process identified here appears to alleviate these problems: composting leads to the stabilization and transformation of these wastes into a product of good hygienic quality and high fertilizer value. In this context, a composting trial was conducted in the region of Ouargla in southern Algeria, set up as a completely randomized design experiment. Three mixtures of poultry droppings and crushed plant residues (40% and 60%, respectively) were prepared in pits of 1 m3 volume each: C1: Poultry Droppings + Straw (P.D+S), C2: Poultry Droppings + Olive Wastes (P.D+O.W), C3: Poultry Droppings + Date Palm Residues (P.D+D.P). Before and after the composting process, physico-chemical parameters (temperature, moisture, pH, electrical conductivity, total carbon, and total nitrogen) were studied. Stabilization of the biological system was noticed after 90 days. At the end of the composting process, the three final composts were characterized by high agronomic and environmental value and good physico-chemical characteristics, in particular low C/N ratios of 15.15, 10.01, and 15.36 for (P.D+S), (P.D+O.W), and (P.D+D.P), respectively, reflecting the stabilization and maturity of the composts. In addition, a significant increase in temperature was recorded during the first days of composting for all treatments, which correlated with a strong reduction in the pathogenic microflora contained in the poultry droppings. Keywords: arid environment, composting, date palm residues, olive wastes, pH, pathogenic microorganisms, poultry droppings, straw
Procedia PDF Downloads 236
901 Additive Manufacturing with Ceramic Filler
Authors: Irsa Wolfram, Boruch Lorenz
Abstract:
Innovative additive manufacturing solutions applying material extrusion for functional parts necessitate innovative filaments of consistent quality. Uniform homogeneity and a consistent dispersion of particles embedded in filaments generally require multiple cycles of extrusion or well-prepared primal matter produced by injection molding, kneader machines, or mixing equipment. These technologies require dedicated equipment that is rarely at the disposal of production laboratories unfamiliar with research in polymer materials. This stands in contrast to laboratories that investigate complex material topics and technology science to leverage the potential of 3-D printing. Consequently, scientific studies in such labs are often constrained to the compositions and concentrations of fillers offered on the market. We therefore introduce a prototypal laboratory methodology, scalable to tailored primal matter, for extruding ceramic composite filaments with fused filament fabrication (FFF) technology. A desktop single-screw extruder serves as the core device for the experiments. Custom-made filaments encapsulate the ceramic fillers in polylactide (PLA), a thermoplastic polyester, which serves as the primal matter and is processed in the melting area of the extruder, preserving the defined concentration of the fillers. Validated results demonstrate that this approach enables continuously produced and uniform composite filaments with consistent homogeneity. The filament is 3-D printable with controllable dimensions, which is a prerequisite for any scalable application. Additionally, digital microscopy confirms the steady dispersion of the ceramic particles in the composite filament, permitting a 2D reconstruction of the planar distribution of the embedded ceramic particles in the PLA matrices. The innovation of the introduced method lies in the smart simplicity of preparing the composite primal matter.
It circumvents the inconvenience of numerous extrusion operations and expensive laboratory equipment. Nevertheless, it delivers consistent filaments of controlled, predictable, and reproducible filler concentration, which is the prerequisite for any industrial application. The introduced prototypal laboratory methodology appears applicable to other polymer matrices and suitable for further functional particle types beyond ceramic fillers. This opens a roadmap for further laboratory development of specialised composite filaments, providing value for industries and societies. This low-threshold entry into the sophisticated preparation of composite filaments, enabling businesses to create their own dedicated filaments, will support the mutual efforts to bring 3D printing to new functional devices. Keywords: additive manufacturing, ceramic composites, complex filament, industrial application
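Preserving a defined filler concentration starts with dosing the primal matter correctly. The sketch below converts a target filler volume fraction into the filler mass to blend with a given mass of PLA; the densities and the 10 vol% target are assumptions for illustration (alumina is used as a stand-in ceramic filler).

```python
# Assumed material densities (g/cm^3); values are typical, not from the study
rho_pla = 1.24      # polylactide
rho_al2o3 = 3.95    # alumina as an example ceramic filler

target_vol_frac = 0.10   # desired filler volume fraction in the composite
m_pla = 100.0            # g of PLA pellets to be compounded

# Volume of the PLA charge, then the filler volume that makes it 10 vol%
v_pla = m_pla / rho_pla
v_filler = v_pla * target_vol_frac / (1.0 - target_vol_frac)

# Mass of ceramic powder to weigh out, and the resulting weight fraction
m_filler = v_filler * rho_al2o3
wt_frac_filler = m_filler / (m_filler + m_pla)
```

Because ceramic fillers are much denser than PLA, a modest 10 vol% target corresponds to roughly 26 wt% of powder in the blend, which is why dosing by volume fraction rather than weight avoids surprises at the extruder.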
Procedia PDF Downloads 106
900 Information Seeking and Evaluation Tasks to Enhance Multiliteracies in Health Education
Authors: Tuula Nygard
Abstract:
This study contributes to the pedagogical discussion of how to promote adolescents’ multiliteracies, with an emphasis on information seeking and evaluation skills in contemporary media environments. The study is conducted in the school environment, applying perspectives from educational sciences and information studies to health communication and teaching. The research focus is on the teacher's role as a trusted person who guides students to choose and use credible information sources. Evaluating the credibility of information is often challenging; children and adolescents in particular may find it difficult to know what to believe and whom to trust, for instance, in health and well-being communication. Thus, advanced multiliteracy skills are needed. In the school environment, trust is based on the teacher’s subject content knowledge, but also on the teacher’s character and caring. A teacher’s benevolence and approachability generate trustworthiness, which lays the foundation for good interaction with students and, further, for the teacher’s pedagogical authority. The study explores teachers’ perceptions of their pedagogical authority and the role of a trustee, and examines what kind of multiliteracy practices teachers utilize in their teaching. The data will be collected by interviewing secondary school health education teachers during spring 2019. The analysis method is nexus analysis, an ethnographic research orientation. Classroom interaction, as the interviewed teachers see it, is scrutinized through a nexus analysis lens in order to expound a social action where people, places, discourses, and objects are intertwined. The crucial social actions in this study are information seeking and evaluation situations, in which the teacher and the students together assess the credibility of information sources.
The study is based on the hypothesis that a trustee’s opinions of credible sources, and guidance in information seeking and evaluation, affect the choices of students, that is, trustors. In the school context, the teacher’s own experiences and perceptions of health-related issues cannot be brushed aside. Furthermore, adolescents are used to utilizing digital technology for day-to-day information seeking, but the chosen information sources are often not of very high quality. In school, teachers are inclined to recommend familiar sources, such as the health education textbook and the web pages of well-known health authorities. Students, in turn, rely on the teacher’s guidance toward credible information sources without using their own judgment. In terms of students’ multiliteracy competences, information seeking and evaluation tasks in health education are excellent opportunities to practice and enhance these skills. Distinguishing right information from wrong is particularly important in health communication because experts by experience are easy to find and their opinions are convincing. This can be addressed by employing the ideas of multiliteracy in the school subject of health education and in teacher education and training. Keywords: multiliteracies, nexus analysis, pedagogical authority, trust
Procedia PDF Downloads 109
899 Product Life Cycle Assessment of Generatively Designed Furniture for Interiors Using Robot Based Additive Manufacturing
Authors: Andrew Fox, Qingping Yang, Yuanhong Zhao, Tao Zhang
Abstract:
Furniture is a very significant subdivision of architecture and its inherent interior design activities. The furniture industry has developed from an artisan-driven craft industry, whose forerunners saw themselves manifested in their crafts and treasured a sense of pride in the creativity of their designs, into one these days largely reduced to an anonymous, collective, mass-produced output. Although a very conservative industry, there is great potential for the implementation of collaborative digital technologies, allowing a reconfigured artisan experience to be reawakened in a new and exciting form. The furniture manufacturing industry, in general, has been slow to adopt new design methodologies using rule-based generative design. This tardiness has meant the loss of potential to enhance its capabilities in producing sustainable, flexible, and mass-customizable ‘right first-time’ designs. This paper aims to demonstrate a concept methodology for the creation of alternative and inspiring aesthetic structures for robot-based additive manufacturing (RBAM). These technologies can enable the economic creation of previously unachievable structures, which traditionally would not have been commercially viable to manufacture. The integration of these technologies with the computing power of generative design provides the tools for practitioners to create concepts well beyond the insight of even the most accomplished traditional design teams. The paper addresses this by introducing generative design methodologies employing the Autodesk Fusion 360 platform. Examination of these alternative methods has the potential to significantly reduce the estimated 80% of environmental impact determined at the initial design phase.
Though predominantly a design methodology, generative design combined with RBAM has the potential to deliver many lean manufacturing and quality assurance benefits, enhancing the efficiency and agility of modern furniture manufacturing. Through a case study of a furniture artifact, the results will be compared to a traditionally designed and manufactured product using the Ecochain Mobius product life cycle assessment (LCA) platform. This will highlight the benefits of both generative design and robot-based additive manufacturing from the standpoints of environmental impact and manufacturing efficiency. These step changes in design methodology and environmental assessment have the potential to revolutionise the design-to-manufacturing workflow, giving momentum to the concept of reviving a pre-industrial model of manufacturing, with the global demand for a circular economy and bespoke sustainable design at its heart. Keywords: robot, manufacturing, generative design, sustainability, circular economy, product life cycle assessment, furniture
Procedia PDF Downloads 141
898 Mechanical Properties of Diamond Reinforced Ni Nanocomposite Coatings Made by Co-Electrodeposition with Glycine as Additive
Authors: Yanheng Zhang, Lu Feng, Yilan Kang, Donghui Fu, Qian Zhang, Qiu Li, Wei Qiu
Abstract:
Diamond-reinforced Ni matrix composite has been widely applied in engineering for coating large-area structural parts owing to its high hardness, good wear resistance, and corrosion resistance compared with pure nickel. The mechanical properties of a Ni-diamond composite coating can be promoted by high incorporation and uniform distribution of diamond particles in the nickel matrix, while the distribution of the particles is affected by the electrodeposition process parameters, especially the additives in the plating bath. Glycine has been utilized as an organic additive during the preparation of pure nickel coatings, where it can effectively increase coating hardness. Nevertheless, to the authors’ best knowledge, no research on the effects of glycine on Ni-diamond co-deposition has been reported. In this work, diamond-reinforced Ni nanocomposite coatings were fabricated by a co-electrodeposition technique from a modified Watts-type bath in the presence of glycine. After preparation, the SEM morphology of the composite coatings was observed in combination with energy-dispersive X-ray spectrometry, and the diamond incorporation was analyzed. The surface morphology and roughness were obtained with a three-dimensional profile instrument. 3D Debye rings formed by XRD were analyzed to characterize the nickel grain size and orientation in the coatings. The average coating thickness was measured with a digital micrometer to deduce the deposition rate. Microhardness was tested with an automatic microhardness tester. The friction coefficient and wear volume were measured with a reciprocating wear tester to characterize the coating's wear resistance and cutting performance. The experimental results confirmed that the presence of glycine effectively improved the surface morphology and roughness of the composite coatings.
By optimizing the glycine concentration, the incorporation of diamond particles was increased, while the nickel grain size decreased with increasing glycine. The hardness of the composite coatings increased as the glycine concentration increased. The friction and wear properties were evaluated as the glycine concentration was optimized, showing a decrease in the wear volume. The wear resistance of the composite coatings increased as the glycine content was increased to an optimum value, beyond which the wear resistance decreased. Glycine complexation contributed to the nickel grain refinement and improved the diamond dispersion in the coatings, both of which made a positive contribution to the amount and uniformity of embedded diamond particles, thus enhancing the microhardness, reducing the friction coefficient, and hence increasing the wear resistance of the composite coatings. Therefore, additive glycine can be used during the co-deposition process to improve the mechanical properties of protective coatings. Keywords: co-electrodeposition, glycine, mechanical properties, Ni-diamond nanocomposite coatings
Procedia PDF Downloads 126
897 Molecular Docking Analysis of Flavonoids Reveal Potential of Eriodictyol for Breast Cancer Treatment
Authors: Nicole C. Valdez, Vincent L. Borromeo, Conrad C. Chong, Ahmad F. Mazahery
Abstract:
Breast cancer is the most prevalent cancer worldwide; the majority of cases are estrogen-receptor positive and involve two receptor proteins. The binding of estrogen to estrogen receptor alpha (ERα) promotes breast cancer growth, while its binding to estrogen receptor beta (ERβ) inhibits tumor growth. While natural products have been a promising source of chemotherapeutic agents, the challenge remains to find a bioactive compound that specifically targets cancer cells, minimizing side effects on normal cells. Flavonoids are natural products that act as phytoestrogens and induce the same response as estrogen. They are able to compete with estrogen for binding to ERα; however, they have a higher binding affinity for ERβ. Their abundance in nature and low toxicity make them potential candidates for breast cancer treatment. This study aimed to determine, through molecular docking, which particular flavonoids can specifically recognize ERβ and potentially be used for breast cancer treatment. A total of 206 flavonoids, comprising 97 isoflavones and 109 flavanones, were collected from ZINC15, while the 3D structures of ERβ and ERα were obtained from the Protein Data Bank. These flavonoid subclasses were chosen because their chemical structure makes them bind more strongly to ERs. The structures of the flavonoid ligands were converted using Open Babel, while the estrogen receptor protein structures were prepared using AutoDock MGL Tools. The optimal binding site was found using BIOVIA Discovery Studio Visualizer before docking all flavonoids to both ERβ and ERα with AutoDock Vina. Genistein, a flavonoid that exhibits anticancer effects by binding to ERβ, provided the baseline binding affinity. Eriodictyol and 4”,6”-di-O-galloylprunin both exceeded genistein’s binding affinity for ERβ while remaining below its binding affinity for ERα. Of the two, eriodictyol was pursued owing to its antitumor properties in a lung cancer cell line and in glioma cells.
It is able to arrest the cell cycle at the G2/M phase by inhibiting the mTOR/PI3K/Akt cascade and to induce apoptosis via the PI3K/Akt/NF-κB pathway. Protein pathway and gene analyses were also conducted using ChEMBL and PANTHER, which indicated that eriodictyol might induce anticancer effects through the ROS1, CA7, KMO, and KDM1A genes, which are involved in cell proliferation in breast cancer, non-small cell lung cancer, and other diseases. The high binding affinity of eriodictyol to ERβ, together with its potentially affected genes and antitumor effects, therefore makes it a candidate for the development of new breast cancer treatments. Verification through in vitro experiments, such as checking the upregulation and downregulation of genes through qPCR and checking cell cycle arrest using a flow cytometry assay, is recommended.
Keywords: breast cancer, estrogen receptor, flavonoid, molecular docking
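The ERβ-selectivity screen described above can be sketched as a simple post-processing step over docking scores. The affinity values and the naringenin entry below are hypothetical placeholders, not results from the study; only the filtering criterion (stronger ERβ binding and weaker ERα binding than the genistein baseline) follows the abstract.

```python
# Rank docked flavonoids against the genistein baseline.
# Affinity values are hypothetical (kcal/mol; more negative = stronger binding).
affinities = {
    "genistein":   {"ERbeta": -8.1, "ERalpha": -7.6},
    "eriodictyol": {"ERbeta": -8.9, "ERalpha": -7.2},
    "naringenin":  {"ERbeta": -7.8, "ERalpha": -7.5},  # hypothetical non-hit
}
baseline = affinities["genistein"]

def beta_selective(name):
    """Screening criterion from the abstract: binds ERbeta more strongly than
    genistein while binding ERalpha more weakly than genistein."""
    a = affinities[name]
    return a["ERbeta"] < baseline["ERbeta"] and a["ERalpha"] > baseline["ERalpha"]

hits = sorted(n for n in affinities if n != "genistein" and beta_selective(n))
```

With these placeholder scores, only eriodictyol passes both conditions; in practice the dictionary would be populated by parsing the AutoDock Vina output for all 206 ligands.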
Procedia PDF Downloads 898
896 Exploration of Hydrocarbon Unconventional Accumulations in the Argillaceous Formation of the Autochthonous Miocene Succession in the Carpathian Foredeep
Authors: Wojciech Górecki, Anna Sowiżdżał, Grzegorz Machowski, Tomasz Maćkowski, Bartosz Papiernik, Michał Stefaniuk
Abstract:
The article presents results of a project that aims at evaluating the possibilities of effective development and exploitation of natural gas from the argillaceous series of the Autochthonous Miocene in the Carpathian Foredeep. To achieve this objective, the research team developed a unique processing and interpretation methodology, based on world trends but adjusted to the data, local variations, and petroleum characteristics of the area. In order to determine the zones in which maximum volumes of hydrocarbons might have been generated and preserved as shale gas reservoirs, as well as to identify the most preferable well sites where the largest gas accumulations are anticipated, a number of tasks were accomplished. Evaluation of the petrophysical properties and hydrocarbon saturation of the Miocene complex is based on laboratory measurements as well as interpretation of well logs and archival data. The studies apply mercury porosimetry (MICP), micro-CT, and nuclear magnetic resonance imaging (using the Rock Core Analyzer). For a prospective location (e.g., the central part of the Carpathian Foredeep, the Brzesko-Wojnicz area), reprocessing and reinterpretation of detailed seismic survey data have been carried out with the use of integrated geophysical investigations. Construction of quantitative, structural, and parametric models for selected areas of the Carpathian Foredeep is performed on the basis of integrated, detailed 3D computer models. Modeling is carried out with Schlumberger’s Petrel software. Finally, prospective zones are spatially contoured in a regional 3D grid, which will be the framework for generation modeling and comprehensive parametric mapping, allowing for spatial identification of the most prospective zones of unconventional gas accumulation in the Carpathian Foredeep. 
Preliminary results indicate a potentially prospective area for the occurrence of unconventional gas accumulations in the Polish part of the Carpathian Foredeep.
Keywords: autochthonous Miocene, Carpathian foredeep, Poland, shale gas
Procedia PDF Downloads 228
895 Distribution of Antioxidants between Sour Cherry Juice and Pomace
Authors: Sonja Djilas, Gordana Ćetković, Jasna Čanadanović-Brunet, Vesna Tumbas Šaponjac, Slađana Stajčić, Jelena Vulić, Milica Vinčić
Abstract:
In recent years, interest in foods rich in bioactive compounds, such as polyphenols, has grown along with the advantages of functional food products. Bioactive components help to maintain health and prevent diseases such as cancer, cardiovascular disease, and many other degenerative diseases. Recent research has shown that fruit pomace, a byproduct generated from juice production, can be a potential source of valuable bioactive compounds. The use of fruit industrial waste in the processing of functional foods represents an important new step for the food industry. Sour cherries have considerable nutritional, medicinal, dietetic, and technological value. By production volume of cherries, Serbia ranks seventh in the world, with a share of 7% of total production. The use of sour cherry pomace has so far been limited to animal feed, even though it can potentially be a good source of polyphenols. For this study, the local sour cherry variety cv. ‘Feketićka’ was chosen for its more intense taste and deeper red color, indicating a high anthocyanin content. The contents of total polyphenols, flavonoids, and anthocyanins, as well as the radical scavenging activity on DPPH radicals and the reducing power of sour cherry juice and pomace, were compared using spectrophotometric assays. According to the results obtained, 66.91% of total polyphenols, 46.77% of flavonoids, 46.77% of total anthocyanins, and 47.88% of anthocyanin monomers from the sour cherry fruits were transferred to the juice. On the other hand, 29.85% of total polyphenols, 33.09% of flavonoids, 53.23% of total anthocyanins, and 52.12% of anthocyanin monomers remained in the pomace. Regarding radical scavenging activity, 65.51% of the Trolox equivalents from the sour cherries were transferred to the juice, while 34.49% remained in the pomace. However, the reducing power of the sour cherry juice was much stronger than that of the pomace (91.28% and 8.72% of the Trolox equivalents from the sour cherry fruits, respectively). 
Based on our results, it can be concluded that sour cherry pomace remains a rich source of natural antioxidants, especially anthocyanins with coloring capacity; it can therefore be used for the development of dietary supplements and for food fortification.
Keywords: antioxidants, polyphenols, pomace, sour cherry
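The distribution figures above are expressed relative to the compound content of the starting fruit, which is why the juice and pomace shares of a given class need not sum to 100%; the shortfall reflects processing losses. A minimal sketch of that bookkeeping, with hypothetical absolute contents rather than the study's raw data:

```python
def recovery_percent(amount_in_fraction, amount_in_fruit):
    # share of a compound class recovered in one processing fraction,
    # expressed relative to the starting fruit content
    return round(100.0 * amount_in_fraction / amount_in_fruit, 2)

# hypothetical absolute contents (e.g., mg gallic acid equivalents per 100 g fruit)
fruit, juice, pomace = 250.0, 167.3, 74.6
in_juice = recovery_percent(juice, fruit)
in_pomace = recovery_percent(pomace, fruit)
unaccounted = round(100.0 - in_juice - in_pomace, 2)  # processing losses
```

With these placeholder numbers the juice carries about 67% and the pomace about 30% of the total polyphenols, mirroring the pattern reported in the abstract.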
Procedia PDF Downloads 325
894 Application of Groundwater Level Data Mining in Aquifer Identification
Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen
Abstract:
Investigation and research are key for the conjunctive use of surface water and groundwater resources. The hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually based on geological drill logs, the structure of wells, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built, and a large amount of groundwater-level observation data is available. The groundwater level is the state variable of the groundwater system, reflecting the system response to the combination of hydrogeological structure, groundwater injection, and extraction. This study applies analytical tools to the observation database to develop a methodology for the identification of confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater regression curve analysis, and a decision tree. The developed methodology is then applied to groundwater layer identification in two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis uses the Fourier transform to process time-series groundwater-level observations and analyze the amplitude of the daily frequency component of the groundwater level caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater regression curve, i.e., the average rate of groundwater regression, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behaviors. The decision tree combines the information obtained from the abovementioned analytical tools and determines the best estimate of the hydrogeological structure. 
The developed method reaches a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. This high accuracy indicates that the developed methodology is an effective tool for identifying hydrogeological structures.
Keywords: aquifer identification, decision tree, groundwater, Fourier transform
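The daily-frequency analysis described above can be illustrated on synthetic data: a confined, exploited layer shows a pronounced spectral spike at one cycle per day driven by pumping. All record lengths, amplitudes, and noise levels below are hypothetical, chosen only to make the mechanism visible.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, days = 24, 30                       # hourly sampling over a 30-day record
t = np.arange(days * fs) / fs           # time in days
# synthetic groundwater head: slow decline + daily pumping cycle + noise
level = 10.0 - 0.01 * t + 0.3 * np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=t.size)

# remove the linear trend so low-frequency energy does not mask the pumping signal
detrended = level - np.polyval(np.polyfit(t, level, 1), t)
amplitude = 2.0 * np.abs(np.fft.rfft(detrended)) / t.size
freq = np.fft.rfftfreq(t.size, d=1.0 / fs)  # cycles per day

dominant = freq[np.argmax(amplitude[1:]) + 1]  # skip the DC bin
```

Here `dominant` lands on 1 cycle/day and its amplitude recovers the 0.3 m pumping signal; a well without that spike would be a candidate unconfined (or unexploited) layer in the decision-tree step.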
Procedia PDF Downloads 157
893 Lead Removal From Ex-Mining Pond Water by Electrocoagulation: Kinetics, Isotherm, and Dynamic Studies
Authors: Kalu Uka Orji, Nasiman Sapari, Khamaruzaman W. Yusof
Abstract:
Exposure of galena (PbS), teallite (PbSnS2), and other associated minerals during mining activities releases lead (Pb) and other heavy metals into the mining water through oxidation and dissolution. Heavy metal pollution has become an environmental challenge. Lead, for instance, can cause toxic effects on human health, including brain damage. Ex-mining pond water has been reported to contain lead at concentrations as high as 69.46 mg/L. Lead is not easily removed from water by conventional treatment. A promising and emerging treatment technology for lead removal is the electrocoagulation (EC) process. However, some of the problems associated with EC are systematic reactor design, selection of optimal EC operating parameters, and scale-up, among others. This study investigated an EC process for the removal of lead from synthetic ex-mining pond water using a batch reactor and Fe electrodes. The effects of various operating parameters on lead removal efficiency were examined. The results indicated that the maximum removal efficiency of 98.6% was achieved at an initial pH of 9, a current density of 15 mA/cm2, an electrode spacing of 0.3 cm, a treatment time of 60 minutes, liquid motion by magnetic stirring (LM-MS), and a BP-S electrode arrangement. The experimental data were further modeled and optimized using a 2-level, 4-factor full factorial design, a Response Surface Methodology (RSM). The four factors optimized were current density, electrode spacing, electrode arrangement, and liquid motion driving mode (LM). Based on the regression model and the analysis of variance (ANOVA) at 0.01%, the results showed that increasing the current density and using LM-MS increased the removal efficiency, while the reverse was the case for electrode spacing. The model predicted an optimal lead removal efficiency of 99.962% at an electrode spacing of 0.38 cm, alongside the other optimized settings. Applying the predicted parameters, a lead removal efficiency of 100% was achieved. 
The electrode and energy consumptions were 0.192 kg/m3 and 2.56 kWh/m3, respectively. Meanwhile, the adsorption kinetic studies indicated that the overall lead adsorption system follows the pseudo-second-order kinetic model. The adsorption was also spontaneous and endothermic, with increased randomness at the solid–solution interface; higher process temperatures enhanced the adsorption capacity. Furthermore, the adsorption isotherm fitted the Freundlich model better than the Langmuir model, describing adsorption on a heterogeneous surface and indicating good adsorption efficiency of the Fe electrodes. Adsorption of Pb2+ onto the Fe electrodes was a complex reaction involving more than one mechanism. The overall results proved that EC is an efficient technique for lead removal from synthetic mining pond water. The findings of this study would have application in the scale-up of EC reactors and in the design of water treatment plants for feed-water sources that contain lead, using the electrocoagulation method.
Keywords: ex-mining water, electrocoagulation, lead, adsorption kinetics
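The pseudo-second-order model mentioned above is commonly fitted in its linearized form, t/q = 1/(k2·qe²) + t/qe, so that a straight-line fit of t/q against t recovers the equilibrium capacity qe from the slope and the rate constant k2 from the intercept. A minimal sketch on synthetic data (the parameter values below are hypothetical, not the study's measurements):

```python
import numpy as np

# hypothetical pseudo-second-order parameters and synthetic uptake data
k2_true, qe_true = 0.05, 12.0                 # g/(mg*min), mg/g
t = np.array([5.0, 10.0, 20.0, 30.0, 45.0, 60.0])            # minutes
q = (k2_true * qe_true**2 * t) / (1.0 + k2_true * qe_true * t)  # integrated PSO model

# linearized form: t/q = 1/(k2*qe^2) + t/qe, so slope = 1/qe
slope, intercept = np.polyfit(t, t / q, 1)
qe_fit = 1.0 / slope
k2_fit = 1.0 / (intercept * qe_fit**2)
```

On real uptake measurements, the linearity of the t/q-versus-t plot (its R²) is what justifies the pseudo-second-order classification reported in the abstract.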
Procedia PDF Downloads 149
892 Subtropical Potential Vorticity Intrusion Drives Increasing Tropospheric Ozone over the Tropical Central Pacific
Authors: Debashis Nath
Abstract:
Drawn from multiple reanalysis datasets, an increasing trend and a westward shift in the number of potential vorticity (PV) intrusion events over the Pacific are evident. The increased frequency can be linked to a long-term trend in the upper tropospheric (UT, 200 hPa) equatorial westerly wind and the subtropical jets (STJ) during boreal winter to spring. These may result from anomalous warming and cooling over the western Pacific warm pool and the tropical eastern Pacific, respectively. The intrusions bring dry, ozone-rich air of stratospheric origin deep into the tropics. In the tropical UT, interannual ozone variability is mainly related to convection associated with the El Niño/Southern Oscillation. The zonal mean stratospheric overturning circulation organizes the transport of ozone-rich air poleward and downward to the high and midlatitudes, leading to higher ozone concentrations there. In addition to these well-described mechanisms, we observe a long-term increasing trend in ozone flux over the northern hemispheric outer tropical (10–25°N) central Pacific that results from equatorward transport and downward mixing from the midlatitude UT and lower stratosphere (LS) during PV intrusions. This increase in tropospheric ozone flux over the Pacific Ocean may affect radiative processes and change the budget of atmospheric hydroxyl radicals. The results demonstrate a long-term increase in outer tropical Pacific PV intrusions linked with the strengthening of the upper tropospheric equatorial westerlies and the weakening of the STJ. Zonal variation in SST, characterized by gradual warming in the western Pacific warm pool and cooling in the central–eastern Pacific, is associated with the strengthening of the Pacific Walker circulation. In the western Pacific, enhanced convective activity leads to precipitation, and the latent heat released in the process strengthens the Pacific Walker circulation. 
This, in turn, is linked with the trend in global mean temperature, which is related to the emerging anthropogenic greenhouse signal and the negative phase of the PDO. On the other hand, the central–eastern Pacific cooling trend is linked to the weakening of the central–eastern Pacific Hadley circulation. It suppresses convective activity through sinking air motion and imports less angular momentum to the STJ, leading to a weakened STJ. More PV intrusions result from this weaker STJ on its equatorward side, significantly increasing stratosphere–troposphere exchange processes on longer timescales. This plays an important role in determining the atmospheric composition, particularly of tropospheric ozone, in the northern outer tropical central Pacific. It may lead to more ozone of stratospheric origin in the lower troposphere and even in the marine boundary layer, where it may act as a harmful pollutant and affect radiative processes by changing the global budget of atmospheric hydroxyl radicals.
Keywords: PV intrusion, westerly duct, ozone, Central Pacific
Procedia PDF Downloads 238
891 Investigating Visual Statistical Learning during Aging Using the Eye-Tracking Method
Authors: Zahra Kazemi Saleh, Bénédicte Poulin-Charronnat, Annie Vinter
Abstract:
This study examines the effects of aging on visual statistical learning, using eye-tracking techniques to investigate this cognitive phenomenon. Visual statistical learning is a fundamental brain function that enables the automatic and implicit recognition, processing, and internalization of environmental patterns over time. Previous research has suggested the robustness of this learning mechanism throughout the aging process, underscoring its importance in the context of education and rehabilitation for the elderly. The study included three distinct groups of participants: 21 young adults (Mage: 19.73), 20 young-old adults (Mage: 67.22), and 17 old-old adults (Mage: 79.34). Participants were exposed to a series of 12 arbitrary black shapes organized into 6 pairs, each with a different spatial configuration and orientation (horizontal, vertical, or oblique). These pairs were not explicitly revealed to the participants, who were instructed to passively observe 144 grids presented sequentially on the screen for a total duration of 7 min. In the subsequent test phase, participants performed a two-alternative forced-choice task in which they had to identify the more familiar pair in each of 48 trials, each consisting of a base pair and a non-base pair. Behavioral analysis using t-tests revealed notable findings. The mean score of the first group was significantly above chance, indicating the presence of visual statistical learning. Similarly, the second group also performed significantly above chance, confirming the persistence of visual statistical learning in young-old adults. Conversely, the third group, consisting of old-old adults, showed a mean score that was not significantly above chance. This lack of statistical learning in the old-old adult group suggests a decline in this cognitive ability with age. Preliminary eye-tracking results showed a decrease in the number and duration of fixations during the exposure phase for all groups. 
The main difference was that older participants fixated more often on empty cells of the grid than younger participants, likely due to a decline in the ability to ignore irrelevant information, resulting in a decrease in statistical learning performance.
Keywords: aging, eye tracking, implicit learning, visual statistical learning
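The above-chance comparison used for each group can be sketched as a one-sample t-test of 2AFC accuracy against the 50% chance level. The per-participant scores below are hypothetical illustrations, not the study's data; only the test logic follows the abstract.

```python
import numpy as np
from scipy import stats

# hypothetical per-participant proportions correct on the 48-trial 2AFC test
young   = np.array([0.71, 0.65, 0.58, 0.69, 0.62, 0.67, 0.60, 0.73])
old_old = np.array([0.52, 0.48, 0.55, 0.46, 0.51, 0.49, 0.53, 0.47])

def above_chance(scores, chance=0.5, alpha=0.05):
    """One-sample t-test of group accuracy against the 50% chance level
    of the two-alternative forced-choice task."""
    t_stat, p_value = stats.ttest_1samp(scores, chance)
    return scores.mean() > chance and p_value < alpha
```

With these placeholder scores, the young group tests significantly above chance while the old-old group does not, mirroring the pattern of results reported above.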
Procedia PDF Downloads 78
890 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines a method for developing a surrogate life cycle model based on fuzzy logic, using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS); (2) a hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering to define the membership functions; and (3) the Adaptive Neuro-Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects of using different fuzzy inference types, either Sugeno or Mamdani, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were then examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped reduce the error in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it could generate questionable solutions, e.g., negative GWP values for the solar PV system when the inputs were all at the upper end of their ranges. This shows that the applicability of ANFIS models depends strongly on the range of cases over which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS exhibited an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE. 
In the absence of data that could be used for calibration, the conventional FIS presents a knowledge-based model that could be used for prediction. In the PV case study, the conventional FIS generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in industry and policy-making sectors. While the methodology does not guarantee a more accurate result than those generated by the Life Cycle Methodology, it does provide a relatively simple way of generating knowledge- and data-based estimates that could be used during the initial design of a system.
Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
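The Sugeno-type inference discussed above can be sketched in a few lines: triangular memberships fuzzify an input, and the crisp output is the firing-strength-weighted average of constant rule consequents (zero-order Sugeno). The sketch below maps a single input (solar irradiation) to LCOE; the membership ranges and consequent values are hypothetical, and the study's actual models use three inputs.

```python
import numpy as np

def trimf(x, a, b, c):
    # triangular membership function with feet at a and c, peak at b
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def lcoe_sugeno(irradiation):
    """Zero-order Sugeno inference: weighted average of constant rule outputs.
    Membership ranges (kWh/m2/day) and LCOE consequents (USD/kWh) are
    hypothetical; valid only for inputs inside the covered range [2, 7]."""
    mu = np.array([
        trimf(irradiation, 2.0, 3.0, 4.5),   # "low" irradiation
        trimf(irradiation, 3.0, 4.5, 6.0),   # "medium"
        trimf(irradiation, 4.5, 6.0, 7.0),   # "high"
    ])
    z = np.array([0.20, 0.12, 0.08])         # rule consequents
    return float(mu @ z / mu.sum())
```

At an input of 3.0 only the "low" rule fires, so the output is that rule's consequent; intermediate inputs blend adjacent rules, which is what produces the smooth response surfaces examined in the sensitivity analysis.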
Procedia PDF Downloads 167