Search results for: requirement modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2746

436 Factors of Adoption of the International Financial Reporting Standard for Small and Medium Sized Entities

Authors: Uyanga Jadamba

Abstract:

Globalisation of the world economy has necessitated the development and implementation of a comparable and understandable reporting language suitable for use by all reporting entities. The International Accounting Standards Board (IASB) provides an international reporting language that lets all users understand the financial information of their business and potentially allows them access to finance at an international level. The study is based on logistic regression analysis to investigate the factors for the adoption of the International Financial Reporting Standard for Small and Medium-sized Entities (IFRS for SMEs). The study started with a list of 217 countries from World Bank data. Due to the lack of availability of data, the final sample consisted of 136 countries, including 60 countries that have adopted the IFRS for SMEs and 76 countries that have not adopted it yet. The study covered the period from 2010 to 2020 and obtained 1360 observations. The findings confirm that the adoption of the IFRS for SMEs is significantly related to the existence of national reporting standards, law enforcement quality, a common law legal system, and extent of disclosure. The likelihood of adoption of the IFRS for SMEs decreases if the country already has a national reporting standard for SMEs, which suggests that the implementation and transitional costs of changing reporting standards are relatively high. The results further suggest that adoption of the new standard is easier in countries with constructive law enforcement and effective application of laws. The findings also show that adoption increases if countries have a common law system, which suggests that efficient reporting regulations are more widespread in these countries. Countries with a high extent of disclosure of their financial information are more likely to adopt the standard than others.
The findings lastly show that audit quality and primary education level have no significant impact on the adoption. One possible explanation could be that accounting professionals in developing countries lacked complete knowledge of the international reporting standards even though there was a requirement to comply with them. The study contributes to the literature by identifying factors that impact the adoption of the IFRS for SMEs. It helps policymakers to better understand and apply the standard to improve the transparency of financial statements. The benefit of adopting the IFRS for SMEs is significant due to the relaxed and tailored reporting requirements for SMEs, the reduced burden on professionals to comply with the standard, and the transparent financial information provided to gain access to finance. The results of the study are useful to emerging economies where SMEs dominate the economy, in informing their evaluation of the adoption of the IFRS for SMEs.
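The study's logistic regression setup can be sketched as follows. This is a minimal gradient-ascent fit on synthetic data, not the study's dataset: the variable names, effect sizes, and sample are hypothetical stand-ins chosen so the coefficient signs mirror the reported findings.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=3000):
    """Fit a logistic regression by gradient ascent; returns (intercept, coefs)."""
    n, k = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])          # prepend intercept column
    w = np.zeros(k + 1)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))         # predicted adoption probability
        w += lr * Xb.T @ (y - p) / n              # ascent on mean log-likelihood
    return w[0], w[1:]

# Hypothetical country-level data: y = 1 if the country adopted IFRS for SMEs.
rng = np.random.default_rng(0)
n = 500
# Binary predictors: existing national standard, common law system, high disclosure.
X = rng.integers(0, 2, size=(n, 3)).astype(float)
true_w = np.array([-1.5, 1.0, 0.8])               # signs mirror the reported results
p = 1.0 / (1.0 + np.exp(-(X @ true_w)))
y = (rng.random(n) < p).astype(float)

intercept, coefs = fit_logistic(X, y)
# Expected signs: negative for an existing national standard,
# positive for common law and for high disclosure.
```

On data generated this way, the fitted coefficients recover the sign pattern the abstract reports: a national standard lowers the adoption probability, while common law and disclosure raise it.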

Keywords: IFRS for SMEs, international financial reporting standard, adoption, institutional factors

Procedia PDF Downloads 62
435 An Energy Integration Study While Utilizing Heat of Flue Gas: Sponge Iron Process

Authors: Venkata Ramanaiah, Shabina Khanam

Abstract:

Enormous potential for saving energy is available in coal-based sponge iron plants, as these are associated with a high percentage of energy wastage per unit of sponge iron production. In the present paper, an energy integration option is proposed for a coal-based sponge iron plant of 100 tonnes per day production capacity, operated in India using the SL/RN (Stelco-Lurgi/Republic Steel-National Lead) process. It consists of the rotary kiln, rotary cooler, dust settling chamber, after burning chamber, evaporating cooler, electrostatic precipitator (ESP), wet scraper and chimney as important equipment. Principles of process integration are used in the proposed option. It accounts for preheating kiln inlet streams such as kiln feed and slinger coal up to 170 °C using waste gas exiting the ESP. Further, the kiln outlet stream is cooled from 1020 °C to 110 °C using kiln air. The working areas in the plant where energy is being lost and can be conserved are identified. Detailed material and energy balances are carried out around the sponge iron plant, and a modified model is developed to find the coal requirement of the proposed option, based on hot utility, heat of reactions, kiln feed and air preheating, radiation losses, dolomite decomposition, the heat required to vaporize the coal volatiles, etc. As coal is used both as utility and as process stream, an iterative approach is used in the solution methodology to compute coal consumption. Further, water consumption, operating cost, capital investment, waste gas generation, profit, and payback period of the modification are computed. Along with these, operational aspects of the proposed design are also discussed. To recover and integrate the waste heat available in the plant, three gas-solid heat exchangers and four insulated ducts, each with one FD fan, are installed additionally. Thus, the proposed option requires a total capital investment of $0.84 million.
Preheating of the kiln feed, slinger coal and kiln air streams reduces coal consumption by 24.63%, which in turn reduces waste gas generation by 25.2% in comparison to the existing process. Moreover, a 96% reduction in water consumption is also observed, which is an added advantage of the modification. Consequently, the total profit is found to be $2.06 million/year with a payback period of only 4.97 months. The energy efficient factor (EEF), the percentage of the maximum energy that can be saved through the design, is found to be 56.7%. Results of the proposed option are also compared with the literature and found to be in good agreement.
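As a quick consistency check on the reported economics, the simple payback period follows directly from the capital investment and annual profit. With the rounded figures quoted above the result is ~4.89 months; the reported 4.97 months presumably comes from the unrounded values.

```python
capital_investment = 0.84e6   # USD, total additional equipment cost (as reported)
annual_profit = 2.06e6        # USD per year (as reported)

# Simple payback period in months: investment / annual profit * 12
payback_months = capital_investment / annual_profit * 12
# With these rounded inputs: ~4.89 months, close to the reported 4.97 months
```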

Keywords: coal consumption, energy conservation, process integration, sponge iron plant

Procedia PDF Downloads 130
434 Development of a Paediatric Head Model for the Computational Analysis of Head Impact Interactions

Authors: G. A. Khalid, M. D. Jones, R. Prabhu, A. Mason-Jones, W. Whittington, H. Bakhtiarydavijani, P. S. Theobald

Abstract:

Head injury in childhood is a common cause of death or permanent disability from injury. However, despite its frequency and significance, there is little understanding of how a child’s head responds during injurious loading. Whilst infant Post Mortem Human Subject (PMHS) experimentation is a logical approach to understanding injury biomechanics, it is the authors’ opinion that a lack of subject availability is hindering potential progress. Computer modelling adds great value when considering adult populations; however, its potential remains largely untapped for infant surrogates. The complexities of child growth and development, which result in age-dependent changes in anatomy, geometry and physical response characteristics, present new challenges for computational simulation. Further geometric challenges are presented by the intricate infant cranial bones, which are separated by sutures and fontanelles and demonstrate a visible fibre orientation. This study presents an FE model of a newborn infant’s head, developed from high-resolution computed tomography scans and informed by published tissue material properties. To mimic the fibre orientation of immature cranial bone, anisotropic properties were applied to the FE cranial bone model, with elastic moduli representing the bone response both parallel and perpendicular to the fibre orientation. Biofidelity of the computational model was confirmed by global validation against published PMHS data, replicating experimental impact tests with a series of computational simulations in terms of head kinematic responses. Numerical results confirm that the FE head model’s mechanical response is in favourable agreement with the PMHS drop test results.

Keywords: finite element analysis, impact simulation, infant head trauma, material properties, post mortem human subjects

Procedia PDF Downloads 311
433 The Impact of Public Finance Management on Economic Growth and Development in South Africa

Authors: Zintle Sikhunyana

Abstract:

Management of public finance in many countries such as South Africa is affected by political decisions and by policies around fiscal decentralization amongst the government spheres. Economic success is said to be determined by efficient management of public finance and by the policies or strategies that are implemented to support it. Policymakers pay attention to how economic policies have been implemented and how they are directed towards ensuring stable development. This allows policymakers to address economic challenges through the use of fiscal policy parameters that are linked to the achieved rate of economic growth and development. Efficient public finance management reduces the likelihood of corruption, and corruption is said to have negative effects on economic growth and development. Corruption in public finance refers to the act of using public funds for personal benefit. To achieve macroeconomic objectives, governments make use of government expenditure, and government expenditure is financed through tax revenue. The main aim of this paper is to investigate the potential impact of public finance management on economic growth and development in South Africa. Secondary data obtained from the South African Reserve Bank (SARB) and the World Bank for 1980 to 2020 have been utilized to achieve the research objectives. To test the impact of public finance management on economic growth and development, the study uses Seemingly Unrelated Regression Equations (SURE) modelling, which allows researchers to model multiple equations with interdependent variables. The advantages of using SUR are that it allows efficient estimation of relationships between variables by combining information from different equations, and that it permits tests of restrictions involving parameters in different equations. The findings show that there is a positive relationship between efficient public finance management and economic growth/development.
The findings also show that efficient public finance management has an indirect positive impact on economic growth and development. Corruption has a negative impact on economic growth and development: it results in an inefficient allocation of government resources and thereby impairs economic growth and development. The study recommends that governments that aim to stimulate economic growth and development should target and strengthen public finance management policies and strategies.
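The SUR estimator the abstract relies on can be sketched as a two-equation feasible GLS fit. This is a minimal NumPy implementation on synthetic data; the equation forms, variable names, and coefficients are hypothetical illustrations, not the study's specification.

```python
import numpy as np

def sur_fgls(X1, y1, X2, y2):
    """Feasible-GLS estimator for a two-equation SUR system (minimal sketch)."""
    n = len(y1)
    b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]        # first-stage OLS, eq. 1
    b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]        # first-stage OLS, eq. 2
    E = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
    Sigma = E.T @ E / n                                # cross-equation error covariance
    X = np.block([[X1, np.zeros_like(X2)],
                  [np.zeros_like(X1), X2]])            # block-diagonal stacked design
    y = np.concatenate([y1, y2])
    W = np.kron(np.linalg.inv(Sigma), np.eye(n))       # Omega^-1 = Sigma^-1 kron I
    b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)      # GLS on the stacked system
    return b[:X1.shape[1]], b[X1.shape[1]:]

# Hypothetical annual data: growth and development equations share correlated shocks,
# which is exactly the situation SUR exploits.
rng = np.random.default_rng(1)
n = 400
pfm = rng.normal(size=n)    # public finance management quality index (made up)
cor = rng.normal(size=n)    # corruption index (made up)
e = rng.multivariate_normal([0, 0], [[0.25, 0.15], [0.15, 0.25]], size=n)
growth = 1.0 + 0.5 * pfm - 0.3 * cor + e[:, 0]
devel = 0.8 + 0.4 * pfm + e[:, 1]
X1 = np.column_stack([np.ones(n), pfm, cor])
X2 = np.column_stack([np.ones(n), pfm])
b1, b2 = sur_fgls(X1, growth, X2, devel)
```

Because the two error terms are correlated, the stacked GLS step uses information from both equations at once, which is the efficiency gain over running two separate OLS regressions.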

Keywords: corruption, economic growth, economic development, public finance management, fiscal decentralization

Procedia PDF Downloads 183
432 An As-Is Analysis and Approach for Updating Building Information Models and Laser Scans

Authors: Rene Hellmuth

Abstract:

Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring of the factory building is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of product and production technology, as well as a VUCA world (volatility, uncertainty, complexity and ambiguity) lead to more frequent restructuring measures within a factory. A building information model (BIM) is the planning basis for rebuilding measures and becomes an indispensable data repository for reacting quickly to changes. Its use as a planning basis for restructuring measures in factories only succeeds if the BIM model has adequate data quality. Under this aspect and given the industrial requirement, three data quality factors are particularly important for this paper regarding the BIM model: up-to-dateness, completeness, and correctness. The research question is: how can a BIM model be kept up to date with the required data quality, and which visualization techniques can be applied in a short period of time on the construction site during conversion measures? An as-is analysis is made of how BIM models and digital factory models (including laser scans) are currently being kept up to date. Industrial companies are interviewed, and expert interviews are conducted. Subsequently, the results are evaluated, and a procedure is conceived for how cost-effective and time-saving updating processes can be carried out. The availability of low-cost hardware and the simplicity of the process are important to enable service personnel from facility management to keep digital factory models (BIM models and laser scans) up to date. The approach includes the detection of changes to the building, the recording of the changed area, and the insertion into the overall digital twin.
Finally, an overview of the possibilities for visualizations suitable for construction sites is compiled. An augmented reality application is created based on an updated BIM model of a factory and installed on a tablet. Conversion scenarios with costs and time expenditure are displayed. A user interface is designed in such a way that all relevant conversion information is available at a glance for the respective conversion scenario. A total of three essential research results are achieved: an as-is analysis of current update processes for BIM models and laser scans, the development of a time-saving and cost-effective update process, and the conception and implementation of an augmented reality solution for BIM models suitable for construction sites.

Keywords: building information modeling, digital factory model, factory planning, restructuring

Procedia PDF Downloads 92
431 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack

Authors: Vincent Andrew Cappellano

Abstract:

In the early phases of critical infrastructure system design, translating distributed computing requirements to an architecture carries risk given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system’s intended operations. However, architected systems may meet those availability requirements only during normal operations, and not during component failure or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage of communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this effort, a set of simulation and evaluation efforts are undertaken that will process, in an automated way, a set of sample requirements into a set of potential architectures where system functions and capabilities are distributed across nodes. Nodes and links will have specific characteristics and, based on sampled requirements, contribute to the overall system functionality, such that as they are impacted/degraded, the impacted functional availability of a system can be determined.
A reinforcement-learning-based agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other, yielding a metric rating their resilience to these attack types/strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation against the baseline requirements, with existing multi-factor computing architectural selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during times of system degradation due to failures and infrastructure attacks.
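The core scoring idea, functional availability of an architecture as nodes are degraded, can be sketched with a toy Monte Carlo model. The architecture layout, function names, and random-attack model below are hypothetical illustrations, far simpler than the proposed RL-driven attacks, but they show how an availability-versus-intensity curve for one candidate architecture could be produced.

```python
import random

# Hypothetical architecture: each function can be served by any of its host nodes.
ARCHITECTURE = {
    "ingest":  ["edge1", "edge2"],
    "compute": ["cloud1", "fog1"],
    "store":   ["cloud1", "cloud2"],
    "control": ["cloud2"],            # single-homed: the fragile function
}

def functional_availability(arch, intensity, trials=2000, seed=42):
    """Mean fraction of functions still served when `intensity` of nodes is lost."""
    rng = random.Random(seed)
    nodes = sorted({n for hosts in arch.values() for n in hosts})
    k = round(intensity * len(nodes))               # nodes destroyed per trial
    total = 0.0
    for _ in range(trials):
        down = set(rng.sample(nodes, k))            # random attack, not adversarial
        up_fns = sum(any(n not in down for n in hosts) for hosts in arch.values())
        total += up_fns / len(arch)
    return total / trials

# Availability degrades as attack intensity grows from 0 (no loss) to 1 (all nodes).
curve = [functional_availability(ARCHITECTURE, i / 5) for i in range(6)]
```

Comparing such curves across candidate architectures (and replacing the random attack with a learned adversary) is essentially the metric the abstract proposes.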

Keywords: architecture, resiliency, availability, cyber-attack

Procedia PDF Downloads 82
430 The Effects of Total Resistance Exercises Suspension Exercises Program on Physical Performance in Healthy Individuals

Authors: P. Cavlan, B. Kırmızıgil

Abstract:

Introduction: Suspension exercises make use of gravity and body weight, and are thought to develop the balance, flexibility and body stability necessary for daily life activities and sports, in addition to creating correct functional force. Suspension exercises based on body weight treat the human body as an integrated system. Total Resistance Exercises (TRX) suspension training is now used for rehabilitation purposes by physiotherapists, athletic health clinics, hospital exercise centers and chiropractic clinics. The purpose of this study is to investigate and compare the effects of TRX suspension exercises on physical performance in healthy individuals. Method: Healthy subjects were divided into two groups, a study group and a control group, of 40 individuals each, aged 20 to 45, with similar gender distributions. The study group had 2 sessions of suspension exercises per week for 8 weeks, and the control group had no exercises during this period. All the participants were given explosive strength, flexibility, strength and endurance tests before and after the 8-week period. The tests used for evaluation were, respectively, the standing long jump test and single-leg (left and right) long jump tests, the sit and reach test, and the sit-up and back extension tests. Results: In the study group, a statistically significant difference was found between prior- and final-tests in all evaluations, including explosive strength, flexibility, core strength and endurance. These values were higher than the control group’s values. The final test results were found to be statistically different between the study and control groups. The study group showed development in all values. Conclusions: In this study, which was conducted with the aim of investigating and comparing the effects of TRX suspension exercises on physical performance, the results of the prior-tests of both groups were similar.
There was no significant difference between the prior and final values in the control group. In the study group, development in explosive strength, flexibility, strength, and endurance was achieved after 8 weeks. According to these results, the TRX suspension exercise program improved explosive strength, flexibility, and especially core strength and endurance, and therefore physical performance. Based on the results of our study, physical performance, an indispensable requirement of daily life, was developed by the TRX suspension system. We conclude that TRX suspension exercises can be used to improve explosive strength and flexibility in healthy individuals, as well as to develop the muscle strength and endurance of the core region. Further specific investigations in this area could support the creation of programs that emphasize the physical performance features of TRX.

Keywords: core strength, endurance, explosive strength, flexibility, physical performance, suspension exercises

Procedia PDF Downloads 151
429 2D-Numerical Modelling of Local Scour around a Circular Pier in Steady Current

Authors: Mohamed Rajab Peer Mohamed, Thiruvenkatasamy Kannabiran

Abstract:

In the present investigation, the scour around a circular pier subjected to a steady current was studied numerically using the two-dimensional MIKE21 Flow Model (FM) and Sand Transport (ST) Module developed by the Danish Hydraulic Institute (DHI), Denmark. An unstructured flexible mesh was generated for a rectangular flume 10 m wide, 1 m deep, and 30 m long. A grain size of d50 = 0.16 mm, a sediment gradation of 1.16, a pier diameter D = 30 mm and a depth-averaged current velocity U = 0.449 m/s were considered in the model. The estimated scour depth obtained from this model was validated, and the results of the model show good agreement with flume experimental results. In order to estimate the scour depth, several simulations were made for three cases: Case I, change in the sediment transport model description in the numerical model, namely (i) the Engelund-Hansen model, (ii) the Engelund-Fredsøe model, and (iii) the Van Rijn model; Case II, change in current velocity keeping the pile diameter constant at D = 0.03 m; and Case III, change in pier diameter for a constant depth-averaged current speed U = 0.449 m/s. In Case I simulations, the results indicate that the scour depth S/D is of the order of 1.73 for the Engelund-Hansen model, 0.64 for the Engelund-Fredsøe model and 0.46 for the Van Rijn model. The scour depth estimate using the Engelund-Hansen method compares well with the experimental results. In Case II, simulations show that the scour depth increases with increasing current velocity. In Case III simulations, the results indicate that the scour depth increases with increase in pier diameter and attains a steady value when the Froude number > 2.71. All the results of the numerical simulations clearly match the reported experimental values. Hence, this MIKE21 FM Sand Transport (ST) model can be used as a suitable tool to estimate the scour depth for field applications.
Moreover, where the maximum scour depth must be predicted in order to design suitable scour protection, the Engelund-Hansen method can be adopted to estimate the scour depth in the steady current region.
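The abstract does not state which Froude number definition underlies the 2.71 threshold; a common choice in pier-scour work is the pier Froude number Fr = U / sqrt(gD). Under that assumption, a quick sketch with the baseline flume values:

```python
import math

def pier_froude(u, d, g=9.81):
    """Pier Froude number Fr = U / sqrt(g * D) (assumed definition)."""
    return u / math.sqrt(g * d)

# Baseline case from the abstract: U = 0.449 m/s, D = 0.03 m.
fr = pier_froude(0.449, 0.03)   # ~0.83 for the baseline pier
```

Larger piers at the same current speed give smaller Fr under this definition, consistent with the Case III sweep over pier diameter.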

Keywords: circular pier, MIKE21, numerical model, scour, sediment transport

Procedia PDF Downloads 294
428 Association between G2677T/A MDR1 Polymorphism with the Clinical Response to Disease Modifying Anti-Rheumatic Drugs in Rheumatoid Arthritis

Authors: Alan Ruiz-Padilla, Brando Villalobos-Villalobos, Yeniley Ruiz-Noa, Claudia Mendoza-Macías, Claudia Palafox-Sánchez, Miguel Marín-Rosales, Álvaro Cruz, Rubén Rangel-Salazar

Abstract:

Introduction: In patients with rheumatoid arthritis (RA), resistance or poor response to disease-modifying antirheumatic drugs (DMARDs) may be a reflection of an increase in P-glycoprotein (P-gp). The expression of P-gp may be important in mediating the efflux of DMARDs from the cell. In addition, P-glycoprotein is involved in the transport of the cytokines IL-1, IL-2 and IL-4 from activated normal lymphocytes to the surrounding extracellular matrix, thus influencing the activity of RA. The involvement of P-glycoprotein in the transmembrane transport of cytokines can serve as a modulator of the efficacy of DMARDs. It has been shown that the number of lymphocytes with P-glycoprotein activity is increased in patients with RA; therefore, P-glycoprotein expression could be related to the activity of RA and could be a predictor of poor response to therapy. Objective: To evaluate whether the G2677T/A MDR1 polymorphism is associated with differences in the rate of therapeutic response to disease-modifying antirheumatic agents in patients with rheumatoid arthritis. Material and Methods: A prospective cohort study was conducted. Fifty-seven patients with RA were included. They had active disease according to DAS-28 (score > 3.2). We excluded patients receiving biological agents. All the patients were followed for 6 months in order to identify the rate of therapeutic response according to the American College of Rheumatology (ACR) criteria. At baseline, peripheral blood samples were taken in order to identify the G2677T/A MDR1 polymorphism using allele-specific PCR. The fragment was identified by electrophoresis in polyacrylamide gels stained with ethidium bromide. For statistical analysis, the genotypic and allelic frequencies of the MDR1 gene polymorphism in responders and non-responders were determined. Chi-square tests, as well as relative risks with 95% confidence intervals (95% CI), were computed to identify differences in the risk of achieving therapeutic response.
Results: RA patients had a mean age of 47.33 ± 12.52 years, 87.7% were women, and the mean DAS-28 score was 6.45 ± 1.12. At 6 months, the rate of therapeutic response was 68.7%. The observed genotype frequencies were: G/G 40%, T/T 32%, A/A 19%, G/T 7% and A/A 2%. Patients with the G allele developed, at 6 months of treatment, a higher rate of therapeutic response assessed by ACR20 compared to patients with other alleles (p = 0.039). Conclusions: Patients with the G allele of the G2677T/A MDR1 polymorphism had a higher rate of therapeutic response at 6 months with DMARDs. These preliminary data support the need for a deeper evaluation of this and other genotypes as factors that may influence the therapeutic response in RA.
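The relative-risk-with-95%-CI computation named in the methods can be sketched as follows. The 2x2 counts are hypothetical, not the study's data, and the interval uses the standard log-normal approximation.

```python
import math

def relative_risk(a, b, c, d):
    """RR and 95% CI for a 2x2 table.
    exposed (e.g. G-allele carriers):  a responders, b non-responders
    unexposed (other genotypes):       c responders, d non-responders
    CI via the log-normal approximation."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    return rr, (rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se))

# Hypothetical counts for illustration only (not the study's data):
# carriers: 30 responders / 10 non-responders; non-carriers: 9 / 8.
rr, ci = relative_risk(30, 10, 9, 8)
```

With these made-up counts the point estimate exceeds 1 (carriers respond more often), but the CI spans 1, illustrating why the study also reports a chi-square p-value alongside the RR.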

Keywords: pharmacogenetics, MDR1, P-glycoprotein, therapeutic response, rheumatoid arthritis

Procedia PDF Downloads 190
427 Electrodeposition of Silicon Nanoparticles Using Ionic Liquid for Energy Storage Application

Authors: Anjali Vanpariya, Priyanka Marathey, Sakshum Khanna, Roma Patel, Indrajit Mukhopadhyay

Abstract:

Silicon (Si) is a promising negative electrode material for lithium-ion batteries (LiBs) due to its low cost, non-toxicity, and high theoretical capacity of 4200 mAh g⁻¹. The primary challenge for the application of Si-based LiBs is the large volume expansion (~300%) during the charge-discharge process. Incorporation of graphene or carbon nanotubes (CNTs), morphological control, and the use of nanoparticles have been employed as strategies to tackle the volume expansion issue. Molten salt methods can also help, but their high-temperature requirement limits application. For a sustainable and practical approach, room temperature (RT) based methods are essential. The use of ionic liquids (ILs) for the electrodeposition of Si nanostructures can resolve the temperature issue while providing a greener medium. In this work, electrodeposition of Si nanoparticles on a gold substrate was successfully carried out in an IL medium, 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (BMImTf₂N), at room temperature. Cyclic voltammetry (CV) suggests the sequential reduction of Si⁴⁺ to Si²⁺ and then to Si nanoparticles (SiNs). The structure and morphology of the electrodeposited SiNs were investigated by FE-SEM, which showed interconnected Si nanoparticles with an average particle size of ~100-200 nm. XRD and XPS data confirm the deposition of Si on Au (111). The first discharge and charge capacities of the Si anode material were found to be 1857 and 422 mAh g⁻¹, respectively, at a current density of 7.8 A g⁻¹. The irreversible capacity of the first discharge-charge process can be attributed to solid electrolyte interface (SEI) formation via electrolyte decomposition, and to Li⁺ trapped in the inner pores of Si. Pulverization of SiNs results in the creation of new active sites, which facilitates the formation of new SEI in the subsequent cycles, leading to fading of the specific capacity.
After 20 cycles, the charge-discharge profiles stabilized, and a reversible capacity of 150 mAh g⁻¹ was retained. Electrochemical impedance spectroscopy (EIS) data show a decrease in the Rct value from 94.7 to 47.6 kΩ after 50 cycles of charge-discharge, which demonstrates the improvement of the interfacial charge transfer kinetics. The decrease in the Warburg impedance after 50 cycles of charge-discharge measurements indicates facile diffusion in the fragmented, smaller Si nanoparticles. In summary, Si nanoparticles were deposited on a gold substrate using an IL medium and characterized with different analytical techniques. The synthesized material was successfully utilized for the LiB application, which is well supported by the CV and EIS data.
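The 4200 mAh g⁻¹ theoretical capacity quoted above follows from Faraday's law, assuming full lithiation to the Li₄.₄Si phase (4.4 electrons per Si atom). A quick check:

```python
F = 96485.0     # Faraday constant, C/mol
M_SI = 28.0855  # molar mass of Si, g/mol
N_E = 4.4       # electrons per Si atom for the fully lithiated Li4.4Si phase

# Specific capacity in mAh/g: charge per gram (n*F/M, in C/g) divided by 3.6 C/mAh
capacity = N_E * F / (3.6 * M_SI)   # ~4199 mAh/g, i.e. the quoted ~4200 mAh/g
```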

Keywords: silicon nanoparticles, ionic liquid, electrodeposition, cyclic voltammetry, Li-ion battery

Procedia PDF Downloads 114
426 MCD-017: Potential Candidate from the Class of Nitroimidazoles to Treat Tuberculosis

Authors: Gurleen Kour, Mowkshi Khullar, B. K. Chandan, Parvinder Pal Singh, Kushalava Reddy Yumpalla, Gurunadham Munagala, Ram A. Vishwakarma, Zabeer Ahmed

Abstract:

New chemotherapeutic compounds against multidrug-resistant Mycobacterium tuberculosis (Mtb) are urgently needed to combat drug resistance in tuberculosis (TB). Apart from in-vitro potency against the target, physicochemical and pharmacokinetic properties play an imperative role in the process of drug discovery. We have identified novel nitroimidazole derivatives with potential activity against Mycobacterium tuberculosis. One lead candidate, MCD-017, showed potent activity against the H37Rv strain (MIC = 0.5 µg/ml) and was further evaluated in the process of drug development. Methods: Basic physicochemical parameters like solubility and lipophilicity (Log P) were evaluated. Thermodynamic solubility was determined in PBS buffer (pH 7.4) using LC/MS-MS. The partition coefficient (Log P) of the compound was determined between octanol and phosphate buffered saline (PBS at pH 7.4) at 25°C by the microscale shake flask method. The compound followed Lipinski’s rule of five, which is predictive of good oral bioavailability, and was further evaluated for metabolic stability. In-vitro metabolic stability was determined in rat liver microsomes. The hepatotoxicity of the compound was also determined in the HepG2 cell line. The in vivo pharmacokinetic profile of the compound after oral dosing was also obtained using balb/c mice. Results: The compound exhibited favorable solubility and lipophilicity. The physical and chemical properties of the compound were used as a first determination of drug-like properties. The compound obeyed Lipinski’s rule of five, with molecular weight < 500, number of hydrogen bond donors (HBD) < 5 and number of hydrogen bond acceptors (HBA) no more than 10. The Log P of the compound was less than 5, and the compound is therefore predicted to exhibit good absorption and permeation. Pooled rat liver microsomes were prepared from rat liver homogenate for measuring metabolic stability.
99% of the compound was not metabolized and remained intact. The compound did not exhibit cytotoxicity in HepG2 cells up to 40 µg/ml. The compound revealed a good pharmacokinetic profile at a dose of 5 mg/kg administered orally, with a half-life (t1/2) of 1.15 hours, Cmax of 642 ng/ml, clearance of 4.84 ml/min/kg and a volume of distribution of 8.05 l/kg. Conclusion: The emergence of multidrug-resistant (MDR) and extensively drug-resistant (XDR) tuberculosis emphasizes the need for novel drugs active against tuberculosis. Evaluating physicochemical and pharmacokinetic properties in the early stages of drug discovery is thus required to reduce the attrition associated with poor drug exposure. In summary, it can be concluded that MCD-017 may be considered a good candidate for further preclinical and clinical evaluations.
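The reported half-life determines the elimination rate constant through the standard one-compartment relation t1/2 = ln(2) / k_el; a quick sketch using the abstract's value (the 4-hour time point below is an arbitrary illustration, not from the study):

```python
import math

T_HALF_H = 1.15                 # reported oral half-life of MCD-017, hours

# One-compartment first-order elimination: t1/2 = ln(2) / k_el
k_el = math.log(2) / T_HALF_H   # elimination rate constant, 1/h (~0.60/h)

# Fraction of drug remaining at an arbitrary time point, e.g. 4 h post-dose
remaining_4h = math.exp(-k_el * 4)   # ~9% remaining after 4 hours
```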

Keywords: Mycobacterium tuberculosis, pharmacokinetics, physicochemical properties, hepatotoxicity

Procedia PDF Downloads 441
425 Calcium Biochemical Indicators in a Group of Schoolchildren with Low Socioeconomic Status from Barranquilla, Colombia

Authors: Carmiña L. Vargas-Zapata, María A. Conde-Sarmiento, Maria Consuelo Maestre-Vargas

Abstract:

Calcium is an essential element for good growth and development of the organism, and its requirement is increased at school age. Low socioeconomic populations of developing countries such as Colombia may have a dietary deficiency of this mineral among schoolchildren, which could be reflected in calcium biochemical indicators, bone alterations and anthropometric indicators. The objective of this investigation was to evaluate some calcium biochemical indicators in a group of schoolchildren of low socioeconomic level from the city of Barranquilla and to correlate them with body mass index. Sixty schoolchildren aged 7 to 15 years were selected from the Jesus’s Heart Educational Institution in Barranquilla, Atlántico; all were apparently healthy, without infectious or gastrointestinal diseases, without habits of drinking alcohol, smoking or using hallucinogenic substances, and without calcium supplementation in the last six months or any other substance that compromises bone metabolism. The research was approved by the ethics committee at Universidad del Atlántico. The selected children were invited to donate a blood and a urine sample after a 12-hour fast; the serum was separated by centrifugation and frozen at -20 °C until analyzed, and the same was done with the urine sample. On the day of the biological collections, the weight and height of the students were measured to determine nutritional status by BMI using the WHO tables. Calcium concentrations in serum and urine (SCa, UCa), total and bone-specific alkaline phosphatase activity (SAPT, SBAP) and urinary creatinine (UCr) were determined by spectrophotometric methods using commercial kits. Osteocalcin and cross-linked N-telopeptides of type I collagen (NTx-1) in serum were measured with an enzyme-linked immunosorbent assay. For statistical analysis, the Statgraphics Centurion XVII software was used. 63% (n = 38) and 37% (n = 22) of the participants were male and female, respectively.
78% (n = 47), 5% (n = 3) and 17% (n = 10) had normal, malnourished and high nutritional status, respectively. The mean levels of the evaluated indicators were (mean ± SD): 9.50 ± 1.06 mg/dL for SCa; 181.3 ± 64.3 U/L for SAPT; 143.8 ± 73.9 U/L for SBAP; 9.0 ± 3.48 ng/mL for osteocalcin and 101.3 ± 12.8 ng/mL for NTx-1. The UCa level was 12.8 ± 7.7 mg/dL, which adjusted for creatinine ranged from 0.005 to 0.395 mg/mg. Considering serum calcium values, approximately 7% of the schoolchildren were hypocalcemic, 16% hypercalcemic and 77% normocalcemic. The indicators evaluated did not correlate with BMI. Low values were observed for urinary calcium excretion and high values for NTx-1, suggesting that mechanisms such as increased renal retention of calcium and increased bone remodeling may be contributing to calcium homeostasis.
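The creatinine adjustment and the calcemia classification described above amount to simple arithmetic; a minimal sketch, in which the 8.5-10.5 mg/dL reference interval is an assumed, commonly used cutoff (the abstract does not state the one actually applied):

```python
def urinary_ca_cr_ratio(urine_ca_mg_dl, urine_cr_mg_dl):
    """Creatinine-adjusted urinary calcium (mg Ca per mg creatinine)."""
    return urine_ca_mg_dl / urine_cr_mg_dl

def classify_serum_ca(sca_mg_dl, low=8.5, high=10.5):
    """Assumed reference interval (mg/dL); the study's cutoffs are not stated."""
    if sca_mg_dl < low:
        return "hypocalcemic"
    if sca_mg_dl > high:
        return "hypercalcemic"
    return "normocalcemic"

print(classify_serum_ca(9.50))           # normocalcemic (the group mean SCa)
print(urinary_ca_cr_ratio(10.0, 100.0))  # 0.1
```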

Keywords: calcium, calcium biochemical indicators, school children, low socioeconomic status

Procedia PDF Downloads 97
424 Effective Wind-Induced Natural Ventilation in a Residential Apartment Typology

Authors: Tanvi P. Medshinge, Prasad Vaidya, Monisha E. Royan

Abstract:

In India, cooling loads in the residential sector are a major contributor to total energy consumption. Due to the increasing cooling need, the market penetration of air conditioners is expected to rise further. Natural ventilation (NV), however, possesses great potential to save significant energy consumption, especially for residential buildings in moderate climates. As multifamily residential apartment buildings are designed by repetitive use of prototype designs, deriving NV-based design solutions for individual prototypes across combinations of wind incidence angles and orientations would provide a significant opportunity to address the rise in cooling loads in the residential sector. This paper presents the results of the NV performance of a selected prototype apartment design with a cluster of four units in Pune, India, and an attempt to improve the NV performance through design modifications. The water table apparatus, a physical modelling tool, is used to study the flow patterns and simulate wind-induced NV performance. NV performance is quantified by post-processing images captured from video recordings in terms of the percentage of area with good and poor access to ventilation. NV performance of the existing design for eight wind incidence angles showed that, of the cluster of four units, the windward units had good access to ventilation in all rooms, while the leeward units had lower access, with the bedrooms in the leeward units having the least access. After the design modifications, the results showed improved performance in all the units for all wind incidence angles, to more than 80% good access to ventilation; some units showed a further improvement to more than 90%. This process of design and performance evaluation improved some individual units from 0% to 100% good access to ventilation. The results demonstrate the ease of use and the power of the water table apparatus for performance-based design to simulate wind-induced NV.
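The quantification step, converting post-processed flow images into a percentage of floor area with good access to ventilation, can be sketched as a count over a classified grid; the grid values and function name here are illustrative, not the study's actual image-processing pipeline:

```python
def ventilation_coverage(mask):
    """Percentage of plan cells with good access to ventilation,
    given a 2D grid of booleans from image post-processing."""
    cells = [cell for row in mask for cell in row]
    return 100.0 * sum(cells) / len(cells)

# Hypothetical 2 x 4 classified floor plan: True = good access
plan = [[True, True, False, True],
        [True, True, True, True]]
print(ventilation_coverage(plan))  # 87.5
```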

Keywords: fluid dynamics, prototype design, natural ventilation, simulations, water table apparatus, wind incidence angles

Procedia PDF Downloads 212
423 The Significance of Islamic Concept of Good Faith to Cure Flaws in Public International Law

Authors: M. A. H. Barry

Abstract:

The concepts of good faith (husn al-niyyah) and fair dealing (Nadl) are the fundamental guiding elements in all contracts and other agreements under Islamic law. The teachings of the Al-Quran and of Prophet Muhammad (Peace Be upon Him) firmly command people to act in good faith in all dealings. There are several Quranic verses and sayings of the Prophet that stress the significance of dealing honestly and fairly in all transactions. Under English law, good faith is not considered a fundamental requirement for the formation of a legal contract. However, the concept of good faith in private contracts is recognized by the civil law system and in Article 7(1) of the Convention on Contracts for the International Sale of Goods (CISG, Vienna Convention 1980). It took several centuries for the international trading community to recognize the significance of the concept of good faith for international sale of goods transactions. Nevertheless, the recognition of good faith in civil law is confined to commercial contracts. Subsequent to the CISG, the concept has made inroads into private international law. There are submissions in favour of applying the good faith concept to public international law based on tacit recognition by international conventions and international tribunals. However, under public international law the concept of good faith is not recognized as a source of rights or obligations. This weakens the spirit of the good faith concept, particularly when determining international disputes. It also creates a fundamental flaw, because the absence of good faith application means that breaches tainted by bad faith are tolerated. The objective of this research is to evaluate, examine and analyze the application of the concept of good faith in modern laws, identify its limitations, and compare it with the Islamic concept of good faith.
This paper also identifies the problems and issues connected with the non-application of this concept to public international law. The research consists of three key components: (1) the preliminary inquiry, (2) subject analysis and discovery of research results, and (3) examination of the challenging problems, concluding with proposals. The preliminary inquiry is based on both primary and secondary sources, and the same sources are used for the subject analysis. The research also has both inductive and deductive features. The Islamic concept of good faith covers all situations and circumstances where bad faith causes unfairness to the affected parties, especially weak parties. Under Islamic law, the concept of good faith is a source of rights and obligations, as Islam prohibits any person from committing wrongful or delinquent acts in any dealing, whether in private or public life. This rule is applicable not only to individuals but also to institutions, states, and international organizations. This paper explains how unfairness is caused by the non-recognition of the good faith concept as a source of rights or obligations under public international law and provides legal and non-legal reasons to show why the Islamic formulation is important.

Keywords: good faith, the civil law system, the Islamic concept, public international law

Procedia PDF Downloads 122
422 Assessment of Water Reuse Potential in a Metal Finishing Factory

Authors: Efe Gumuslu, Guclu Insel, Gülten Yuksek, Nilay Sayi Ucar, Emine Ubay Cokgor, Tuğba Olmez Hanci, Didem Okutman Tas, Fatoş Germirli Babuna, Derya Firat Ertem, Ökmen Yildirim, Özge Erturan, Betül Kirci

Abstract:

Although water reclamation and reuse are inseparable parts of the sustainable production concept all around the world, current levels of reuse constitute only a small fraction of the total volume of industrial effluents. Nowadays, within the perspective of serious climate change, wastewater reclamation and reuse practices should be considered a requirement. The industrial sector is one of the largest users of water resources. The OECD Environmental Outlook to 2050 predicts that global water demand for manufacturing will increase by 400% from 2000 to 2050, which is much larger than for any other sector. The metal finishing industry is one of the industries that requires large amounts of water during manufacturing. Therefore, actions regarding the improvement of wastewater treatment and reuse should be undertaken on both economic and environmental sustainability grounds. Process wastewater can be reused for more purposes if appropriate treatment systems are installed to treat the wastewater to the required quality level. Recent studies have shown that membrane separation techniques may help in attaining a water quality suitable for recycling back to the process. The metal finishing factory where this study was conducted is one of the biggest white-goods manufacturers in Turkey. The sheet metal parts used in cooker production are exposed to surface pre-treatment processes composed of degreasing, rinsing, nanoceramic coating and deionized rinsing, consecutively. The wastewater-generating processes in the factory are the enamel coating, painting and styrofoam processes. In the factory, the main source of water is well water. While part of the well water is used directly in the processes after passing through resin treatment, another portion is directed to reverse osmosis treatment to obtain the water quality required for the enamel coating and painting processes.
In addition to these sources, another important potential water source is rainwater (3660 tons/year). In this study, process profiles as well as pollution profiles were assessed by a detailed quantitative and qualitative characterization of the wastewater sources generated in the factory. Based on the preliminary results, the main wastewater sources that can be considered for reuse in the processes were determined to be the painting and styrofoam processes.

Keywords: enamel coating, painting, reuse, wastewater

Procedia PDF Downloads 360
421 Development of a Novel Clinical Screening Tool, Using the BSGE Pain Questionnaire, Clinical Examination and Ultrasound to Predict the Severity of Endometriosis Prior to Laparoscopic Surgery

Authors: Marlin Mubarak

Abstract:

Background: Endometriosis is a complex, disabling disease mainly affecting women of reproductive age. The aim of this project is to generate a diagnostic model to predict the severity and stage of endometriosis prior to laparoscopic surgery. This will help to improve the pre-operative diagnostic accuracy of stage 3 and 4 endometriosis and, as a result, allow the referral of relevant women to a specialist centre for complex laparoscopic surgery. The model is based on the British Society for Gynaecological Endoscopy (BSGE) pain questionnaire, clinical examination and ultrasound scan. Design: This is a prospective, observational study in which women completed the BSGE pain questionnaire, a BSGE requirement. As part of the routine pre-operative assessment, patients also had a routine ultrasound scan, and an MRI was performed when recto-vaginal or deep infiltrating endometriosis was suspected. Setting: Luton & Dunstable University Hospital. Patients: Symptomatic women (n = 56) scheduled for laparoscopy due to pelvic pain. The age ranged between 17 and 52 years (mean 33.8 years, SD 8.7 years). Interventions: None outside the recognised and established endometriosis centre protocol set up by the BSGE. Main Outcome Measure(s): Sensitivity and specificity of the endometriosis diagnosis predicted by symptoms based on the BSGE pain questionnaire, clinical examinations and imaging. Findings: The prevalence of diagnosed endometriosis was calculated to be 76.8% and the prevalence of advanced stage was 55.4%. Deep infiltrating endometriosis (DIE) in various locations was diagnosed in 32/56 women (57.1%), some with DIE involving several locations. Logistic regression analysis was performed on 36 clinical variables to create a simple clinical prediction model. After creating the scoring system using variables with P < 0.05, the model was applied to the whole dataset. The sensitivity was 83.87% and the specificity 96%.
The positive likelihood ratio was 20.97 and the negative likelihood ratio was 0.17, indicating that the model has good predictive value and could be useful in predicting advanced-stage endometriosis. Conclusions: This is a hypothesis-generating project with one operator, and future research would provide validation of the model and establish its usefulness in the general setting. Predictive tools based on such a model could help organise the appropriate investigations in clinical practice, reduce the risks associated with surgery and improve outcomes. The model could be of value for future research to standardise the assessment of women presenting with pelvic pain, and it needs further testing in a general setting to assess whether the initial results are reproducible.
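The reported likelihood ratios follow directly from the stated sensitivity and specificity via the standard definitions LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity, which a short calculation reproduces:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a binary diagnostic test."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# The abstract's figures: sensitivity 83.87%, specificity 96%
lr_pos, lr_neg = likelihood_ratios(0.8387, 0.96)
print(round(lr_pos, 2), round(lr_neg, 2))  # 20.97 0.17
```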

Keywords: deep endometriosis, endometriosis, minimally invasive, MRI, ultrasound

Procedia PDF Downloads 335
420 Initial Palaeotsunami and Historical Tsunami in the Makran Subduction Zone of the Northwest Indian Ocean

Authors: Mohammad Mokhtari, Mehdi Masoodi, Parvaneh Faridi

Abstract:

The history of tsunami-generating earthquakes along the Makran Subduction Zone (MSZ) provides evidence of a potential tsunami hazard for the whole coastal area. In comparison with other subduction zones in the world, the Makran region of southern Pakistan and southeastern Iran shows low seismicity, and it is one of the least studied areas in the northwest Indian Ocean with regard to tsunami studies. We present a review of studies dealing with historical and ongoing palaeotsunami work supported by the IGCP of UNESCO in the Makran Subduction Zone. The historical record presented here includes about nine tsunamis in the Makran Subduction Zone, of which more than seven occurred in the eastern Makran. Tsunamis are not as common in the western Makran as in the eastern Makran, where a database of historical events exists. The best-documented historical event is the 1945 earthquake, with a moment magnitude of 8.1, and its tsunami in the western and eastern Makran. There are no details as to whether a tsunami was generated by a seismic event before 1945 off the western Makran, but several potentially large tsunamigenic events occurred in the MSZ before 1945, in 325 B.C., 1008, 1483, 1524, 1765, 1851, 1864, and 1897. We will present new findings from a historical point of view, while emphasizing that the area needs greater research investigation. A palaeotsunami (geological evidence) study is now being planned, and we will present the results of its first phase. From a risk point of view, the study shows as a preliminary result that within 20 minutes the wave reaches the Iranian, Pakistani and Omani coastal zones, with highly destructive tsunami waves capable of causing widespread inundation. It is important to note that the coastal areas of all states surrounding the MSZ are being developed very rapidly, so any event would have a devastating effect on this region.
Although several papers have been published on modelling, seismology and tsunami deposits in recent decades, the Makran remains a forgotten subduction zone, and more data, such as the main crustal structure, fault locations and their related parameters, are required.
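The roughly 20-minute arrival times cited above are consistent with the standard shallow-water approximation for tsunami phase speed, c = sqrt(g * h); a minimal sketch with illustrative depth and distance values (not the study's actual bathymetry or source-to-coast distances):

```python
import math

def tsunami_speed_ms(depth_m):
    """Shallow-water phase speed c = sqrt(g * h), in m/s."""
    return math.sqrt(9.81 * depth_m)

def travel_time_min(distance_km, depth_m):
    """Travel time in minutes across distance_km at a constant depth."""
    return distance_km * 1000.0 / tsunami_speed_ms(depth_m) / 60.0

# Illustrative: ~200 km of deep water (3000 m) gives roughly a 19-minute arrival
print(round(travel_time_min(200.0, 3000.0), 1))  # 19.4
```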

Keywords: historical tsunami, Indian Ocean, Makran Subduction Zone, palaeotsunami

Procedia PDF Downloads 109
419 Flow-Induced Vibration Marine Current Energy Harvesting Using a Symmetrical Balanced Pair of Pivoted Cylinders

Authors: Brad Stappenbelt

Abstract:

The phenomenon of vortex-induced vibration (VIV) for elastically restrained cylindrical structures in cross-flows is relatively well investigated. The utility of this mechanism in harvesting energy from marine current and tidal flows is, however, arguably still in its infancy. With relatively few moving components, a flow-induced vibration-based energy conversion device augurs low complexity compared to the commonly employed turbine design. Despite the interest in this concept, a practical device has yet to emerge. For optimal system performance, it is desirable to design for a very low mass or mass moment of inertia ratio. The device operating range, in particular, is maximized below the vortex-induced vibration critical point, where an infinite resonant response region is realized. An unfortunate consequence of this requirement is large buoyancy forces that need to be mitigated by gravity-based, suction-caisson or anchor mooring systems. The focus of this paper is the testing of a novel VIV marine current energy harvesting configuration that utilizes a symmetrical and balanced pair of horizontal pivoted cylinders. The results of several years of experimental investigation, utilizing the University of Wollongong fluid mechanics laboratory towing tank, are analyzed and presented. A reduced velocity test range of 0 to 60 was covered across a large array of device configurations. In particular, power take-off damping ratios spanning from 0.044 to critical damping were examined in order to determine the optimal conditions and hence the maximum device energy conversion efficiency. The experiments conducted revealed acceptable energy conversion efficiencies of around 16% and desirable low flow-speed operating ranges when compared to traditional turbine technology.
The potentially out-of-phase spanwise VIV cells on each arm of the device synchronized naturally, as no decrease in amplitude response was observed and the energy conversion efficiencies were comparable to those of the single-cylinder arrangement. In addition to the spatial design benefits related to the horizontal device orientation, the main advantage demonstrated by the present symmetrical horizontal configuration is that it allows large-velocity-range resonant response conditions without excessive buoyancy. The novel configuration proposed shows clear promise in overcoming many of the practical implementation issues related to flow-induced vibration marine current energy harvesting.
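The reduced velocity used as the test variable above is conventionally defined as U* = U / (f_n * D), with U the flow speed, f_n the system natural frequency and D the cylinder diameter; a minimal sketch with illustrative values (the abstract does not report the device's actual dimensions):

```python
def reduced_velocity(flow_speed_ms, natural_freq_hz, diameter_m):
    """Reduced velocity U* = U / (f_n * D), the standard VIV parameter."""
    return flow_speed_ms / (natural_freq_hz * diameter_m)

# Illustrative: a 1 m/s current over a 0.1 m cylinder with f_n = 0.5 Hz
# lands near the middle of the 0-60 reduced velocity test range
print(reduced_velocity(1.0, 0.5, 0.1))  # ~20
```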

Keywords: flow-induced vibration, vortex-induced vibration, energy harvesting, tidal energy

Procedia PDF Downloads 133
418 Production of Bricks Using Mill Waste and Tyre Crumbs at a Low Temperature by Alkali-Activation

Authors: Zipeng Zhang, Yat C. Wong, Arul Arulrajah

Abstract:

Since automobiles became widely popular around the early 20th century, end-of-life tyres have been one of the major types of waste humans encounter: every minute, considerable quantities of tyres are disposed of around the world. Most end-of-life tyres are simply landfilled or stockpiled rather than recycled. To address the potential issues caused by tyre waste, incorporating it into construction materials is one possibility. This research investigated the viability of manufacturing bricks from mill waste and tyre crumbs by alkali-activation at a relatively low temperature. The mill waste was extracted from a brick factory located in Melbourne, Australia, and the tyre crumbs were supplied by a local recycling company. As the main precursor, the mill waste was activated by an alkaline solution composed of sodium hydroxide (8 M) and liquid sodium silicate. The addition ratio of alkaline solution (relative to the solid weight) and the weight ratio between sodium hydroxide and sodium silicate were fixed at 20 wt.% and 1:1, respectively. Tyre crumbs were introduced to substitute part of the mill waste at four ratios by weight, namely 0, 5, 10 and 15%. The mixture of mill waste and tyre crumbs was first dry-mixed for 2 min to ensure homogeneity, followed by 2.5 min of wet mixing after adding the solution. The resulting mixture was press-moulded into blocks 109 mm in length, 112.5 mm in width and 76 mm in height. The blocks were cured at 50 °C and 95% relative humidity for 2 days, followed by oven curing at 110 °C for 1 day. All the samples were then kept under ambient conditions until testing at the ages of 7 and 28 days. A series of tests was conducted to evaluate the linear shrinkage, compressive strength and water absorption of the samples. In addition, the microstructure of the samples was examined via scanning electron microscopy (SEM).
The results showed that the highest compressive strength was 17.6 MPa, found in the 28-day-old group using 5 wt.% tyre crumbs. This strength satisfies the requirement of ASTM C67. However, increasing the addition of tyre crumbs weakened the compressive strength of the samples. Apart from strength, the linear shrinkage and water absorption of all the groups met the requirements of the standard. It is worth noting that the use of tyre crumbs tended to decrease the shrinkage and even caused expansion when the tyre content reached 15 wt.%. The research also found a significant reduction in compressive strength for the samples after the water absorption tests. In conclusion, tyre crumbs have the potential to be used as a filler material in brick manufacturing, but more research needs to be done to tackle the durability problem.
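The stated mix proportions (20 wt.% alkaline solution relative to the solid, a 1:1 NaOH-to-silicate split, and tyre crumbs substituting part of the mill waste) translate into batch masses as follows; the function and the 1000 g batch size are illustrative, not the authors' actual batching procedure:

```python
def brick_batch(total_solid_g, tyre_pct):
    """Batch masses for the stated proportions: tyre crumbs replace
    tyre_pct wt.% of the solid; alkaline solution is 20 wt.% of the
    solid, split 1:1 between NaOH (8 M) and liquid sodium silicate."""
    tyre = total_solid_g * tyre_pct / 100
    solution = total_solid_g * 20 / 100
    return {"mill_waste_g": total_solid_g - tyre, "tyre_g": tyre,
            "naoh_g": solution / 2, "silicate_g": solution / 2}

print(brick_batch(1000, 5))
# {'mill_waste_g': 950.0, 'tyre_g': 50.0, 'naoh_g': 100.0, 'silicate_g': 100.0}
```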

Keywords: bricks, mill waste, tyre crumbs, waste recycling

Procedia PDF Downloads 107
417 Hybrid Nanostructures of Acrylonitrile Copolymers

Authors: A. Sezai Sarac

Abstract:

Acrylonitrile (AN) copolymers with typical comonomers such as vinyl acetate (VAc) or methyl acrylate (MA) exhibit better mechanical behavior than the homopolymer. To increase the processability of the conjugated polymer and to obtain a hybrid nanostructure, multi-step emulsion polymerization was applied. Such products could be used in, e.g., drug-delivery systems, biosensors, gas sensors and electronic components. Incorporation of a number of flexible comonomers weakens the dipolar interactions among the nitrile (CN) groups and thereby decreases the melting point or increases the decomposition temperature of PAN-based copolymers. Hence, it is important to consider the effect of the comonomer on the properties of PAN-based copolymers. Acrylonitrile-vinyl acetate (AN-VAc) copolymers show significantly different thermal behavior and are also of interest as precursors in the production of high-strength carbon fibers. AN is copolymerized with one or two comonomers, particularly with vinyl acetate. The copolymer of AN and VAc can be used either as a plastic (VAc > 15 wt %) or as microfibers (VAc < 15 wt %). AN provides the copolymer with good processability and electrochemical and thermal stability; VAc provides mechanical stability. The free radical copolymerization of AN and VAc, core-shell structures of polypyrrole composites, and nanofibers of poly(m-anthranilic acid)/polyacrylonitrile blends were recently studied. Free radical copolymerization of acrylonitrile (AN) with different comonomers, i.e., acrylates and styrene, was carried out using ammonium persulfate (APS) in the presence of a surfactant, and in-situ polymerization of conjugated polymers was performed in this reaction medium to obtain core-shell nanoparticles. Nanofibers of such nanoparticles were obtained by electrospinning. The morphological properties of the nanofibers were investigated by scanning electron microscopy (SEM) and atomic force microscopy (AFM).
The nanofibers were characterized using Fourier transform infrared-attenuated total reflectance spectrometry (FTIR-ATR), nuclear magnetic resonance spectroscopy (1H-NMR), differential scanning calorimetry (DSC), thermal gravimetric analysis (TGA), and electrochemical impedance spectroscopy. The electrochemical impedance results of the nanofibers were fitted to an equivalent circuit model (ECM).
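Fitting impedance spectra to an equivalent circuit model means minimizing the mismatch between measured and model impedance; as an illustration only (the abstract does not specify the circuit used), the complex impedance of a simple Randles-type circuit, a series resistance plus a parallel charge-transfer resistance and double-layer capacitance, can be computed as:

```python
import math

def randles_impedance(freq_hz, r_s, r_ct, c_dl):
    """Complex impedance of R_s in series with (R_ct parallel to C_dl)."""
    omega = 2 * math.pi * freq_hz
    return r_s + r_ct / (1 + 1j * omega * r_ct * c_dl)

# At low frequency the capacitor blocks: |Z| -> R_s + R_ct; at high
# frequency it shorts out R_ct: |Z| -> R_s (illustrative parameter values)
print(abs(randles_impedance(1e-6, 10.0, 100.0, 1e-6)))  # ~110
print(abs(randles_impedance(1e9, 10.0, 100.0, 1e-6)))   # ~10
```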

Keywords: core shell nanoparticles, nanofibers, acrylonitrile copolymers, hybrid nanostructures

Procedia PDF Downloads 369
416 Five Years Analysis and Mitigation Plans on Adjustment Orders Impacts on Projects in Kuwait's Oil and Gas Sector

Authors: Rawan K. Al-Duaij, Salem A. Al-Salem

Abstract:

Projects, the unique and temporary processes of achieving a set of requirements, have always been challenging; planning the schedule and budget and managing the resources and risks are mostly driven by similar past experience or the technical consultation of experts in the matter. Given that complexity of projects in scope, time and execution environment, Adjustment Orders are tools to reflect changes to the original project parameters after contract signature. Adjustment Orders are the official/legal amendments to the terms and conditions of a live contract. Reasons for issuing Adjustment Orders arise from changes in contract scope, technical requirements and specifications, resulting in scope addition, deletion or alteration. They can also combine several of these parameters, resulting in an increase or decrease in time and/or cost. Most business leaders (handling projects in the interest of the owner) refrain from using Adjustment Orders, considering their main objectives of staying within budget and on schedule. Success in managing the changes results in uninterrupted execution and agreed project costs as well as schedule; nevertheless, this is not always practically achievable. In this paper, a detailed study utilizing industrial engineering and systems management tools such as Six Sigma, data analysis and quality control was carried out on the organization's five years of records of issued Adjustment Orders in order to investigate their prevalence and their time and cost impacts. The analysis outcome revealed and helped to categorize the predominant causes with the highest impacts, which were given the most weight in recommending corrective measures to reach the objective of minimizing the Adjustment Orders' impacts. Data analysis demonstrated no specific trend in the AO frequency over the past five years; however, the time impact is greater than the cost impact.
Although Adjustment Orders might never be avoidable, this analysis offers some insight into the procedural gaps and where they most affect the organization. Possible solutions are proposed, such as improving the project handling team's coordination and communication, utilizing a blanket service contract, and modifying the project gate-system procedures to minimize the possibility of similar struggles in the future. Projects in the oil and gas sector are always evolving and demand a certain amount of flexibility to sustain the goals of the field. As demonstrated, the uncertainty of project parameters, inadequate project definition, operational constraints and stringent procedures are the main factors resulting in the need for Adjustment Orders, and accordingly the recommendation is to address that challenge.
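The cause-categorization step described above is essentially a Pareto analysis of the AO records; a minimal sketch, where the record fields and example figures are illustrative rather than the organization's actual data:

```python
from collections import defaultdict

def pareto_by_cause(adjustment_orders):
    """Rank AO causes by total schedule impact (days), largest first.
    Field names are illustrative, not the study's actual schema."""
    totals = defaultdict(float)
    for ao in adjustment_orders:
        totals[ao["cause"]] += ao["time_impact_days"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

aos = [{"cause": "scope addition", "time_impact_days": 120},
       {"cause": "spec change", "time_impact_days": 45},
       {"cause": "scope addition", "time_impact_days": 60}]
print(pareto_by_cause(aos))  # [('scope addition', 180.0), ('spec change', 45.0)]
```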

Keywords: adjustment orders, data analysis, oil and gas sector, systems management

Procedia PDF Downloads 142
415 The Effect of Data Integration to the Smart City

Authors: Richard Byrne, Emma Mulliner

Abstract:

Smart cities are a vision for the future that is increasingly becoming a reality. While a key concept of the smart city is the ability to capture, communicate and process data that has long been produced through the day-to-day activities of the city, many of the assessment models in place neglect this fact and focus on 'smartness' concepts. Although it is true that technology often provides the opportunity to capture and communicate data in more effective ways, there are also human processes involved that are just as important. The growing importance of the use and ownership of data in society is plain to see, with companies such as Facebook and Google increasingly coming under the microscope; why, however, is the same scrutiny not applied to cities? The research area is therefore of great importance to the future of our cities here and now, while the findings will be of just as great importance to our children in the future. This research aims to understand the influence data is having on organisations operating throughout the smart cities sector and employs a mixed-methods research approach in order to best answer the following question: Would a data-based evaluation model for smart cities be more appropriate than a smart-based model in assessing the development of the smart city? A comprehensive literature review concluded that there is a requirement for a data-driven assessment model for smart cities. This was followed by a documentary analysis to understand the root source of data integration in the smart city. A content analysis of city data platforms enquired into the alternative approaches employed by cities throughout the UK, drawing on best practice from New York to compare and contrast.
Grounded in theory, the research findings to this point formulated a qualitative analysis framework comprising: the changing environment influenced by data, the value of data in the smart city, the data ecosystem of the smart city, and the organisational response to the data-orientated environment. The framework was applied to analyse primary data collected through interviews with both public and private organisations operating throughout the smart cities sector. The work to date represents the first stage of data collection and will be built upon by a quantitative research investigation into the feasibility of data network effects in the smart city. An analysis of the benefits of data interoperability in supporting services to the smart city in the areas of health and transport will conclude the research, to achieve the aim of inductively forming a framework that can be applied to future smart city policy. To conclude, the research recognises the influence of technological perspectives on the development of smart cities to date and highlights the challenge of introducing theory applied with a planning dimension. The primary researcher has drawn on their experience working in the public sector throughout the investigation to reflect upon what is perceived as a gap in practice between where we are today and where we need to be tomorrow.

Keywords: data, planning, policy development, smart cities

Procedia PDF Downloads 297
414 An Analysis of Possible Implications of Patent Term Extension in Pharmaceutical Sector on Indian Consumers

Authors: Anandkumar Rshindhe

Abstract:

Patents are considered a good monopoly in India: a mechanism by which the inventor is encouraged to invent and to make a new, useful technology available to society at large. The patent system does not protect the invention itself but the claims (rights) that the patentee has identified in relation to the invention. The patentee is thus granted a monopoly only to the extent of the recognised claims over the invention's utilities; all other utilities of the invention remain available to the public. Both the inventor and the public at large, that is, the ultimate consumer, therefore benefit. But developing such technology is not free of cost; inventors invest heavily in bringing out new technologies. The pharmaceutical industry is one such example: pharmaceutical companies invest a great deal of money, time, and labour in the research leading to these inventions. Once an invention is made or a process identified, the inventor approaches the patent system to protect the claims over it. The patent system takes its own time to recognise the invention as a patent, and even after grant, the pharmaceutical company must comply with many other legal formalities before launching the drug (medicine) on the market. A major portion of the patent term is therefore unproductive for the patentee, and the limited period that remains is often insufficient to recover the cost of the invention; as a result, the price of the patented product is raised sharply just to recover that cost. This is ultimately a burden on the consumer, who pays more only because the legislature has failed to provide for the delay and loss caused to the patentee. Patent term extension can effectively remedy this problem: with extension, the inventor gets more time to recover the cost of the invention.
The end product can thus be cheaper than it would be without patent term extension. The basic question is this: when the patent period granted is only 20 years, a major portion of which is spent complying with the legal formalities needed before the medicine reaches the market, can the company recover its research investment within the limited period of monopoly that remains? Further, the Indian Patents Act contains provisions obliging the patentee to make the patented invention available in India at a reasonably affordable price. In the light of these questions, is extending the patent term a proper solution and a necessary requirement to protect the interests of both the patentee and the ultimate consumer? The basic objective of this paper is to examine the implications of extending the patent term for Indian consumers: whether it benefits the patentee and the consumer, or imposes hardship on the generic industry and the consumer.
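The timing problem described above can be illustrated with simple arithmetic. The sketch below is purely hypothetical: the R&D cost, sales volume, regulatory delay, and extension length are invented for illustration and are not figures from the paper.

```python
# Illustration (hypothetical numbers): how regulatory delay shortens the
# effective patent term, and how a term extension lowers the per-unit price
# needed to amortize R&D cost over the exclusivity window.

def required_price(rd_cost, annual_units, term_years, approval_delay_years,
                   marginal_cost):
    """Price per unit needed to recoup R&D over the effective exclusivity window."""
    effective_years = max(term_years - approval_delay_years, 0)
    if effective_years == 0:
        raise ValueError("no effective exclusivity left")
    units_sold = annual_units * effective_years
    return marginal_cost + rd_cost / units_sold

# 20-year term, 8 years lost to trials and regulatory approval:
base = required_price(rd_cost=1_000_000_000, annual_units=10_000_000,
                      term_years=20, approval_delay_years=8, marginal_cost=1.0)
# Same delay, but with a hypothetical 5-year patent term extension:
extended = required_price(rd_cost=1_000_000_000, annual_units=10_000_000,
                          term_years=25, approval_delay_years=8, marginal_cost=1.0)
print(f"without extension: {base:.2f}, with extension: {extended:.2f}")
```

The longer the window over which the fixed R&D cost is spread, the lower the price per unit needed to break even, which is the mechanism the abstract relies on.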

Keywords: patent term extension, consumer interest, generic drug industry, pharmaceutical industries

Procedia PDF Downloads 436
412 Synthesis of Fluorescent PET-Type “Turn-Off” Triazolyl Coumarin Based Chemosensors for the Sensitive and Selective Sensing of Fe³⁺ Ions in Aqueous Solutions

Authors: Aidan Battison, Neliswa Mama

Abstract:

Environmental pollution by ionic species has been identified as one of the biggest challenges to the sustainable development of communities. The widespread use of organic and inorganic chemical products and the release of toxic chemical species from industrial waste have resulted in a need for advanced monitoring technologies for environmental protection, remediation, and restoration. Disadvantages of conventional sensing methods include expensive instrumentation, well-controlled experimental conditions, time-consuming procedures, and sometimes complicated sample preparation. By contrast, the development of fluorescent chemosensors for the biological and environmental detection of metal ions has attracted a great deal of attention due to their simplicity, high selectivity, eidetic recognition, rapid response, and real-life monitoring. Coumarin derivatives S1 and S2 (Scheme 1) containing 1,2,3-triazole moieties at position -3- have been designed and synthesized from azide and alkyne derivatives by CuAAC “click” reactions for the detection of metal ions. These compounds displayed a strong preference for Fe³⁺ ions, with complexation resulting in fluorescence quenching through photo-induced electron transfer (PET) by the “sphere of action” static quenching model. The tested metal ions included Cd²⁺, Pb²⁺, Ag⁺, Na⁺, Ca²⁺, Cr³⁺, Fe³⁺, Al³⁺, Ba²⁺, Cu²⁺, Co²⁺, Hg²⁺, Zn²⁺, and Ni²⁺. The detection limits of S1 and S2 were determined to be 4.1 and 5.1 µM, respectively. Compound S1 displayed the greatest selectivity towards Fe³⁺ in the presence of competing metal cations. S1 could also be used for the detection of Fe³⁺ in a CH₃CN/H₂O mixture. The binding stoichiometry between S1 and Fe³⁺ was determined using both Job's plot and Benesi-Hildebrand analyses; binding was shown to occur in a 1:1 ratio between sensor and metal cation. Reversibility studies between S1 and Fe³⁺ were conducted using EDTA.
The binding site of Fe³⁺ on S1 was determined using ¹³C NMR and molecular modelling studies. Complexation was suggested to occur between the lone pair of electrons of the coumarin carbonyl and the triazole carbon double bond.
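As a rough illustration of the Benesi-Hildebrand method used above for 1:1 complexes, the sketch below fits synthetic binding data: for a 1:1 host-guest complex, 1/(F₀ − F) is linear in 1/[M], and the association constant is Ka = intercept/slope. The fluorescence values and Ka below are invented for the example, not the paper's measurements.

```python
# Hedged sketch (synthetic data): Benesi-Hildebrand double-reciprocal fit
# for a 1:1 sensor-metal complex. Ka is recovered as intercept / slope.
import numpy as np

F0 = 100.0        # fluorescence of the free sensor (assumed)
F_min = 10.0      # fluorescence at full complexation (assumed)
Ka_true = 5.0e4   # assumed association constant, M^-1

conc = np.array([1e-5, 2e-5, 4e-5, 8e-5, 1.6e-4])   # [Fe3+], M
frac = Ka_true * conc / (1 + Ka_true * conc)         # bound fraction (1:1 isotherm)
F = F0 - (F0 - F_min) * frac                         # observed fluorescence

# Benesi-Hildebrand: 1/(F0 - F) vs 1/[M] is a straight line for 1:1 binding
x = 1.0 / conc
y = 1.0 / (F0 - F)
slope, intercept = np.polyfit(x, y, 1)
Ka_fit = intercept / slope
print(f"Ka = {Ka_fit:.3g} M^-1")
```

On noise-free synthetic data the fit recovers the assumed Ka exactly; with real titration data, the linearity of the double-reciprocal plot is itself the evidence for 1:1 stoichiometry.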

Keywords: chemosensor, "click" chemistry, coumarin, fluorescence, static quenching, triazole

Procedia PDF Downloads 139
412 A Next Generation Multi-Scale Modeling Theatre for in silico Oncology

Authors: Safee Chaudhary, Mahnoor Naseer Gondal, Hira Anees Awan, Abdul Rehman, Ammar Arif, Risham Hussain, Huma Khawar, Zainab Arshad, Muhammad Faizyab Ali Chaudhary, Waleed Ahmed, Muhammad Umer Sultan, Bibi Amina, Salaar Khan, Muhammad Moaz Ahmad, Osama Shiraz Shah, Hadia Hameed, Muhammad Farooq Ahmad Butt, Muhammad Ahmad, Sameer Ahmed, Fayyaz Ahmed, Omer Ishaq, Waqar Nabi, Wim Vanderbauwhede, Bilal Wajid, Huma Shehwana, Muhammad Tariq, Amir Faisal

Abstract:

Cancer is a manifestation of multifactorial deregulations in biomolecular pathways. These deregulations arise from the complex multi-scale interplay between cellular and extracellular factors. Such multifactorial aberrations at the gene, protein, and extracellular scales need to be investigated systematically towards decoding the underlying mechanisms and orchestrating therapeutic interventions for patient treatment. In this work, we propose ‘TISON’, a next-generation web-based multiscale modeling platform for clinical systems oncology. TISON’s unique modeling abstraction allows a seamless coupling of information from biomolecular networks, cell decision circuits, extracellular environments, and tissue geometries. The platform can undertake multiscale sensitivity analysis towards in silico biomarker identification and drug evaluation on cellular phenotypes in user-defined tissue geometries. Furthermore, the integration of cancer expression databases such as The Cancer Genome Atlas (TCGA) and the Human Protein Atlas (HPA) facilitates the development of personalized therapeutics. TISON is the next evolution of multiscale cancer modeling and simulation platforms and provides a ‘zero-code’ model development, simulation, and analysis environment for application in clinical settings.

Keywords: systems oncology, cancer systems biology, cancer therapeutics, personalized therapeutics, cancer modelling

Procedia PDF Downloads 199
411 Customized Temperature Sensors for Sustainable Home Appliances

Authors: Merve Yünlü, Nihat Kandemir, Aylin Ersoy

Abstract:

Temperature sensors are used in home appliances not only to monitor the basic functions of the machine but also to minimize energy consumption and ensure safe operation. In parallel with the development of smart home applications and IoT algorithms, these sensors produce important data such as the frequency of use of the machine and user preferences, and compile data critical for diagnostic processes and fault detection throughout an appliance's operational lifespan. Commercially available thin-film resistive temperature sensors have a well-established manufacturing procedure that allows them to operate over a wide temperature range. However, these sensors are over-designed for white goods applications: their operating range is between -70°C and 850°C, while home appliance applications require only 23°C to 500°C. To ensure operation over this wide temperature range, a platinum coating approximately 1 µm thick is usually applied to the wafer. However, the use of platinum and the high coating thickness extend the sensor production process time and therefore increase sensor costs. In this study, an attempt was made to develop a low-cost temperature sensor design and production method that meets the technical requirements of white goods applications. For this purpose, a custom design was made, and the design parameters (length, width, trim points, and thin-film deposition thickness) were optimized using statistical methods to achieve the desired resistivity value. To develop the thin-film resistive temperature sensors, a single-side-polished sapphire wafer was used. To enhance adhesion and insulation, a 100 nm silicon dioxide layer was deposited by the inductively coupled plasma chemical vapor deposition technique. The lithography process was performed by a direct laser writer.
The lift-off process was performed after e-beam evaporation of 10 nm titanium and 280 nm platinum layers. Standard four-point probe sheet resistance measurements were performed at room temperature. Resistivity was measured with a probe station before and after annealing at 600°C in a rapid thermal processing machine. The temperature dependence between 25 and 300°C was also tested. As a result of this study, a temperature sensor has been developed that has a lower coating thickness than commercial sensors but produces reliable data over the white goods application temperature range. A relatively simple but optimized production method has also been developed to produce this sensor.
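For readers unfamiliar with the four-point probe step, the sketch below shows the standard conversion from a collinear four-point probe reading on a large thin film to sheet resistance, Rs = (π/ln 2)·(V/I), and then to film resistivity ρ = Rs·t. The voltage, current, and thickness values are illustrative assumptions, not the study's measurements.

```python
# Hedged sketch (assumed values): sheet resistance and resistivity from a
# collinear four-point probe measurement on a thin metal film.
import math

def sheet_resistance(voltage_v, current_a):
    """Collinear 4-point probe, sample much larger/thinner than probe spacing."""
    return (math.pi / math.log(2)) * voltage_v / current_a   # ohm/sq

def resistivity(rs_ohm_sq, thickness_m):
    """Film resistivity from sheet resistance and film thickness."""
    return rs_ohm_sq * thickness_m                           # ohm*m

rs = sheet_resistance(voltage_v=4.2e-3, current_a=1.0e-3)    # hypothetical reading
rho = resistivity(rs, thickness_m=280e-9)                    # 280 nm Pt layer
print(f"Rs = {rs:.2f} ohm/sq, rho = {rho:.2e} ohm*m")
```

The correction factor π/ln 2 ≈ 4.53 holds only for samples much larger than the probe spacing and much thinner than it; patterned test structures need geometry-specific factors.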

Keywords: thin film resistive sensor, temperature sensor, household appliance, sustainability, energy efficiency

Procedia PDF Downloads 55
410 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated over a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations of the modeled rupture area S, average slip Dave, and slip asperity area Sa versus seismic moment Mo with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with ground motion prediction equation (GMPE) values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters which are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on an outer edge of the large slip areas, (2) ruptures have a tendency to initiate in small Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise-time.
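The bookkeeping behind such source scaling comparisons can be sketched with the standard definitions: seismic moment Mo = μ·S·Dave links rupture area and average slip, and moment magnitude follows Mw = (2/3)(log₁₀ Mo − 9.1). The rigidity and rupture dimensions below are illustrative assumptions, not values from the study.

```python
# Hedged sketch: seismic moment and moment magnitude from rupture area and
# average slip. All numeric inputs are illustrative, not the study's models.
import math

MU = 3.0e10  # assumed crustal rigidity, Pa

def seismic_moment(area_m2, avg_slip_m, mu=MU):
    """Mo = mu * S * Dave, in N*m."""
    return mu * area_m2 * avg_slip_m

def moment_magnitude(m0_nm):
    """Mw = (2/3) * (log10(Mo) - 9.1), Mo in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# A Mw > 7 style rupture: 60 km x 25 km fault plane, 2 m average slip
m0 = seismic_moment(area_m2=60e3 * 25e3, avg_slip_m=2.0)
print(f"Mo = {m0:.2e} N*m, Mw = {moment_magnitude(m0):.2f}")
```

Because Mo is the product of area and slip, self-similar scaling (S ∝ Mo^(2/3), Dave ∝ Mo^(1/3)) is what the modeled ruptures are checked against.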

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 132
409 Modelling of Recovery and Application of Low-Grade Thermal Resources in the Mining and Mineral Processing Industry

Authors: S. McLean, J. A. Scott

Abstract:

This research focuses on improving the sustainability of operations through the recovery and reuse of waste heat in process water streams, an area in the mining industry that is often overlooked. Applying it brings significant economic and environmental benefits. The smelting process in the mining industry presents an opportunity to recover waste heat and apply it to alternative uses, thereby enhancing the overall process. This applied research has been conducted at the Sudbury Integrated Nickel Operations smelter site, in particular on the water cooling towers. The aim was to determine and optimize methods for the recovery and subsequent upgrading of thermally low-grade heat lost from the water cooling towers, in a manner that makes it useful for repurposing in applications such as an acid plant. This would be valuable to mining companies as an opportunity to reduce process costs as well as environmental impact and primary fuel usage. The waste heat from the cooling towers needs to be upgraded before it can be beneficially applied, as lower temperatures reduce the number of potential applications. Temperature and flow rate data were collected from the water cooling towers at an acid plant over two years. The research includes process control strategies and the development of a model capable of determining whether the proposed heat recovery technique is economically viable, as well as assessing the environmental benefit of the reduction in the net energy consumption of the process. Comprehensive cost and impact analyses are therefore carried out to determine the best area of application for the recovered waste heat. This method will allow engineers to easily identify the value of the thermal resources available to them and determine whether a full feasibility study should be carried out.
The rapid scoping model developed will be applicable to any site that generates large amounts of waste heat. Results show that heat pumps are an economically viable solution for this application, allowing for reduced costs and CO₂ emissions.
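A minimal sketch of the kind of rapid scoping comparison described above is shown below, pitting a heat pump that upgrades recovered waste heat against raising the same heat in a gas boiler. The heat demand, prices, efficiencies, and emission factors are all assumed for illustration; none are taken from the study.

```python
# Hedged scoping sketch (all inputs assumed): annual cost and CO2 for
# delivering the same upgraded heat via a heat pump vs. a gas boiler.

def heat_pump_case(heat_mwh, cop, elec_price_per_mwh, elec_tco2_per_mwh):
    """Electricity drawn is heat delivered divided by the coefficient of performance."""
    elec_mwh = heat_mwh / cop
    return elec_mwh * elec_price_per_mwh, elec_mwh * elec_tco2_per_mwh

def boiler_case(heat_mwh, efficiency, gas_price_per_mwh, gas_tco2_per_mwh):
    """Fuel burned is heat delivered divided by boiler efficiency."""
    gas_mwh = heat_mwh / efficiency
    return gas_mwh * gas_price_per_mwh, gas_mwh * gas_tco2_per_mwh

HEAT = 10_000  # MWh of upgraded heat needed per year (hypothetical)
hp_cost, hp_co2 = heat_pump_case(HEAT, cop=3.5, elec_price_per_mwh=80,
                                 elec_tco2_per_mwh=0.03)
b_cost, b_co2 = boiler_case(HEAT, efficiency=0.85, gas_price_per_mwh=40,
                            gas_tco2_per_mwh=0.18)
print(f"heat pump: ${hp_cost:,.0f}/yr, {hp_co2:,.0f} t CO2/yr")
print(f"boiler:    ${b_cost:,.0f}/yr, {b_co2:,.0f} t CO2/yr")
```

With these assumptions the heat pump wins on both cost and emissions, but the outcome is sensitive to the COP (which falls as the required temperature lift grows) and to local electricity prices, which is exactly what a site-specific scoping model must capture.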

Keywords: environment, heat recovery, mining engineering, sustainability

Procedia PDF Downloads 96
408 The Renewed Constitutional Roots of Agricultural Law in Hungary in Line with Sustainability

Authors: Gergely Horvath

Abstract:

The study analyzes the special provisions on agriculture at the highest level of national legislation, in the Fundamental Law of Hungary (25 April 2011), with descriptive, analytic, and comparative methods. The agriculturally relevant articles of the constitution are very important because, despite their high level of abstraction, they can comprehensively and effectively determine and serve practice. That is why the objective of the research is to interpret the concrete sentences and phrases connected with agriculture, compared with those of some other relevant constitutions (historical-grammatical interpretation). The major findings of the study focus on searching for the provisions and approach capable of solving the problems of sustainable food production. The real challenge agricultural law must face in the future is protecting and conserving its background and subjects: the environment, the ecosystem services, and all the 'roots' of food production. In effect, agricultural law is the legal aspect of the production of 'our daily bread' from farm to table. However, it must also guarantee safe daily food for our children and for all our descendants. In connection with sustainability, this unique, value-oriented constitution of an agrarian country even deals with questions unusual at this level of legislation, such as GMOs (by banning the production of genetically modified crops). The starting point is that the principle of the public good (principium boni communis) must be the leading notion of the norm, an idea that lies partly outside the law. The public interest is reflected in agricultural law mainly in the concept of public health (in connection with food security) and in the security of the supply of healthy food. Article P, as construed, lays down the general protection of our natural resources as a requirement.
The enumeration of the specific natural resources 'which all form part of the common national heritage' also entails the conservation of the grounds of sustainable agriculture. The reference to arable land represents the subfield of land protection (and soil conservation); that to water resources, the subfield of water protection; and the references to forests and biological diversity reflect the specialty of nature conservation, an essential support for agrobiodiversity. The protected objects constituting the nation's common heritage metonymically merge with their protective regimes, strengthening them and forming constitutional reference points of law. These regimes also protect the natural foundations of the lives of living and future generations, in the name of intra- and intergenerational equity.

Keywords: agricultural law, constitutional values, natural resources, sustainability

Procedia PDF Downloads 153
407 Exploration of Hydrocarbon Unconventional Accumulations in the Argillaceous Formation of the Autochthonous Miocene Succession in the Carpathian Foredeep

Authors: Wojciech Górecki, Anna Sowiżdżał, Grzegorz Machowski, Tomasz Maćkowski, Bartosz Papiernik, Michał Stefaniuk

Abstract:

The article presents results of a project aimed at evaluating the possibilities of effective development and exploitation of natural gas from the argillaceous series of the Autochthonous Miocene in the Carpathian Foredeep. To achieve this objective, the research team developed a unique methodology of processing and interpretation, based on world trends but adjusted to the data, local variations, and petroleum characteristics of the area. In order to determine the zones in which maximum volumes of hydrocarbons might have been generated and preserved as shale gas reservoirs, as well as to identify the most promising well sites where the largest gas accumulations are anticipated, a number of tasks were accomplished. Evaluation of the petrophysical properties and hydrocarbon saturation of the Miocene complex is based on laboratory measurements as well as on the interpretation of well logs and archival data. The studies apply mercury porosimetry (MICP), micro-CT, and nuclear magnetic resonance imaging (using the Rock Core Analyzer). For prospective locations (e.g., the Brzesko-Wojnicz area in the central part of the Carpathian Foredeep), detailed seismic survey data have been reprocessed and reinterpreted with the use of integrated geophysical investigations. Construction of quantitative, structural, and parametric models for selected areas of the Carpathian Foredeep is performed on the basis of integrated, detailed 3D computer models. Modeling is carried out with Schlumberger's Petrel software. Finally, prospective zones are spatially contoured in the form of a regional 3D grid, which will be the framework for generation modelling and comprehensive parametric mapping, allowing for spatial identification of the most prospective zones of unconventional gas accumulation in the Carpathian Foredeep. Preliminary results indicate a potentially prospective area for the occurrence of unconventional gas accumulations in the Polish part of the Carpathian Foredeep.
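As background to the MICP measurements mentioned above, the sketch below applies the standard Washburn relation used to convert mercury intrusion pressure into an equivalent pore-throat diameter, d = −4γ·cos θ / P. The surface tension, contact angle, and pressure are typical literature values for mercury-rock systems, not the project's data.

```python
# Hedged sketch: Washburn equation for mercury intrusion porosimetry (MICP).
# gamma and theta are typical literature values, not measured in the project.
import math

GAMMA = 0.485              # Hg surface tension, N/m (typical value)
THETA = math.radians(130)  # Hg-rock contact angle (typical value)

def pore_throat_diameter_nm(pressure_pa):
    """Equivalent cylindrical pore-throat diameter intruded at a given pressure."""
    d_m = -4.0 * GAMMA * math.cos(THETA) / pressure_pa
    return d_m * 1e9

# Intrusion at 60 MPa, a pressure relevant for tight, argillaceous rocks:
print(f"{pore_throat_diameter_nm(60e6):.1f} nm")
```

Higher intrusion pressures probe progressively narrower throats, which is why high-pressure MICP is the tool of choice for characterizing shale-type pore systems.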

Keywords: autochthonous Miocene, Carpathian foredeep, Poland, shale gas

Procedia PDF Downloads 216