Search results for: complex heat exchangers
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7857

2427 Predictive Analytics of Bike Sharing Rider Parameters

Authors: Bongs Lainjo

Abstract:

The evolution and growth of bike-sharing programs (BSP) continue unabated. A bike-sharing system is a computer-controlled system in which individuals can borrow bikes, for a fee or free of charge, for a limited period. Since the sixties, many countries have introduced different BSP models and strategies, ranging from dockless designs to electronic real-time monitoring systems, and riders use them for recreation, errands, commuting, and more. There is every indication that even more innovative, rider-friendly systems are yet to be introduced. Although the popularity of bike sharing around the world has prompted many studies of public cycling systems, few have examined the factors influencing public bicycle travel behavior. The objective of this paper is to analyze the variables currently recorded by different operators and, using analytics, streamline them to identify the most compelling ones. Across available databases there is little uniformity or common standard on what is recorded; only two factors appear to be universal: user type (registered or unregistered) and trip duration. This article uses historical data provided by one operator based in the greater Washington, District of Columbia, USA area. Several variables, both categorical and continuous, were screened; eight of the 18 were found acceptable and shown to contribute significantly to a useful and reliable predictive model. The study thus identifies useful, pragmatic parameters for improving BSP ridership dynamics.
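As a toy illustration of the kind of variable screening the abstract describes, one might rank candidate predictors by their correlation with ridership. The variables, data, and threshold below are hypothetical, not the operator's actual schema:

```python
# Hypothetical sketch: screening candidate ridership predictors by
# correlation with trip count (illustrative data, not the real dataset).

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Toy daily records: (temperature, humidity, registered-user trips)
records = [(5, 80, 120), (12, 70, 340), (18, 60, 560), (25, 55, 700), (30, 65, 650)]
temps = [r[0] for r in records]
hum = [r[1] for r in records]
trips = [r[2] for r in records]

screened = {name: pearson_r(col, trips)
            for name, col in [("temperature", temps), ("humidity", hum)]}
# Keep only variables whose |r| exceeds a chosen threshold, e.g. 0.5
kept = [k for k, v in screened.items() if abs(v) > 0.5]
print(kept)
```

In practice a proper screening would also handle categorical variables and significance testing, but the ranking idea is the same.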

Keywords: sharing program, historical data, parameters, ridership dynamics, trip duration

Procedia PDF Downloads 123
2426 Access to the Forest Ecosystem Services: Understanding the Interaction between Livelihood Capitals and Access

Authors: Abu S. M. G. Kibria, Alison M. Behie, Robert Costanza, Colin Groves, Tracy Farrell

Abstract:

This study aims to understand the level of access to ecosystem services (ESS) in the Sundarbans, Bangladesh, and the influence of livelihood capitals in maintaining that access and control. We consider the stakeholders who 'controlled' or 'maintained' access to ESS (crab catching, shrimp fry, honey, shrimp, mixed fish, fuel wood) in this region, including the forest department, coast guard, police, merchants, pirates, and the villagers themselves. Villagers used human, physical, natural, and social capitals to gain access to ESS. The highest level of access was observed in crab catching and the lowest in honey collection, both reflecting a balancing of the costs and benefits of accessing one ESS against another. The outcomes of these ongoing access negotiations were determined by the livelihood capitals of the households. In addition, certain variables were often found to have a positive effect on one ESS and a negative effect on another. For instance, human, social, and natural capitals (eldest daughter's education and number of livelihood group memberships) had significant positive effects on honey collection, while two components of human and social capital, 'eldest son's education' and 'severity of pirate problem', had exactly the opposite impact. Similar complex interactions were observed in access to the other ESS. It thus seems that access to ESS is not something that is simply provided; rather, it is achieved by deploying livelihood capitals. Protecting an ecosystem from overexploitation while improving wellbeing can be achieved by properly balancing the livelihood capital-access nexus.

Keywords: provisioning services, access level, livelihood capital, interaction, access gain

Procedia PDF Downloads 267
2425 Stakeholders' Engagement Process in the OBSERVE Project

Authors: Elisa Silva, Rui Lança, Fátima Farinha, Miguel José Oliveira, Manuel Duarte Pinheiro, Cátia Miguel

Abstract:

Tourism is one of the global engines of development. With good planning and management, it can be a positive force that brings benefits to tourist destinations around the world; without well-established constraints, boundaries, and constant monitoring, however, tourism can be very harmful and can degrade a destination. It is in the interest of both the tourism sector and the community to develop a destination while maintaining its sustainability. The OBSERVE project is an instrument for monitoring and evaluating the sustainability of the Algarve region. Its main priority is to provide environmental, economic, socio-cultural, and institutional indicators to support decision-making towards sustainable growth. In pursuit of these objectives, a digital platform is being developed on which the significant indicators will be continuously updated. It is well known that the successful development of a tourist region depends on careful planning with the commitment of central and regional government, industry, services, and community stakeholders. Understanding the different perspectives of stakeholders is essential to engaging them in development planning; in practice, however, the stakeholder engagement process is complex and not easy to accomplish. Creating a consistent system of indicators to monitor and evaluate the sustainability performance of a tourist region requires access to local data and consideration of the full range of values and uncertainties. This paper presents the OBSERVE project and describes the stakeholder engagement process, highlighting its contributions, ambitions, and constraints.

Keywords: sustainable tourism, stakeholders' engagement, OBSERVE project, Algarve region

Procedia PDF Downloads 153
2424 A Comparative Study of the Techno-Economic Performance of the Linear Fresnel Reflector Using Direct and Indirect Steam Generation: A Case Study under High Direct Normal Irradiance

Authors: Ahmed Aljudaya, Derek Ingham, Lin Ma, Kevin Hughes, Mohammed Pourkashanian

Abstract:

Concentrated solar power (CSP) has attracted much attention from researchers, power companies, and policymakers due to its capacity to generate large amounts of electricity while overcoming the intermittent nature of the solar resource. The Linear Fresnel Reflector (LFR) is a CSP technology known for being inexpensive and having a low land-use factor, though it suffers from low optical efficiency. The LFR is considered a cost-effective alternative to the Parabolic Trough Collector (PTC) because its simple design often outweighs its lower efficiency. It has been found promising both for producing steam directly for a thermal cycle to generate low-cost electricity and for indirect steam generation. The purpose of this analysis is to compare the annual performance of Direct Steam Generation (DSG) and Indirect Steam Generation (ISG) LFR power plants using molten salt and other Heat Transfer Fluids (HTFs), and to investigate their technical and economic effects. A 50 MWe solar-only system is examined as a case study for both steam production methods under extreme weather conditions. In addition, a parametric analysis is carried out to determine the solar field size that yields the lowest Levelized Cost of Electricity (LCOE) while achieving the highest technical performance. The optimization shows that a solar multiple (SM) of 1.2-1.5 achieves an LCOE as low as 9 cents/kWh for the direct steam generation of the linear Fresnel reflector. The DSG plant produces around 141 GWh annually with a capacity factor of up to 36%, whereas the ISG plant produces less energy at a higher cost. Overall, the DSG outperforms the ISG, producing around 3% more annual energy at a 2% lower LCOE and 28% lower capital cost.
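The LCOE comparison described above follows the standard levelized-cost definition: discounted lifetime costs divided by discounted lifetime energy. A minimal sketch, with placeholder cost and discount figures rather than the paper's actual inputs, might look like:

```python
# Illustrative LCOE calculation (all figures are placeholders, not the
# paper's cost model).

def lcoe(capex, opex_per_year, energy_kwh_per_year, rate, years):
    """Levelized cost of electricity in $/kWh."""
    disc_costs = capex + sum(opex_per_year / (1 + rate) ** t
                             for t in range(1, years + 1))
    disc_energy = sum(energy_kwh_per_year / (1 + rate) ** t
                      for t in range(1, years + 1))
    return disc_costs / disc_energy

# Placeholder inputs: $150M capital, $3M/yr O&M, 141 GWh/yr
# (141e6 kWh), 7% discount rate, 25-year lifetime.
cost = lcoe(150e6, 3e6, 141e6, 0.07, 25)
print(f"{cost * 100:.1f} cents/kWh")
```

With these assumed inputs the sketch lands in the same order of magnitude as the paper's 9 cents/kWh; the actual figure depends on the real capital, O&M, and financing parameters.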

Keywords: concentrated solar power, levelized cost of electricity, linear Fresnel reflectors, steam generation

Procedia PDF Downloads 98
2423 Gas Phase Extraction: An Environmentally Sustainable and Effective Method for the Extraction and Recovery of Metals from Ores

Authors: Kolela J Nyembwe, Darlington C. Ashiegbu, Herman J. Potgieter

Abstract:

Over the past few decades, the demand for metals has increased significantly. This has led to a decline in high-grade ore over time and an increase in mineral complexity and matrix heterogeneity, alongside rising concerns about greener processes and a sustainable environment. These challenges have forced the mining and metals industry to develop new technologies able to economically process and recover metallic values from low-grade ores, from materials whose metal content is locked up in industrially processed residues (tailings and slag), and from complex-matrix mineral deposits. Several approaches have been developed to address these issues, among them ionic liquids (ILs), heap leaching, and bioleaching. Recently, the gas-phase extraction technique has been gaining interest because it eliminates many of the problems encountered in conventional mineral processing methods. The technique relies on the formation of volatile metal complexes, which are removed from the residual solids by a carrier gas; the complexes can then be reduced by an appropriate method to obtain the metal and to regenerate and recover the organic extractant. Laboratory work on gas-phase extraction has been conducted for the recovery of aluminium (Al), iron (Fe), copper (Cu), chromium (Cr), nickel (Ni), lead (Pb), and vanadium (V). In all cases the extraction was found to depend on temperature and mineral surface area. The process technology appears very promising: it offers the feasibility of recirculation and organic reagent regeneration, and has the potential to deliver on all the promises of a "greener" process.
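The reported dependence of extraction on temperature and surface area could be sketched with a toy Arrhenius-type rate model; the activation energy and pre-exponential factor below are purely illustrative assumptions, not measured values:

```python
import math

# Toy rate model for volatile-complex formation: an Arrhenius temperature
# term scaled by the available mineral surface area. Constants are
# illustrative placeholders, not fitted kinetic data.

def extraction_rate(temp_k, surface_area_m2, a=1.0e6, ea=60e3):
    R = 8.314  # gas constant, J/(mol K)
    return a * surface_area_m2 * math.exp(-ea / (R * temp_k))

r_low = extraction_rate(500, 1.0)
r_high = extraction_rate(700, 1.0)
print(r_high / r_low)  # rate rises steeply with temperature
```

The exponential temperature term is why modest heating changes can dominate over linear surface-area scaling in such models.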

Keywords: gas-phase extraction, hydrometallurgy, low-grade ore, sustainable environment

Procedia PDF Downloads 117
2422 Optimising Light Conditions for Recombinant Protein Production in the Microalgal Chlamydomonas reinhardtii Chloroplast

Authors: Saskya E. Carrera P., Ben Hankamer, Melanie Oey

Abstract:

The green alga C. reinhardtii provides a platform for the cheap, scalable, and safe production of complex proteins. Although gene expression in photosynthetic organisms is tightly regulated by light, most expression studies have analysed chloroplast recombinant protein production under constant light. Here, the influence of illumination time and intensity on the expression of GFP and a GFP-PlyGBS (bacterial-lysin) fusion protein was investigated. The expression of both proteins was strongly influenced by the light regime (6-24 h illumination per day), the light intensity (0-450 µE m⁻²s⁻¹), and the growth condition (photoautotrophic, mixotrophic, or heterotrophic). Heterotrophic conditions resulted in relatively low recombinant protein yields per unit volume, despite high protein yields per cell, due to low growth rates. Mixotrophic conditions gave the highest yields at 6 h illumination at 200 µE m⁻²s⁻¹ and under continuous low-light illumination (13-16 mg L⁻¹ GFP and 1.2-1.6 mg L⁻¹ GFP-PlyGBS), as these conditions supported both good cell growth and high cellular protein yields. For GFP-PlyGBS, this represents a ~23-fold increase in protein accumulation per cell and a ~9-fold increase per litre of culture compared to standard constant 24 h illumination. Under photoautotrophic conditions, the highest yields were obtained with 9 h illumination (6 mg L⁻¹ GFP and 2.1 mg L⁻¹ GFP-PlyGBS), a ~4-fold increase in cellular protein accumulation for GFP-PlyGBS; on a volumetric basis the highest yield was at 15 h illumination (a ~2-fold increase per litre over constant light for GFP-PlyGBS). Optimising illumination conditions to balance growth and protein expression can thus significantly enhance overall recombinant protein production in C. reinhardtii cultures.

Keywords: chlamydomonas reinhardtii, light, mixotrophic, recombinant protein

Procedia PDF Downloads 241
2421 Linux Security Management: Research and Discussion on Problems Caused by Different Aspects

Authors: Ma Yuzhe, Burra Venkata Durga Kumar

Abstract:

The computer is a great invention. As people use computers ever more frequently, the demand for PCs keeps growing, and computer hardware performance keeps rising to handle more complex processing and operation. The operating system, which provides the soul of the computer, developed in stages. Faced with the high price of UNIX (Uniplexed Information and Computing System), generations of personal computer owners had to give up on it; the Disk Operating System was too simple to support much innovation and was not a good choice; and macOS is a special operating system for Apple computers that cannot be widely used on other personal computers. In this environment Linux, based on the UNIX system, was born. Linux combines the advantages of these operating systems, and its core architecture, composed of modular kernel components, is relatively powerful. The system supports all the major Internet protocols, so it has very good network functionality. Linux supports multiple users, each with no influence over the others' files, and it can multitask, running different programs independently at the same time. Linux is a completely open-source operating system: users can obtain and modify the source code for free. These advantages have attracted a large number of users and programmers. The Linux system is also constantly upgraded and improved, with many different versions issued for community and commercial use. Linux has good security in part because of its file permission system; however, as vulnerabilities and threats are constantly discovered, the security of the operating system in use also demands ongoing attention. This article focuses on the analysis and discussion of Linux security issues.

Keywords: Linux, operating system, system management, security

Procedia PDF Downloads 95
2420 Project Time and Quality Management during Construction

Authors: Nahed Al-Hajeri

Abstract:

Time and cost are integral parts of every construction plan and can affect each party's contractual obligations; the performance of both is usually important to the client and the contractor throughout the project. Almost all construction projects experience time overrun, and these overruns are costly to both client and contractor. Construction of any project inside the gathering centers involves complex management of workforce, materials, plant, machinery, new technologies, and more. It also involves many interdependent agencies, such as vendors and structural and functional designers, including various types of specialized engineers, supported by contractors and specialized subcontractors. This paper highlights the types of construction delay that cause projects to suffer time and cost overruns, and discusses the causes and factors that contribute to construction sequence delay in oil and gas projects. Delay is one of the recurring problems in construction projects, and it has an adverse effect on project success in terms of time, cost, and quality. Several effective methods are identified to minimize delays in construction projects: 1. site management and supervision; 2. effective strategic planning; 3. clear information and communication channels. Our research paper studies the types of delay with real examples and statistical results and suggests solutions to overcome this problem.

Keywords: non-compensable delay, delays caused by force majeure, compensable delay, delays caused by the owner or the owner’s representative, non-excusable delay, delay caused by the contractor or the contractor’s representative, concurrent delay, delays resulting from two separate causes at the same time

Procedia PDF Downloads 234
2419 Culturable Diversity of Halophilic Bacteria in Chott Tinsilt, Algeria

Authors: Nesrine Lenchi, Salima Kebbouche-Gana, Laddada Belaid, Mohamed Lamine Khelfaoui, Mohamed Lamine Gana

Abstract:

Saline lakes are extreme hypersaline environments that are five to ten times saltier than seawater (150-300 g L⁻¹ salt concentration). Hypersaline regions differ from each other in salt concentration, chemical composition, and geographical location, which determine the nature of the inhabitant microorganisms. In order to explore the diversity of moderately and extremely halophilic bacteria in Chott Tinsilt (eastern Algeria), an isolation program was performed. First, water samples were collected from the saltern during the pre-salt-harvesting phase. Salinity, pH, and temperature were determined in situ at the sampling site. Chemical analysis of the water indicated that Na⁺ and Cl⁻ were the most abundant ions. Isolates were obtained by plating the samples on complex and synthetic media, and seven halophilic bacterial cultures were isolated in this study. The isolates were examined for Gram reaction, cell morphology, and pigmentation; enzymatic assays (oxidase, catalase, nitrate reductase, and urease) and optimization of growth conditions were also carried out. The results indicated that the salinity optima varied from 50 to 250 g L⁻¹, while the optimum temperature ranged from 25°C to 35°C. Molecular identification of the isolates was performed by sequencing the 16S rRNA gene. The results showed that the cultured isolates included members of the genera Halomonas, Staphylococcus, Salinivibrio, Idiomarina, Halobacillus, Thalassobacillus, and Planococcus, as well as what may represent a new bacterial genus.

Keywords: bacteria, Chott, halophilic, 16S rRNA

Procedia PDF Downloads 266
2418 A Cost Effective Approach to Developing Mid-Size Enterprise Software Adopting the Waterfall Model

Authors: Mohammad Nehal Hasnine, Md Kamrul Hasan Chayon, Md Mobasswer Rahman

Abstract:

Organizational tendencies towards computer-based information processing have been observed noticeably in third-world countries. Many enterprises are taking major initiatives towards a computerized working environment because of the massive benefits of computer-based information processing. However, designing and developing information resource management software for small and mid-size enterprises within budget and under strict deadlines is always challenging for software engineers. We therefore introduce a cost-effective approach to designing mid-size enterprise software using the Waterfall model, one of the SDLC (Software Development Life Cycle) models. To fulfill the research objectives, we developed a mid-size enterprise application named "BSK Management System" that assists enterprise clients with information resource management and performs complex organizational tasks. The Waterfall model phases were applied to ensure that all functions, user requirements, strategic goals, and objectives were met. In addition, Rich Pictures, Structured English, and a Data Dictionary were implemented and investigated in a proper engineering manner. Furthermore, an assessment survey with 20 participants was conducted to investigate the usability and performance of the proposed software. The survey results indicated that our system featured simple interfaces, easy operation and maintenance, quick processing, and reliable and accurate transactions.

Keywords: end-user application development, enterprise software design, information resource management, usability

Procedia PDF Downloads 424
2417 Cryptocurrency Realities: Insights from Social and Economic Psychology

Authors: Sarah Marie

Abstract:

In today's dynamic financial landscape, cryptocurrencies represent a paradigm shift characterized by innovation and intense debate. This study examines their transformative potential and the challenges they present, offering a balanced perspective that recognizes both their promise and their pitfalls. Emulating the engaging style of a TED Talk, the research goes beyond academic analysis, serving as a critical bridge between the perspectives of cryptocurrency skeptics and enthusiasts and fostering a well-informed dialogue. The study employs a mixed-method approach, analyzing current trends, regulatory landscapes, and public perceptions in the cryptocurrency domain. It distinguishes genuine innovators in this field from ostentatious opportunists, echoing the sentiment that real innovation should be separated from mere showmanship (readers unfamiliar with the latter can likely spot them leaning against their Lamborghinis outside "crypto" conventions, looking greasy). Major findings reveal a complex scenario dominated by regulatory uncertainty, market volatility, and security issues, emphasizing the need for a coherent regulatory framework that balances innovation with risk management and sustainable practices. The study underscores the importance of transparency and consumer protection in fostering responsible growth within the cryptocurrency ecosystem. In conclusion, the research advocates for education, innovation, and ethical governance in the realm of cryptocurrencies, and calls for collaborative efforts to navigate the intricacies of this evolving landscape and to realize its full potential in a responsible, inclusive, and forward-thinking manner.

Keywords: financial landscape, innovation, public perception, transparency

Procedia PDF Downloads 35
2416 Systematic Discovery of Bacterial Toxins Against Plant Pathogenic Fungi

Authors: Yaara Oppenheimer-Shaanan, Nimrod Nachmias, Marina Campos Rocha, Neta Schlezinger, Noam Dotan, Asaf Levy

Abstract:

Fusarium oxysporum, a fungus that attacks a broad range of plants and can cause infections in humans, operates across different kingdoms. This pathogen encounters varied conditions, such as temperature, pH, and nutrient availability, in plant and human hosts. The Fusarium oxysporum species complex, pervasive in soils globally, can affect numerous plants, including key crops like tomatoes and bananas. Controlling Fusarium infections can involve biocontrol agents that hinder the growth of harmful strains. Our research developed a computational method to identify toxin domains within a vast number of microbial genomes, leading to the discovery of nine distinct toxins capable of killing bacteria and fungi, including Fusarium. These toxins appear to function as enzymes, causing significant damage to cellular structures, membranes and DNA. We explored biological control using bacteria that produce polymorphic toxins, finding that certain bacteria, non-pathogenic to plants, offer a safe biological alternative for Fusarium management, as they did not harm macrophage cells or C. elegans. Additionally, we elucidated the 3D structures of two toxins with their protective immunity proteins, revealing their function as unique DNases. These potent toxins are likely instrumental in microbial competition within plant ecosystems and could serve as biocontrol agents to mitigate Fusarium wilt and related diseases.

Keywords: microbial toxins, antifungal, Fusarium oxysporum, bacterial-fungal interactions

Procedia PDF Downloads 34
2415 Isolation, Purification and Characterisation of Non-Digestible Oligosaccharides Derived from Extracellular Polysaccharide of Antarctic Fungus Thelebolus Sp. IITKGP-BT12

Authors: Abinaya Balasubramanian, Satyabrata Ghosh, Satyahari Dey

Abstract:

Non-Digestible Oligosaccharides (NDOs) are low-molecular-weight carbohydrates with a degree of polymerization (DP) of 3-20 that are delivered intact to the large intestine. NDOs are gaining attention as effective prebiotic molecules that facilitate the prevention and treatment of several chronic diseases. NDOs are increasingly obtained by cleaving complex polysaccharides, as this gives high yields and the resulting oligosaccharides tend to display greater bioactivity. Thelebolus sp. IITKGP BT-12, a recently identified psychrophilic ascomycete fungus, has been reported to produce a bioactive extracellular polysaccharide (EPS) with strong prebiotic activity and anti-proliferative effects. The current study identifies and optimises the most suitable method for hydrolysing this novel EPS into NDOs, and further purifies and characterises them. Among physical, chemical, and enzymatic methods, enzymatic hydrolysis was identified as the best, and the optimum hydrolysis conditions obtained using response surface methodology were a reaction time of 24 h, a β-(1,3) endo-glucanase loading of 0.53 U, and a substrate concentration of 10 mg/ml. The NDOs were purified by gel filtration chromatography and their molecular weights were determined by MALDI-TOF; the major fraction had a DP of 7-8. The monomeric units of the NDOs were confirmed to be glucose by TLC and GC-MS/MS analysis. The obtained oligosaccharides proved non-digestible when subjected to gastric acidity and to salivary and pancreatic amylases, and hence could serve as efficient prebiotics.

Keywords: characterisation, enzymatic hydrolysis, non-digestible oligosaccharides, response surface methodology

Procedia PDF Downloads 113
2414 Influence of Specimen Geometry ((10*10*40), (12*12*60) and (5*20*120)) on the Determination of Concrete Toughness by Measurement of the Critical Stress Intensity Factor: A Comparative Study

Authors: M. Benzerara, B. Redjel, B. Kebaili

Abstract:

The cracking of concrete has become a more crucial problem with the development of the complex structures that accompany technological progress. Advances in the understanding of the fracture process now allow better prevention of the risk of fracture. The resistance to brittle fracture of a quasi-brittle material like concrete, called toughness, is measured by the critical value of the stress intensity factor, K1C, at which a crack propagates; it is an intrinsic property of the material. However, many studies of concrete reported in the literature were carried out on specimens that are in fact inadequate for identifying this intrinsic characteristic. Starting from this observation, we compared the toughness K1C measured on ordinary concrete specimens of three different prismatic geometries, (10*10*40) cm³, (12*12*60) cm³, and (5*20*120) cm³, containing side notches of various depths simulating cracks. The notches were made using triangular pyramidal plates of coated sheet metal placed at the center of the specimens at the time of casting and then withdrawn to leave the trace of a crack. The tests were carried out in three-point bending in mode I fracture, using fracture mechanics techniques. The toughness K1C measured with the three specimen geometries gives almost the same results; they are acceptable and fall within the range determined by various researchers (the toughness of ordinary concrete is around 1 MPa√m). These results also indicate an economy offered by the (5*20*120) cm³ geometry, suggesting the use of plate specimens in future work to master the toughness of this surprising yet essential complex material that is concrete.
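For a single-edge-notched beam in three-point bending, K1C is commonly evaluated from the failure load with an ASTM-E399-style geometry function. The sketch below uses that standard form with illustrative load and dimensions, not the study's actual data:

```python
import math

# Sketch: critical stress intensity factor K1C for a single-edge-notched
# beam in three-point bending. Geometry function follows the common
# ASTM-E399-style polynomial; load and dimensions are illustrative.

def f_geometry(x):
    # x = a/W, notch depth over specimen depth
    num = 3 * math.sqrt(x) * (1.99 - x * (1 - x) * (2.15 - 3.93 * x + 2.7 * x ** 2))
    den = 2 * (1 + 2 * x) * (1 - x) ** 1.5
    return num / den

def k1c(load_n, span_m, width_m, depth_m, notch_m):
    # K1C = (P * S / (B * W^1.5)) * f(a/W), in Pa*sqrt(m)
    x = notch_m / depth_m
    return (load_n * span_m / (width_m * depth_m ** 1.5)) * f_geometry(x)

# Illustrative 10x10x40 cm specimen: P = 4 kN, span 30 cm, notch 3 cm
k = k1c(4000, 0.30, 0.10, 0.10, 0.03)
print(f"K1C = {k / 1e6:.2f} MPa*sqrt(m)")
```

With these assumed numbers the result lands below 1 MPa√m, the same order of magnitude the abstract cites for ordinary concrete.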

Keywords: concrete, fissure, specimen, toughness

Procedia PDF Downloads 289
2413 Prediction of Product Size Distribution of a Vertical Stirred Mill Based on Breakage Kinetics

Authors: C. R. Danielle, S. Erik, T. Patrick, M. Hugh

Abstract:

In the last decade, demand for fine grinding has increased due to the depletion of coarse-grained orebodies and the growing processing of finely disseminated minerals and complex orebodies. These ores pose new challenges in concentrator design because fine and ultra-fine grinding is required to achieve acceptable recovery rates. The correct design of a grinding circuit is therefore important for minimizing unit costs and increasing product quality. Ball mills are inefficient for grinding in fine size ranges, so vertical stirred grinding mills are becoming increasingly popular in the mineral processing industry due to their well-known high energy efficiency. This work presents, as a hypothesis, a methodology to predict the product size distribution of a vertical stirred mill using a Bond ball mill. The Population Balance Model (PBM) was used to empirically analyze the performance of a vertical mill and a Bond ball mill. The breakage parameters obtained for both grinding mills are compared to determine whether the product size distribution of the vertical mill can be predicted from the Bond ball mill results. The biggest advantage of this methodology is that most mineral processing laboratories already have a Bond ball mill to perform the tests suggested in this study. Preliminary results show that predicting the performance of a laboratory vertical stirred mill using a Bond ball mill is indeed possible.
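The batch-grinding form of the PBM tracks mass in discrete size classes: each class loses mass at its selection rate and gains the fragments broken out of coarser classes. A minimal simulation sketch (size classes, selection rates, and breakage fractions below are illustrative, not fitted mill parameters):

```python
# Minimal population-balance sketch for batch grinding: three size
# classes, selection rates S, and a breakage-distribution matrix b.
# All values are illustrative, not fitted to any mill.

S = [0.5, 0.3, 0.0]              # breakage rate per class (1/min); finest unbroken
b = {(1, 0): 0.6, (2, 0): 0.4,   # fractions of broken class-0 mass to classes 1, 2
     (2, 1): 1.0}                # broken class-1 mass all reports to class 2

m = [1.0, 0.0, 0.0]              # initial mass fractions: all coarse
dt, steps = 0.01, 1000           # forward-Euler integration of 10 minutes

for _ in range(steps):
    # dm_i/dt = -S_i m_i + sum_{j<i} b_ij S_j m_j
    rates = [-S[i] * m[i] + sum(b.get((i, j), 0.0) * S[j] * m[j] for j in range(i))
             for i in range(3)]
    m = [mi + dt * r for mi, r in zip(m, rates)]

print([round(mi, 3) for mi in m])  # mass shifts to the finer classes
```

Because each column of b sums to one and the finest class does not break, total mass is conserved; fitting S and b to laboratory data is what the Bond-ball-mill tests in the abstract would supply.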

Keywords: bond ball mill, population balance model, product size distribution, vertical stirred mill

Procedia PDF Downloads 279
2412 Observation of the Orthodontic Tooth's Long-Term Movement Using Stereovision System

Authors: Hao-Yuan Tseng, Chuan-Yang Chang, Ying-Hui Chen, Sheng-Che Chen, Chih-Han Chang

Abstract:

Orthodontic tooth treatment has demonstrated a high success rate in clinical studies. It is generally agreed that orthodontic tooth movement relies on the ability of the surrounding bone and periodontal ligament (PDL) to react to a mechanical stimulus with remodeling processes; however, the mechanism of tooth movement is still unclear. Recent studies focus on the simple compression-tension theory, while studies that directly measure tooth movement are rare. Tracking tooth movement during orthodontic treatment is therefore very important in clinical practice. The aim of this study is to investigate the mechanical response of tooth movement during orthodontic treatment. A stereovision system was applied to track the tooth movement of a patient fitted with stamp brackets. The system was established with two cameras whose relative positions were calibrated, and the orthodontic force was measured on a 3D-printed model with a six-axis load cell to determine the initial force application. The results show that the stereovision measurements present a maximum error of less than 2%. In the patient study, the incisor moved about 0.9 mm over 60 days of tracking, and half of the movement occurred in the first few hours. After the orthodontic force was removed at 100 hours, the incisor position regressed by 0.5 mm, consistent with a release phenomenon. The stereovision system can accurately locate the three-dimensional position of the teeth and, by superposing all data in a common 3D coordinate system, integrate the complex tooth movement.
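For a rectified stereo pair, depth follows from disparity as Z = fB/d, so sub-pixel disparity changes map directly to small displacements like the ones tracked here. A minimal sketch with illustrative camera parameters (not the study's calibration):

```python
# Pinhole-stereo sketch: depth of a tooth landmark from the disparity
# between two calibrated, rectified cameras. All numbers illustrative.

def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    # Z = f * B / d for a rectified stereo pair
    return focal_px * baseline_mm / disparity_px

z1 = depth_from_disparity(1200, 60, 36.0)    # landmark at first session
z2 = depth_from_disparity(1200, 60, 36.016)  # after slight tooth movement
print(z1 - z2)  # depth change in mm
```

With these assumed values a disparity change of only 0.016 px corresponds to roughly the sub-millimetre movements reported, which is why careful calibration of the cameras' relative positions matters.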

Keywords: orthodontic treatment, tooth movement, stereovision system, long-term tracking

Procedia PDF Downloads 407
2411 Two-Dimensional Hardy-Type Inequalities on Time Scales via the Steklov Operator

Authors: Wedad Albalawi

Abstract:

The mathematical inequalities have been the core of mathematical study and used in almost all branches of mathematics as well in various areas of science and engineering. The inequalities by Hardy, Littlewood and Polya were the first significant composition of several science. This work presents fundamental ideas, results and techniques and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated by some operators; in 1989, weighted Hardy inequalities have been obtained for integration operators. Then, they obtained weighted estimates for Steklov operators that were used in the solution of the Cauchy problem for the wave equation. They were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy–Steklov operator. Recently, a lot of integral inequalities have been improved by differential operators. Hardy inequality has been one of the tools that is used to consider integrity solutions of differential equations. Then dynamic inequalities of Hardy and Coposon have been extended and improved by various integral operators. These inequalities would be interesting to apply in different fields of mathematics (functional spaces, partial differential equations, mathematical modeling). Some inequalities have been appeared involving Copson and Hardy inequalities on time scales to obtain new special version of them. A time scale is defined as a closed subset contains real numbers. Then the inequalities of time scales version have received a lot of attention and has had a major field in both pure and applied mathematics. 
There are many applications of dynamic equations on time scales in quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on double integrals to obtain new time-scale inequalities of Copson type driven by the Steklov operator, which can be applied in the solution of the Cauchy problem for the wave equation. The proofs proceed by imposing restrictions on the operator in several cases, and the obtained inequalities rely on concepts from the time-scale setting such as time-scale calculus, Fubini's theorem, and Hölder's inequality.
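For orientation, the classical integral inequality of Hardy, which the time-scale and Steklov-operator versions discussed here generalize, can be stated as follows (a standard formulation, not taken from the paper itself):

```latex
% Classical Hardy integral inequality, for p > 1 and f >= 0 measurable:
\int_0^{\infty} \left( \frac{1}{x} \int_0^x f(t)\, dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^{\infty} f(x)^{p}\, dx .
```

The constant \((p/(p-1))^p\) is sharp; time-scale versions replace the integrals with delta (or nabla) integrals over a closed subset of the reals.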

Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator

Procedia PDF Downloads 61
2410 Gender Stereotypes at the Court of Georgia: Perceptions of Attorneys on Gender Bias

Authors: Tatia Kekelia

Abstract:

This paper is part of ongoing research addressing gender discrimination in the Court of Georgia. The research suggests that gender stereotypes influence proceedings at the Court in contemporary Georgia, creating an uneven playing field for women and men, not to mention other gender identities. The sub-hypothesis proposes that these gender stereotypes derive from feudal representations that persisted during Soviet rule, and it is precisely those stereotypes that feed gender-based discrimination today. However, this paper's main focus is on the main hypothesis: describing the revealed stereotypes and identifying the Court as the place where their presence most hinders societal development. First, they demotivate people, causing a loss of trust in the Court and therefore potentially encouraging crime. Second, they make it harder to mobilize human resources adequately: more than half of the population is female, and under rigid or more subtle forms of discrimination, women lose not only equal rights but also the motivation to work or to fight for them. Consequently, this paper falls under democracy studies as well, considering that an unbiased Court is one of the most important criteria for assessing the democratic character of a state. As the research crosses the disciplines of sociology, law, and history, a complex of qualitative research methods is applied; this paper relies mainly on expert interviews, interviews with attorneys, and desk research. By showcasing and undermining the gender stereotypes at work at the Court of Georgia, this research might assist in raising trust in the Court in the long term. As for broader relevance, the study of the Georgian case opens the possibility of conducting comparative analyses in the region and the continent and, presumably, tracing the lines of cultural influence.

Keywords: gender, stereotypes, bias, democratization, judiciary

Procedia PDF Downloads 57
2409 Development of Nondestructive Imaging Analysis Method Using Muonic X-Ray with a Double-Sided Silicon Strip Detector

Authors: I-Huan Chiu, Kazuhiko Ninomiya, Shin’ichiro Takeda, Meito Kajino, Miho Katsuragawa, Shunsaku Nagasawa, Atsushi Shinohara, Tadayuki Takahashi, Ryota Tomaru, Shin Watanabe, Goro Yabu

Abstract:

In recent years, a nondestructive elemental analysis method based on muonic X-ray measurements has been developed and applied to various samples. Muonic X-rays are emitted after the formation of a muonic atom, which occurs when a negatively charged muon is captured into a muonic atomic orbit around a nucleus. Because muonic X-rays have higher energies than electronic X-rays, owing to the muon mass, they can be measured without being absorbed by the material. Thus, it becomes possible to estimate the two-dimensional (2D) elemental distribution of a sample using an X-ray imaging detector. In this work, we report a nondestructive imaging experiment using muonic X-rays at the Japan Proton Accelerator Research Complex. The irradiated target consisted of polypropylene, and a double-sided silicon strip detector, originally developed as an imaging detector for astronomical observation, was employed. A peak corresponding to muonic X-rays from the carbon atoms in the target was clearly observed in the energy spectrum at 14 keV, and 2D visualizations were successfully reconstructed to reveal the projection image of the target. This result demonstrates the potential of the nondestructive elemental imaging method based on muonic X-ray measurement. To obtain a higher position resolution for imaging smaller targets, a new detector system will be developed and the statistical analysis improved in further research.

Keywords: DSSD, muon, muonic X-ray, imaging, non-destructive analysis

Procedia PDF Downloads 194
2408 An Information-Based Approach for Preference Method in Multi-Attribute Decision Making

Authors: Serhat Tuzun, Tufan Demirel

Abstract:

Multi-Criteria Decision Making (MCDM) is the modelling of real-life situations in order to solve the problems we encounter. It is a discipline that aids decision makers faced with conflicting alternatives in making an optimal decision. MCDM problems can be classified into two main categories, Multi-Attribute Decision Making (MADM) and Multi-Objective Decision Making (MODM), based on their different purposes and data types. Although various MADM techniques have been developed for the problems encountered, their methodology is limited in modelling real life. Moreover, objective results are hard to obtain, and the findings are generally derived from subjective data. New and modified techniques incorporating approaches such as fuzzy logic have been developed, but these comprehensive techniques, even though they model real life better, have not found a place in real-world applications because their complex structure makes them hard to apply. These constraints restrict the development of MADM. This study aims to conduct a comprehensive analysis of preference methods in MADM and to propose an approach based on information. For this purpose, a detailed literature review has been conducted, and current approaches have been analyzed with their advantages and disadvantages. Then the approach is introduced. In this approach, performance values of the criteria are calculated in two steps: first, by determining the distribution of each attribute and standardizing the values; then, by calculating the information of each attribute as its informational energy.
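The two-step computation described above can be sketched as follows. This is a minimal illustration assuming informational energy in the sense of Onicescu (the sum of squared cell probabilities of the standardized attribute's empirical distribution); the function name and the choice of 10 histogram bins are illustrative, not taken from the paper.

```python
import numpy as np

def informational_energy(values, bins=10):
    """Onicescu informational energy of one attribute.

    Step 1: standardize the attribute (zero mean, unit variance).
    Step 2: estimate its distribution with a histogram.
    Step 3: sum the squared cell probabilities.
    """
    z = (values - values.mean()) / values.std()
    counts, _ = np.histogram(z, bins=bins)
    p = counts / counts.sum()
    return float(np.sum(p ** 2))
```

A uniformly distributed attribute has energy near the minimum 1/bins, while a concentrated attribute has energy approaching 1, so the measure rewards attributes whose values are informative about the alternatives.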

Keywords: literature review, multi-attribute decision making, operations research, preference method, informational energy

Procedia PDF Downloads 207
2407 Modeling and Simulation of Multiphase Evaporation in High Torque Low Speed Diesel Engine

Authors: Ali Raza, Rizwan Latif, Syed Adnan Qasim, Imran Shafi

Abstract:

Diesel engines are valued for their efficiency, reliability, and adaptability. Most research and development to date has been directed towards high-speed diesel engines for commercial use, where the objective is to optimize acceleration while reducing exhaust emissions to meet international standards. In high torque low speed engines, the requirements are altogether different: these engines are mostly used in the maritime and agriculture industries and in static applications such as compressor drives. High torque low speed engines are quite often neglected and are notorious for low efficiency and high soot emissions. One of the most effective ways to overcome these issues is efficient combustion in the engine cylinder. Fuel spray dynamics play a vital role in determining mixture formation, fuel consumption, combustion efficiency, and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and the atomization process in high torque low speed diesel engines is of great importance, and evaporation in the combustion chamber has a rigorous effect on the efficiency of the engine. In this paper, multiphase evaporation of fuel is modeled for a high torque low speed engine using computational fluid dynamics (CFD) codes. Two distinct phases of evaporation are modeled using modelling software. The basic model equations are derived from the energy conservation equation and the Navier-Stokes equations, and the O'Rourke model is used to model the evaporation phases. The results showed a considerable effect on the efficiency of the engine: the evaporation rate of a fuel droplet increases with increasing vapor pressure, and an appreciable reduction in droplet size is achieved by adding convective heat effects in the combustion chamber. By and large, an overall increase in efficiency is observed by modeling distinct evaporation phases. This increase in efficiency is due to the fact that droplet size is reduced and vapor pressure is increased in the engine cylinder.
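The qualitative trends reported above (faster evaporation at higher vapor pressure, shrinking droplet diameter) can be illustrated with the classical d²-law of droplet evaporation, a far simpler model than the O'Rourke treatment used in the paper. The evaporation constant K below is an assumed illustrative value; in practice it grows with vapor pressure and convective heating.

```python
import math

def droplet_diameter(d0, K, t):
    """d²-law: the squared diameter of an evaporating droplet
    decreases linearly in time, d(t)^2 = d0^2 - K * t.
    Returns 0 once the droplet has fully evaporated."""
    d_sq = d0 ** 2 - K * t
    return math.sqrt(d_sq) if d_sq > 0 else 0.0

def lifetime(d0, K):
    """Time for complete evaporation under the d²-law."""
    return d0 ** 2 / K
```

Raising K shortens the droplet lifetime d0²/K, which is consistent with the trend the abstract reports for higher vapor pressure and added convective heat.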

Keywords: diesel fuel, CFD, evaporation, multiphase

Procedia PDF Downloads 328
2406 Performance Analysis of Vision-Based Transparent Obstacle Avoidance for Construction Robots

Authors: Siwei Chang, Heng Li, Haitao Wu, Xin Fang

Abstract:

Construction robots are receiving more and more attention as a promising solution to the manpower shortage in the construction industry. The development of intelligent control techniques that help robots avoid transparent and reflective building obstacles is crucial for guaranteeing the adaptability and flexibility of mobile construction robots in complex construction environments. With the boom in computer vision techniques, a number of studies have proposed vision-based methods for transparent obstacle avoidance to improve operation accuracy. However, vision-based methods also have disadvantages, such as high computational cost. To provide better perception and a value evaluation, this study analyzes the performance of vision-based techniques for avoiding transparent building obstacles. To achieve this, commonly used sensors, including a lidar, an ultrasonic sensor, and a USB camera, are mounted on the robotic platform to detect obstacles. A Raspberry Pi 3 computer board is employed to run the data-collection and control algorithms, and a TurtleBot3 Burger is used as the test platform. On-site experiments are carried out to observe performance in terms of success rate and detection distance, with obstacle shape and environmental conditions as control variables. The findings demonstrate how effective vision-based strategies are for avoiding transparent building obstacles and provide insights and informed knowledge for introducing computer vision techniques in this domain.

Keywords: construction robot, obstacle avoidance, computer vision, transparent obstacle

Procedia PDF Downloads 65
2405 Chemical Fingerprinting of Complex Samples With the Aid of Parallel Outlet Flow Chromatography

Authors: Xavier A. Conlan

Abstract:

Speed of analysis is a significant limitation of current high-performance liquid chromatography/mass spectrometry (HPLC/MS) and ultra-high-pressure liquid chromatography (UHPLC)/MS systems, both of which are used in many forensic investigations. The flow rate limitations of MS detection require a compromise in the chromatographic flow rate, which in turn reduces throughput and, when using modern columns, separation efficiency. Commonly, this restriction is combated by splitting the flow post-column prior to entry into the mass spectrometer. However, this results in a loss of sensitivity and a loss of efficiency due to the extra post-column dead volume. A new chromatographic column format known as 'parallel segmented flow' involves splitting the eluent flow within the column outlet end fitting, and in this study we present its application to interrogate the provenance of methamphetamine samples with mass spectrometric detection. Using parallel segmented flow, column flow rates as high as 3 mL/min were employed in the analysis of amino acids without post-column splitting to the mass spectrometer. Furthermore, when parallel segmented flow chromatography columns were employed, the sensitivity was more than twice that of conventional systems with post-column splitting when the same volume of mobile phase was passed through the detector. These findings suggest that this type of column technology will particularly enhance the capabilities of modern LC/MS, enabling both high throughput and sensitive mass spectral detection.

Keywords: chromatography, mass spectrometry, methamphetamine, parallel segmented outlet flow column, forensic sciences

Procedia PDF Downloads 475
2404 The Effects of Cooling during Baseball Games on Perceived Exertion and Core Temperature

Authors: Chih-Yang Liao

Abstract:

Baseball is usually played outdoors in the warmest months of the year, so baseball players are susceptible to the influence of hot environments. It has been shown that in Major League Baseball, hitting performance is better in games played in warm weather than in cold weather. Intermittent cooling during sporting events can reduce the risk of hyperthermia and increase endurance performance. However, the effects of cooling during baseball games played in a hot environment are unclear. This study adopted a cross-over design. Ten Division I collegiate male baseball players in Taiwan volunteered to participate. Each player played two simulated baseball games, with one day in between. Five of the players received intermittent cooling during the first simulated game, while the other five received intermittent cooling during the second. The participants' neck and forehead regions were covered for 6 min with towels soaked in icy salt water, 3 to 4 times during the games; the participants received the cooling treatment in the dugout when they were not on the field for defense or hitting. During the two simulated games, the temperature was 31.1-34.1°C and the humidity 58.2-61.8%, with no difference between the games. Ratings of perceived exertion, thermal sensation, and tympanic and forehead skin temperatures were recorded immediately after each defensive half-inning and after each cooling treatment. Ratings of perceived exertion were measured using the Borg 10-point scale, thermal sensation with a 6-point scale, and tympanic and skin temperatures with infrared thermometers. The data were analyzed with a two-way analysis of variance with repeated measures. The results showed that intermittent cooling significantly reduced ratings of perceived exertion and thermal sensation, and forehead skin temperature was significantly decreased after the cooling treatments. However, tympanic temperature did not differ significantly between the two trials. In conclusion, intermittent cooling of the neck and forehead regions was effective in alleviating perceived exertion and heat sensation, but this intervention did not affect core temperature. Whether intermittent cooling has any impact on hitting or pitching performance in baseball players warrants further investigation.

Keywords: baseball, cooling, ratings of perceived exertion, thermal sensation

Procedia PDF Downloads 136
2403 Genetics of Atopic Dermatitis: Role of Cytokine Genes Polymorphisms

Authors: Ghaleb Bin Huraib

Abstract:

Atopic dermatitis (AD), also known as atopic eczema, is a chronic inflammatory skin disease characterized by severe itching and recurrent, relapsing eczema-like skin lesions, affecting up to 20% of children and 10% of adults in industrialized countries. AD is a complex multifactorial disease, and its exact etiology and pathogenesis have not been fully elucidated. The aim of this study was to investigate the impact of gene polymorphisms of the T helper cell subtype Th1 and Th2 cytokines interferon-gamma (IFN-γ), interleukin-6 (IL-6) and transforming growth factor (TGF)-β1 on AD susceptibility in a Saudi cohort. One hundred and four unrelated patients with AD and 195 healthy controls were genotyped for the IFN-γ (874A/T), IL-6 (174G/C) and TGF-β1 (509C/T) polymorphisms using ARMS-PCR and PCR-RFLP techniques. The frequencies of genotypes AA and AT of IFN-γ (874A/T) differed significantly between patients and controls (P < 0.001): genotype AT was increased, while genotype AA was decreased, in AD patients compared to controls. AD patients also had a higher frequency of T-containing genotypes (AT+TT) than controls (P = 0.001), and the frequencies of alleles T and A differed significantly between patients and controls (P = 0.04). The frequencies of genotype GG and allele G of IL-6 (174G/C) were significantly higher, while those of genotype GC and allele C were lower, in AD patients than in controls. There was no significant difference in the frequencies of alleles and genotypes of the TGF-β1 (509C/T) polymorphism between the patient and control groups. These results show that susceptibility to AD is influenced by the genotypes of the IFN-γ (874A/T) and IL-6 (174G/C) polymorphisms. It is concluded that the T allele and T-containing genotypes (AT+TT) of IFN-γ (874A/T), and the G allele and GG genotype of IL-6 (174G/C), confer susceptibility to AD in Saudis. On the other hand, the TGF-β1 (509C/T) polymorphism may not be associated with AD risk in our population; however, further studies with large sample sizes are required to confirm these results.
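Allele-frequency comparisons like those reported above are typically assessed with a Pearson chi-square test on a 2x2 table of allele counts. The sketch below shows the computation; the counts in the usage example are invented for illustration and are not the study's data.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table of allele counts:

                 allele T   allele A
      cases         a          b
      controls      c          d
    """
    n = a + b + c + d
    # Expected counts under independence: row total * column total / n.
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

With 1 degree of freedom, a statistic above 3.841 corresponds to P < 0.05; for example, `chi_square_2x2(70, 30, 50, 50)` on invented counts gives about 8.33, indicating a significant allele-frequency difference.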

Keywords: atopic dermatitis, polymorphism, interferon, IL-6

Procedia PDF Downloads 61
2402 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

Data arising in our environment frequently have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.
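The simulation logic behind "the test maintained the desirable Type-I error" can be illustrated in miniature: generate data under the true model, apply a GOF test at level alpha, and record how often the true model is (wrongly) rejected. The sketch below does this for a plain single-level binomial model with a chi-square statistic, far simpler than the multilevel MLwiN simulations in the paper; all parameter values are illustrative.

```python
import numpy as np

def type_i_error_rate(n_sims=2000, n=200, p=0.3, seed=1):
    """Estimate the Type-I error of a chi-square GOF test:
    simulate data under the true model, test that same model,
    and return the fraction of simulations that reject it."""
    rng = np.random.default_rng(seed)
    crit = 3.841  # chi-square(1 df) critical value at alpha = 0.05
    rejections = 0
    for _ in range(n_sims):
        x = rng.binomial(1, p, size=n)
        observed = np.array([n - x.sum(), x.sum()])
        expected = np.array([n * (1 - p), n * p])
        chi2 = float(np.sum((observed - expected) ** 2 / expected))
        rejections += chi2 > crit
    return rejections / n_sims
```

A well-behaved test keeps this estimated rate close to the nominal alpha of 0.05; a test that "fails", as reported for the MQL combinations, produces a rate far from nominal.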

Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error

Procedia PDF Downloads 131
2401 Students’ Perceptions and Attitudes for Integrating ICube Technology in the Solar System Lesson

Authors: Noran Adel Emara, Elham Ghazi Mohammad

Abstract:

Qatar University is engaged in a systemic education reform that includes integrating the latest and most effective technologies for teaching and learning. ICube is a highly immersive virtual reality technology used to teach educational scenarios that are difficult to teach in real situations. The trend towards delivering science education via virtual reality applications has accelerated in recent years. However, research on students' perceptions of integrating virtual reality, especially ICube technology, is somewhat limited. Students often have difficulty focusing on science topics that require imagination and easily lose attention and interest during the lesson. The aim of this study was to examine students' perceptions of integrating ICube technology into a solar system lesson, and to explore how ICube can engage students in learning the scientific concepts of the solar system. The research followed a quantitative design, with data collected and analyzed from questionnaire results. The solar system lesson was conducted by teacher candidates (diploma students) who taught in the ICube virtual lab at Qatar University. A group of 30 eighth-grade students was randomly selected to participate in the study. Results showed that the students were highly engaged in learning the solar system and responded positively to integrating ICube into teaching. Moreover, the students showed interest in learning more lessons through ICube, as it provided them with a valuable learning experience of complex situations.

Keywords: ICube, integrating technology, science education, virtual reality

Procedia PDF Downloads 286
2400 Development of a Roadmap for Assessment the Sustainability of Buildings in Saudi Arabia Using Building Information Modeling

Authors: Ibrahim A. Al-Sulaihi, Khalid S. Al-Gahtani, Abdullah M. Al-Sugair, Aref A. Abadel

Abstract:

Achieving environmental sustainability is an important issue in many countries' visions. Green/sustainable building is widely used terminology for describing environmentally friendly construction. Applying sustainable practices is significantly important in various fields, including the construction field, which consumes an enormous amount of resources and produces a considerable amount of waste. The need for sustainability is greater in regions that suffer from limited natural resources and extreme weather conditions, such as Saudi Arabia. Since building designs are becoming more sophisticated, the need for tools that support decision-making on sustainability issues is increasing, especially in the design and preconstruction stages. In this context, Building Information Modeling (BIM) can aid in performing complex building performance analyses to ensure an optimized sustainable building design. Accordingly, this paper introduces a roadmap towards developing a systematic approach for assessing the sustainability of buildings using BIM. The approach includes a set of main processes: identifying the sustainability parameters that can be used for sustainability assessment in Saudi Arabia, developing a sustainability assessment method that fits the special circumstances of the Kingdom, identifying the sustainability requirements and the BIM functions that can be used to satisfy these requirements, and integrating these requirements with the identified functions. As a result, a sustainability-BIM approach can be developed that helps designers assess sustainability and explore different design alternatives at an early stage of the construction project.

Keywords: green buildings, sustainability, BIM, rating systems, environment, Saudi Arabia

Procedia PDF Downloads 370
2399 Two-Level Separation of High Air Conditioner Consumers and Demand Response Potential Estimation Based on Set Point Change

Authors: Mehdi Naserian, Mohammad Jooshaki, Mahmud Fotuhi-Firuzabad, Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee

Abstract:

In recent years, the development of communication infrastructure and smart meters has facilitated the utilization of demand-side resources, which can enhance the stability and economic efficiency of power systems. Direct load control programs can play an important role in utilizing demand-side resources in the residential sector. However, the investment required for installing control equipment can be a limiting factor in the development of such demand response programs; thus, selecting consumers with higher potential is crucial to the success of a direct load control program. Heating, ventilation, and air conditioning (HVAC) systems, which feature relatively high flexibility owing to the heat capacity of buildings, make up a major part of household consumption. Considering that the consumption of HVAC systems depends highly on the ambient temperature, and bearing in mind the high investment required for control systems enabling direct load control demand response programs, this paper presents a solution for uncovering consumers with high air conditioner demand among a large number of consumers and for measuring their demand response potential. This can pave the way for estimating the investment needed to implement direct load control programs for residential HVAC systems and for estimating the demand response potential in a distribution system. In doing so, we first cluster consumers into several groups based on the correlation coefficients between hourly consumption data and hourly temperature data, using the K-means algorithm. Then, by applying a recent algorithm to the hourly consumption and temperature data, consumers with high air conditioner consumption are identified. Finally, the demand response potential of such consumers is estimated based on equivalent desired temperature setpoint changes.
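The first two steps, computing per-consumer correlations with temperature and clustering them with K-means, can be sketched as follows. This is a minimal illustration assuming a one-dimensional K-means over Pearson correlation coefficients; the array shapes, number of clusters, and function name are illustrative and not taken from the paper.

```python
import numpy as np

def cluster_by_temperature_correlation(load, temp, k=3, iters=50, seed=0):
    """Cluster consumers by the correlation between their hourly
    load profile and the hourly ambient temperature.

    load: (n_consumers, n_hours) array of consumption data
    temp: (n_hours,) array of temperatures
    Returns (correlations, labels); the cluster with the highest
    mean correlation flags likely high air conditioner consumers."""
    t = (temp - temp.mean()) / temp.std()
    z = (load - load.mean(axis=1, keepdims=True)) / load.std(axis=1, keepdims=True)
    corr = z @ t / len(t)  # Pearson r of each consumer's load vs temperature
    rng = np.random.default_rng(seed)
    centers = rng.choice(corr, size=k, replace=False)
    for _ in range(iters):  # plain 1-D K-means on the correlation values
        labels = np.argmin(np.abs(corr[:, None] - centers[None, :]), axis=1)
        centers = np.array([corr[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return corr, labels
```

Consumers whose load tracks temperature closely (r near 1) fall into the high-correlation cluster and would be the prime candidates for direct load control of their air conditioners.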

Keywords: communication infrastructure, smart meters, power systems, HVAC system, residential HVAC systems

Procedia PDF Downloads 47
2398 In silico Analysis of a Causative Mutation in Cadherin-23 Gene Identified in an Omani Family with Hearing Loss

Authors: Mohammed N. Al Kindi, Mazin Al Khabouri, Khalsa Al Lamki, Tommasso Pappuci, Giovani Romeo, Nadia Al Wardy

Abstract:

Hereditary hearing loss is a heterogeneous group of complex disorders, with an overall incidence of one in every five hundred newborns, presenting in syndromic and non-syndromic forms. Cadherin-related 23 (CDH23) is one of the genes listed as causative for deafness. CDH23 is expressed in the stereocilia of hair cells and in the photoreceptor cells of the retina. Defective CDH23 has been associated mostly with prelingual severe-to-profound sensorineural hearing loss (SNHL), in either a syndromic (USH1D) or a non-syndromic (DFNB12) form. An Omani family diagnosed clinically with severe-to-profound sensorineural hearing loss was analysed genetically by whole exome sequencing. A novel homozygous missense variant, c.A7451C (p.D2484A), in exon 53 of CDH23 was detected. One hundred and thirty control samples were analysed, all of which were negative for the detected variant. The variant was analysed in silico for verification of pathogenicity using several mutation prediction software tools, and it proved to be a pathogenic mutation; it is reported for the first time in Oman and worldwide. It is concluded that in silico mutation prediction analysis can serve as a useful molecular diagnostic tool benefiting both genetic counseling and mutation verification. The aspartic acid 2484 to alanine missense substitution might be the main disease-causing mutation that damages CDH23 function, and it could be used as a genetic hearing loss marker for this particular Omani family.

Keywords: CDH23, D2484A, in silico, Oman

Procedia PDF Downloads 201