Support A2Billing
provided by Star2Billing S.L.

All times are UTC

[ 28 posts ]
 Post subject: DB optimisation for dialprefix
PostPosted: Mon Oct 08, 2007 9:59 am 

Joined: Thu Oct 19, 2006 9:56 am
Posts: 300
Location: Athens, Greece
-- In Postgres -- (please confirm for MySQL)

I have conducted a few experiments, where I found out that the dialprefix scan is sub-optimal:
The expression:
Code:
WHERE dialprefix = substr($dialednum, 1, length(dialprefix))

results in a sequential scan, where the length of each dialprefix has to be computed, and so on. Fortunately, the fact that substr() and length() are C functions saves the day; if you write them in PL-something they become even slower.
Still, that doesn't benefit from the index on dialprefix. On the other hand, such queries also have clauses on tariffplans, which drastically reduce the number of prefixes that need to be scanned.
Sequential scan could take 50ms on a system with 20k prefixes, say. That wouldn't be much overhead for a call establishment (which may take 1-5 sec). But, on a heavy-loaded system, everything matters.

In Postgres, there is a mechanism of providing custom operators and indexes, so that such a query would use an Index scan. After all, we know that for '+1234' we only need to match prefixes like '1' or '12..' .
In a twist, a custom operator could even use Asterisk-style patterns to support prefixes like 12[3-6]XX5.
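To make the observation concrete outside the database: for a given dialed number, only its own leading substrings can ever equal a stored dialprefix, so the candidate set is tiny. A sketch in Python (illustrative only, not A2B code):

```python
def candidate_prefixes(dialednum: str) -> list[str]:
    """All leading substrings of the dialed number, longest first.

    Only these strings can ever equal a stored dialprefix, so an
    indexed query like `WHERE dialprefix IN (...)` touches at most
    len(dialednum) index entries instead of scanning the table.
    """
    digits = dialednum.lstrip("+")
    return [digits[:i] for i in range(len(digits), 0, -1)]

# For '+1234' the only prefixes worth checking are:
print(candidate_prefixes("+1234"))  # ['1234', '123', '12', '1']
```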

Anyway, this is a complex issue. Fellow devs: should I put more effort into it, or do you think we shouldn't dive to such depths? Do you have any other ideas on this?


 Post subject:
PostPosted: Mon Oct 08, 2007 3:14 pm
Moderator

Joined: Thu Jun 22, 2006 2:19 pm
Posts: 2890
Location: Devon, UK
This hadn't occurred to me, but yes, you're quite correct in saying that SELECT statement could be a real performance killer. Well spotted!

By adding a little looping to the PHP, couldn't we permit the use of indexes again? Something like (pseudocode):
Code:
$testlength = strlen($dialednum);
while ($testlength > 0) {
    // try the longest candidate prefix first, then shorten by one digit
    $QUERY = "SELECT ... WHERE dialprefix LIKE '"
           . substr($dialednum, 0, $testlength) . "%';";
    $results = run_query($QUERY);  // returns the number of rows (pseudocode)
    if ($results > 0) {
        break;  // longest match found
    }
    $testlength -= 1;
}


Indeed, after a little testing with EXPLAIN ANALYZE at the Postgres command prompt, it seems making this change does allow the use of indexes, and would improve performance by two (!) orders of magnitude on my database/machine: 213ms for the original, versus a total of 1.2ms for the 8 failed lookups (prefix too long: 0.075ms each) and then the 9th, which gave 123 matches (0.672ms).
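For anyone who wants to experiment, the loop can be exercised end-to-end against SQLite (toy schema with invented names; the real cc_ratecard is much wider). Note that the equality test from the original query is kept alongside the LIKE, since the LIKE pre-filter alone can admit stored prefixes that are not actually prefixes of the dialed number:

```python
import sqlite3

# Toy schema: just enough to demonstrate the shorten-and-retry lookup.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ratecard (dialprefix TEXT)")
conn.execute("CREATE INDEX ind_dialprefix ON ratecard (dialprefix)")
conn.executemany("INSERT INTO ratecard VALUES (?)",
                 [("1",), ("44",), ("441",), ("441392",), ("923",)])

def longest_prefix_match(dialednum):
    # Try the longest LIKE pattern first, then shorten one digit at a
    # time.  The equality test is kept because LIKE '441%' also matches
    # stored prefixes (e.g. '4415') that are NOT prefixes of the number.
    for n in range(len(dialednum), 0, -1):
        rows = conn.execute(
            "SELECT dialprefix FROM ratecard"
            " WHERE dialprefix LIKE ?"
            " AND dialprefix = substr(?, 1, length(dialprefix))"
            " ORDER BY length(dialprefix) DESC",
            (dialednum[:n] + "%", dialednum)).fetchall()
        if rows:
            return rows[0][0]
    return None

print(longest_prefix_match("441392555666"))  # 441392
```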

It would be nice if someone could run the numbers on MySQL too.


Last edited by stavros on Mon Oct 08, 2007 3:57 pm, edited 1 time in total.

 Post subject:
PostPosted: Mon Oct 08, 2007 3:53 pm

Joined: Thu Oct 19, 2006 9:56 am
Posts: 300
Location: Athens, Greece
Not really.
The documentation mentions that the query optimizer /might/ optimize a left-anchored LIKE expression.
One major downside is that it wouldn't be worth it unless we wrote the custom operator in C. That said, the operator would have to ship in _binary_ form with a2billing. Imagine the chaos on this forum!

It all comes down to the practical number of prefixes that need to be scanned. If you have, say, 4 tariffplans per group with 10k entries each, it would be worth optimizing those 40k entries. If you have a total of 4k ratecards, it wouldn't.
Still, permitting regexps in prefixes (the ones that are already supported), which /will effectively reduce the number of ratecards needed/, may also speed up the query.
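For reference, if Asterisk-style patterns such as 12[3-6]XX5 were allowed in prefixes, matching in application code could be done by translating them to regular expressions. A rough sketch using the usual Asterisk conventions (X = 0-9, Z = 1-9, N = 2-9; real dialplan patterns carry a leading underscore, omitted here). This is illustrative only, not an A2B implementation:

```python
import re

def asterisk_pattern_to_regex(pattern: str) -> str:
    """Translate an Asterisk-style dialplan pattern to an anchored regex."""
    out = []
    i = 0
    while i < len(pattern):
        c = pattern[i]
        if c == "X":
            out.append("[0-9]")
        elif c == "Z":
            out.append("[1-9]")
        elif c == "N":
            out.append("[2-9]")
        elif c == "[":          # copy an explicit set like [3-6] verbatim
            j = pattern.index("]", i)
            out.append(pattern[i:j + 1])
            i = j
        else:
            out.append(re.escape(c))
        i += 1
    return "^" + "".join(out)   # match as a prefix of the dialed number

print(bool(re.match(asterisk_pattern_to_regex("12[3-6]XX5"), "1234056789")))  # True
```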


 Post subject:
PostPosted: Mon Oct 08, 2007 4:04 pm
Moderator

Joined: Thu Jun 22, 2006 2:19 pm
Posts: 2890
Location: Devon, UK
I wasn't referring to implementing a new comparison operator in Postgres, just to improving the current matching method's performance in a database-agnostic way. I was editing my post to add some numbers to back up my idea when you replied.


 Post subject:
PostPosted: Mon Oct 08, 2007 4:11 pm

Joined: Thu Oct 19, 2006 9:56 am
Posts: 300
Location: Athens, Greece
I get your point.

But my strategy is exactly the opposite: I prefer to stick with one database, and then shift more of the processing inside it.
Think of one db server serving multiple asterisks+AGIs .. Then, optimizing the db makes sense. Minimizing the data transferred from the DB to its clients also sounds fair.
.. and these are just a few thoughts before we can plan a fundamental restructuring of the AGI script.


 Post subject:
PostPosted: Mon Oct 08, 2007 5:15 pm
Moderator

Joined: Thu Jun 22, 2006 2:19 pm
Posts: 2890
Location: Devon, UK
Your concept of a custom comparison operator for Postgres certainly isn't without merit. If an extended prefix matching syntax were defined I agree it would be the only sensible way to minimise DB traffic. Whilst I wouldn't use it myself, I'm concerned that most of A2B's users seem to run MySQL. As such, I think this suggestion will be met with quite some resistance. I guess it would be possible to offer a fallback where the rateengine would iterate through the prefixes to find the result, but warn against using it on busy servers.

Personally I'd just go for the easy 100x speed up I previously suggested and worry about bandwidth to the DB server when gigabit ethernet becomes a limitation. But then I'm lazy. :|

My pro-active laziness also demands I ask why 'fundamental restructuring' is necessary? What shortcomings are you hoping to rectify?


 Post subject:
PostPosted: Mon Oct 08, 2007 6:03 pm

Joined: Thu Oct 19, 2006 9:56 am
Posts: 300
Location: Athens, Greece
1. The code in the AGI already looks hairy. The more features we implement there, the harder it becomes to read and debug.
2. I agree that we can keep a MySQL SOHO version of the AGI. But then we can re-design an AGI for the bigger installations.
3. I said I'm /planning/ this whole thing. This discussion is part of it; we're trying to find out whether a re-design is worth it. Nothing stops me from implementing stuff privately in the meantime.
4. One major thing I'm missing is the way a call is stored in the DB. Currently, it only inserts after the call has ended, using just the answer time from Asterisk. I would rather have an insert at the beginning and an update after hangup... That's a re-design already. Before that, I'd like to collect any other ideas about the AGI.
5. I also want the database itself to be able to compute dialprefixes, optimal rates, etc. I don't see the DB as a mere storage facility, but as a powerful statistics tool.
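Point 4 could look something like this (sketch against SQLite with an invented schema; the real CDR table is different): insert a row when the call starts and update it at hangup, so in-progress calls are already visible in the DB.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE cdr (
    id INTEGER PRIMARY KEY,
    callednum TEXT,
    starttime REAL,
    answertime REAL,    -- NULL until answered
    stoptime REAL       -- NULL while the call is in progress
)""")

def call_started(callednum):
    # Insert at call setup; the row is queryable while the call runs.
    cur = conn.execute(
        "INSERT INTO cdr (callednum, starttime) VALUES (?, ?)",
        (callednum, time.time()))
    return cur.lastrowid            # handle used at hangup

def call_ended(cdr_id, answertime):
    # Update the same row after hangup instead of inserting a new one.
    conn.execute(
        "UPDATE cdr SET answertime = ?, stoptime = ? WHERE id = ?",
        (answertime, time.time(), cdr_id))

cdr_id = call_started("441392555666")
# ... call in progress: the row already exists and can be monitored ...
call_ended(cdr_id, answertime=time.time())
```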

-- Edit:
To be frank, the whole idea of optimizing came after seeing that VoipSwitch (see APNA-crap-Voip). That system takes a /huge/ time from the request to the actual call progress (i.e. ringing). The way we store CDRs, we only measure actual talk time, not the whole procedure. I want to record all the info. Furthermore, I am afraid that one day our DB may behave like theirs, slowing our system down. So, I need a clean and flexible AGI where editing won't risk breaking its logic.


 Post subject:
PostPosted: Mon Oct 08, 2007 6:18 pm

Joined: Mon Apr 30, 2007 6:43 am
Posts: 1060
Location: Canada
xrg, I understand your concern, but when you said:

Quote:
Sequential scan could take 50ms on a system with 20k prefixes, say. That wouldn't be much overhead for a call establishment (which may take 1-5 sec). But, on a heavy-loaded system, everything matters.


I guess you haven't done actual measurements to confirm those numbers, right? It would be nice if someone could do such measurements.

My point is, for now, I will go for a faster server with lots of resources. We have quad Xeon servers with tons of RAM, and they are lightning fast. And we are planning, in the future, to have MySQL sitting all by itself, maybe in a cluster environment. Until the DB is optimized, the processing power will make the difference. However, I totally admit that something needs to be done to optimize the DB. I have an idea, but it's so wild that I don't dare talk about it. Not yet. I will think it through and see if it is sound.


 Post subject:
PostPosted: Mon Oct 08, 2007 6:55 pm

Joined: Thu Oct 19, 2006 9:56 am
Posts: 300
Location: Athens, Greece
asiby wrote:
I guess you haven't done actual measurements to confirm those numbers, right? It would be nice if someone could do such measurements.


I did, with 200k prefixes on an Athlon machine. That, of course, is an exaggerated data set on a relatively old system. However, I could only time one query at a time (not parallel ones).


 Post subject:
PostPosted: Mon Oct 08, 2007 7:01 pm
Moderator

Joined: Thu Jun 22, 2006 2:19 pm
Posts: 2890
Location: Devon, UK
asiby wrote:
I guess you haven't done actual measurements to confirm those numbers, right? It would be nice if someone could do such measurements.
I already did:
sequential scan: 213ms
iterative method: 1.2ms total for 9 queries.
These both select 39 records from a set of 162,417 real rates on an Athlon64 @ 2.2GHz. There are additional knock-on benefits, such as far less of the system's disk cache being flushed and exercised.
Given this very modest database server, it would appear I could scale to 100s (if not 1,000s) of calls/second before latency from the database server started to increase. If I had even just 10 calls/second, I'd need an Asterisk cluster capable of maintaining nearly 2,000 concurrent calls (given a 3-minute ALOC), which to my mind is where the real challenge lies.
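The concurrency figure above follows from Little's law (concurrent calls ≈ arrival rate × average call length). A quick check of the quoted numbers:

```python
# Back-of-the-envelope check using Little's law:
# concurrent calls ~= arrival rate x average call duration.
calls_per_second = 10
aloc_seconds = 3 * 60              # 3-minute average length of call

concurrent_calls = calls_per_second * aloc_seconds
print(concurrent_calls)            # 1800, i.e. "nearly 2,000 concurrent calls"
```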

Quote:
I have an idea, but it's so wild that I don't dare talk about it. Not yet. I will think about it thoroughly and see if it is sound.
Now I'm intrigued.


 Post subject:
PostPosted: Tue Oct 09, 2007 6:23 pm

Joined: Mon Apr 30, 2007 6:43 am
Posts: 1060
Location: Canada
I have attached a .gif picture of a diagram that shows my "wild idea".

It's a technique we used in the past when doing dictionary searches. In that context, each letter of the word formed part of the address where the root of the word and its related data (definitions, synonyms, ...) reside.

For example, if you look up the word "awesome", by the time you specify the first letter, "a", the system has already dismissed the other letters (b to z); at the second letter, "aw", it won't consider the branches that do not begin with "aw", and so on.

It is really effective: the system just pinpoints whatever you are looking for, and if something is not found, it is most likely because it's not there. You find what you are looking for without going through the whole database.

I know how to implement it with a custom-brewed storage system (a file-based store, for example), but for the moment I have no idea how to apply it to a regular database with SQL queries. Maybe some of you will be able to find the solution; that is, if you find it useful.

In my unsuccessful attempts, I tried having one table per digit. The limitation I hit up front was that I had to decide on a maximum prefix length. Then, from the first digit's table to the next, I had a relationship that was supposed to filter out the unnecessary following digits; this process goes on until all the digits are matched.
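In application code, the digit-at-a-time search described above is exactly a trie (prefix tree). A minimal sketch, independent of any SQL schema; the deepest node carrying a rate is the longest-prefix match:

```python
def build_trie(prefixes):
    """Build a nested-dict trie from {prefix: rate} pairs."""
    root = {}
    for prefix, rate in prefixes.items():
        node = root
        for digit in prefix:
            node = node.setdefault(digit, {})
        node["rate"] = rate          # mark the end of a stored prefix
    return root

def longest_match(trie, dialednum):
    """Walk digit by digit; the deepest rate seen so far wins."""
    node, best = trie, None
    for digit in dialednum:
        if digit not in node:
            break                    # no deeper branch: stop walking
        node = node[digit]
        if "rate" in node:
            best = node["rate"]
    return best

trie = build_trie({"44": "UK other", "441": "UK geographic", "441392": "UK, exeter"})
print(longest_match(trie, "441392555666"))  # UK, exeter
```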

The idea is still fuzzy in my head, so please bear with me.

Take a look at the attached image.

Suggestions and criticism are welcome.


Attachments:
TreeSeachPattern.gif [ 25.04 KiB ]
 Post subject:
PostPosted: Tue Oct 09, 2007 7:08 pm

Joined: Thu Oct 19, 2006 9:56 am
Posts: 300
Location: Athens, Greece
You are perfectly right!

The whole idea behind telephone numbering plans was exactly this: each dialed digit restricts the search space of possible destinations.

I tried yesterday, until late, to write a custom operator, together with its opclass and the index helper stuff. It surely is anything but trivial.

My approach now, in managerial terms, is this: the problem is a clear, well-defined exercise in deep DB knowledge. I will contact some students and try to assign it to people studying databases; it will make an interesting project for them!


 Post subject:
PostPosted: Tue Oct 09, 2007 7:16 pm
Moderator

Joined: Thu Jun 22, 2006 2:19 pm
Posts: 2890
Location: Devon, UK
You're both intelligent chaps and have both given freely to the A2B community already. The B-tree is certainly one of the best structures for fast searches and inserts; you both made the right choice. My question is: why do you want to reimplement it? I'm sure the B-trees used in MySQL and Postgres are close to optimal. I demonstrated earlier in the thread that Postgres can make use of the existing B-tree index on the dialprefix field, with massive gains for a tiny effort on our part. The MySQL docs are fuzzy on the details, but even if it can't use the index, who in their right mind would consider using MySQL to service hundreds of call setups per second in the first place?

I appreciate that both you and xrg are trying to make A2B the best it can possibly be. I applaud you both for your lofty goals. :clap2:

My point is I feel there are other areas of A2B that need some serious loving before we need to worry about optimising a DB query that takes just over a millisecond. :shock:


 Post subject:
PostPosted: Tue Oct 09, 2007 7:53 pm

Joined: Mon Apr 30, 2007 6:43 am
Posts: 1060
Location: Canada
As I said earlier, for the moment I will not worry much about DB optimization unless it becomes a very obvious problem. Instead, I count on the processing power of our servers to minimize it. But since I had started brainstorming about it, I thought I would let others know. Hopefully, someone will pick it up and do something with it. I am hoping that my solution will not require the creation of any custom operator.

And Stavros, you can rest assured that we are not forgetting other areas of a2billing that require a lot of attention.

Here is one of them: the call servers. A2B can dispatch calls through many servers for load-balancing purposes. For now, this seems to be custom-made for the callback features only; we want to make it usable for every call that A2B sends out. In addition, when A2B sends a call to a server that is no longer available, it has no way of knowing that the server was unavailable. All it does is insert the call requests into a queue for the servers to pick up. A2B dispatches calls sequentially, so during the next round it will send calls to the same faulty server again, and again, and again. And when I say a faulty server, the machine may be up and running while the Asterisk process has stopped working, the call daemon isn't running, the server has caught fire, and so on.

These are some of our main concerns right now.


 Post subject: Real-world performance figures
PostPosted: Wed Oct 10, 2007 3:42 am
Moderator

Joined: Thu Jun 22, 2006 2:19 pm
Posts: 2890
Location: Devon, UK
I've been looking deeper into this, prompted by a PM from xrg who warned me not to invest the effort in patching before checking that Postgres can still make use of the index when the query gets as complex as the one used in A2B. The query I've been working on improving tonight is below (the actual digits have been obscured):
Code:
SELECT tariffgroupname, lcrtype, idtariffgroup, cc_tariffgroup_plan.idtariffplan, tariffname, destination,
                cc_ratecard.id, dialprefix, destination, buyrate, buyrateinitblock, buyrateincrement, rateinitial, initblock, billingblock,
                connectcharge, disconnectcharge, stepchargea, chargea, timechargea, billingblocka, stepchargeb, chargeb,
                timechargeb, billingblockb, stepchargec, chargec, timechargec, billingblockc,
                cc_tariffplan.id_trunk AS tp_id_trunk, tp_trunk.trunkprefix AS tp_trunk, tp_trunk.providertech AS tp_providertech,
                tp_trunk.providerip AS tp_providerip, tp_trunk.removeprefix AS tp_removeprefix,
                cc_ratecard.id_trunk AS rc_id_trunk, rt_trunk.trunkprefix AS rc_trunkprefix, rt_trunk.providertech AS rc_providertech,
                rt_trunk.providerip AS rc_providerip, rt_trunk.removeprefix AS rc_removeprefix,
                musiconhold,
                tp_trunk.failover_trunk AS tp_failover_trunk,
                rt_trunk.failover_trunk AS rt_failover_trunk,
                tp_trunk.addparameter AS tp_addparameter_trunk,
                rt_trunk.addparameter AS rt_addparameter_trunk,
                id_outbound_cidgroup,
                freetimetocall_package_offer, freetimetocall, packagetype, billingtype, startday, id_cc_package_offer

                FROM cc_tariffgroup
                RIGHT JOIN cc_tariffgroup_plan ON cc_tariffgroup.id=3

                INNER JOIN cc_tariffplan ON (cc_tariffplan.id=cc_tariffgroup_plan.idtariffplan )

                LEFT JOIN cc_ratecard ON cc_ratecard.idtariffplan=cc_tariffplan.id
                LEFT JOIN cc_trunk AS rt_trunk ON cc_ratecard.id_trunk=rt_trunk.id_trunk
                LEFT JOIN cc_trunk AS tp_trunk ON cc_tariffplan.id_trunk=tp_trunk.id_trunk
                LEFT JOIN cc_package_offer ON cc_package_offer.id=cc_tariffgroup.id_cc_package_offer

                WHERE (dialprefix=SUBSTRING('923XXXXXXX',1,length(dialprefix)) OR dialprefix='defaultprefix')
                AND startingdate<= CURRENT_TIMESTAMP AND (expirationdate > CURRENT_TIMESTAMP OR expirationdate IS NULL OR LENGTH(expirationdate)<5)
                AND startdate<= CURRENT_TIMESTAMP AND (stopdate > CURRENT_TIMESTAMP OR stopdate IS NULL OR LENGTH(stopdate)<5)
                 AND (starttime <= 7945 AND endtime >=7945)
                AND idtariffgroup='3'
                AND ( dnidprefix=SUBSTRING('00923XXXXXXXXX',1,length(dnidprefix)) OR (dnidprefix='all' AND 0 = (SELECT count(dnidprefix) FROM cc_tariffplan RIGHT JOIN cc_tariffgroup_plan ON cc_tariffgroup_plan.idtariffgroup=3 WHERE dnidprefix=SUBSTRING('00923XXXXXXXXX',1,length(dnidprefix)) ) ))
                AND ( calleridprefix=SUBSTRING('XXXXXXXXXX',1,length(calleridprefix)) OR (calleridprefix='all' AND 0 = (SELECT count(calleridprefix) FROM cc_tariffplan RIGHT JOIN cc_tariffgroup_plan ON cc_tariffgroup_plan.idtariffgroup=3 WHERE calleridprefix=SUBSTRING('XXXXXXXXXX',1,length(calleridprefix)) ) ))
                ORDER BY LENGTH(dialprefix) DESC
This is a real-world example gleaned from a production server.
Using EXPLAIN ANALYZE we can see what Postgres' query planner makes of it:
Code:
Sort  (cost=13495.23..13495.24 rows=4 width=406) (actual time=769.143..769.146 rows=2 loops=1)
   Sort Key: length(cc_ratecard.dialprefix)
   InitPlan
     ->  Aggregate  (cost=8.09..8.09 rows=1 width=7) (actual time=0.140..0.141 rows=1 loops=1)
           ->  Nested Loop  (cost=0.00..8.07 rows=4 width=7) (actual time=0.123..0.123 rows=0 loops=1)
                 ->  Seq Scan on cc_tariffplan  (cost=0.00..6.47 rows=1 width=7) (actual time=0.120..0.120 rows=0 loops=1)
                       Filter: (dnidprefix = "substring"('00923XXXXXXXXX'::text, 1, length(dnidprefix)))
                 ->  Seq Scan on cc_tariffgroup_plan  (cost=0.00..1.56 rows=4 width=0) (never executed)
                       Filter: (idtariffgroup = 3)
     ->  Aggregate  (cost=8.09..8.09 rows=1 width=7) (actual time=0.063..0.065 rows=1 loops=1)
           ->  Nested Loop  (cost=0.00..8.07 rows=4 width=7) (actual time=0.049..0.049 rows=0 loops=1)
                 ->  Seq Scan on cc_tariffplan  (cost=0.00..6.47 rows=1 width=7) (actual time=0.045..0.045 rows=0 loops=1)
                       Filter: (calleridprefix = "substring"('XXXXXXXXXX'::text, 1, length(calleridprefix)))
                 ->  Seq Scan on cc_tariffgroup_plan  (cost=0.00..1.56 rows=4 width=0) (never executed)
                       Filter: (idtariffgroup = 3)
   ->  Nested Loop Left Join  (cost=0.00..13479.01 rows=4 width=406) (actual time=205.005..769.115 rows=2 loops=1)
         Join Filter: ("inner".id = "outer".id_cc_package_offer)
         ->  Nested Loop Left Join  (cost=0.00..13474.19 rows=4 width=390) (actual time=204.885..768.926 rows=2 loops=1)
               ->  Nested Loop Left Join  (cost=0.00..13453.05 rows=4 width=330) (actual time=204.842..768.846 rows=2 loops=1)
                     ->  Nested Loop  (cost=0.00..13431.91 rows=4 width=270) (actual time=204.773..768.745 rows=2 loops=1)
                           Join Filter: ("inner".idtariffplan = "outer".idtariffplan)
                           ->  Nested Loop  (cost=0.00..13.76 rows=1 width=81) (actual time=0.724..3.554 rows=3 loops=1)
                                 Join Filter: ("outer".id = "inner".idtariffplan)
                                 ->  Seq Scan on cc_tariffplan  (cost=0.00..7.55 rows=1 width=39) (actual time=0.284..0.666 rows=27 loops=1)
                                       Filter: ((startingdate <= ('now'::text)::timestamp(6) with time zone) AND ((expirationdate > ('now'::text)::timestamp(6) with time zone) OR (expirationdate IS NULL) OR (length((expirationdate)::text) < 5)) AND ((dnidprefix = "substring"('00923XXXXXXXXX'::text, 1, length(dnidprefix))) OR ((dnidprefix = 'all'::text) AND (0 = $0))) AND ((calleridprefix = "substring"('XXXXXXXXXX'::text, 1, length(calleridprefix))) OR ((calleridprefix = 'all'::text) AND (0 = $1))))
                                 ->  Nested Loop Left Join  (cost=0.00..6.15 rows=4 width=42) (actual time=0.015..0.094 rows=4 loops=27)
                                       ->  Seq Scan on cc_tariffgroup_plan  (cost=0.00..1.56 rows=4 width=8) (actual time=0.005..0.026 rows=4 loops=27)
                                             Filter: (idtariffgroup = 3)
                                       ->  Seq Scan on cc_tariffgroup  (cost=0.00..1.14 rows=1 width=34) (actual time=0.004..0.008 rows=1 loops=108)
                                             Filter: (id = 3)
                           ->  Seq Scan on cc_ratecard  (cost=0.00..13407.81 rows=828 width=197) (actual time=13.120..254.958 rows=26 loops=3)
                                 Filter: (((dialprefix = "substring"('923XXXXXXXXX'::text, 1, length(dialprefix))) OR (dialprefix = 'defaultprefix'::text)) AND (startdate <= ('now'::text)::timestamp(6) with time zone) AND ((stopdate > ('now'::text)::timestamp(6) with time zone) OR (stopdate IS NULL) OR (length((stopdate)::text) < 5)) AND (starttime <= 7945) AND (endtime >= 7945))
                     ->  Index Scan using cc_trunk_pkey on cc_trunk rt_trunk  (cost=0.00..5.27 rows=1 width=64) (actual time=0.017..0.019 rows=1 loops=2)
                           Index Cond: ("outer".id_trunk = rt_trunk.id_trunk)
               ->  Index Scan using cc_trunk_pkey on cc_trunk tp_trunk  (cost=0.00..5.27 rows=1 width=64) (actual time=0.004..0.006 rows=1 loops=2)
                     Index Cond: ("outer".id_trunk = tp_trunk.id_trunk)
         ->  Seq Scan on cc_package_offer  (cost=0.00..1.09 rows=9 width=24) (actual time=0.011..0.028 rows=9 loops=2)
Total runtime: 769.675 ms
Ouch! 0.77 seconds to look up a rate, and contrary to my expectations it also returned two matches.
I tested this 'feature' again by looking up '441392XXXXXX' and got back a rate for 441392 as expected, but additionally I got a rate for 441 and then my fallback of 44 alone.
Code:
  tariffgroupname   | lcrtype | idtariffgroup | idtariffplan |        tariffname        |  destination  |  id   | dialprefix |  destination  | buyrate | buyrateinitblock
--------------------+---------+---------------+--------------+--------------------------+---------------+-------+------------+---------------+---------+------------------+
Pay As You Go - EU |       0 |             3 |           17 | A-Z Rates - EU Customers | UK, exeter    | 85196 | 441392     | UK, exeter    |    0.XX |                1 |
Pay As You Go - EU |       0 |             3 |           17 | A-Z Rates - EU Customers | UK geographic | 84986 | 441        | UK geographic |    0.XX |                1 |
Pay As You Go - EU |       0 |             3 |           17 | A-Z Rates - EU Customers | UK other      | 79694 | 44         | UK other      |    X.XX |                1 |
Ouch once more! I'm guessing here, but I think this might interact very badly with failover. Surely this query is only meant to return the longest match(es)?

So anyway, ignoring what I believed to be a bug for a moment, I carried on and tried unrolling the query to remove that nasty sequential scan on cc_ratecard. The first query I ran changes the WHERE clause to start with "WHERE (dialprefix LIKE '923XXXXXXXXX%'". Successive queries shorten this clause by one character until a match is found for the rate 923. EXPLAIN ANALYZE now says:
Code:
Sort  (cost=333.10..333.11 rows=1 width=406) (actual time=3.371..3.373 rows=1 loops=1)
   Sort Key: length(cc_ratecard.dialprefix)
   InitPlan
     ->  Aggregate  (cost=8.09..8.09 rows=1 width=7) (actual time=0.063..0.065 rows=1 loops=1)
           ->  Nested Loop  (cost=0.00..8.07 rows=4 width=7) (actual time=0.056..0.056 rows=0 loops=1)
                 ->  Seq Scan on cc_tariffplan  (cost=0.00..6.47 rows=1 width=7) (actual time=0.053..0.053 rows=0 loops=1)
                       Filter: (dnidprefix = "substring"('00923XXXXXXXXX'::text, 1, length(dnidprefix)))
                 ->  Seq Scan on cc_tariffgroup_plan  (cost=0.00..1.56 rows=4 width=0) (never executed)
                       Filter: (idtariffgroup = 3)
     ->  Aggregate  (cost=8.09..8.09 rows=1 width=7) (actual time=0.048..0.050 rows=1 loops=1)
           ->  Nested Loop  (cost=0.00..8.07 rows=4 width=7) (actual time=0.043..0.043 rows=0 loops=1)
                 ->  Seq Scan on cc_tariffplan  (cost=0.00..6.47 rows=1 width=7) (actual time=0.040..0.040 rows=0 loops=1)
                       Filter: (calleridprefix = "substring"('XXXXXXXXXX'::text, 1, length(calleridprefix)))
                 ->  Seq Scan on cc_tariffgroup_plan  (cost=0.00..1.56 rows=4 width=0) (never executed)
                       Filter: (idtariffgroup = 3)
   ->  Nested Loop Left Join  (cost=1.14..316.92 rows=1 width=406) (actual time=0.802..3.359 rows=1 loops=1)
         Join Filter: ("inner".id = "outer".id_cc_package_offer)
         ->  Nested Loop Left Join  (cost=1.14..315.71 rows=1 width=390) (actual time=0.728..3.282 rows=1 loops=1)
               ->  Nested Loop Left Join  (cost=1.14..310.43 rows=1 width=330) (actual time=0.701..3.243 rows=1 loops=1)
                     ->  Nested Loop  (cost=1.14..305.14 rows=1 width=270) (actual time=0.674..3.210 rows=1 loops=1)
                           Join Filter: ("inner".idtariffplan = "outer".idtariffplan)
                           ->  Nested Loop  (cost=1.14..10.38 rows=1 width=81) (actual time=0.523..2.684 rows=3 loops=1)
                                 Join Filter: ("outer".id = "inner".idtariffplan)
                                 ->  Seq Scan on cc_tariffplan  (cost=0.00..7.55 rows=1 width=39) (actual time=0.160..0.418 rows=27 loops=1)
                                       Filter: ((startingdate <= ('now'::text)::timestamp(6) with time zone) AND ((expirationdate > ('now'::text)::timestamp(6) with time zone) OR (expirationdate IS NULL) OR (length((expirationdate)::text) < 5)) AND ((dnidprefix = "substring"('00923XXXXXXXXX'::text, 1, length(dnidprefix))) OR ((dnidprefix = 'all'::text) AND (0 = $0))) AND ((calleridprefix = "substring"('XXXXXXXXXX'::text, 1, length(calleridprefix))) OR ((calleridprefix = 'all'::text) AND (0 = $1))))
                                 ->  Nested Loop Left Join  (cost=1.14..2.78 rows=4 width=42) (actual time=0.013..0.072 rows=4 loops=27)
                                       ->  Seq Scan on cc_tariffgroup_plan  (cost=0.00..1.56 rows=4 width=8) (actual time=0.005..0.026 rows=4 loops=27)
                                             Filter: (idtariffgroup = 3)
                                       ->  Materialize  (cost=1.14..1.15 rows=1 width=34) (actual time=0.002..0.003 rows=1 loops=108)
                                             ->  Seq Scan on cc_tariffgroup  (cost=0.00..1.14 rows=1 width=34) (actual time=0.007..0.013 rows=1 loops=1)
                                                   Filter: (id = 3)
                           ->  Index Scan using ind_cc_ratecard_dialprefix, ind_cc_ratecard_dialprefix on cc_ratecard  (cost=0.00..293.87 rows=71 width=197) (actual time=0.021..0.138 rows=13 loops=3)
                                 Index Cond: (((dialprefix >= '923'::text) AND (dialprefix < '924'::text)) OR (dialprefix = 'defaultprefix'::text))
                                 Filter: (((dialprefix ~~ '923%'::text) OR (dialprefix = 'defaultprefix'::text)) AND (startdate <= ('now'::text)::timestamp(6) with time zone) AND ((stopdate > ('now'::text)::timestamp(6) with time zone) OR (stopdate IS NULL) OR (length((stopdate)::text) < 5)) AND (starttime <= 7945) AND (endtime >= 7945))
                     ->  Index Scan using cc_trunk_pkey on cc_trunk rt_trunk  (cost=0.00..5.27 rows=1 width=64) (actual time=0.008..0.011 rows=1 loops=1)
                           Index Cond: ("outer".id_trunk = rt_trunk.id_trunk)
               ->  Index Scan using cc_trunk_pkey on cc_trunk tp_trunk  (cost=0.00..5.27 rows=1 width=64) (actual time=0.003..0.005 rows=1 loops=1)
                     Index Cond: ("outer".id_trunk = tp_trunk.id_trunk)
         ->  Seq Scan on cc_package_offer  (cost=0.00..1.09 rows=9 width=24) (actual time=0.004..0.022 rows=9 loops=1)
Total runtime: 3.675 ms
Much more reasonable. These queries took a total of (9 × 1.12 ms + 3.67 ms) = 13.75 ms, over 50x quicker than when we started. :up:

As an added bonus this optimised query only spits out the longest matching rate(s), which is what I thought the original was meant to do. Tomorrow I think I might start looking at how much work would be involved in modifying the rate engine to unroll the query in this manner, unless some PHP wizard would like to complete the task for me.
As ever, all input/criticism is eagerly received. :tomato:


Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group